
Enterprise risk management strategies for the Chief Information Officer (CIO).


Risk management is critical for any enterprise embarking on new IT projects and plans. There’s the risk of offshore outsourcing — how do you ensure your data is safe in the hands of a worker in another country? There are also risks in managing compliance efforts, especially in offshore business operations; failures here can include shutting down your company or losing your position if the job isn’t done correctly. How do CIOs calculate and manage risk? Take a look at the enterprise risk management strategies in this CIO Briefing for insight and advice on this important topic.
This CIO Briefing is part of a series designed to give IT leaders strategic guidance and advice addressing the management and decision-making aspects of timely topics.

Managing operational risk

The common news headlines continue: systems failures, data breaches, project delays, troubled products, trading failures, money laundering through mobile networks. These are just some of the sinkholes in operational-risk land related to information technology. The question is, why? Why do they keep coming despite efforts to prevent them?
“Why can’t I just get a single view of risk to the business, especially for a particular business activity or process? What makes this so difficult?” an exasperated CIO asked at an executive briefing held by a chapter of the ISACA IT security organization after I discussed IT-related business risk.

“One bad business-IT decision killed our company!” Grim reality, right?

Analyzing IT-related risk in silos leaves gaps and frustrates business leaders. Responding to IT risk in silos increases cost, creates prioritization errors and unleashes other gremlins. Silos can lead to both fundamental errors (such as thinking that IT security equals IT risk management, or that IT compliance equals IT risk management) and more complex errors (such as missing the ways risks in a shared infrastructure affect business processes).
Every organization should be able to articulate how IT threats can harm the business. A five-step risk management strategy, based on a risk management standard like ISO 31000, makes it easier to explain how IT threats become business threats.

How risk management standards can work for enterprise IT

IT security and risk professionals have historically had a hard time articulating how IT threats might negatively impact the business. That needs to change. Attacks on government sites, substantial fraud, and massive privacy breaches continue to expose to the world the high level of risk connected to our corporate and national IT infrastructure. Executives and managers will need to rely more on IT security data and analysis in order to better protect their corporate interests.

As internal and external pressure intensifies, IT professionals must adopt more sophisticated risk management practices so they can better articulate risks, mitigation plans and overall exposure. This means combining both security and risk mentalities, which can be difficult to translate into practical tools and processes.
Rather than start from scratch, security professionals should utilize the standards and guidance available in the enterprise risk management (ERM) domain. The fundamental risk management processes that should be applied to IT risk management come from the streamlined risk management standard published by the International Organization for Standardization (ISO): ISO 31000. The following five steps provide guidance for building a formal, ISO 31000-based IT risk management program that communicates well with, and adds value to, the rest of the organization:


Step 1: Establish the context

This step may seem esoteric or even irrelevant, but without clear definitions, there will be organizational confusion and arguments over responsibilities later on. Begin by identifying individuals with risk experience (internally or externally) to help formalize tools and methods for identifying, measuring, and analyzing risk. Once formal roles have been established, risk professionals should document the IT organization’s core objectives and define the ways in which IT risk management supports them.

Establishing risk appetites and tolerance during this first stage will help prioritize risk mitigation efforts later on. Conversations with risk management clients have indicated that most organizations initially choose to rank certain categories of risk for which they have less tolerance, rather than trying to develop quantifiable risk appetites. This is a good first step, but these organizations will eventually need more granular criteria to make informed decisions about which specific risks to focus on.
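To make this concrete, here is a minimal sketch (in Python) of the kind of categorical risk-tolerance ranking described above. The category names and tolerance levels are assumptions made up for the illustration, not anything ISO 31000 prescribes; the point is simply that even a coarse ranking gives you an ordering to prioritize against.

# Illustrative sketch: a categorical risk-tolerance ranking, the coarse first
# step many organizations take before developing quantified risk appetites.
# Category names and tolerance levels are assumptions for this example.

RISK_TOLERANCE = {
    "regulatory_compliance":  "very_low",   # least tolerance: prioritize first
    "data_privacy":           "very_low",
    "service_availability":   "low",
    "project_delivery":       "medium",
    "innovation_experiments": "high",       # most tolerance: accept more risk
}

TOLERANCE_ORDER = ["very_low", "low", "medium", "high"]

def prioritize(categories):
    """Return categories sorted from least to most risk-tolerant."""
    return sorted(categories, key=lambda c: TOLERANCE_ORDER.index(categories[c]))

for category in prioritize(RISK_TOLERANCE):
    print(category, "->", RISK_TOLERANCE[category])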

Step 2: Identify the risks

Risk managers will need to tap into their creativity to create a comprehensive list of potential risks. Risks not identified at this stage will not be analyzed or evaluated later on, so having an overly exhaustive list is preferable to one that is overly limited. Start by conducting workshops with relevant stakeholders, identifying the broad range of issues that could impair their objectives, processes and assets.
Forrester clients that have been using IT control frameworks, such as Control Objectives for Information and related Technology (COBIT) or ISO 27002, often find them to be useful guides for categorizing their risks. Note that risks should be specific to your organization, not a generic list. Plan to reexamine your full list of risks at least on an annual basis to identify any new or emerging risks.

Step 3: Analyze the risks

Security professionals typically have a good understanding of events and issues that might undermine IT processes; however, it’s often harder for them to determine what the impact will be to the IT department or the organization as a whole. Work closely with business stakeholders to understand criticality and impact. It may even be possible to leverage the business impact analysis work done by the business continuity team to fill in some of the gaps.
Many organizations have found it helpful to create a scale by which to approximate the level of likelihood and impact. For example, some companies create a matrix to measure the likelihood of risks based on characteristics such as exposure or attractiveness of target, and impact based on characteristics such as potential financial costs or reputation damage. The result is a “heat map” that helps prioritize mitigation efforts for the set of risks with the highest combined likelihood and impact ratings.
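A minimal sketch of that scoring approach, assuming simple 1-to-5 scales for likelihood and impact and multiplying them into a combined rating; the example risks, the scales and the band cut-offs are illustrative assumptions, not a standard.

# Sketch of a likelihood x impact "heat map" score. The 1-5 scales, the
# example risks and the HIGH/MEDIUM/LOW cut-offs are illustrative assumptions.

risks = [
    # (name, likelihood 1-5, impact 1-5)
    ("Customer data breach via web application", 3, 5),
    ("ERP upgrade project delay",                4, 3),
    ("Single point of failure in data center",   2, 4),
    ("Laptop theft",                             4, 1),
]

def score(likelihood, impact):
    """Combined rating; higher means hotter on the heat map."""
    return likelihood * impact

def band(rating):
    return "HIGH" if rating >= 12 else "MEDIUM" if rating >= 6 else "LOW"

# Rank risks so mitigation effort goes to the highest combined ratings first.
for name, lik, imp in sorted(risks, key=lambda r: score(r[1], r[2]), reverse=True):
    rating = score(lik, imp)
    print(f"{band(rating):6} {rating:2}  {name}")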

Step 4: Evaluate the risks

Levels of risk after controls have been accounted for (i.e., residual risk) that fall outside of the organization’s risk tolerance will require treatment decisions. The risk appetite and thresholds previously defined will provide guidelines for when to avoid, accept, share, transfer, or mitigate risks. The decisions themselves should be made by individuals who are granted authority or accountability to manage each risk, with input from others who may be positively or negatively affected.
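As a rough illustration of that evaluation step, the sketch below compares residual-risk scores against a tolerance threshold to flag the risks that need a treatment decision; the threshold, the scores and the risk names are assumptions for the example.

# Sketch: flag residual risks (risk remaining after existing controls) that
# exceed a stated tolerance threshold and therefore need a treatment decision
# (avoid, accept, share, transfer or mitigate). All values are illustrative.

TOLERANCE_THRESHOLD = 8   # residual score above this requires a decision

residual_risks = {
    # name: residual likelihood x impact score after existing controls
    "Customer data breach via web application": 12,
    "ERP upgrade project delay": 6,
    "Single point of failure in data center": 9,
}

for name, residual in residual_risks.items():
    if residual > TOLERANCE_THRESHOLD:
        print(f"treatment decision needed: {name} (residual {residual})")
    else:
        print(f"within tolerance: {name} (residual {residual})")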

For some risks, the initial analysis may only allow you to determine that your exposure is potentially high enough to warrant further investigation. Make sure to conduct further analysis when necessary.

Step 5: Treat the risks

If the treatment decision involves the mitigation of risk, organizations need to design and implement controls to reduce threats to the organization’s achievement of objectives. Many risks will require more than one control (e.g., policies, training, prevention measures) to decrease their expected likelihood and/or impact. Conversely, some controls may mitigate more than one risk. It’s a good idea to consider multiple reevaluations during implementation.
Look out for peripheral effects caused by risk treatments that introduce new risks and/or opportunities. For example, the decision to transfer risk to a business partner may increase the risk of that partner becoming disloyal.
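The many-to-one and one-to-many relationships between controls and risks described above are easy to lose track of in documents; a minimal sketch of a control-to-risk mapping, with made-up control and risk names, shows one way to keep them visible and to spot risks that still depend on a single control.

# Sketch of a many-to-many mapping between controls and the risks they treat.
# Control and risk names are illustrative assumptions.

control_mitigates = {
    "Security awareness training": ["Phishing-led data breach", "Accidental data disclosure"],
    "Full-disk encryption":        ["Laptop theft", "Accidental data disclosure"],
    "Change-management policy":    ["ERP upgrade project delay"],
}

# Invert the mapping to see, per risk, which controls apply and which risks
# still rely on a single control.
risk_controls = {}
for control, mitigated in control_mitigates.items():
    for risk in mitigated:
        risk_controls.setdefault(risk, []).append(control)

for risk, controls in risk_controls.items():
    note = " (single control: consider adding another)" if len(controls) == 1 else ""
    print(f"{risk}: {', '.join(controls)}{note}")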
Very few organizations have fully adopted risk management standards in any aspect of their business, and IT departments are no exception. Forrester recommends providing common guidance for all risk groups, collaborating with peers in functions such as audit and compliance, and settling on policies and procedures before turning to risk management technologies. These steps should help IT risk management programs improve their ability to work closely with the business and achieve a level of commitment in line with the level of risk they’re expected to address.

Strategic risk management includes a risk-based approach to compliance

Ask what strategic risk management for compliance is, and the answer will depend on who’s talking. But the gist is this: Rather than allowing the ever-multiplying regulatory mandates to determine a compliance program, an organization focuses on the threats that really matter to its business — operational, financial, environmental and so on — and implements the controls and processes required to protect against them.
Focusing on protecting the business will result in a strategic risk management program that, in theory, will answer compliance regulations but in some cases go well beyond the mandate. A risk management approach, say advocates, also saves money by reducing the redundant controls and disparate processes that result when companies take an ad hoc approach.

The scope of protection against threats and degree of compliance depends on an organization’s risk appetite. The appetite for risk can wax and wane, depending on externalities such as a data breach, a global economic crisis or an angry mob of customers outraged by executive pay packages. When companies are making big profits, they can spend their way out of a compliance disaster. In financially rocky times, however, there is much less margin for error.
IT pros like Alexander and a variety of experts suggest that while a risk-based approach to compliance might be the right thing to do, it is also difficult, requiring that the organization:

• Define its risk appetite.
• Inventory the compliance obligations it faces.
• Understand the threats that put the various aspects of the business at risk.
• Identify vulnerabilities.
• Implement the controls and processes that mitigate those threats.
• Measure the residual risk against the organization’s risk appetite.
• Recalibrate its risk appetite to reflect internal and external changes in the threat landscape.

A risk-based approach to compliance requires a certain level of organizational maturity and, some experts hasten to add, is ill-advised for young companies.

Strategic risk management for compliance can be managed manually or by Excel spreadsheets, but vendors promise that sophisticated governance, risk and compliance (GRC) technology platforms will ease the pain. Meantime, those baseline compliance regulations still need to be met to an auditor’s satisfaction.

Do you know what level of risk your organization can tolerate?

The assumption in a risk management approach to compliance is that the business knows best about the risk level it can tolerate.

When it comes to risk management, getting your head around a tolerance level is extremely difficult.
Then there’s the dirty little secret of every organization: For hundreds of years, businesses have been managing risk intuitively: we perceive there to be a risk, therefore we build a control. But most controls are built to a perception of the risk and a perception of its scope, without really stopping to consider what the real risk is and whether this is the right control.

By not doing the risk-benefit analysis, companies get the controls wrong. Spending $1 million on a control that mitigates a $100,000 risk makes no sense at all.
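The arithmetic is trivial but worth writing down; the sketch below works the $1 million control versus $100,000 risk example from the text (treating the $100,000 as the expected annual loss is an added assumption).

# Worked example of the cost-benefit point above: a $1,000,000 control that
# mitigates a $100,000 risk is a losing trade. Treating the $100,000 as the
# expected annual loss is an assumption added for the illustration.

expected_annual_loss = 100_000      # the risk, as stated in the text
control_cost         = 1_000_000    # the control, as stated in the text

net_benefit = expected_annual_loss - control_cost
print(f"Net benefit of the control: ${net_benefit:,}")   # -900,000

# The same numbers explain the quieter choice mentioned later in the article:
# skip the control, absorb the $100,000 hit and stay $900,000 "ahead" on paper,
# while the underlying exposure remains.
print(f"Apparent saving from skipping it: ${control_cost - expected_annual_loss:,}")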


The short end of the cost-benefit analysis

Back in the 1970s, Ford Motor Co. was sued for allegedly making the callous calculation that it was cheaper to settle with the families of Pinto owners burnt in rear-end collisions than to redesign the gas tank. The case against Ford, as it turns out, was not so cut and dried, but the Pinto lives on in infamy as an example of a company applying a cost-benefit analysis and opting against the public’s welfare.

Regulations introduce externalities that risk management by itself would not have brought to bear, and they make addressing those externalities a cost of doing business.

A recent example concerns new laws governing data privacy. For many years in the U.S., companies that collected personally identifiable information owned that data. In the past, losing that information didn’t hurt the collector much but could cause great harm to the consumer,  hence the regulations.  But the degree to which a business decides to meet the regulation varies, depending — once again — on its tolerance for risk. Organizations must decide whether they want to follow the letter of the law to get a checkmark from the auditor, Henry said, or more fully embrace the spirit of the law.

Is your philosophy as an organization minimal or maximal? And if it is minimal, you may decide that it is worth it to get a small regulatory fine rather than comply.

Indeed, businesses now are cutting costs so narrowly that some know their controls are inadequate and are choosing not to spend that $1 million to put the processes, the people and the infrastructure in place, accepting the $100,000 fine instead. They calculate they’re still $900,000 ahead, but don’t expect a business to own up to that. They never let that cat out of the bag.

Sarbanes-Oxley drives risk management strategy

Compliance is expensive. It is hardly surprising that companies are looking for ways to reduce the cost of regulatory compliance or, better yet, use compliance to competitive advantage. According to Boston-based AMR Research Inc.’s 2008 survey of more than 400 business and IT executives, GRC spending totaled more than $32 billion in 2008, a 7.4% increase from the prior year.

The year-over-year growth was actually less than the 8.5% growth from 2006 to 2007, but the data shows that spending among companies is shifting from specific GRC projects to a broad-based support of risk. In addition to risk and regulatory compliance, respondents told AMR they are using GRC budgets to streamline business processes, get better visibility to operations, improve quality and secure the environment.

In prior years, compliance as well as the risk of noncompliance was the primary driving force behind investments in GRC technology and services. Now risk has emerged as the new compliance.

Folding regulatory mandates into the organization’s holistic risk management strategy gained momentum in the wake of the Sarbanes-Oxley Act of 2002 (SOX), one of the most expensive regulations imposed on companies. SOX was passed as protection for investors after the financial fraud perpetrated by Enron Corp. and other publicly held companies, but it was quickly condemned by critics as a yoke on American business, costing billions of dollars more than projected and handicapping U.S. companies in the global marketplace.

Indeed, the law’s initial lack of guidance on the infamous Section 404 prompted many companies to err on the (expensive) side of caution, treating the law as a laundry list of controls. In 2007, under fire from business groups, the Securities and Exchange Commission and Public Company Accounting Oversight Board issued a new set of rules encouraging a more top-down approach to SOX.

There are certain mandated areas you wouldn’t want to meddle with — they are legal requirements with no exceptions — but instead of checking every little box, companies were advised to take a more risk-based approach.

Risk management frameworks and automated controls

Risk management frameworks are not new, and neither, really, is a risk-based approach to compliance. But the strategy has been gaining ground, driven in large part by IT as well as by IT best practices frameworks such as COBIT and the IT Infrastructure Library.

Fifteen years ago at any well-managed organization, 75% of controls were manual. Today, the industry benchmark is the other way around. IT drives about 90% of the controls and 10% are manual. The endpoint is to move the 10% manual controls to automated controls.

Two fundamental building blocks are essential to adopting a risk-based approach to compliance: stable systems and processes, and a strong business ethos. If a company has wildly diverse processes, it is not a good candidate; for those organizations it’s more like crisis management than risk management — compliance Whack-a-Mole.

Formulating a strategic risk management strategy also requires a clear definition of the values and principles that drive the organization’s business — in other words, a certain level of maturity. If the ethos is loosely defined, then it is not safe to take a holistic approach to compliance.

Companies that make the grade, that give consistent guidance to investors and, indeed, operate successfully in the SOX arena, are probably ready for a risk-based approach.

Navigating social media risks

Developing corporate social media policies is an ongoing experiment akin to the struggle enterprises endured when the Internet and email were introduced as business tools. Enterprises should not assume, however, that the policies they developed over many years for Internet and email use are a perfect fit for social media.

Companies are making a mistake when they say social media is the same as email and chat. There’s enough that is different about social media that you need to be blunt and state the [rules of behavior] again, even if they’re the same words [used for older e-communications policies] — which I doubt they will be.

For starters, e-discovery polices will change, given the free-for-all nature of social networking, according to Stew Sutton, principal scientist for knowledge management at The Aerospace Corp., a federally funded research and development center in El Segundo, Calif. His organization has no limits on email retention, but with “social conversations, wikis, blogs and tweet streams, the mass of data sitting out there becomes a problem,” he said. The issues can make e-discovery “extremely costly.”

CIOs weigh use of social media against security concerns

One medical center, a private hospital affiliated with a U.S. university, blocks access to all social media websites using security software from Websense Inc. Users who attempt to use such sites as Facebook, YouTube or Twitter are shown a page indicating that their destination is off-limits. Nevertheless, the debate about whether to open up access to such sites or to keep blocking them remains contentious.

In fact, the discussion comes up “practically on a daily basis,” said Brad Blake, director of IT at BMC. “As you can imagine, we have a lot of users who want access to these sites, but for a variety of reasons we do not feel comfortable opening them.”

If BMC created a Facebook account and asked its patients to be friends, that would constitute a security breach; senior management has felt it easier just to block these sites rather than try to police and manage them.
CIOs faced with the use of social media as a business tool are hard-pressed to balance that business need against security concerns. Some are so hard-pressed, in fact, that they begged off being interviewed for this story, asserting they are too new to the game to speak knowledgeably about security tools for social media. Other CIOs were pressured by their public relations people not to broadcast their thinking, for security reasons. Even those who agreed to describe their strategy for securing social media were hesitant about providing details about their IT tools. And others were in a position similar to Blake: As their companies wrestled with how the business should use social media, the default position was to simply block access.

We are finding that a lot of these policies are disallowing use of social media, even when there is a business need. Companies have people bringing in social media and using it faster than the policies and the security groups can keep up with.

Not so long ago, the notion seemed absurd that employees would use a social media website like YouTube for business purposes. Now, many marketing departments are putting videos on YouTube, as well as tracking videos that competitors post. But protecting the business from the risks of social media while facilitating a legitimate business need — at least on a proactive basis — remains outside the grasp of many businesses.

People are not there yet. A lot of the tools — access controls being one — are coarse and crude. Implementing nuanced, automated rules that, for example, allow a marketing department to use YouTube as long as it takes up only so much bandwidth, or is used only during a certain time, is very difficult.

Companies need to monitor their networks and desktops, as well as their social networks, to find out what employees and outsiders are saying about the company. In such situations, however, often the best that can be done with existing technology is to detect problems after the fact.

Most security professionals encourage CIOs to track company information that shows up on social media sites. There are numerous analytic tools for Twitter, including TweetStats, Twitter Grader and Hootsuite. Such Web and content filtering tools as Websense’s SurfControl cover the Internet and email. Indeed, internal tools for monitoring employees’ Internet use have been in place for a long time. Most good firewalls will spit out variances — a red light alerting that this person is uploading 2 GB of data.
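As a rough sketch of the kind of variance alert mentioned above (not any vendor's actual firewall interface), the snippet below flags users whose outbound upload volume crosses a threshold; the per-user totals, the data layout and the 2 GB cut-off are assumptions for the illustration.

# Sketch of a "variance" alert: flag any user whose outbound upload volume
# crosses a threshold. The per-user totals and the 2 GB cut-off are
# illustrative assumptions, not a vendor firewall's actual interface.

UPLOAD_THRESHOLD_BYTES = 2 * 1024**3   # 2 GB

# (user, bytes uploaded today) -- in practice aggregated from firewall or
# proxy logs.
upload_totals = [
    ("alice", 120 * 1024**2),
    ("bob",   2500 * 1024**2),
    ("carol", 40 * 1024**2),
]

for user, total in upload_totals:
    if total > UPLOAD_THRESHOLD_BYTES:
        print(f"ALERT: {user} uploaded {total / 1024**3:.1f} GB today; review the destination")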

Security tools aren’t that smart, however. “Intrusion prevention systems aren’t smart enough to shut off connections based on the content or syntax of something that people are posting,” Baumgarten said. A clear policy on the use of social media is still the first line of defense against social media threats.

Avoiding cloud computing risks

Following the recent downtime and data breaches at top-tier cloud service providers including Amazon Web Services LLC, Sony Corp. and Epsilon Data Management LLC, the risk deck has been shuffled at enterprises looking to move to hybrid cloud computing. Two risks that lurked in the middle of our top 10 list — liability and identity management — have floated to the top.

Once again, enterprise executives are talking about the need for cloud insurance, or at least a discussion about who is responsible when the cloud goes down. Presently, public clouds offer standardized service-level agreements, or SLAs, that offer remuneration for time — but not for potential business — lost during the downtime. Recent events could be opportunities for providers and CIOs to negotiate premium availability services, according to experts.
Why is cloud computing so hard to understand? It would be an equally fair question to ask why today’s Information Technology is so hard to understand. The answer would be because it covers the entire range of business requirements, from back-office enterprise systems to various ways such systems can be implemented. Cloud computing covers an equal breadth of both technology and, equally important, business requirements. Therefore, many different definitions are acceptable and fall within the overall topic.

But why use the term “cloud computing” at all? It originates from the work to develop easy-to-use consumer IT (Web 2.0) and its differences from existing difficult-to-use enterprise IT systems.

A Web 2.0 site allows its users to interact with other users or to change content, in contrast to non-interactive Web 1.0 sites where users are limited to the passive viewing of information. Although the term Web 2.0 suggests a new version of the World Wide Web, it does not refer to new technology but rather to cumulative changes in the ways software developers and end-users use the Web.

World Wide Web inventor Tim Berners-Lee clarifies, “I think Web 2.0 is, of course, a piece of jargon; nobody even knows what it means. If Web 2.0 for you is blogs and wikis, then that is ‘people to people.’ But that was what the Web was supposed to be all along. The Web was designed to be a collaborative space where people can interact.”

In short, Web 2.0 isn’t new technology; it’s an emerging usage pattern. Ditto for cloud computing; it’s an emerging usage pattern that draws on existing forms of IT resources. Extending Berners-Lee’s definition of Web 2.0, the companion to this book, Dot Cloud: The 21st Century Business Platform, helps clarify that cloud computing isn’t a new technology:

“The cloud is the ‘real Internet’ or what the Internet was really meant to be in the first place, an endless computer made up of networks of networks of computers.”

“For geeks,” it continues, “cloud computing has been used to mean grid computing, utility computing, Software as a Service, virtualization, Internet-based applications, autonomic computing, peer-to-peer computing and remote processing — and various combinations of these terms. For non-geeks, cloud computing is simply a platform where individuals and companies use the Internet to access endless hardware software and data resources for most of their computing needs and people-to-people interactions, leaving the mess to third-party suppliers.”


Cloud’s birth in the new world

Again, cloud computing isn’t new technology; it’s a newly evolved delivery model. The key point is that cloud computing focuses on the end users and their abilities to do what they want to do, singularly or in communities, without the need for specialized IT support. The technology layer is abstracted, or hidden, and is simply represented by a drawing of a “cloud.” This same principle has been used in the past for certain technologies, such as the Internet itself. At the same time, as the Web 2.0 technologists were perfecting their approach to people-centric collaboration, interactions, use of search and so on, traditional IT technologists were working to improve the flexibility and usability of existing IT.

This was the path that led to virtualization, the ability to share computational resources and reduce the barriers of costs and overhead of system administration. Flexibility in computational resources was in fact exactly what was needed to support the Web 2.0 environment. Whereas IT was largely based on a known and limited number of users working on a known and limited number of applications, Web 2.0 is based on any number of users deploying any number of services, as and when required in a totally random dynamic demand model.

The trend toward improving the cost and flexibility of current in-house IT capabilities by using virtualization can be said to be a part of cloud computing as much as shifting to Web-based applications supplied as services from a specialist online provider. Thus it is helpful to define cloud computing in terms of usage patterns or “use cases” for internal cost savings or external human collaboration more than defining the technical aspects.

There are differences in regional emphases on what is driving the adoption of cloud computing. The North American market is more heavily focused on a new wave of IT system upgrades; the European market is more focused on the delivery of new marketplaces and services; and the Asian market is more focused on the ability to jump past on-premise IT and go straight to remote service centers.


How the cloud shift affects front-office activities

There is a real shift in business requirements that is driving the “use” as a defining issue. IT has done its work of automating back office business processes and improving enterprise efficiency very well, so well that studies show the percentage of an office worker’s time spent on processes has dropped steadily. Put another way, the routine elements of operations have been identified and optimized. But now it’s the front office activities of interacting with customers, suppliers and trading partners that make up the majority of the work.

Traditional IT has done little to address this, as its core technologies and methodologies of tightly-coupled, data-centric applications simply aren’t suitable for the user-driven flexibility that is required in the front office. The needed technology shift can be summarized as one from “supply push” to “demand pull” of data, information and services.

Business requirements are increasingly being focused on the front office around improving revenues, margins, market share and customer services. To address these requirements, a change in the core technologies is needed in order to deliver diversity around the edge of the business where differentiation and real revenue value are created. Web 2.0 user-centric capabilities are seen as a significant part of the answer.

The technology model of flexible combinations of “services” instead of monolithic applications, combined with user-driven orchestration of those services, supports this shifting front office emphasis on the use of technology in business. It’s not even just a technology and requirement match; it’s also a match on the supply side. These new Web 2.0 requirements delivered through the cloud offer fast, even instantaneous, implementations with no capital cost or provisioning time.

This contrasts to the yearly budget and cost recovery models of traditional back office IT. In fact many cloud-based front office services may only have a life of a few weeks or months as business needs continually change to suit the increasingly dynamic nature of global markets. Thus the supply of pay-as-you-go instant provisioning of resources is a core driver in the adoption of cloud computing. This funding model of direct cost attribution to the business user is in stark contrast to the traditional overhead recovery IT model.

While cloud computing can reduce the cost and complexity of provisioning computational capabilities, it also can be used to build new shared service centers operating with greater effectiveness “at the edge” of the business where there’s money to be made. Front office requirements focus on people, expertise and collaboration in any-to-any combinations.
According to Dot Cloud, “There will be many ways in which the cloud will change businesses and the economy, most of them hard to predict, but one theme is already emerging. Businesses are becoming more like the technology itself: more adaptable, more interwoven and more specialized. These developments may not be new, but the advent of cloud computing will speed them up.”

There are many benefits to the various cloud computing models. But for each benefit, such as cost savings, speed to market and scalability, there are just as many risks and gaps in the cloud computing model.
The on-demand computing model in itself is a dilemma. With the on-demand utility model, enterprises often gain a self-service interface so users can self-provision an application, or extra storage from an Infrastructure as a Service provider. This empowers users and speeds up projects.

The flip side: Such services may be too easy to consume. Burton Group Inc. analyst Drue Reeves, speaking at the firm’s Catalyst show last week, shared a story of a CIO receiving bills for 25 different people in his company with 25 different accounts with cloud services providers. Is finance aware of this, or will it be in for a sticker shock?
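One simple, hedged illustration of how finance or IT might get ahead of that sticker shock: roll up cloud charges by account owner and flag accounts that were never provisioned through IT. The expense records, field names and provider names below are made up for the example.

# Sketch: roll up cloud charges by account owner so the "25 people, 25
# accounts" problem is visible before the bills arrive. All records, field
# names and provider names are illustrative assumptions.

from collections import defaultdict

expense_lines = [
    {"owner": "marketing-jdoe", "provider": "CloudCo",    "amount": 412.50},
    {"owner": "dev-asmith",     "provider": "CloudCo",    "amount": 1288.00},
    {"owner": "dev-asmith",     "provider": "OtherCloud", "amount": 95.20},
    {"owner": "qa-lwong",       "provider": "CloudCo",    "amount": 77.10},
]

SANCTIONED_OWNERS = {"dev-asmith"}   # accounts provisioned through IT

totals = defaultdict(float)
for line in expense_lines:
    totals[line["owner"]] += line["amount"]

for owner, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    flag = "" if owner in SANCTIONED_OWNERS else "  <-- not provisioned through IT"
    print(f"{owner:16} ${total:8.2f}{flag}")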

Lack of governance can thus be a problem. The finance department may have to address users simply putting services on a credit card, and there’s also the issue of signing up for services without following corporate-mandated procedures and policies for security and data privacy. Does the information being put in the cloud by these rogue users contain sensitive data? Does the cloud provider have any regulatory compliance responsibility, and if not, then is it your problem?

There are several other big what-ifs regarding providers. For example, do they have service-level agreements (SLAs)? Can you get an SLA that covers security parameters, data privacy, reliability/availability and uptime, data and infrastructure transparency?

The main issues are you can’t see behind the [cloud providers’] service interface so you don’t know what their storage capabilities really are, what their infrastructure really is … so how can you make SLA guarantees [to users]?
Furthermore, would the provider be able to respond to an e-discovery request? Is that on the SLA, and is that information classified, easily accessible and protected?

For some companies, a lack of an SLA is not an issue. For CNS Response Inc., a psychopharmacology lab service that provides a test for doctors to match the appropriate drug to a behavioral problem, not having an SLA with Salesforce.com Inc. was a moot point.

But is this good enough for a large enterprise? That question remains, and experts said it will be up to customers to push vendors to provide appropriate SLAs.

In fact, a big message at the show was pushing vendors to do such things as:
Have open application programming interfaces (APIs). There is an inability to monitor and manage APIs on many levels. Customers cannot see where their data resides at their cloud provider, and more importantly, there is no application or service management layer to gain visibility into the performance and management of the application.

There has to be a management layer so customers can see what and where their assets are for the cloud, what systems are used by which applications. Just think of the cloud as your own data center.

Create fair licensing schemes. Enterprises should be pushing cloud providers to move away from licensing based on physical hardware and compute resources to licenses based on virtual CPUs, managed or installed instances and user seats.

Which brings up another significant what-if: What happens to your data in a legal entanglement?
What if you miss paying a bill, or decide not to pay a bill for various reasons, like dissatisfaction with the service? Do you lose your data? Is access to your data put on hold?

There are a lot of questions as to who ultimately owns the data for e-discovery purposes, or if you decide to switch providers. Will you have to start all over if you didn’t put the code in escrow, for example?

Cloud computing touts many benefits, but Burton experts at the show said enterprises need to be aware of the what-ifs: What does this really mean for my bottom line, how do I govern this, who really has access to my data and what do the cloud computing providers really have to offer?
