Wednesday, January 20, 2010
Last week, Google threatened to shutter its operations in China over an alleged attack aimed at exfiltrating Google's intellectual property and compromising the email accounts of Chinese human rights activists. Because Google has become the de facto global face of the Cloud, the incident renewed discussion of Cloud security and whether incidents like this will slow the adoption of Cloud services.
Whit Andrews, vice president at leading analyst firm Gartner, stated, "This is a pretty public blow against the security of the Cloud." He went on to claim that the incident could "affect the uptake of Google applications."
However, there’s another way to look at the situation. For starters, let's consider Google's response. By moving to close operations in China, Google shone a bright light on the activities of sophisticated cybercriminals. (The Chinese government denies involvement.) How many corporations on the planet have leverage with the People's Republic of China? A few, perhaps, but what about yours?
Google can't avoid being the poster child for the Cloud vulnerabilities that will undoubtedly emerge over the coming years. Yet the company has also proven itself a defender of security, not only from the point of view of technology but also from the broader perspective of ethics. Compromise or not, you may, in the end, be better off putting your eggs in that basket than taking on security yourself.
The lesson to learn from this might be that it’s wiser and, in the long run, cheaper to accept some of the risk involved in trusting a Cloud provider than to take on international cybercriminals yourself.
Tuesday, December 15, 2009
AmberPoint "Gets" Governance
AmberPoint has had a unique perspective on application governance over the years. As the industry’s leading provider of management solutions for composite applications, we’ve seen our enterprise customers struggle time and again to implement application governance.
What’s gotten in their way? The biggest problem is often the governance solutions themselves. Existing application governance solutions are typically heavy and inflexible, forcing organizations to revamp their processes and replace infrastructure to accommodate them. They also require far too much manual effort to be practical, so IT staff and managers reject these time-consuming solutions as more trouble than they’re worth. To make matters worse, these solutions demand an unreasonable upfront investment, often running to hundreds of thousands of dollars.
As a result, few organizations get what they expect from such application governance solutions.
At AmberPoint, we set out to fix this problem. The result of our efforts is AmberPoint Governance System, which we formally announced just recently. AmberPoint Governance System goes about things quite differently:
• It automates many governance tasks to minimize the manual effort of cataloging and policy enforcement.
• It’s lightweight and flexibly accommodates the processes and infrastructure you already have in place.
• It minimizes resistance and promotes coordinated application development and deployment.
• It features an incremental deployment model that enables you to start with just the governance you need, and then add more to accommodate new projects or more users.
You’ll hear more from us about AmberPoint Governance System in the weeks and months ahead. In the meantime, here’s what one of our customers had to say:
"AmberPoint has thought about this the right way. By automating many of the more arduous governance tasks, AmberPoint is making it much easier for us to keep tabs on our application environment. Its automated policy enforcement will give us much better compliance across our complex system. They’ve changed application governance from a chore to a true benefit for our IT staff."
~ Kevin Forbes, Enterprise Architect at Healthways
Tuesday, November 3, 2009
Who’s Responsible for Sorting Out Failed Transactions?
During our webcast last week on Business Transaction Management, we polled our audience of Architects, Project Managers, IT Executives, Application Developers and Business Managers to see who’s responsible in their organizations for fixing things when transactions start to fail.
We asked them:
When Transactions Fail, Which Group is Responsible for Sorting Things Out?
We got a variety of answers, as you’d expect. Not every organization handles failed transactions the same way. However, by far the most common answer was Application Support Groups. Operations was a distant second, followed closely by Business Units. Here are the results of our poll:
1) Application Support Group - 68%
2) Operations - 13%
3) Business Units - 12%
4) We just muddle through - 7%
For companies that don’t have Business Transaction Management, it’s typically the Business Units who first hear about the issue—often from irate customers whose transactions did not complete properly. The Business Units then notify Operations and App Support (please note that I’m using the word “notify” here as a very gentle euphemism for the way they actually tell them about the problem). And these unfortunate Application Support and Operations teams are left with the complicated and time-consuming task of sorting out where in their complex application flows the failure took place.
The biggest issue, of course, isn’t necessarily who fixes the issue but how soon they are able to fix it. Unless you’re tracking all the transactions flowing across your distributed applications, you probably won’t hear about failed transactions until they’ve impacted your bottom line.
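To make that tracking idea concrete, here is a minimal sketch, not AmberPoint’s product, of correlation-based transaction tracking. All the names in it (TransactionTracker, the step names, the status values) are illustrative assumptions: each service reports its step under a shared transaction ID, so support staff can pinpoint where a flow broke instead of reconstructing it by hand.

```python
# Minimal sketch of correlation-based transaction tracking.
# Every service participating in a flow reports its step under a shared
# transaction ID; diagnose() then finds the first step that failed or
# never happened. All names here are illustrative assumptions.
import time
from collections import defaultdict

class TransactionTracker:
    def __init__(self, expected_steps):
        self.expected_steps = expected_steps      # ordered flow, e.g. order -> billing -> shipping
        self.events = defaultdict(dict)           # txn_id -> {step: (status, timestamp)}

    def record(self, txn_id, step, status):
        """Each service calls this (in practice via an agent or log pipeline)."""
        self.events[txn_id][step] = (status, time.time())

    def diagnose(self, txn_id):
        """Return the first step that failed or is missing entirely."""
        for step in self.expected_steps:
            status, _ = self.events[txn_id].get(step, ("missing", None))
            if status != "ok":
                return f"transaction {txn_id} broke at '{step}' ({status})"
        return f"transaction {txn_id} completed all steps"

tracker = TransactionTracker(["order-entry", "billing", "shipping"])
tracker.record("txn-42", "order-entry", "ok")
tracker.record("txn-42", "billing", "fault")      # shipping never runs
print(tracker.diagnose("txn-42"))                 # -> broke at 'billing' (fault)
```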
Friday, August 7, 2009
The Thing about Going Green…
Going green is the new mantra for both business and IT. In this economic climate, one might wonder why now is the right time for businesses to up their investments in the environment rather than squirreling away their money. Truth be told, being green ultimately boils down to reducing IT costs. Remember, corporate data centers consume massive amounts of power to run and maintain.
It takes careful planning to shave excesses, however. That means you need good data on which to base your decisions. Data centers need insight into the actual resource requirements of their applications, not just estimates. More often than not, data centers over-provision the application environment to ensure they can meet peak loads. Of course, application loads don’t remain at their peak around the clock. The solution lies in provisioning capacity on demand.
Dynamically scaling capacity to meet application needs remains a holy grail of IT management. Virtualization and cloud computing both promise solutions to this challenge. However, these technologies provide only the mechanics of on-demand provisioning, not the control system that decides when to allocate or remove capacity. Building intelligent procedures for dynamic provisioning requires close monitoring of the quality of service your systems deliver to your customers. Should performance breach your service level objectives, on-demand measures should kick in.
The better your grasp of service quality, the better your control system will be. What’s the best way to measure quality of service, you ask? We think it’s best measured by monitoring the transaction performance, service usage and fault rates experienced by your end users, not the CPU and memory consumption of your servers.
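Here is a minimal sketch of such a control loop, assuming illustrative thresholds and stand-in provision()/release() hooks rather than any real cloud API. It scales out when user-facing latency or fault rates breach the objective, and scales back in only when quality of service is comfortably within bounds.

```python
# Minimal sketch of an SLO-driven provisioning control loop: capacity
# decisions are driven by user-facing service quality (latency percentile
# and fault rate), not server CPU. Thresholds and hooks are assumptions.
import statistics

LATENCY_SLO_MS = 500      # service level objective for response time
FAULT_RATE_SLO = 0.01     # at most 1% failed transactions

def evaluate(window):
    """window: list of (latency_ms, ok) samples for recent transactions."""
    p95 = statistics.quantiles([lat for lat, _ in window], n=20)[18]  # ~95th percentile
    fault_rate = sum(1 for _, ok in window if not ok) / len(window)
    return p95, fault_rate

def control_step(window, instances, provision, release):
    p95, faults = evaluate(window)
    if p95 > LATENCY_SLO_MS or faults > FAULT_RATE_SLO:
        provision(1)                              # scale out: SLO breached
        return instances + 1
    if p95 < LATENCY_SLO_MS * 0.5 and faults == 0 and instances > 1:
        release(1)                                # scale in: comfortably under the SLO
        return instances - 1
    return instances

# toy run with fake samples and print-only hooks
samples = [(620, True)] * 95 + [(700, False)] * 5
n = control_step(samples, 2,
                 lambda k: print(f"+{k} instance"),
                 lambda k: print(f"-{k} instance"))
print("instances now:", n)
```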
Tuesday, July 14, 2009
Services in the Cloud, Security in the House
Reading through the Cloud Security Alliance (CSA) Guidance, I was struck by a basic theme that addresses some architectural reservations that have emerged elsewhere. As we pointed out in an earlier post, the cloud fear factor is ramping up significantly. One blogger has even claimed that cloud "mega-hubs" will be an attractive terrorist target and may result in a digital 9/11. But cloud mega-hubs are nothing new. Consider the DNS system, a repository of domain names and network mappings available over the network as a utility. That risk has existed for some time, so mega-hubs, in themselves, may not represent a new one.
In any case, a relevant theme emerges through the CSA's Guidance. A few choice quotes that illustrate this theme:
"Unencrypted data existent in the cloud may be considered 'lost' by the customer."
"Segregate the key management from the cloud provider hosting the data, creating a chain of separation."
"The key critical success factor to managing identities at cloud providers is to have a robust federated identity management architecture and strategy internal to the organization."
As you can see, the theme is that security starts at home. So, while the security risks of the Cloud may well be overhyped, the best approach is to draw a distinction between cloud business service functions and the governance activities that surround your organization's consumption of those services. Encrypt your own data before it is sent to the Cloud. Manage your own users internally before you begin federating, and ensure that you have native capabilities in house (for example, your own standalone SAML authorities) before you begin looking outward. Use Cloud services for their business benefits. Keep your hands on the reins when it comes to security and governance.
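A minimal sketch of that first point, assuming the pyca/cryptography package and a hypothetical upload_to_cloud() stand-in: the key is generated and held inside your own perimeter, so the provider only ever sees ciphertext, preserving the chain of separation the Guidance describes.

```python
# Minimal sketch of "encrypt before it leaves the house": the key never
# goes to the cloud provider. Requires the pyca/cryptography package
# (pip install cryptography); upload_to_cloud() is an illustrative stand-in.
from cryptography.fernet import Fernet

def upload_to_cloud(blob: bytes) -> None:
    print(f"uploading {len(blob)} opaque bytes to the provider...")

key = Fernet.generate_key()            # generated and stored inside your perimeter
cipher = Fernet(key)

record = b"customer=acme; balance=1200.00"
ciphertext = cipher.encrypt(record)    # the provider only ever sees this
upload_to_cloud(ciphertext)

# later, back inside the house, the same locally held key decrypts it
assert cipher.decrypt(ciphertext) == record
```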
Thursday, July 9, 2009
Cloud Computing... Meet Mafiaboy
Today, security in the SOA/Web services arena is usually more about risk and compliance than it is about crime prevention--although thwarting criminal activity is certainly a major aim of governance, risk and compliance (GRC) in the first place.
Enter Mafiaboy.
What better quote generator can you find than "a reformed black-hat hacker better known as the 15-year-old 'mafiaboy' who, in 2000, took down Websites CNN, Yahoo, E*Trade, Dell, Amazon, and eBay"?
And what's Mafiaboy back to tell us?
"[Cloud computing] will be the fall of the Internet as we know it.... You're basically putting everything in one little sandbox...it's going to be a lot more easy to access." Mafiaboy concluded that "cloud computing will be extremely dangerous."
One may quibble with Mafiaboy's basic assertion, or question his motives for making such newsworthy sound bites. However, it may be time to pause and realize that, even if cloud computing will not be the 'fall of the Internet as we know it,' there are millions of Mafiaboys out there who will attack cloud services. They may fire up a botnet to run a denial-of-service extortion scheme. Or they may poke around your Cloud APIs and find a WSDL or two lying around that lets them start 'playing' with your services.
All the more reason to evaluate governance solutions very early in any initiative that includes the Cloud.
Monday, June 15, 2009
When Lightning Strikes
Wed June 10th, 6:30 PM PST: A lightning strike damages Power Distribution Units serving a set of racks hosting Amazon’s EC2 service.
6:30:05 PM PST: Your business transactions start failing.
7 PM PST: Your iPhone rings.
You thought that since your engineering teams were moving to "THE CLOUD," your systems were finally going to be more reliable, more trustworthy. Finally, some much-needed relief in your already over-extended workday!
But the reality is that no matter where you run your business systems, what underlying technology you use or what controls you put in place to ensure reliable business, there will always be incidents and unforeseen events that are out of your control.
Moving pieces of your application and infrastructure to a third-party hosting environment, or leveraging third-party services directly within your business applications, means even less control. A quick glance at http://status.aws.amazon.com tells you that even the best Cloud providers are only human. Service disruptions remain commonplace, whether they result from freak weather or good old-fashioned configuration errors.
As your application evolves and as your data center turns into an amorphous cloud (no pun intended), you need to be prepared for damage control.
From a transactions standpoint, you need to watch every single transaction and make sure that your iPhone rings within seconds of a disruption, not minutes or hours. In the real-time economy, every second lost equates to lost revenue.
You’ve got to be able to immediately identify which transactions failed, how many transactions failed, which consumers were affected and more, so that corrective procedures can be put into action. It will no longer suffice to simply let the business know that their transactions were disrupted!
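As a minimal sketch of that kind of detection, with the threshold and the print-only page() stand-in as illustrative assumptions rather than a real paging integration, the loop below flags a burst of failures within seconds and reports how many transactions failed and which consumers were affected:

```python
# Minimal sketch of real-time failure alerting: every transaction result is
# checked as it arrives, and a sustained burst of failures triggers a page
# within seconds. Threshold, window, and page() are illustrative assumptions.
import time
from collections import deque

WINDOW_SECONDS = 10
FAILURE_THRESHOLD = 5

recent_failures = deque()    # (timestamp, txn_id, consumer)

def page(message):
    print("PAGE:", message)  # stand-in for an SMS/phone integration

def observe(txn_id, consumer, ok):
    now = time.time()
    if ok:
        return
    recent_failures.append((now, txn_id, consumer))
    # drop failures that have aged out of the window
    while recent_failures and now - recent_failures[0][0] > WINDOW_SECONDS:
        recent_failures.popleft()
    if len(recent_failures) >= FAILURE_THRESHOLD:
        consumers = {c for _, _, c in recent_failures}
        page(f"{len(recent_failures)} failed transactions in {WINDOW_SECONDS}s; "
             f"affected consumers: {sorted(consumers)}")

# toy feed: a burst of failures from two consumers triggers the page
for i in range(6):
    observe(f"txn-{i}", "acme" if i % 2 else "globex", ok=False)
```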
Finally, it would help your case to negotiate strict SLAs with your service provider and establish a strategy for monitoring and documenting real-time compliance. In the event of a disruption--even if it’s too minor to be counted as a disruption by your provider--be prepared to furnish evidence and hold them accountable for the losses your business incurs.
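And as a minimal sketch of keeping your own SLA evidence, with the 99.9% target and the outage timestamps as purely illustrative assumptions, you can compute the availability you actually received against what the contract promised:

```python
# Minimal sketch of independent SLA bookkeeping: log every disruption your
# own monitoring observes, then compare delivered availability with the
# contracted target. Target and outage data are illustrative assumptions.
from datetime import datetime, timedelta

SLA_AVAILABILITY = 0.999    # e.g. "three nines" promised by the provider

outages = [    # (start, end) pairs recorded by your own monitoring
    (datetime(2009, 6, 10, 18, 30), datetime(2009, 6, 10, 22, 45)),
    (datetime(2009, 6, 21, 3, 0),  datetime(2009, 6, 21, 3, 20)),
]

period = timedelta(days=30)
downtime = sum((end - start for start, end in outages), timedelta())
availability = 1 - downtime / period

print(f"observed availability: {availability:.5f}")
print(f"SLA {'met' if availability >= SLA_AVAILABILITY else 'BREACHED'} "
      f"(promised {SLA_AVAILABILITY}, downtime {downtime})")
```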