Removing Deployment Friction With Push-To-Deploy

At work we use CloudFoundry as our PaaS, but I also like to keep informed about what other platforms do.

Google AppEngine Introduces Push-To-Deploy

Google AppEngine recently added an interesting feature: Push-to-Deploy through Git.

With Push-To-Deploy, you simply push your code to a Git repository to get it deployed on AppEngine.

This Git repository is maintained by Google and tied to your cloud account. I guess this is implemented using the post-receive Git server hook.

Push-To-Deploy Removes Friction

What I like about this feature is that it removes some friction from the deployment process: you no longer need to know how to deploy your application on AppEngine.

Push-To-Deploy inches us closer to a Frictionless Development Environment (FDE). The two most likely candidates to become the FDE of choice both support Git, so it’s easy to use Push-To-Deploy in both Orion and Cloud9.

More Friction Remains

Of course, this is only a small step and a lot more work needs to be done before we really have an FDE.

In my ideal world, for any change that I make, the FDE would automatically run the tests and code checkers in the background and, when successful, push the changes to a development branch to make them available to my co-workers.

To make this efficient, only tests that could potentially be impacted by the changes would run, and they would run in parallel in the cloud. When specified criteria are met, changes on the development branch would propagate to master and, using Push-To-Deploy, to production.

Although this is all still far, far away, every step is to be applauded, and I hope other PaaS providers will follow Google’s example.

What Do You Think?

Do you use Google AppEngine? Git? Would you use Push-To-Deploy? Would you like to see a similar feature in CloudFoundry or another PaaS?

Please leave a comment.

Is XACML Dead?

XACML is dead. Or so writes Forrester’s Andras Cser.

Before I take a critical look at the reasons underlying this claim, let me disclose that I’m a member of the OASIS committee that defines the XACML specification. So I may be a little biased.

Lack of broad adoption

The first reason for claiming XACML dead is the lack of adoption. Being a techie, I don’t see a lot of customers, so I have to assume Forrester knows better than me.

At last year’s XACML Seminar in the Netherlands, there were indeed not many people who actually used XACML, but the room was filled with people who were at least interested enough to pay to hear about practical experiences with XACML.

I also know that XACML is in use at large enterprises like Bank of America, Bell Helicopter, and Boeing, to name just some Bs. And the supplier side is certainly not the problem.

So there is some adoption, but I grant that it’s not broad.

Inability to serve the federated, extended enterprise

The claim here is that XACML was designed to meet the authorization needs of the monolithic enterprise, where all users are managed centrally in AD.

I don’t understand this statement at all, as there is nothing in the XACML spec that depends on centrally managed users.

Especially in combination with SAML, XACML can handle federated scenarios perfectly fine.

In my current project, we’re using XACML in a multi-tenant environment where each tenant uses their own identity provider. No problem.

PDP does a lot of complex things that it does not inform the PEP about

The PDP is apparently supposed to tell the PEP why access is denied. I don’t get that: I’ve never seen an application that greyed out a button and included the text “You need the admin role to perform this operation”.

Maybe this is about testing access control policies. Or maybe I just don’t understand the problem. I’d love to learn more about this.

Not suitable for cloud and distributed deployment

I guess what they mean is that fine-grained access control doesn’t work well in high-latency environments. If so, sure.

XACML doesn’t prescribe how fine-grained your policies have to be, however, so I can’t see how this could be XACML’s fault. That’s like blaming my keyboard for allowing me to type more characters than fit in a tweet.

Actually, I’d say that XACML works very well in the cloud. And with the recently approved REST profile and the upcoming JSON profile, XACML will be even better suited for cloud solutions.

Commercial support is non-existent

This is lack of adoption again.

BTW, absolute claims like “there is no software library with PEP support” turn you into an easy target: all it takes is one counterexample to prove you wrong.

Refactoring and rebuilding existing in-house applications is not an option

This, I think, is the main reason for slow adoption: legacy applications create inertia. We see the same thing with SSO. Even today, there are EMC internal applications that require me to maintain separate credentials.

The problem is worse for authorization. Authentication is a one-time thing at the start of a session, but authorization happens all the time. There are simply more places in an application that require modification.

There may be some light at the end of the tunnel, however.

History shows that inertia can be overcome by a large enough force.

That force might be the changing threat landscape. We’ll see.

OAuth supports the mobile application endpoint in a lightweight manner

OAuth does well in the mobile space. One reason is that mobile apps usually provide focused functionality that doesn’t require fine-grained access control decisions. It remains to be seen whether that continues to be true as mobile apps get more advanced.

Of course, if all your access control needs can be met with one yes/no question, then using XACML is overkill. That doesn’t, however, mean there is no place for XACML in the many, many places where life is not that simple.

What do you think?

All in all, I’m certainly not convinced by Forrester’s claim that XACML is dead. Are you? If XACML were buried, what would you use instead?

Update: Others have joined in the discussion and confirmed that XACML is not dead:

  • Gary from XACML vendor Axiomatics
  • Danny from XACML vendor Dell
  • Anil from open source XACML implementation JBoss PicketBox
  • Ian from analyst Gartner

Update 2: More people joined the discussion. One is confused, one is confusing, and Forrester’s Eve Maler (of SGML and UMA fame) backs her colleague.

Update 3: Another analyst joins the discussion: KuppingerCole doesn’t think XACML is dead either.

Update 4: CA keeps supporting XACML in their SiteMinder product.

Bridging the Client-Server Divide

Most software these days is delivered in the form of web applications, and the move towards cloud computing will only emphasize this trend.

Web apps consist of client and server parts, where the client part has been getting bigger lately to deliver a richer user experience.

This split has implications for developers, because the technologies used on the client and server parts are often different.

The client is ruled by HTML, CSS, and JavaScript, while the server is most often developed using JVM- or .NET-based languages like Java and C#.

Disadvantages of Different Client and Server Technologies

Developers of web applications risk becoming either specialists confined to a single part of the stack or polyglot programmers.

Polyglot programming is the practice of knowing and using many programming languages. There are both advantages and disadvantages associated with polyglot programming. I believe the overriding disadvantage is the context switching involved, which degrades productivity and opens the doors to extra bugs.

Being a specialist has advantages and disadvantages as well. A big disadvantage I see is the “us versus them”, or “not my problem” culture that can arise. In general, Agile teams prefer generalists.

Bringing Server Technologies to the Client

Many attempts have been made at bridging the gap between client and server. Most of these attempts were about bringing server-side technologies to the client.

Java on the client has failed to reach widespread adoption, and now that many people advise disabling Java applets altogether for security reasons, it seems increasingly unlikely that it ever will.

Bringing .NET to the client has likewise failed as Silverlight adoption continues to drop.

Another idea is to translate from server to client technologies. Many languages can now be compiled to JavaScript. The most mature effort is Google Web Toolkit (GWT), which translates from Java. The main problem with GWT is that it supports only a small subset of Java.

All in all, I don’t feel there currently is a satisfactory way of using server technologies on the client.

Bringing Client Technologies to the Server

So what about the reverse? There is really only one client-side technology worth looking at today: JavaScript. Its only rival, Flash, is quickly losing out due to lack of support from Apple and the rise of HTML5.

JavaScript on the server is starting to make inroads, thanks to the Node.js platform.

It is used by the Cloud9 IDE, for example, and supported by Platform-as-a-Service providers like CloudFoundry and Heroku.

What do you think?

If I had to put my money on any unification approach, it would be Node.js.

Do you agree? What needs to happen to make this a common way of developing web apps? Please let me know your thoughts in the comments.

Data Classification In the Cloud

Whenever a bug report comes in, I subconsciously classify it according to how it impacts the customer’s ability to derive value from the product.

Many software development companies have policies that formalize such classifications, e.g. into critical, high, medium, and low priority.

One can take that very far, like the Common Weakness Scoring System (CWSS) for classifying security vulnerabilities.

Data classification

Classifications are useful, because they compress a vast set of possibilities into a small set of categories. This makes it easier to decide what to do.

Classification applied to data stored in computer systems is called data classification. There are different reasons for classifying data.

One is to determine appropriate access control policies. It is wasteful to protect all your information at the highest level, so you want to divide up your data into a small number of buckets and take measures that are appropriate for each bucket.

Another important use case of data classification is to drive compliance efforts. If you process health care data, for instance, you may have to comply with the Health Insurance Portability and Accountability Act (HIPAA). This data requires different controls to be put in place than credit card data that is covered by PCI DSS.
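
As a sketch of what such buckets might look like in code (the categories and controls here are illustrative assumptions on my part, not taken from any particular regulation):

```java
import java.util.EnumSet;
import java.util.Set;

// Illustrative controls that a classification bucket might require.
enum Control { ENCRYPT_IN_TRANSIT, ENCRYPT_AT_REST, AUDIT_ACCESS, RESTRICT_TO_NEED_TO_KNOW }

// A small number of buckets, each with measures appropriate for that bucket.
enum DataClass {
    PUBLIC(EnumSet.noneOf(Control.class)),
    INTERNAL(EnumSet.of(Control.ENCRYPT_IN_TRANSIT)),
    CONFIDENTIAL(EnumSet.of(Control.ENCRYPT_IN_TRANSIT, Control.ENCRYPT_AT_REST, Control.AUDIT_ACCESS)),
    REGULATED(EnumSet.allOf(Control.class)); // e.g. data covered by HIPAA or PCI DSS

    private final Set<Control> controls;

    DataClass(Set<Control> controls) {
        this.controls = controls;
    }

    public Set<Control> requiredControls() {
        return controls;
    }
}
```

The point of the small, fixed set of categories is exactly the compression mentioned above: every piece of data maps to one bucket, and each bucket maps to a known set of measures.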

Data in the Cloud

Things get more interesting in the cloud.

As a cloud user, you are still subject to the same laws and regulations as before, but now you’ve given away part of the control to your cloud provider. This means you have to make sure that they implement the required controls.

If the regulations you must comply with come with assessments, then those must extend to the cloud provider. Many cloud providers will not allow you to come in and do such assessments yourself, but they may allow assessments from third parties, like TRUSTe for a Safe Harbor assessment.

As a cloud provider, you will want to implement as many controls as possible, to support the maximum number of laws and regulations that your customers must comply with.

Both parties benefit from clear contracts. Part of such a contract may be a Data Protection Agreement that lists the duties of both parties in classifying and properly protecting data to meet security requirements and regulations.

If you’re unsure how to do all of this right, then you may want to look for guidance from the Cloud Security Alliance (CSA).

Likely Candidates for Frictionless Development Environments

Last time I reviewed the book on Consumption Economics, which explains how technology companies and their products will have to change to survive the brave new world that we’re entering.

So what would we find if we take the lessons from the book and apply them to our own software development environment? I think the answer would be surprisingly close to what I’ve called a Frictionless Development Environment (FDE) before.

To be honest, I’ve only started thinking more systematically about FDEs after reading Consumption Economics. In Five Essential Components of a Frictionless Development Environment, I’ve laid out the major building blocks of an FDE: cloud computing, big data analytics, recommendation engines, plug-in architecture, and open source.

It may be too soon to expect existing solutions to have all of those, but let’s see where we stand. There are already some cloud development environments. Most of these are geared towards web developers and offer a limited set of languages (mostly JavaScript). Some offer a big enough range to be interesting to a wide range of developers.

Big data analytics and recommendation engines are big features that are probably not there yet, but could always be added later. What’s more important is to look for a plug-in architecture and particularly for open source. These are fundamental architectural and business decisions.

Using open source as a criterion reduces our list to Cloud9 and Orion. Both have a plug-in architecture. Orion is an Eclipse project, but Cloud9 seems more mature. Be sure to follow both Cloud9 and Orion.

So what do you think? Would any of these cloud IDEs work for you? What other open source cloud IDEs are out there?

XACML In The Cloud

The eXtensible Access Control Markup Language (XACML) is the de facto standard for authorization.

The specification defines an architecture (see image on the right) that relates the different components that make up an XACML-based system.

This post explores a variation on the standard architecture that is better suited for use in the cloud.

Authorization in the Cloud

In cloud computing, multiple tenants share the same resources that they reach over a network. The entry point into the cloud must, of course, be protected using a Policy Enforcement Point (PEP).

Since XACML implements Attribute-Based Access Control (ABAC), we can use an attribute to indicate the tenant, and use that attribute in our policies.

We could, for instance, use the following standard attribute, which is defined in the core XACML specification: urn:oasis:names:tc:xacml:1.0:subject:subject-id-qualifier.

This identifier indicates the security domain of the subject. It identifies the administrator and policy that manages the name-space in which the subject id is administered.

Using this attribute, we can target policies to the right tenant.

Keeping Policies For Different Tenants Separate

We don’t want to mix policies for different tenants.

First of all, we don’t want a change in policy for one tenant to ever be able to affect a different tenant. Keeping those policies separate is one way to ensure that can never happen.

We can achieve the same goal by keeping all policies together and carefully writing top-level policy sets. But we are better off employing the security best practice of segmentation and keeping policies for different tenants separate in case there was a problem with those top-level policies or with the Policy Decision Point (PDP) evaluating them (defense in depth).

Multi-tenant XACML Architecture

We can use the composite pattern to implement a PDP that our cloud PEP can call.

This composite PDP will extract the tenant attribute from the request, and forward the request to a tenant-specific Context Handler/PDP/PIP/PAP system based on the value of the tenant attribute.

In the figure on the right, the composite PDP is called Multi-tenant PDP. It uses a component called Tenant-PDP Provider that is responsible for looking up the correct PDP based on the tenant attribute.
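
In code, a sketch of this composite pattern might look as follows; the Decision, Request, and Pdp types are hypothetical stand-ins for illustration, not part of any real XACML API:

```java
import java.util.Map;

// Hypothetical stand-ins, not a real XACML API.
enum Decision { PERMIT, DENY }

interface Request {
    String attributeValue(String attributeId);
}

interface Pdp {
    Decision evaluate(Request request);
}

// The Multi-tenant PDP: routes each request to that tenant's own PDP.
final class MultiTenantPdp implements Pdp {
    private final Map<String, Pdp> tenantPdpProvider; // the Tenant-PDP Provider

    MultiTenantPdp(Map<String, Pdp> tenantPdpProvider) {
        this.tenantPdpProvider = tenantPdpProvider;
    }

    @Override
    public Decision evaluate(Request request) {
        // Extract the tenant attribute and look up the tenant-specific PDP.
        String tenant = request.attributeValue(
            "urn:oasis:names:tc:xacml:1.0:subject:subject-id-qualifier");
        Pdp pdp = tenantPdpProvider.get(tenant);
        // Fail closed: requests for unknown tenants are denied.
        return pdp == null ? Decision.DENY : pdp.evaluate(request);
    }
}
```

Because each tenant’s policies live behind their own PDP, a mistake in one tenant’s policies can never leak into a decision for another tenant.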

Outbound Passwords

Much has been written on how to securely store passwords. This sort of advice deals with the common situation where your users present their passwords to your application in order to gain access.

But what if the roles are reversed, and your application is the one that needs to present a password to another application? For instance, your web application must authenticate with the database server before it can retrieve data.

Such credentials are called outbound passwords.

Outbound Passwords Must Be Stored Somewhere

Outbound passwords must be treated like any other password. For instance, they must be as strong as any password.

But there is one exception to the usual advice about passwords: outbound passwords must be written down somehow. You can’t expect a human to type in a password every time your web application connects to the database server.

This raises the question of how we’re supposed to write the outbound password down.

Storing Outbound Passwords In Code Is A Bad Idea

The first thing that may come to mind is to simply store the outbound password in the source code. This is a bad idea.

With access to source code, the password can easily be found using a tool like grep. But even access to binary code gives an attacker a good chance of finding the password. Tools like javap produce output that makes it easy to go through all strings. And since the password must be sufficiently strong, an attacker can just concentrate on the strings with the highest entropy and try those as passwords.
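
To make that concrete, here is a sketch of how an attacker might rank candidate strings by Shannon entropy; extracting the strings from the binary (e.g. with javap) is assumed to have happened already:

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public final class EntropyRanker {

    // Shannon entropy in bits per character: H = -sum(p * log2(p)).
    static double shannonEntropy(String s) {
        Map<Character, Integer> counts = new HashMap<>();
        for (char c : s.toCharArray()) {
            counts.merge(c, 1, Integer::sum);
        }
        double entropy = 0.0;
        for (int count : counts.values()) {
            double p = (double) count / s.length();
            entropy -= p * Math.log(p) / Math.log(2);
        }
        return entropy;
    }

    public static void main(String[] args) {
        // Candidate strings as an attacker might have extracted them.
        List<String> candidates = List.of("OK", "userName", "s8!kP#xQ2v9zL");
        candidates.stream()
            .sorted(Comparator.comparingDouble(EntropyRanker::shannonEntropy).reversed())
            .forEach(s -> System.out.printf("%5.2f bits/char  %s%n", shannonEntropy(s), s));
    }
}
```

A strong password floats straight to the top of such a ranking, which is exactly why hard-coding it is so dangerous.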

To add insult to injury, once a hard-coded password is compromised, there is no way to recover from the breach without patching the code!

Solution #1: Store Encrypted Outbound Passwords In Configuration Files

So the outbound password must be stored outside of the code, and the code must be able to read it. The most logical place, then, is a configuration file.

To prevent an attacker from reading the outbound password, it must be encrypted using a strong encryption algorithm, like AES. But now we’re faced with a different version of the same problem: how does the application store the encryption key?

One option is to store the encryption key in a separate configuration file, with stricter permissions set on it. That way, most administrators will not be able to access it. This scheme is certainly not 100% safe, but at least it will keep casual attackers out.
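
As a minimal sketch, assuming the key and the encrypted password are stored Base64-encoded in separate files, and that a 12-byte AES-GCM nonce is prepended to the ciphertext (the file names and layout are my assumptions), decryption could look like this:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public final class OutboundPasswordReader {

    public static char[] readOutboundPassword() throws Exception {
        // The key file carries stricter permissions than the password file.
        byte[] key = Base64.getDecoder().decode(
            Files.readAllLines(Paths.get("/etc/myapp/db-password.key")).get(0));
        byte[] blob = Base64.getDecoder().decode(
            Files.readAllLines(Paths.get("/etc/myapp/db-password.enc")).get(0));

        // Assumed layout: 12-byte GCM nonce, then ciphertext plus auth tag.
        GCMParameterSpec nonce = new GCMParameterSpec(128, blob, 0, 12);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), nonce);
        byte[] plaintext = cipher.doFinal(blob, 12, blob.length - 12);
        return new String(plaintext, StandardCharsets.UTF_8).toCharArray();
    }
}
```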

A more secure option is to use key management services, perhaps based on the Key Management Interoperability Protocol (KMIP).

In this case, the encryption key is not stored with the application, but in a separate key store. KMIP also supports revoking keys in case of a breach.

Solution #2: Provide Outbound Passwords During Start Up

An even more secure solution is to only store the outbound password in memory. This requires that administrators provide the password when the application starts up.

You can even go a step further and use a split-key approach, where multiple administrators each provide part of a key, while nobody knows the whole key. This approach is promoted in the PCI DSS standard.
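
As a hedged sketch of one common split-key construction (PCI DSS doesn’t prescribe this particular scheme), XOR-based splitting gives each administrator a share such that all shares are needed to recover the key, while any incomplete set of shares reveals nothing:

```java
import java.security.SecureRandom;

public final class KeySplitter {

    // Split a key into n shares: n-1 random shares plus one share that is
    // the key XORed with all the random shares.
    public static byte[][] split(byte[] key, int shares) {
        SecureRandom random = new SecureRandom();
        byte[][] parts = new byte[shares][key.length];
        byte[] last = key.clone();
        for (int i = 0; i < shares - 1; i++) {
            random.nextBytes(parts[i]);
            for (int j = 0; j < key.length; j++) {
                last[j] ^= parts[i][j];
            }
        }
        parts[shares - 1] = last;
        return parts;
    }

    // XORing all shares together reconstructs the original key.
    public static byte[] combine(byte[][] parts) {
        byte[] key = new byte[parts[0].length];
        for (byte[] part : parts) {
            for (int j = 0; j < key.length; j++) {
                key[j] ^= part[j];
            }
        }
        return key;
    }
}
```

At start up, the application would collect one share from each administrator and call combine to reconstruct the key in memory only.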

Providing keys at start up may be more secure than storing the encryption key in a configuration file, but it has a big drawback: it prevents automatic restarts. The fact that humans are involved at all makes this approach impractical in a cloud environment.

Creating Outbound Passwords

If your application has some control over the external system that it needs to connect to, it may be able to determine the outbound password, just like your users define their passwords for your application.

For instance, in a multi-tenant environment, data for the tenants might be stored in separate databases, and your application may be able to pick an outbound password for each one of those as it creates them.

Created outbound passwords must be sufficiently strong. One way to accomplish that is to use random strings, with characters from different character classes. Another approach is Diceware.

Make sure to use a good random number generator. In Java, for example, prefer SecureRandom over plain old Random.
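
Putting those two pieces of advice together, a minimal sketch could look like this (the alphabet and length handling are arbitrary choices of mine):

```java
import java.security.SecureRandom;

public final class OutboundPasswordGenerator {

    // Mix character classes: upper case, lower case, digits, punctuation.
    private static final String ALPHABET =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%&*+-=?";

    private static final SecureRandom RANDOM = new SecureRandom();

    public static String generate(int length) {
        StringBuilder password = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            password.append(ALPHABET.charAt(RANDOM.nextInt(ALPHABET.length())));
        }
        return password.toString();
    }
}
```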

LinkedIn Incident Shows Need for SecaaS

Security is a negative feature.

What I mean by that is that you will never get kudos for implementing a secure system, but you certainly will get a lot of flak for an insecure system, as the recent LinkedIn incident shows. Therefore, security is a distraction for most developers; they’d rather focus on their core business of implementing features.

Where have we heard that before?

Cloud Computing to the Rescue: SecaaS

Cloud computing promises to free businesses from having to buy, install, and maintain their own software and hardware, so they can focus on their core business.

We can apply this idea to security as well. The Cloud Security Alliance (CSA) calls this Security as a Service, or SecaaS.

For instance, instead of learning how to store passwords securely, we could just use an authentication service and not store passwords ourselves at all.

The alternative of using libraries, while infinitely better than rolling our own, is less attractive than the utility model. We’d still have to update the libraries ourselves. Also, with libraries, we depend on a language-level API, which segments the market, making it less efficient.

There are some hurdles to tackle before SecaaS goes mainstream. Let’s take a look at one issue close to my heart.

Security as a Service Requires Standards

The utility model of cloud computing requires standards. This is also true for SecaaS.

Each type of security service should have one or at most a few standards, to level the playing field for vendors and to allow for easy switching between offerings of different vendors.

For authorization, I think we already have such a standard: XACML.

We also need a more general standard that applies to all security services, so that we have a simple programming model for integrating different security services into our applications. I believe that standard should be REST.

Update: The market seems to agree with this. Just see the figures in this presentation.

This is one of the reasons why we’re working on a REST profile for XACML. Stay tuned for more information on that.
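
To give a flavor of that programming model, here is a hedged sketch of what asking a PDP for a decision over REST might look like; the endpoint URL, media type, and payload are assumptions on my part, since the profiles define the actual details:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public final class PdpClient {

    public static String decide(String xacmlRequestJson) throws Exception {
        // Hypothetical PDP endpoint and media type, for illustration only.
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://pdp.example.com/authorization/pdp"))
            .header("Content-Type", "application/xacml+json")
            .POST(HttpRequest.BodyPublishers.ofString(xacmlRequestJson))
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // carries the Permit/Deny decision
    }
}
```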

So what do you think about SecaaS? Let me know in the comments.