Five Essential Components of a Frictionless Development Environment

One of the challenges of maintaining a consistent programming style across a team is keeping everyone's workspace settings the same, especially when it comes to compiler warnings.

Every time a new member joins the team, an existing member sets up a new environment, or a new version of the compiler comes along, you have to synchronize settings.

My team recently started using Workspace Mechanic, an Eclipse plug-in that allows you to save those settings in an XML file that you put under source control.

The plug-in periodically compares the workspace settings with the contents of that file. It notifies you in case of differences, and allows you to update your environment with a couple of clicks.

Towards a Frictionless Development Environment

Workspace Mechanic is a good example of a lubricant: a tool that reduces friction in the development process.

My ideal is to take this to the extreme with a Frictionless Development Environment (FDE) in which all software development activities go very smoothly.

Let’s see what we would likely need to make such an FDE a reality.

In this post, I will look at a very small example that uncovers some of the basic components of an FDE.

Example: Creating the Class Under Test

In Test-Driven Development, we start out with a test and there is no class under test yet. Eclipse has a Quick Fix to create the class, but we still have to manually invoke it and select a source folder to store it in (assuming you have different source folders for main and test code).

It would be nicer if the IDE understood what you're trying to do, automatically created the skeleton for the class under test, and saved it in the right place.
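
As a rough sketch of what I mean (the package, class, and method names below are made up purely for illustration), consider a test in the test source folder that refers to a class that doesn't exist yet:

    // PriceCalculatorTest.java, in the test source folder (e.g. src/test/java).
    package com.example.shop;

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class PriceCalculatorTest {

        @Test
        public void appliesTenPercentDiscount() {
            PriceCalculator calculator = new PriceCalculator();
            assertEquals(90.0, calculator.discountedPrice(100.0, 0.10), 0.001);
        }
    }

An FDE that understands Test-Driven Development could notice the unresolved reference and, because the test lives in the test source folder, generate something like the following skeleton in the main source folder without being asked:

    // PriceCalculator.java, generated in the main source folder (e.g. src/main/java).
    package com.example.shop;

    public class PriceCalculator {

        public double discountedPrice(double price, double discount) {
            // The FDE only creates the skeleton; the real behavior is up to the developer.
            throw new UnsupportedOperationException("Not implemented yet");
        }
    }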

The crux is for the tool to understand what you are doing, or else it could easily draw the wrong conclusion and create all kinds of artifacts that you don’t want.

This kind of knowledge is highly user-specific and potentially even project-specific. It is therefore imperative that the tool collect usage data and use it to optimize its assistance. We’re likely talking about big data here.

Given the fact that it’s expensive in terms of storage and computing power to collect and analyze these statistics, it makes sense to do this in a cloud environment.

That will also allow for quicker learning of usage patterns when working on different machines, like in the office and at home. More importantly, it allows building on usage patterns of other people.

What this example also shows is that we’ll need many small, very focused lubricants. This makes it unlikely that any one organization could provide all the lubricants for an FDE that suits everybody, even for a specific language.

The only practical way of assembling an FDE is through a plug-in architecture for lubricants.

Building an FDE will be a huge effort. To realize it in the short term, we’ll probably need an open source model; no single company could put in the resources required to pull this off in even a couple of years.

The Essential Components of a Frictionless Development Environment

This small example uncovered the following building blocks for a Frictionless Development Environment:

  1. Cloud Computing will provide economies of scale and access from anywhere
  2. Big Data Analytics will discern usage patterns
  3. Recommendation Engines will convert usage patterns into context-aware lubricants
  4. A Plug-in architecture will allow different parties to contribute lubricants and usage analysis tools
  5. An Open Source model will allow many organizations and individuals to collaborate

What do you think?

Do you agree with the proposed components of an FDE? Did I miss something?

Please share your thoughts in the comments.

How Friction Slows Us Down

I once joined a project where running the “unit” tests took three and a half hours.

As you may have guessed, the developers didn’t run the tests before they checked in code, resulting in a frequently red build.

Running the tests simply created too much friction for the developers.

I define friction as anything that resists the developer while she is producing software.

Since then, I’ve spotted friction in numerous places while developing software.

Friction in Software Development

Since friction impacts productivity negatively, it’s important that we understand it. Here are some of my observations:

  • Friction can come from different sources.
    It can result from your tool set, like when you have to wait for Perforce to check out a file over the network before you can edit it.
    Friction can also result from your development process, for example when you have to wait for the QA department to test your code before it can be released.
  • Friction can operate on different time scales.
    Some friction slows you down a lot, while other friction is much more benign. For instance, waiting for the next set of requirements might keep you from writing valuable software for weeks.
    On the other hand, waiting for someone to review your code changes may take only a couple of minutes.
  • Friction can be more than simple delays.
    It also rears its ugly head when things are more difficult than they ought to be.
    In the vi editor, for example, you must switch between command and insert modes. Seasoned vi users are just as fast as with editors that don’t have that separation. Yet they do have to keep track of which mode they are in, which gives them a higher cognitive load.

Lubricating Software Development

There has been a trend to decrease friction in software development.

Tools like Integrated Development Environments have eliminated many sources of friction.

For instance, Eclipse will automatically compile your code when you save it.

Automated refactorings decrease both the time and the cognitive load required to make certain code changes.

On the process side, things like Agile development methodologies and the DevOps movement have eliminated or reduced friction. For instance, continuous deployment automates the release of software into production.

These lubricants have given us a fighting chance in a world of increasing complexity.

Frictionless Software Development

It’s fun to think about how far we could take these improvements, and what the ultimate Frictionless Development Environment (FDE) might look like.

My guess is that it would call for combining some of the same trends we already see in consumer and enterprise software products. Cloud computing will play a big role, as will simplified user interaction and access from anywhere.

What do you think?

What frictions have you encountered? Do you think frictions are the same as waste in Lean?

What have you done to lubricate the frictions away? What would your perfect FDE look like?

Please let me know your thoughts in the comments.

XACML In The Cloud

The eXtensible Access Control Markup Language (XACML) is the de facto standard for authorization.

The specification defines an architecture (see image on the right) that relates the different components that make up an XACML-based system.

This post explores a variation on the standard architecture that is better suited for use in the cloud.

Authorization in the Cloud

In cloud computing, multiple tenants share the same resources that they reach over a network. The entry point into the cloud must, of course, be protected using a Policy Enforcement Point (PEP).

Since XACML implements Attribute-Based Access Control (ABAC), we can use an attribute to indicate the tenant, and use that attribute in our policies.

We could, for instance, use the following standard attribute, which is defined in the core XACML specification: urn:oasis:names:tc:xacml:1.0:subject:subject-id-qualifier.

This identifier indicates the security domain of the subject. It identifies the administrator and policy that manages the name-space in which the subject id is administered.

Using this attribute, we can target policies to the right tenant.

Keeping Policies For Different Tenants Separate

We don’t want to mix policies for different tenants.

First of all, we don’t want a change in policy for one tenant to ever be able to affect a different tenant. Keeping those policies separate is one way to ensure that can never happen.

We could achieve the same goal by keeping all policies together and carefully writing top-level policy sets, but we are better off employing the security best practice of segmentation: keeping policies for different tenants separate protects us in case there is ever a problem with those top-level policies or with the Policy Decision Point (PDP) evaluating them (defense in depth).

Multi-tenant XACML Architecture

We can use the composite pattern to implement a PDP that our cloud PEP can call.

This composite PDP will extract the tenant attribute from the request, and forward the request to a tenant-specific Context Handler/PDP/PIP/PAP system based on the value of the tenant attribute.

In the figure on the right, the composite PDP is called Multi-tenant PDP. It uses a component called Tenant-PDP Provider that is responsible for looking up the correct PDP based on the tenant attribute.
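
Here is a minimal sketch of that dispatch in Java. The Request, Response, Pdp, and TenantPdpProvider types are placeholders I made up for illustration; a real XACML implementation has much richer interfaces:

    import java.util.Map;

    // Hypothetical minimal abstractions, for illustration only.
    interface Request {
        String getSubjectAttribute(String attributeId);
    }

    interface Response {
    }

    interface Pdp {
        Response evaluate(Request request);
    }

    // Composite PDP that forwards each request to the PDP of the requesting tenant.
    class MultiTenantPdp implements Pdp {

        // Standard attribute from the core XACML specification, used here to carry the tenant.
        private static final String TENANT_ATTRIBUTE =
                "urn:oasis:names:tc:xacml:1.0:subject:subject-id-qualifier";

        private final TenantPdpProvider tenantPdpProvider;

        MultiTenantPdp(TenantPdpProvider tenantPdpProvider) {
            this.tenantPdpProvider = tenantPdpProvider;
        }

        @Override
        public Response evaluate(Request request) {
            // Extract the tenant attribute and delegate to that tenant's own PDP.
            String tenant = request.getSubjectAttribute(TENANT_ATTRIBUTE);
            return tenantPdpProvider.pdpFor(tenant).evaluate(request);
        }
    }

    // Looks up the PDP configured for a given tenant.
    class TenantPdpProvider {

        private final Map<String, Pdp> pdpsByTenant;

        TenantPdpProvider(Map<String, Pdp> pdpsByTenant) {
            this.pdpsByTenant = pdpsByTenant;
        }

        Pdp pdpFor(String tenant) {
            Pdp pdp = pdpsByTenant.get(tenant);
            if (pdp == null) {
                throw new IllegalArgumentException("No PDP configured for tenant " + tenant);
            }
            return pdp;
        }
    }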

LinkedIn Incident Shows Need for SecaaS

Security is a negative feature.

What I mean by that is that you will never get kudos for implementing a secure system, but you certainly will get a lot of flak for an insecure system, as the recent LinkedIn incident shows. Therefore, security is a distraction for most developers; they’d rather focus on their core business of implementing features.

Where have we heard that before?

Cloud Computing to the Rescue: SecaaS

Cloud computing promises to free businesses from having to buy, install, and maintain their own software and hardware, so they can focus on their core business.

We can apply this idea to security as well. The Cloud Security Alliance (CSA) calls this Security as a Service, or SecaaS.

For instance, instead of learning how to store passwords securely, we could just use an authentication service and not store passwords ourselves at all.

The alternative of using libraries, while infinitely better than rolling our own, is less attractive than the utility model. We’d still have to update the libraries ourselves. Also, with libraries, we depend on a language-level API, which segments the market and makes it less efficient.

There are some hurdles to tackle before SecaaS will go mainstream. Let’s take a look at one issue close to my heart.

Security as a Service Requires Standards

The utility model of cloud computing requires standards. This is also true for SecaaS.

Each type of security service should have one or at most a few standards, to level the playing field for vendors and to allow for easy switching between offerings of different vendors.

For authorization, I think we already have such a standard: XACML.

We also need a more general standard that applies to all security services, so that we have a simple programming model for integrating different security services into our applications. I believe that standard should be REST.
Update: The market seems to agree with this. Just see the figures in this presentation.

This is one of the reasons why we’re working on a REST profile for XACML. Stay tuned for more information on that.
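
To give a feel for the kind of programming model I have in mind, here is a sketch of a PEP posting a XACML request to an authorization service over plain HTTP. The endpoint URL, the media type header, and the exact shape of the payload are illustrative assumptions on my part, not something defined by the profile:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RestPepSketch {

        public static void main(String[] args) throws Exception {
            // Hypothetical PDP endpoint offered by a SecaaS provider.
            URI pdpEndpoint = URI.create("https://authz.example.com/pdp");

            // Illustrative XACML 3.0 request; a real PEP would fill in the caller's actual attributes.
            String xacmlRequest =
                    "<Request xmlns=\"urn:oasis:names:tc:xacml:3.0:core:schema:wd-17\""
                    + " CombinedDecision=\"false\" ReturnPolicyIdList=\"false\">"
                    + "<Attributes Category=\"urn:oasis:names:tc:xacml:1.0:subject-category:access-subject\">"
                    + "<Attribute AttributeId=\"urn:oasis:names:tc:xacml:1.0:subject:subject-id\""
                    + " IncludeInResult=\"false\">"
                    + "<AttributeValue DataType=\"http://www.w3.org/2001/XMLSchema#string\">alice</AttributeValue>"
                    + "</Attribute>"
                    + "</Attributes>"
                    + "</Request>";

            HttpRequest request = HttpRequest.newBuilder(pdpEndpoint)
                    .header("Content-Type", "application/xacml+xml")
                    .POST(HttpRequest.BodyPublishers.ofString(xacmlRequest))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // The response body carries the XACML decision (Permit, Deny, NotApplicable, Indeterminate).
            System.out.println(response.body());
        }
    }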

So what do you think about SecaaS? Let me know in the comments.