Software Development as an Economic-Cooperative Game

I was reading “The End of Software Engineering and the Start of Economic-Cooperative Gaming” by Alistair Cockburn. These two paragraphs frame software development differently: as an economic-cooperative game, rather than through the commonplace software-engineering frame.

“Software development is a resource-limited, goal-directed, cooperative game, whose moves consist of invention and communication. The people, who are inventing, manipulating and communicating information across multiple heads, must share their information in order to produce the solution.

This means the speed of the project is proportional to the speed at which information moves between people’s heads. Every obstacle to detecting and moving information between heads slows the project. Understanding and attending to this issue is essential to playing the game effectively.”

Software Development is more than coding.

There are definitely coders out there. But the act of coding, or writing syntax to create a program, does not by itself make you a software developer. A true software developer is playing this game, and as a professional they are asking, “What do I need to become to play the game at the next level?” The software developer’s abilities include programming, but they also include learning to make these time and capability tradeoffs and communicating more effectively to build shared understanding.

One of the main theses in the post is that invention and communication increase in proportion to the speed at which information moves between people. More specifically than just information, it is how fast shared understanding moves between people’s heads. This explains why:

  • Delivering the system soon and inexpensively competes with creating an advantageous position for the next game. [Alistair]
  • Creating inexpensive markers competes with creating them to work for a wider range of new people. [Alistair]
  • Keeping the team intact competes with introducing new people. [Alistair]
  • Using a smaller number of highly qualified people (with lower communication costs) competes with using more people of more average capability. [Alistair]

Why I enjoy playing the game of software development.

People will usually equate my saying I love software development with being a “tech guy” who likes to “write code”, or being a “programmer”. These are parts of the craft, but I view myself as participating in the business as a whole. Not all businesses think of software developers as integrated with the business as a whole. Some view software development as a manufacturing task, which would not be integrated and could be delegated to an outsourcing company.

I have drawn a graphic of how I view Alistair’s observation at the bottom of the page. This understanding of software development helped pull back the curtain on why I like certain aspects of it, mainly those experienced through the frame of an economic-cooperative game. When software development is framed as only engineering, or just syntax, or the machine, I lose interest. When I want to learn a new programming language or pattern, I typically create a project for myself to work through, so I can go through the game with the new technology.

Software development is not experienced by an individual as programming, although programming languages are part of the skillset. To me, it is like how the game of basketball is not experienced as players running up and down a court for two hours, although running is a required skill. My enthusiasm is about playing the economic-cooperative game, not so much about a coding language. Another analogy: an author is telling a story (the software), not spending all day typing.

A software developer’s contribution to the business is beyond manufacturing code.

Looking at software development through the lens of this type of economic-cooperative game also gives everyone contributing to a business a pathway to understand how software developers participate beyond production, beyond manufacturing code. Many businesses are going through a digital transformation, where they are increasingly becoming software businesses. Software businesses need good software developers to contribute to the business, because those developers will influence how the business is experienced by customers. A digitally transformed business looking for good coders is like a basketball team looking for good runners. Finding a good runner might not make your basketball team any better.


The diagram shows the span of delivering multiple software goals (releases) across time (Time 1, 2 and 3) and the tradeoffs between them. You have to be goal-oriented during Time 1 while balancing what to keep in mind for Time 2, yet still deliver during Time 1 to meet your goal, and you always have limited resources, which makes the tradeoff necessary. You can put in a better platform in Time 1 to support Time 2, but it will keep you from doing other things in Time 1. Or you can create “throw away code” in Time 1 and realize your Time 1 goal faster, but that decision might make your Time 2 goal smaller or stretch it out longer, because you have to rework the throwaway code instead of continuing to build on top of something already useful in the stack. Coding patterns like the Strategy Pattern can help, but there are also meta patterns, things that are not in the code but are still part of the software you are creating, like the domain concepts, that can help or hurt in this game too.

[Diagram: Software Development.png]
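The Strategy Pattern mentioned above helps with exactly this tradeoff, because it isolates the part you expect to rework. A minimal C# sketch (the exporter names are my own illustrative assumptions, not from the diagram): the Time 1 “throw away” implementation and the Time 2 platform implementation share one interface, so replacing one with the other later does not ripple through the callers.

```csharp
// Shared contract: callers in Time 1 and Time 2 depend only on this.
public interface IReportExporter
{
    string Export(string data);
}

// Quick Time 1 implementation: good enough to hit the first release goal.
public class CsvExporter : IReportExporter
{
    public string Export(string data) => "csv:" + data;
}

// Time 2 implementation: built on the better platform, same contract.
public class PdfExporter : IReportExporter
{
    public string Export(string data) => "pdf:" + data;
}

// The consuming code never changes when the strategy is swapped.
public class ReportService
{
    private readonly IReportExporter exporter;
    public ReportService(IReportExporter exporter) { this.exporter = exporter; }
    public string Run(string data) => exporter.Export(data);
}
```

Swapping `new CsvExporter()` for `new PdfExporter()` at the composition root is the whole migration, which is what makes the Time 1 shortcut cheaper to pay back in Time 2.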


Communicating Between Service Fabric Micro-Services

When you are building a micro-service architecture, you have many internal services, and a service implementing the API Gateway pattern acts as the single API surface for outsiders communicating with your application. The gateway needs to forward requests to your internal micro-services. To do this in Microsoft’s Service Fabric, you can use the Service Fabric Reverse Proxy or the ServicePartitionResolver. Under the hood, the Service Fabric Naming Service and other system services monitor each instance and endpoint. Using the Reverse Proxy, you can communicate with other services without hard coding the endpoints. It is important not to hard code endpoints, because when you start to scale the application you will want it to dynamically use different instances of the same service, like a load balancer.

This is an example of an ASP.NET Core Web API controller Post() method in the Gateway API. It uses the ServicePartitionResolver to resolve an endpoint for the SampleOAuthService micro-service, then forwards the request and returns the response. In a production app, you would also do validation and rate limiting in the Gateway API Post() method.

public async Task<IActionResult> Post()
{
    // Resolve the current endpoint of the SampleOAuthService micro-service.
    var resolver = ServicePartitionResolver.GetDefault();
    var cancellationToken = new System.Threading.CancellationToken();
    var p = await resolver.ResolveAsync(
        new Uri("fabric:/SampleOAuthService/IdentityWebApi"),
        new ServicePartitionKey(),
        cancellationToken);

    // Read the incoming request body so it can be forwarded unchanged.
    var reader = new StreamReader(this.Request.Body);
    var body = await reader.ReadToEndAsync();
    var content = new StringContent(body, System.Text.Encoding.UTF8, "application/x-www-form-urlencoded");

    // The resolved address is a JSON document listing the replica's endpoints.
    var http = new HttpClient();
    JObject addresses = JObject.Parse(p.GetEndpoint().Address);
    string primaryReplicaAddress = (string)addresses["Endpoints"].First();

    // Forward the request to the internal service and relay its response.
    var url = primaryReplicaAddress + "/oauth/token";
    var response = await http.PostAsync(url, content);
    var responseContent = await response.Content.ReadAsStringAsync();
    JObject responseObject = JObject.Parse(responseContent);
    return new OkObjectResult(responseObject);
}
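For comparison, the Service Fabric Reverse Proxy mentioned above removes the manual resolution step. A minimal sketch, assuming the reverse proxy is enabled on its default port 19081 and using the same application and service names as the example above:

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class ReverseProxyForwarder
{
    // Build the reverse proxy URI: http://localhost:19081/<App>/<Service>/<path>.
    // The proxy resolves the service name itself, replacing ServicePartitionResolver.
    public static string BuildProxyUrl(string application, string service, string path)
        => $"http://localhost:19081/{application}/{service}/{path}";

    public static async Task<string> ForwardTokenRequestAsync(string body)
    {
        var http = new HttpClient();
        var content = new StringContent(body, Encoding.UTF8, "application/x-www-form-urlencoded");
        var url = BuildProxyUrl("SampleOAuthService", "IdentityWebApi", "oauth/token");
        var response = await http.PostAsync(url, content);
        return await response.Content.ReadAsStringAsync();
    }
}
```

Because the proxy re-resolves the service on each request, it also transparently follows replicas as they move between nodes, which the hand-rolled resolver code above would otherwise have to retry itself.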

The Philosophy of World Class Commercial Software Devs

What’s the difference between software developers who are great and those who are ordinary? One thing I have noticed and tried to emulate, as if some of these great software developers and other problem solvers were my coaches, is having an understanding and philosophy behind the code. This is one reason I think I always gravitated toward learning a new technology or coding technique by having a project to use it in. I wanted to apply a new technique as quickly as I could.

Richard Feynman, the famous physicist, has a great video from long ago on the difference between knowing and understanding. A lot of developers are at the stage where they know what to do. They “can” solve a problem using technique X. To get to the next level, they might know three or four techniques for solving the problem. Then there is a level beyond this that does not just respond to a problem with a range of solutions, but brings understanding to the table.

What I have observed world class software developers do differently when making commercial software is that they have a philosophy that frames their toolbox and communication. One part of that philosophy is that world class software developers design both the problem and the solution.

A second part of what makes world class software developers different is that they ask two sets of questions, instead of the one most common set. The common thing to do is to be given a problem or task and start right away by breaking it down into parts, working on each part and then integrating the parts back into a whole. The uncommon, world class thing to do is to first ask two questions instead of one: ask “what are the parts?”, but also ask “what is this a part of?” You need to manage up as much as you manage down. Taking a US car apart will show you that it is built for the driver to sit on the left side. But the car won’t tell you why the driver is sitting on the left. Ordinary engineers need to build the driver seat and steering wheel on the left to sell their car in the US, but they don’t need to know why. World class engineers need to know why, and with that understanding they can approach making a car in a different way. That different understanding might lead to the same result in some cases and different results in others. Don’t deny the problem solver their ability to make sense of things and understand things. Unfortunately, this is what a lot of management and compliant software developers do when they relegate engineers to the “back offices” so they can be worked without interruption. They don’t want to be interrupted by the customer, as John Seddon would say.

A third part of what makes world class software developers different is that they see events where other people see nouns and attributes. An example is where one person sees a thing, or focuses on the main interaction, and the world class software developer sees a lifecycle. They might understand that you don’t just start moving things around in the system; you have to first enter things into the system, and when the system becomes too full or noisy you might need to remove items. Adding, moving and removing items would be an example of a lifecycle. I’ve personally seen many times where a large project started with certain assumptions and one or two parts of a lifecycle were completely overlooked by everyone on the project. Take the time and do the rigorous work to understand the lifecycle of things. Otherwise you might have a super productive way of editing items, but have to insert them through a completely ad-hoc process you develop at the last minute, which makes your whole system horrible.
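As a tiny sketch of lifecycle thinking (the `Board` class and item names are illustrative assumptions, not from any real project), the “main interaction” of moving items is just one event between entry and removal:

```csharp
using System.Collections.Generic;

public class Board
{
    private readonly Dictionary<string, (int X, int Y)> items =
        new Dictionary<string, (int X, int Y)>();

    // Entry into the system: designed early, but sometimes bolted on last.
    public void Add(string id, int x, int y) { items[id] = (x, y); }

    // The "main interaction" everyone focuses on.
    public bool Move(string id, int x, int y)
    {
        if (!items.ContainsKey(id)) return false;
        items[id] = (x, y);
        return true;
    }

    // Exit from the system: the step most often overlooked.
    public bool Remove(string id) { return items.Remove(id); }

    public int Count { get { return items.Count; } }
}
```

Enumerating the whole add → move → remove lifecycle up front, even in a stub like this, is the rigorous work that keeps one stage from being discovered at the last minute.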

There are more differences, but those are three obvious ones I’ve seen over the years.

Watch the Feynman video below and see how he describes the concept for physicists; it applies just as readily to software developers and software entrepreneurs.


B4B – Reinventing the Customer-Supplier Relationship – Book Review

I just finished reading B4B: How Technology and Big Data Are Reinventing the Customer-Supplier Relationship, and it helped me understand the company I’ve been working for and others in the industry.

One of the models the book puts forth is a set of four levels of companies. A company can stay at one level or try to progress to the next levels. The authors put emphasis on adding levels with AND, instead of OR.

The four levels are organized by the complexity of the offer and the result the customer is expecting.

  • Level 1: Simple Offer
    • This is like a simple product company, where the manufacturer distributes product to re-sellers that distribute to end users. An example might be a company that sells toasters.
  • Level 2: Complex Offer
    • This is like an IT or enterprise software company that sells a complex system. This offer requires someone to manage the system after purchase. This level of company will sell the product to a customer, like a hospital, and then leave and let the hospital engineers maintain the system.
  • Level 3: Optimize
    • This level of company offers complex products, but adds additional services. An example might be a software vendor that sells a product with an annual maintenance contract. This vendor might remotely monitor the software or perform routine maintenance or other programs to make sure the customer is able to use the product.
  • Level 4: Outcome
    • This level of company offers customers outcomes. An example is a copier machine company that does not sell copiers, but puts copiers in office buildings and charges per page. People are not buying copiers or maintenance contracts; they are buying a copy. Another example is Amazon Web Services offering servers by the minute.

The book takes these categorizations of companies and spells out useful advice for each. Level 4 companies require more automation, not just per-outcome pricing. The authors warn that if a Level 4 company tries to sell using Software-as-a-Service (SaaS) pricing without the necessary level of automation, it will cost the company a lot more. This is because the company will have guaranteed an outcome, but without automation the delivery of each unit will vary. We know this in IT: depending on the employee and other factors, the time required to set up a server can vary.

A lot of small and medium size traditional IT and software vendors are making Level 2 and Level 3 offers, where they sell complex products with annual maintenance contracts. The book argues that because customers are being introduced to Level 4 companies in the consumer space, they will increasingly expect Level 4 companies, like Salesforce and Amazon, in the commercial space.

B4B makes the case that many software vendors’ products offer far more features than are actually used by any individual customer. This gap between the customer’s usage and the available capabilities is something software vendors should take notice of. What ratio of your developers’ work goes into adding new features that might not actually add value for the customer? With this feature/usage gap, those developers have cover to spend an increasingly larger portion of their work time implementing the underlying platform needed to bring their company to Level 4 automated offers. This requires deep instrumentation and an understanding of how customers use the product.
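Closing that feature/usage gap starts with instrumentation. A minimal sketch of per-feature usage counting (the class and method names are my own assumptions, not from the book):

```csharp
using System.Collections.Concurrent;

// Record each feature invocation, then compare the counts against the shipped
// feature list to see which capabilities customers actually use.
public class FeatureUsage
{
    private readonly ConcurrentDictionary<string, long> counts =
        new ConcurrentDictionary<string, long>();

    // Called wherever a feature is invoked; thread-safe increment.
    public void Record(string feature)
    {
        counts.AddOrUpdate(feature, 1, (key, n) => n + 1);
    }

    public long CountOf(string feature)
    {
        long n;
        return counts.TryGetValue(feature, out n) ? n : 0;
    }
}
```

In a real product this would feed a telemetry pipeline rather than an in-memory dictionary, but the principle is the same: features with a count near zero are candidates for redirecting effort toward the Level 4 platform.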

Analysis of The Rising Value of APIs

I read MuleSoft’s whitepaper The Rising Value of APIs: Predictions for 2016 today. Below are some of my thoughts from my own experiences in commercial software development.

IoT in 2016

I agree APIs will be the link between devices and digital services.

There is still a huge opportunity for the right developers to get involved in developing their business’s services with cloud and elasticity in mind. A lot of the software lifecycle in traditional companies I work with still does not take advantage of cloud concepts. Cloud is a host container to them, not the lifecycle of the software they’re developing. These traditional companies will never be able to support IoT’s sheer magnitude unless they adopt the entire cloud lifecycle. Without modernizing the lifecycle, they might breeze through development only to fail when it is time to scale or to deploy an update.

There is very little testing or planning of how the scaling will actually take place. Does your business have a written plan for the process of scaling? For example, does an administrator need to adjust config files, or does your software interact with the cloud host platform and adjust scale automatically? Whether you do it by hand or automatically, does your company have a written description of the specific order in which services should be scaled? Is your company writing software designed for failure? Many traditional companies still have failover environments that are probably only half reliable because no one actively tests them, instead of individual nodes that can fail while the entire service keeps running. Wrapping a traditional business IT system in a façade of APIs and running it on-prem or in Azure won’t give it the reliability expected of cloud services.

One other example of the difference between how traditional companies and cloud businesses deploy software: traditional companies deploy by overwriting the website files on their existing, running servers with the new copies. Cloud businesses use immutable deployments, where they spin up a new instance of the VM or host container with the new version of the software and then do an IP swap.

All of these practices are important to mature your organization to get to peak performance for delivering cloud services.

Rise of the API Economy

I agree with the whitepaper that companies will continue to open up APIs internally, but I’m not sure traditional companies will be the main ones opening APIs. I think companies should create more APIs and think about how to programmatically offer their capabilities and data, instead of relying so heavily on offering only visual solutions like applications and reports.

It takes a certain digital-native type of company, or a traditional company that has transformed to think deeply digitally, to understand that APIs are business related and not a developer rabbit hole. The current-state pattern I see in traditional companies is a heavy focus on UI, applications and ETL processes to import or export data. For example, traditional companies tend to have “report writers” who write SQL that shows results on a web page or PDF, but not many traditional companies have “API developers”. The smart traditional companies will move on to become the innovative companies that make the digital transformation.


Are APIs Semantics?

There is a blog post, Thinking Outside-In: How APIs Fulfill the Original Promise of Service Oriented Architecture, by Anders Jensen-Waud. A comment on LinkedIn referencing this article asked whether “APIs by themselves begin to address semantic interoperability.” I don’t think so, and my reasoning for why semantics is not defined in the API itself is as follows.

I have been creating APIs for hospitals for years and have found that an API by itself doesn’t make something more likely to be semantically interoperable. It’s more important to get the community that builds and uses the APIs to use the same vocabulary in the same context; out of that understanding, APIs can be developed that are semantically congruent. Without those people sharing understanding, they each go off and develop their parts, and when they come back to integrate them with the whole, they find that even though they used the same property names or class structure, they used them with different intentions. For example, this really happened once. One team of API creators inside a company started with the same product goal as a second team in the same company: to track locations of things on a map. They each knew they were going to use polygon data structures, and each point in the polygon was going to have an X and a Y property. Three months after building their components, they came together to integrate their parts, and the parts didn’t work, because the teams operated with different assumptions. One team’s processing logic used Cartesian points, where the X,Y origin is in the bottom-left, and the other team used raster points, where the origin is in the top-left. No one identified semantics as a deliverable, because the teams thought only the code/API was the deliverable. The semantics should have also been a deliverable, agreed on before the production of an API.
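The origin mismatch in that story is easy to make concrete. A sketch (ignoring any off-by-one pixel conventions): with a raster origin in the top-left and Y growing downward, converting to a bottom-left Cartesian origin is a reflection about the map height.

```csharp
public static class PointSemantics
{
    // Same property name "Y", different meanings: a raster Y of 0 is the top
    // of the map, while a Cartesian Y of 0 is the bottom. The type system
    // cannot see the difference; only the shared convention can.
    public static int RasterToCartesianY(int rasterY, int mapHeight)
    {
        return mapHeight - rasterY;
    }
}
```

Agreeing on, and writing down, which convention a shared polygon uses is exactly the semantic deliverable the two teams skipped.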

The insight I want to share is that semantics is shared context and understanding. APIs and code are just symbolic processing. The symbols themselves do not inherently carry the meaning; it is the common understanding among people that lets them use the symbols in the same way.

OAuth Server and Bearer Token Size Limit

I was building an OAuth server using the Microsoft stack of OWIN components and learned that it is not good to keep adding an indefinite number of claims to the bearer token returned by the OAuth server. There is no hard limit, but if you create a bearer token over 2KB you might start to see problems with different tools. This happened in a software project I was working on when the number of claims pushed the bearer token over 4KB: a tool the QA team was using for testing started to have issues.

As a rule of thumb, I try to limit bearer tokens to under 2KB now.
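That rule of thumb is easy to automate. A sketch of a guard (the class and constant names are my own assumptions) that checks a serialized bearer token's size before it is returned:

```csharp
using System.Text;

public static class TokenGuard
{
    // Soft limit of ~2KB, matching the rule of thumb above. There is no hard
    // limit; this just flags tokens that are likely to break client tools.
    public const int SoftLimitBytes = 2048;

    public static bool IsWithinLimit(string bearerToken)
    {
        return Encoding.UTF8.GetByteCount(bearerToken) <= SoftLimitBytes;
    }
}
```

Running a check like this in a test or at token-issue time catches claim creep before the QA team's tools do.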