When Agility Breeds Entropy: The Hidden Cost of the AGILE Process

Agile has transformed software development. It replaced rigid waterfall structures with flexible sprints, rapid feedback, and constant iteration. Teams move faster, deliver sooner, and adapt to change with ease. But beneath this celebrated adaptability lies a subtle and growing problem – entropy. The very qualities that make Agile powerful can also accelerate the disorder of software code.

In an Agile environment, developers are encouraged to prioritize delivering working software over exhaustive documentation and long-term architectural planning. This is great for short-term progress – but over multiple sprints, it often leads to shortcuts, fragmented designs, and inconsistent coding patterns. Each sprint adds a new layer of functionality, sometimes built on incomplete refactors or temporary fixes. Because the focus is on “delivering value now,” deeper architectural integrity can be deferred indefinitely. Over time, the codebase starts to resemble a geological formation – with layers of old design decisions, patched logic, and duplicated functionality. The result? Increased entropy: a system that works today but resists adaptation tomorrow.

Agile also introduces entropy through team dynamics. As teams rotate, priorities shift, and user stories evolve, institutional memory weakens. The rationale for design decisions fades, documentation lags behind, and technical debt accumulates quietly sprint after sprint. Ironically, the very agility that enables rapid evolution also erodes long-term stability. The cumulative effect is a codebase that becomes progressively harder to maintain, extend, or even understand.

To counter this entropy, Agile teams must embrace sustainable agility – balancing iteration speed with architectural stewardship. This means treating refactoring and technical debt reduction as core deliverables, not optional chores. It means embedding architecture reviews, code quality metrics, and documentation updates into the sprint rhythm. Agile, at its best, is not chaos – it’s disciplined flexibility. But without mindful engineering, it can devolve into a cycle of entropy masked by velocity.

In essence, Agile doesn’t create entropy — people do, when they mistake iteration for improvisation. The key is to use Agile not just to build fast, but to build well, ensuring that with each sprint, the system grows in both functionality and structural coherence.

CP Jois

Quick Method Software Estimation and Project Planning

Earlier this week, I conducted a simple and, I believe, engaging and effective workshop on estimation and ROM sizing, a critical aspect of proposing software projects. The audience was a group of project managers. 
 
Any such invite on the calendar asking for participation in a project management activity is viewed with some skepticism. 'One more meeting…' is the thought that must have run through the minds of those who were invited to the session. 

Regardless, they all showed up. The end result was, in my opinion, a very engaging session. Everyone participated, and we all learnt from it. 
 
This post is to share some of the learnings from the session. 
What did we set out to do? Essentially, I wanted to communicate the power of separating 'Effort' estimation from 'Elapsed Time' computation, while further separating both of those from 'Resourcing'.  

My sole focus was to engage the team in a hands-on activity that would convey the wide variance in software estimates, even when they are produced by different individuals in the same room working against the same spec. It would show us what clients experience when they see proposals created by different members of the same group. 
 
With the exercise around "Effort" estimation completed, my intent was to use their estimate and have them perform an "Elapsed Time" computation. After those two steps were completed, we went on to put together a "Resourcing Plan" with the right skills and corresponding capacity.
 
Before we got started, we needed a simple but complete business need. We just picked a sample requirement for this purpose. 
 
Outputs from the exercise: 
There were 15 people in the workshop, and we received 15 estimates. In any such exercise there is bound to be a spread; all we can strive for is a narrow variance. The data gathered is charted below for your reference. Even for a small sample, the charted data followed a near-normal distribution, which is a good sign. However, the data was also skewed to the left of center, indicating the general bias to estimate low.  
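The spread can be summarised with a few basic statistics. A minimal sketch in Python follows; the estimate values are hypothetical stand-ins for illustration, not the actual workshop data.

```python
import statistics

# Hypothetical estimates (hours) from 15 participants -- illustrative
# stand-in data, not the numbers gathered in the workshop.
estimates = [28, 30, 32, 33, 34, 35, 35, 36, 37, 38, 40, 42, 45, 50, 58]

mean = statistics.mean(estimates)
median = statistics.median(estimates)
stdev = statistics.stdev(estimates)
cv = stdev / mean  # coefficient of variation: a unit-free measure of spread

# A median below the mean hints at a cluster of lower estimates with a
# few high outliers -- the bias to estimate low that the workshop observed.
print(f"mean={mean:.1f}h median={median}h stdev={stdev:.1f}h cv={cv:.0%}")
```

Charting the raw values as a histogram would reproduce the near-normal shape discussed above; the coefficient of variation is one simple way to track whether the variance narrows as the team practices.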
Using these estimates we then moved on to computing Elapsed Time.  
But before we did this, we needed to set some contextual parameters: 
A) we adopted the AGILE process model for this task 
B) we time-locked the Sprints at 2 calendar weeks
C) we capacity-locked the Sprints to 80 capacity hours  
D) this left 'Scope' to be the variable element

To get a little more room for the Elapsed Time computation, we asked each participant to take their estimate and multiply it by 10. 

For the example shown below, the PM had his effort at 410 hours.  
This computation indicates a 12 calendar week timeline. 
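The arithmetic behind that timeline is simple: divide the effort by the locked Sprint capacity, round up to whole Sprints, and multiply by the Sprint length. A minimal sketch (the function name is mine):

```python
import math

# Parameters locked in the workshop: 2-calendar-week Sprints,
# capacity-locked at 80 hours per Sprint.
SPRINT_WEEKS = 2
SPRINT_CAPACITY_HOURS = 80

def elapsed_weeks(effort_hours: float) -> int:
    """Convert an Effort estimate into an Elapsed Time in calendar weeks."""
    sprints = math.ceil(effort_hours / SPRINT_CAPACITY_HOURS)
    return sprints * SPRINT_WEEKS

# The PM's example estimate: 410 effort hours.
print(elapsed_weeks(410))  # 410 / 80 -> 6 Sprints -> 12 calendar weeks
```

Note that nothing in this computation mentions people: only effort, capacity, and the Sprint length, which is exactly the separation the workshop set out to demonstrate.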

Now for the important part: note that until now, we haven't spoken about resources at all. And that's for a reason. 

The resource staffing matrix, or the 'staffing plan' as it is called, is simply not relevant until this point. 

The important and critical learning here is that the Effort is the effort. It has nothing to do with elapsed time or the staffing plan. There may be influencing factors such as team experience levels, productivity, tech stack, architectural construct, programming environment, tools, etc., but the Effort remains the effort. 

Likewise, the delivery model has little or nothing to do with Effort. The work remains the same. Whether we deliver via iterations or sprints, Effort does not change. If the work breakdown changes, then, due to granularity, effort may appear to change, but it really hasn't. 

Building out a resource model involved composing a team with a total capacity of 80 hours per Sprint, i.e., over a 2-calendar-week period (remember, that is the duration we locked our Sprints to earlier on).
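One way to sanity-check such a resource model is to list each role's per-Sprint contribution and confirm the total matches the locked capacity. The roles and allocations below are illustrative assumptions, not the plan from the workshop.

```python
# Hypothetical staffing plan: hours each role contributes per 2-week Sprint.
# Roles and allocations are illustrative, not the actual workshop plan.
staffing_plan = {
    "senior developer": 40,
    "developer": 20,
    "QA engineer": 15,
    "tech lead": 5,
}

total = sum(staffing_plan.values())
# The Sprint was capacity-locked at 80 hours, so the plan must add up to it.
assert total == 80, f"plan delivers {total}h, but the Sprint is locked at 80h"
print(f"team capacity per Sprint: {total} hours")
```

Swapping skills in and out changes only this table; the Effort and the Elapsed Time computed earlier are untouched, which is the point of keeping the three steps separate.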
These 3 steps provide a reliable, repeatable method for arriving at the Effort, the Elapsed Time, and the Staffing Plan.

Until next time…
CP Jois 

Technology Portfolio Choices

Technology portfolios grow out of control rapidly. More so in today’s environment than before. Additions to and deletions from portfolios are not casual or trivial activities. Unfortunately, time pressures on technology projects turn them into casual decisions.

Recently I was helping a team decide between a newly acquired technology and the use of a native legacy technology. All facts indicated that the new acquisition, while the more sophisticated technology, had a track record of delaying projects. There was clear evidence that the team was having a hard time finding people with skills in the new tech. Despite that, the team had persisted for 8 months without making any progress.

Indeed, it's hard to abandon a newly adopted foundational technology. However, as technologists, we must always keep the facts in view and allow the data to tell the story. All facts indicated that we were being slowed down by the technology rather than gaining any momentum. The right decision was to abandon the choice and move forward. That's what we finally concluded.

Always remember: let the facts guide the decision. It's never too late to remedy a situation.

CP

Requirements Engineering

Everything in the world (almost everything) starts with a need. Likewise with software engineering… it begins with requirements. However, with time, despite the various process models, lifecycle descriptions and artifact templates, 'good requirements' have remained an elusive goal. Over the 3 decades of my career, not much has changed, other than the amount of time the industry has spent debating this subject and/or finding ways to short-cut the process.

Even today (in fact, less today than ever before!), software professionals can't distinguish between process models and their associated artifacts. Ask a software engineer today what process model his organization uses, and my bet is that most will say they don't know. Still others will look at you as though you asked them the azimuth angle to the moon. If there is this much disregard for the process models in use, it is best not to ask what artifacts go with which process model. What I mean is that process models were set up for a purpose. They demanded certain roles and allowed the production of certain deliverables. For example, the Waterfall model used the software requirement specification as its requirements artifact. The Unified Process (UP) introduced and used the concept of 'Use Cases'. Most methods under the Agile umbrella leverage the construct of User Stories. These were not just made up on the fly; the construct and template for each of these carries a specific meaning.

It's one problem not to have any process at all; it can cause some pain. However, it's worse to have people mixing and matching process models and process artifacts at random. That causes chaos. In a certain instance not too long ago, during a routine assessment, I came across a situation where the team spoke of 'Agile' all the time but was using an SRS (a software requirement specification) as the requirements artifact. The SRS was 384 pages long, written 7 months prior. This is not uncommon; in fact, it's becoming increasingly common. There is no practical way for any effort to use such an SRS and be truly agile in its work.

The industry has even tolerated and perpetuated the confusion between a business analyst and a requirements engineer. Let's settle this right now: a business analyst and a requirements engineer are two different roles. They do very different things and produce two very different outputs. To use one for the other can only lead to problems. Far too often, information technology departments have called for and used the BA for requirements collection. Defining a 'need' and engineering good software requirements are two VERY different outcomes. It worked for several years because of the construct of the SRS. The SRS typically grew into a fiction novel anyway, one that not many read, least of all the programmer who wrote the code. The SRS hid the differences between well-engineered atomic requirements and a long story book.

That is no longer the case. UP and use cases demand succinct engineering. They are meant to be atomic in nature. They absolutely need to be engineered if one is looking for a useful requirements artifact.

In a time where the velocity of change is far outpacing our ability to innovate, it's time to pay a little more respect to requirements, and to truly engineer them, especially if an organization wants to remain relevant.

CP Jois

Technology Transformations

The subject of technology transformations has been on my mind the past few months. It is absolutely evident that the rate of change of technology has increased multifold. With that, the frequency of technology transformations has also increased. There used to be a time when major changes to tech stacks were an event, and not a frequent one. However, that is no longer the case. Innovation occurs far more often than it ever used to, and in unexpected forms. Disruption is not a rare thing anymore. Yes, if technology stacks were built for agility (see my prior post on architectural constructs here), then when ideation leads to disruption, a plug-and-play architecture should handle it really well. But that is rarely the case, yet. 
In this scenario, technology transformations are still the only answer. There are major reasons for such transformations today. One of them is when an enterprise is transforming its tech stack to achieve readiness for the SMAC revolution. Another is when an already nimble enterprise is transforming its stack to push the envelope.
In either case, these are not for the faint-hearted. There is as much of a culture element involved as there is technology. These are long journeys and must never be embarked upon without a medium- to long-term trajectory first being established. As an avid aviator, I liken it to taking off without a plan of some kind: a plan for a destination, a plan for diversion, a plan for worsening weather, or for higher than expected headwinds increasing the fuel burn, or for arriving at an airport where the winds are not favorable for the available runways. Note that I have not even mentioned encountering other problems such as equipment failures, lost communications or navaids not working, to name a few.

Clearly, tenacity is key. Staying the course is important. Deviating for weather is fine, however turning back on course is critical. Staying focused on key measures and more importantly, metrics is absolutely essential. Remediating as soon as practical is necessary. Sounds like common sense – easy? Indeed, IF only we were to follow these principles. All I will say is – not many do.

My sincere recommendation to anyone who cares to listen is: don't start a technology transformation if there is even the slightest doubt about being able to comply with the above principles. A botched transformation is worse than putting up with the current state, as poor as it may be. 

CPJ

Analytics

The topic of analytics is all over the industry today, bordering on overuse. It no longer surprises me to hear of organizations wanting to be in the analytics realm when, quite honestly, they can't even keep their databases up and running. Having lofty goals or aspiring to value-additive outcomes is not a bad thing. However, to believe that one can run a marathon when a mile seems too much to keep up with is no longer a question of being unrealistic; it is far beyond that.
There are many simple prerequisites to this aspiration.
Firstly, data is only valuable when it can be transformed into information: information that is meaningful, that can lead to intelligence, and, taking that further, to actionable intelligence.
Analytics only becomes a factor of discussion when actionable intelligence can be leveraged to preempt action. Seeking to know more about what we already know can lead to preemptive action: predictive analytics. On the other hand, mining for patterns in volumes of information leads to pattern-based analytics: knowing what we didn't even know could be found.
Data when put through the stages of acquisition, aggregation, curation and dissemination leads to very compelling decision capabilities.
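Those four stages can be pictured as a tiny pipeline. The sketch below uses hypothetical per-store sales records; the function names, the data, and the data-quality rule are illustrative assumptions, not an actual implementation.

```python
from statistics import mean

def acquire():
    # Acquisition: stand-in for pulling raw records out of source systems.
    return [("store-1", 120), ("store-1", 135), ("store-2", 98), ("store-3", 0)]

def aggregate(records):
    # Aggregation: roll raw records up per store -- data becoming information.
    stores = {}
    for store, amount in records:
        stores.setdefault(store, []).append(amount)
    return stores

def curate(stores):
    # Curation: keep only stores with enough readings to be meaningful
    # (an illustrative quality rule).
    return {s: mean(v) for s, v in stores.items() if len(v) >= 2}

def disseminate(summary):
    # Dissemination: stand-in for publishing figures to decision makers.
    for store, avg in sorted(summary.items()):
        print(f"{store}: average {avg:.1f}")

disseminate(curate(aggregate(acquire())))  # prints "store-1: average 127.5"
```

Each stage only consumes the previous stage's output, which is what makes the 'readiness' work sequential: skipping curation, for instance, would let the single bad reading above flow straight into the decision.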
However, getting there takes a series of 'readiness' activities, steps that many seeking this value are not willing to invest their time or effort into. In my mind there is no short cut to this process. It is the price of reliable, action-worthy intelligence, and it has to be paid.

CP Jois