Quick Method Software Estimation and Project Planning

Earlier this week, I conducted what I believe was a simple, engaging, and effective workshop on estimation and ROM (rough order of magnitude) sizing - a critical aspect of proposing software projects. The audience was a group of project managers.
 
Any such invite on the calendar asking for participation in a project management activity is viewed with some skepticism. ‘One more meeting…’ is the thought that must have run through the minds of those who were invited to the session.

Regardless, they all showed up. The end result was, in my opinion, a very engaging session. They all participated. We all learnt from the session.
 
This post is to share some of the learnings from the session. 
What did we set out to do? Essentially, I wanted to communicate the power of separating ‘Effort’ estimation from ‘Elapsed Time’ computation, while further separating both of those from ‘Resourcing’.

My sole focus was to engage the team in a hands-on activity that would convey the wide variance of software estimates, even when produced by different individuals in the same room against the same spec. It would show us what clients experience when they see proposals created by different members of the same group.
 
With the exercise around "Effort" estimation completed, my intent was to have them use their estimates to perform an "Elapsed Time" computation. After those two steps were completed, we went on to put together a "Resourcing Plan" with the right skills and corresponding capacity.
 
Before we got started, we needed a simple but complete business need. We just picked a sample requirement for this purpose. 
 
Outputs from the exercise: 
There were 15 people in the workshop. We received 15 estimates. In any such exercise, there is bound to be a spread; all we can strive for is a narrow variance. The data gathered is charted below for your reference. Even for such a small sample, the charted data followed a near-normal distribution. That is a good sign. However, the data was also skewed to the left of center, indicating the general bias to estimate low.
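
For readers who want to reproduce this kind of summary, here is a minimal sketch in Python. The estimate values in the list are purely hypothetical placeholders - the actual workshop numbers live in the chart above, not in this snippet.

import statistics

# Hypothetical estimates (in hours) from 15 participants; these are
# placeholders only - the real workshop data is in the chart above.
estimates = [28, 32, 35, 36, 38, 39, 40, 41, 41, 43, 44, 46, 48, 52, 60]

print(f"Mean:   {statistics.mean(estimates):.1f} hours")
print(f"Median: {statistics.median(estimates):.1f} hours")
print(f"Stdev:  {statistics.stdev(estimates):.1f} hours")
# Comparing the mean against the median gives a quick hint of which way
# the distribution skews; the standard deviation is the "variance" we
# strive to narrow.
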
Using these estimates we then moved on to computing Elapsed Time.  
But before we did this, we needed to set some contextual parameters: 
A) we adopted the AGILE process model for this task 
B) we time-locked the Sprints at 2 calendar weeks
C) we capacity-locked the Sprints to 80 capacity hours  
D) this left 'Scope' to be the variable element

To get a little more room for the Elapsed Time computation, we asked each participant to take their estimate and multiply it by 10.

For the example shown below, the PM had his effort at 410 hours.
At 80 capacity hours per 2-week Sprint, 410 hours rounds up to 6 Sprints - a 12 calendar-week timeline.
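
As a minimal sketch of that arithmetic, using the Sprint parameters we locked above (and assuming a partial Sprint still occupies a full Sprint on the calendar, which is what takes 410 hours to 12 weeks):

import math

SPRINT_CAPACITY_HOURS = 80  # capacity-locked per Sprint
SPRINT_LENGTH_WEEKS = 2     # time-locked per Sprint

def elapsed_weeks(effort_hours):
    # A partial Sprint still occupies a full Sprint on the calendar,
    # so the Sprint count rounds up.
    sprints = math.ceil(effort_hours / SPRINT_CAPACITY_HOURS)
    return sprints * SPRINT_LENGTH_WEEKS

print(elapsed_weeks(410))  # 410 / 80 -> 6 Sprints -> 12 calendar weeks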

Now for the important thing... note that until now, we haven’t spoken about resources at all. And that’s for a reason.

The resource staffing matrix, or the ‘staffing plan’ as it is called, has not been relevant until this point.

The critical learning here is that the Effort is the effort. It has nothing to do with elapsed time or the staffing plan. There may be influencers such as team experience levels, productivity, tech stack, architecture construct, programming environment, tools, etc., but Effort remains the effort.

Likewise, the delivery model has little or nothing to do with Effort. The work remains the same. Whether we deliver via iterations or Sprints, Effort does not change. If the work breakdown changes, then due to granularity the effort may appear to change, but it really hasn’t.

Building out a resource model involved composing a team with a total capacity of 80 hours per Sprint - that is, per 2 calendar-week period (remember, that is the length we locked our Sprints to earlier on). A sketch of that capacity check follows below.
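
Here is a minimal sketch of the capacity check; the roles and per-Sprint hours are hypothetical placeholders, chosen only so that the team total matches the 80-hour Sprint capacity we locked:

# Hypothetical staffing plan; the roles and hours are placeholders,
# chosen so the total matches the locked 80-hour Sprint capacity.
staffing_plan = {
    "Developer": 40,    # hours per 2-week Sprint
    "QA Engineer": 24,
    "Tech Lead": 16,
}

total = sum(staffing_plan.values())
assert total == 80, "team capacity must match the locked Sprint capacity"
print(f"Per-Sprint team capacity: {total} hours")
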
These 3 steps provide a reliable, repeatable method for arriving at Effort, Elapsed Time and the Staffing Plan.

Until next time…
CP Jois 

Winter Afternoon Flight

It felt great to get up in the air again after hibernating in sub-zero temperatures for over a month. It was a crystal-clear day. A few bumps here and there, but overall a swell day to fly.

The snow had more or less cleared out after a couple of above-zero days last weekend.

Traffic was heavy. Everyone wanted to fly, I guess. The Garmin 530, fed by ADS-B traffic data, was constantly indicating traffic targets. I wished I could have stayed up in the air to view the sunset.

Overflew the field at 2500ft before turning downwind for Runway 20.

The DJI Mavic Air

The DJI Mavic Air is one of many DJI UAV products. The Mavic Air is best known for its portability and serves high-end hobbyists and serious enthusiasts. DJI has implemented some unique design ideas to make the drone portable.

The Mavic Air folds up for storage, is very well built, and looks aesthetically pleasing. It weighs just under 1 lb. and is very easy to carry around. It shoots 4K video at 30 fps and captures still pictures at 12 megapixels, which works very well for its intended audience. Battery endurance results in flight times between 18 and 21 minutes, though strong winds reduce that range. In terms of line-of-sight range, the Mavic Air has a 2.5-mile control range using the remote. The drone comes with both internal and supplemental storage: 8GB of internal memory plus a microSD slot that supports microSDHC and microSDXC media. A USB Type-C port on the drone allows for the transfer of footage, while the remote charges over Micro USB.

The Mavic Air is equipped with GPS and GLONASS satellite positioning. The GPS sensors are accurate and reliably enable automated and semi-automated flight modes. The Mavic Air hovers steadily, and its GPS sensors make the ‘return-to-home’ safety feature very reliable. Location detection enforces no-fly zones, once again very reliably. For example, the system will alert you to get authorization before flying at an airshow location with a TFR around it. There are a number of warning levels; some can be overridden with the necessary authorization, while others cannot.

The drone supports QuickShots. These automated camera shots move the drone through the air in a predetermined pattern, such as a helix or spherical shot, and allow for quick capture of the surroundings. This improves productivity and reduces the amount of manual programming needed to get the footage. Even with forward and rear obstacle detection, QuickShots must be used with care. In the QuickShot modes, the drone flies itself, and there is always a risk of collision.

The Mavic Air will fly at 17.9 miles per hour with obstacle avoidance enabled, or at up to 42.5 miles per hour in Sport mode, a mode in which the obstacle detection system is disabled. With a climb rate of 13 feet per second in Sport mode and 5 feet per second in Positioning mode (both using the Remote Controller), the Mavic Air is found to be very useful in most situations.

The maximum service ceiling for the Mavic Air is 3.1 miles above sea level. One of the important considerations with regard to UAVs, or drones, is their wind resistance capability. The Mavic Air’s wind limit is 22 miles per hour; beyond this, the drone will generate a high-wind warning. This can be somewhat limiting in certain circumstances. The Mavic Air’s obstacle detection and avoidance system is very reliable. The Air has forward, backward, and downward sensors, and the Advanced Pilot Assistance System (APAS) leverages all of them. With this intelligence, instead of simply hovering in place when it detects an obstacle blocking its path, the Mavic Air assesses the situation and automatically adjusts its flight to avoid the obstacle, either by flying to the side or rising above it.

Why does AGILE fail to be agile?

The term ‘agile’ is much used today in many different contexts – so used that it’s bordering on overuse or misuse. The software industry is perhaps the one that uses it the most – so much so that it coined an entirely new process model known as AGILE, hoping to jumpstart a new revolution in software engineering. While there has been a lot of talk around it, this revolution has barely provided the uptick in software project success it was meant to.

Why?

There are many reasons for this. One is that while the labels have changed, basic behavioral aspects have hardly changed. Software engineers don’t do much differently today, and software managers hardly understand the nuances between process models and methodologies. Even more importantly, customers can’t seem to change their modalities as participants in the process. Transformative change requires change at all levels, in every stakeholder role. For example, AGILE requires that customers/clients become participants in the daily activities of an AGILE effort. This calls for deep commitment and the necessary time adjustments. While easy to state, this isn’t easy to achieve. Funding models have to change. The term ‘project’ and AGILE don’t go together. AGILE efforts are ongoing efforts, burning down a backlog of requirements. Projects have fixed scope, time, and budget – the triad. AGILE efforts – as per the very AGILE manifesto – are meant to take on change late in the process. The typical ‘project’ attempts to control scope and rigidly guard its execution. Looked at from a high level, these are counter to each other.

The majority of software engineers hardly understand these nuances. The typical software manager struggled to deal with traditional project execution, let alone with brand-new terminology and safeguards. The very definition of a requirement changed with time. From writing hundreds of pages of software requirements, the ‘Iterative’ process cycle moved to asking for use cases in an Actor-Ability syntax; AGILE, on the other hand, demands User Stories. Speak to the average software developer and it becomes apparent that the nuances are barely understood. The situation is analogous to when object-oriented programming models came along: it took a long time before the industry truly wrote any real object-oriented code. Even today, it isn’t unusual to see engineers writing long segments of procedural code in advanced object-oriented programming languages. Software is still a nascent industry. Success is still a matter of striving until it gets done. The ‘soft’ nature of the outcome makes it very hard to measure. Traditional practices have yet to stabilize, and complexity has been on the rise. In fact, interoperable and cooperating systems have become the norm.

One of the fundamental goals of AGILE was to get more features out, quicker. Getting a quick Sprint 0 done has become an obsession – most AGILE projects don’t care to build solid foundations. The concept of an MVP may be very attractive; however, when observed closely, most MVPs start off wanting to be something and end up being barely one-tenth of that vision. I have performed many due diligence exercises, and one observation is common – most AGILE sprints are more about cleaning up the mess left behind by the previous sprint and less about feature rollouts. Hardly any teams truly measure that basic goal of “getting more features out, quicker”.

For AGILE to truly deliver on its promise, core behavior must change. Until then, it shall remain a fancy label. Core behavior change begins by educating teams (software engineers and customers) to understand the basic tenets of AGILE – starting with the manifesto, the process itself, and the definition of an outcome in AGILE.

Until next time…

CPJ

Technology Portfolio Choices

Technology portfolios grow out of control rapidly – more so in today’s environment than ever before. Additions to and deletions from portfolios are not casual or trivial activities. Unfortunately, time pressures on technology projects turn them into casual decisions.

Recently, I was helping a team decide between a newly acquired technology and the use of native legacy technology. All facts indicated that the new acquisition, while the more sophisticated technology, had a track record of delaying projects. There was clear evidence that the team was having a hard time finding people with skills in the new tech. Despite that, the team had persisted for 8 months without making any progress.

Indeed, it’s hard to walk away from a newly adopted foundational technology. However, as technologists, we must always keep the facts in view and allow the data to tell the story. All facts indicated that we were being slowed down by the technology rather than gaining any momentum. The right decision was to abandon the choice and move forward. That’s what we finally concluded.

Always remember: let the facts guide the decision. It’s never too late to remedy a situation.

CP

Requirements Engineering

Everything in the world (almost everything) starts with a need. Likewise with software engineering… it begins with requirements. However, with time, despite the various process models, lifecycle descriptions, and artifact templates, solving the ‘good requirements’ conundrum has remained an elusive goal. Over the 3 decades of my career, not much has changed, other than the amount of time the industry spends debating this subject and/or finding ways to short-cut the process.

Even today (in fact, less today than ever before!), software professionals can’t distinguish between process models and their associated artifacts. Ask a software engineer today what process model his organization uses, and my bet is that most will say they don’t know. Many others will look at you as though you asked them the azimuth angle to the moon. If there is this much disregard for the process models in use, it is best not to ask which artifacts go with which process model. What I mean is that process models were set up for a purpose. They demanded certain roles and called for the production of certain deliverables. For example, the Waterfall model used the software requirements specification (SRS) as its requirements artifact. The Unified Process (UP) introduced and used the concept of ‘Use Cases’. Most methods under the Agile umbrella leverage the construct of User Stories. These were not just made up on the fly; the construct and template for each of these carries a specific meaning. (For instance – a made-up example – the Actor-Ability form “The Account Holder shall be able to transfer funds between their own accounts” and the User Story form “As an account holder, I want to transfer funds between my accounts so that I can manage my balances” express the same need through two very different constructs.)

It’s one problem not to have any process at all; it can cause some pain. However, it’s worse to have people mixing and matching process models and process artifacts at random; that causes chaos. In a certain instance not too long ago, during a routine assessment, I came across a situation where the team spoke of ‘Agile’ all the time but was using an SRS as its requirements artifact. The SRS was 384 pages long and had been written 7 months prior. This is not uncommon; in fact, it’s becoming increasingly common. There is no practical way for any effort to use such an SRS and be truly agile in its work.

The industry has even tolerated and perpetuated the confusion between a business analyst and a requirements engineer. Let’s settle this right now – a business analyst and a requirements engineer are two different roles. They do very different things. They produce two very different outputs. To use one for the other can only lead to problems. Far too often, information technology departments have called for and used the BA for requirements collection. Defining ‘need’ and engineering good software requirements are two VERY different outcomes. It worked for several years because of the construct of the SRS. The SRS typically grew into a fiction novel anyway – one that not many read – at least not the programmer who wrote the code. The SRS hid the differences between well-engineered atomic requirements and a long storybook.

That is no longer the case. UP and use cases demand succinct engineering; they are meant to be atomic in nature. They absolutely need to be engineered if one is looking for a useful requirements artifact.

In a time where the velocity of change is far outpacing our ability to innovate, it’s time to pay a little more respect to requirements, and truly engineer them – especially if an organization wants to remain relevant.

CP Jois

Another Vintage Restored

A 1937 DC-3 restored and in the air with its large radial engines and aluminum fuselage shining… Sharing this link…

A wonderful restoration effort.

A bit of trivia… the first-built DC-3s carried Wright Cyclone R-1820 radial engines. From what I have read, each of those weighed about 1,000 pounds and consumed around 110 gallons per hour!

 

Evans VP-1

Sharing a short video clip about the Evans VP-1 that I came across while reading the EAA newsletter. Another inspiring story of aviation passion. Just as the concept of lift – the ‘wind beneath the wing’ – never ceases to amaze us, I am forever inspired by inventors who start at the drawing board and sketch out home-built airplanes. The Volksplane is exactly that…

 

Designing the machines that build the machine

Innovating new concepts and creating new products has been a common and consistent theme in industry. It is interesting to note that when such innovation occurs in a new industry, many of the corresponding methods, mechanisms, and equipment do not exist. For example, when Boeing struck an agreement with the chief of Pan Am back in the ’60s to build a bigger jet than was available at the time, apart from designing a new aircraft, they had to evolve, build, and validate all the other components that led to the delivery of the 747. They pretty much put the company on the line in doing so, coming close to bankruptcy at one point.

There are several such examples in the history of aviation. Indeed, such innovation has been cyclical: the industry has gone through many cycles of intense innovation followed by periods when it has basically struggled to stay afloat. This discussion is important because the evolution of the simulator is one such innovation. The simulator was an outcome of need – the need to train people on what was built. With time, it turned into a tool to help address the need to test what was built. In both cases, the aims were to minimize risk, then minimize cost, and then provide a platform to scale operations.

Among the various examples we have seen and read about, I find the FAA’s NextGen use of simulators to be the most comprehensive. I find it comprehensive because of the various, multi-faceted elements that NextGen reaches into: there are changes to aircraft, airports, traffic control, navigation, communications, crew roles, training processes, and a whole lot more. Many have questioned whether such a wide-impact program is even safe to implement as one program. The FAA’s thinking has been that there comes an inflection point when multi-path changes are required to be performed in tandem rather than piecemeal.

Come to think of it, simulators have changed character over the past century. They have gone from helping test and train on the machine they model TO helping with modeling (designing) the machine itself.
In the case of NextGen, the future machine is a redesigned US National Airspace System (USNAS).

Designing simulators that help design the future airspace system is a complex endeavor – fraught with risk. Often, it’s harder to design the simulator than it is to implement the model in the real world. More importantly, validating such simulators – ensuring that they are accurate enough to model the real thing – is a complicated exercise. Simulator-related research over the past 5 decades is a mix of successes on one side, and criticisms and warnings on the other. Many studies provide data showing that simulator design is an evolving science – and that over-reliance on simulators can lead to problems. In light of these persisting concerns, the use of simulators to design an overhaul of the USNAS can reasonably be questioned.

Are these simulators able to adequately model and predict behavior in the real world? Are we leaving something out of the model that is in fact part of the real-world environment? Is the simulator violating one of the core principles of learning design, i.e., the modeling of identical elements?
While being a passionate advocate of simulators, I find some of these persisting concerns problematic and in need of expeditious study.
CP