Profile

Clay Caldwell
Lives in Seattle
631 followers | 86,963 views

Stream

Clay Caldwell

Shared publicly  - 
 
It will be interesting to see how this alliance gains traction in more static enterprise IT shops with VMware investments but less AWS exposure. Can IT take back hybrid cloud deployments from their erstwhile CMO initiatives?

Clay Caldwell

Shared publicly  - 
 
Time to show the kids some real video games!
Oregon Trail. Commander Keen. Marble Madness. King's Quest. Bust-a-Move. If these names evoke happy memories of your past, then get ready for a nostalgia overload. The Internet ...

Clay Caldwell

Shared publicly  - 
 
It's the little things that get your habits changing this year...
Your goals may be big, but small changes help you get there.
 
Facebook creates a search spotlight for your posts and gives it to all your friends. Welcome to Friendship 2.0 powered by ad networks.
Yet the news cruised by with analysis focused simply on what Facebook's new keyword post search does today. Yes, any post by you or any of your friends can...

Clay Caldwell

Shared publicly  - 
 
Park and ride with a view this morning.
Samuel Warren: nice

Clay Caldwell

Shared publicly  - 
 
Insightful post regarding change cost in applications.  The lack of tests, refactoring and abstracted code prompts the need for more design work. But this doesn't solve the problem.  Instead, teams should look at why they need to spend time designing to avoid high change costs.
How much architecture is enough?

Clay Caldwell

Shared publicly  - 
 
"HP exec: The cloud technology evolution is 'concentrated in Seattle'" http://www.bizjournals.com/seattle/blog/techflash/2015/01/hp-exec-the-cloud-technology-evolution-is.html
Bill Hilf, senior vice president of product and services management for HP Cloud in Seattle, explains why the tech giant opened a cloud center here. The office has 200 employees and is still growing.

Clay Caldwell

Shared publicly  - 
 
For the helicopter parent who has everything...
This may sound like a helicopter parent's dream, but it's not as creepy as it seems.

Clay Caldwell

Shared publicly  - 
 
OpEd - iOS and Android becoming shiny ecosystem jails? Is there still a viable alternative? "Why does the world still need the Mozilla Foundation?" http://venturebeat.com/2015/01/02/why-does-the-world-still-need-the-mozilla-foundation/
With its Firefox browser rapidly losing share, and its financial ties to Google finished, the Mozilla Foundation finds itself facing the most pivotal moment in its history since its founding more t...

Clay Caldwell

Shared publicly  - 
 
Sharp analysis of that Falcon high G turn. Perhaps that's how Han broke his ankle?

"G-Forces in the Millennium Falcon" http://feeds.wired.com/c/35185/f/661370/s/41af4dcf/sc/13/l/0L0Swired0N0C20A140C120Cg0Eforces0Emillennium0Efalcon0C/story01.htm
In the Star Wars VII trailer, we see the Millennium Falcon pulling out of a dive. How many g-forces in this turn?
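The linked article's estimate boils down to circular-motion physics: in a pull-up of radius r at speed v, the felt acceleration is v²/r plus the 1 g of gravity at the bottom of the dive. A minimal sketch of that arithmetic, with entirely made-up speed and radius values (the article's actual figures are not reproduced here):

```python
# Back-of-the-envelope g-load for a circular pull-up.
# The speed and radius below are illustrative guesses, not the
# article's numbers.
G = 9.8  # standard gravity, m/s^2

def g_load(speed_m_s: float, radius_m: float) -> float:
    """G's felt at the bottom of a pull-up: centripetal v^2/(r*g)
    plus 1 g for ordinary gravity."""
    return speed_m_s**2 / (radius_m * G) + 1.0

# e.g. 100 m/s through a 200 m radius turn:
print(round(g_load(100, 200), 1))  # 100^2/(200*9.8) + 1 ≈ 6.1
```

At a few g's the turn is survivable; it's the sudden stops that break ankles.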

Clay Caldwell

Shared publicly  - 
 
 
Measure productivity in Agile before it's too late!

By +Felipe Brito, Business Director, and Fernando Ostanelli, Head of Delivery.

Part II - How to measure software size and complexity

“All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident.”

Arthur Schopenhauer

In Part I of this article [goo.gl/A6YleB] we explored false dichotomies that prevent agile teams from measuring productivity and shared the reasons why the industry should take the next step.

In this second part we will share a method on how to determine complexity (or size) of software in a clear, standardized and objective way.


Thinking outside the box

People have always dreamed of simplifying and standardizing software development, with the ultimate goal of maximizing the chances of success of their projects. Different schools of thought have chosen distinct paths and used different processes in pursuit of better business outcomes. Software measurement is undoubtedly a relevant part of this story, and a huge challenge in this area is how to measure size and complexity, given the nature of software.

To track productivity in software development, it is necessary to solve the challenge of determining the functional size of the software delivered. Throughout our software development history, we have used all of the common approaches extensively: first LOCs and function points, during our RUP days, and then story points, after the organization embraced agile in 2006. But we were not satisfied with the drawbacks, so we kept experimenting and pushing ourselves toward a model that would allow us to calculate the complexity/functional size of software and that would be:

1) practical/workable and simple to understand (engaging multiple teams and avoiding miscommunication);

2) standardized and stable enough to be used throughout different sprints/different projects (compare evolution/changes during projects/programs);

3) not tied to technical aspects, programming languages, or development platforms (otherwise it would be an apples-to-bananas comparison);

4) based on business requirements and universal software engineering practices;

5) not compromising in terms of customer experience nor technical quality.

We started by creating a complexity rule that allowed us to break features/stories down to basic elements that should be coded. This first version was very related to technical aspects and we soon noticed its fragility: it was totally technology dependent. This would be too much of a compromise, because our intention was to foster organizational learning by comparison and cross-pollination of best practices between different programs.

The second weakness was that a story or piece of functional requirement needed to be matched to one rule category instead of being a composition of several of them. This led us to cases where the velocity could change due to the nature of the requirement, giving a false impression that someone was gaining velocity or even productivity when she was just lucky in terms of scope selection.

The third drawback of this first version was that it could not isolate functional complexity from other aspects such as technology, process, interdependencies with other teams, and infrastructure tasks. Using different teams and different technologies, we would get different estimates. Teams would misunderstand effort as complexity. Uncertainties and accelerators would be factored in, making stories more or less complex respectively. The same story would then have different estimates depending on the platform used, and this presented us with a serious problem. We believe that anything that could speed up the development of working software (accelerators, reusability, built-in modules) should contribute to productivity improvement but not affect the sizing of the software being developed (making it smaller or larger). At the end of the day we would only be able to use the rule for a specific technology scenario, and this made us create several versions of the rule for different contexts and different teams.


The Eureka Moment

After several attempts and a good amount of frustration - with challenges coming from different directions - we were finally able to create a powerful tool. The “Eureka” moment came when we realized that we should try to determine the functional commonalities of different stories on different projects and technologies. We put ourselves in the shoes of a hypothetical common Product Owner to all the different projects and asked the following question: "If I were a PO for all these different projects and needed to explain all these stories to new agile teams, how would I do it?" The answer: functional/business aspects. This is the key element that allows us to normalize complexities of different stories.

We then selected a large and very comprehensive sample of projects (with different complexities and sizes, diverse contexts and business verticals, and multiple technologies and platforms). This sample encompassed everything from very simple digital marketing brochure sites to highly complex iron ore logistics projects to mission-critical oil and gas transportation engagements. We studied each backlog, confirming that the storytelling process was always done according to business rules. The PO was really not worried about technical aspects, framework versions or architectural mechanisms. There are obviously POs with strong technical backgrounds who end up adopting a hybrid (requirements + technical design) approach when detailing stories. Even so, it suffices to have the story detailed according to business needs, deferring to the specialists the responsibility of providing the best technical solution.

We realized that the stories always included basic elements representing functional aspects, such as:

- business rules (from formula usage through multi-step iterative processes with many decision points);

- user interface elements (from adding "x simple elements" to an existing form, through creating a new form with "x simple elements", to creating a new complex form with several dynamic and sophisticated elements);

- new business entities that would need to be created/handled or existing ones that would need to be improved/handled;

- interfaces to different entities.

We then included these functional complexity items in a complexity rule (you may request access to the spreadsheet here [https://docs.google.com/a/ciandt.com/spreadsheet/ccc?key=0ApLLTqlWeEv6dGEwQzd0NWRYd2RwT0VrdTRzdGxnOXc#gid=15] ).

In each line we have the basic functional elements (business rules, user interface entities, permissions). We call them "complexity items".

In each column we have different sizes (XS, S, M, L, XL) to provide relative complexity. We provide different points to each size according to the Fibonacci sequence.

For each line (complexity item) we provide a description for each column (sizing parameter), so we are able to keep compatibility and coherence between estimations of the same size, regardless of the complexity item being estimated. For example, for the complexity item "Business Rules", the column for the XS size reads "direct application of formulas" while the column for the XL size reads "multi-step iterative processes with many decision points". It is worth mentioning that different complexity items (comparing different lines) may have the same size. For instance, "same permissions for all users" is comparable to "same solution for all scenarios" in terms of business complexity, and both therefore sit in the XS column.

This complexity rule has 10 complexity items and each one has 5 sizes. And it is interesting to say that it is comprehensive enough to help us estimate diverse and very complex projects of multiple types and in different business verticals.

The complexity points of a given story are the weighted sum of all the complexity items that the story encompasses.
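The scoring described above can be sketched in a few lines of code. This is only an illustration of the mechanism: the item names, size descriptions, and the specific Fibonacci weights below are hypothetical stand-ins, not the exact contents of the authors' spreadsheet.

```python
# Illustrative sketch of a complexity rule: sizes map to Fibonacci
# points, and a story's score is the sum over the complexity items
# it encompasses. Item names and weights are hypothetical.
SIZE_POINTS = {"XS": 1, "S": 2, "M": 3, "L": 5, "XL": 8}

def story_points(assessment: dict) -> int:
    """Sum the points of every complexity item a story encompasses.

    `assessment` maps a complexity item (e.g. "business_rules",
    "ui_elements", "entities", "interfaces", "permissions") to the
    T-shirt size the team agreed on for that item.
    """
    return sum(SIZE_POINTS[size] for size in assessment.values())

# Example: a story with moderately complex business rules, a new
# form, one new simple entity, and default permissions.
story = {
    "business_rules": "M",   # some decision points, no iteration
    "ui_elements": "L",      # new form with several elements
    "entities": "S",         # one new simple entity
    "permissions": "XS",     # same permissions for all users
}
print(story_points(story))  # 3 + 5 + 2 + 1 = 11
```

Because every item is scored against the same business-facing descriptions, two teams on different platforms should arrive at the same total for the same story, which is the normalization property the article is after.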

We also created a guide with several different examples on how to use the complexity rule. Its goal is to guarantee homogeneity and to help the different teams in their usage of the tool. As of April 30th, we have 200 people trained and using the tool. By the end of June we plan to have 600 people trained and using it.

In the third part of this article we will share interesting findings on this measurement program. We will also discuss how to use this complexity rule as a stepping stone to manage productivity. Stay tuned!

Note: Thanks to +Gílson Gaseorowski for his great comments and ideas on this article.

#agile   #enterpriseagile   #ciandt   #productivity
 
"Continuous integration is not running Jenkins on your feature branches and ignoring the build when it goes red." @jezhumble. Good point and some snark from #ChefConf 2014. To me it illustrates how many dev teams jump into complicated build patterns that keep the team from fixing the build efficiently. Get the code out the door reliably, then get complicated.
People
In his circles
1,522 people
Have him in circles
631 people
Ryan Hunter's profile photo
Edward Owen's profile photo
Bob Kelly's profile photo
Benjamin Abad's profile photo
Sr Maverick Mitra's profile photo
Zt Hong's profile photo
hesham.lhm Basha's profile photo
Marco Vicario's profile photo
Ian McGee's profile photo
Work
Occupation
Explore and try to leave things better than I find them.
Basic Information
Gender
Male
Story
Tagline
Tech business analyst and consultant. Interested in web technology applied to business process efficiency, enterprise architecture and SDLC.
Introduction
Military kid; lived a little bit of everywhere. Been to 4 continents and all 50 U.S. states. Started using computer tech and the internet to stay connected with friends and became a GenX digital native. It became a passion and a vocation. I've now been enjoying this for 15 years in enterprise Information Technology, with a focus on web systems, commerce, and software development operations and support.
Places
Map of the places this user has lived
Currently
Seattle
Previously
Military kid - Cadiz - Washington, D.C. - Anchorage - Lexington - Seattle