Category Archives: Software Engineering

New Extreme Programming Adoption Record – 30.5 practices in just 6 weeks!

Careers Coding Company Software Engineering

The latest (disclosed) PHZ customer project, working on-site at customer premises, last week set the record for the fastest Extreme Programming (XP) process adoption seen so far. The previous records have been:

The latest record is the adoption of 30 and a half practices in just 6 weeks! Our current Extreme Programming list of practices includes, for example, Pair Programming, Test Driven Development, 100% Code Coverage and Continuous Integration (to both test and production environments). There are currently a total of 37 practices, with Lean 5S being the latest addition.

The benefit of such high XP process maturity is that at roughly 25/37 practices the software engineering team starts to break even for the customer, i.e. they cost the same as they produce. At 30 practices the team typically produces more than it costs, and near 37 practices the team reaches so-called super-productivity, enabling the delivery of world-class products at very high efficiency. This is how we back our main sales proposition of delivering Sustainable IT Services, where the lifetime of the systems we deliver can reach decades, compared to the few months or years typical of more traditional practices.

Published by:

Lean vs. Agile

Coding Software Engineering

There is a heated debate going on among software development professionals about the difference between Agile and Lean software development. Are they rooted in the same principles, or is there something fundamentally different between the two methods?

James O. Coplien explains the difference:
http://www.slideshare.net/jcoplien/20090513resund-agile

At PHZ.fi we embrace both methodologies: in particular, Extreme Programming by the book, and eliminating waste by drawing on Kanban, Total Quality Management, Six Sigma and Shigeo Shingo’s Zero Quality Control.

Published by:

Rule of Two for Pair Programming

Video

Coding Software Engineering Work Psychology

“Always two there are. No more, no less. A master and an apprentice.” – Yoda

See a nice video on Pair Programming and the Rule of Two. Pair work is not only a programming practice but also a standard military practice ( https://en.wikipedia.org/wiki/Wingman and https://fi.wikipedia.org/wiki/Taistelijapari ), and it can be applied to any knowledge work, such as sales and accounting, too.

Pair Programming is an agile practice: two people in front of one computer and keyboard. There are two roles: the Navigator tells what to do, and the Driver holds the keyboard and writes (“one policeman can read, the other can write”). You switch the roles frequently. Seniors should be paired with Juniors, Customers with Coders, and Sysadmins with Frontend Developers.

Pair Programming is a controversial practice: doing pair programming wrong is worse than not doing it at all. There is a vast amount of scientific findings related to pair programming:

Programmers working in pairs usually produce shorter programs, with better designs and fewer bugs, than programmers working alone. See http://en.wikipedia.org/wiki/Pair_programming

  • Pairs typically consider more design alternatives than programmers working solo, and arrive at simpler, more-maintainable designs; they also catch design defects early.
  • Pairs usually complete work faster than one programmer assigned to the same task (it takes up to 2x the effort, but this is more than compensated by the improved quality and productivity).
  • Programmers are less likely to skip writing unit tests, or to spend time web-surfing or on personal email.
  • Additional benefits reported include increased team morale.

While Pair Programming works for the Sith, it is known to be kryptonite for

  • incompetent introverts
  • control freaks
  • super hero programmers
  • cowboy coders

See http://blogs.atlassian.com/2009/06/pair_programming_is_kryptonite/

More pair programming information is available from

Published by:

Call for Goal DSL

Frontend Software Engineering Usability

We found out that we don’t yet know a good way to write down our Personas and their Goals in a coherent way. There is also a problem in teaching the ways of Goal Oriented User Interface Design (GUIDe) to new interaction designers.

Other people on the Internet have also been considering how GUIDe can be applied in practice:

http://blog.extremeplanner.com/2006/01/goal-driven-user-interface-design.html

which is very similar to my thoughts about integrating GUIDe and Extreme Programming: http://pharazon.org/publications/GO-XP.pdf

However, the ExtremePlanner article didn’t mention any way to describe the Personas and Goals. I think we should create a Domain Specific Language (DSL) to make it easier to write realistic goals that leave the design (workflow) open for the designer to re-invent.

The power of Goal Driven UI Design comes from the freedom to redesign the technical solution within the limits of current technological possibilities – the designer should be as open-minded as possible in finding out what possibilities there are to use “teleportation”, “magic” or “zen” in creating a design that takes 0 steps to achieve the Goal.
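As a starting point, such a DSL could look something like the following sketch, written as an internal Python DSL. Everything here – the `Persona` and `Goal` classes, the `wants` method, the example persona – is my own invention, purely to illustrate the idea:

```python
# Hypothetical sketch of a Persona/Goal DSL as internal Python code.
# Goals record outcomes and success criteria, not workflows, leaving
# the design open for the designer to re-invent.
from dataclasses import dataclass, field

@dataclass
class Goal:
    description: str   # the outcome the persona wants
    success: str       # how we know the goal was achieved

@dataclass
class Persona:
    name: str
    background: str
    goals: list = field(default_factory=list)

    def wants(self, description, success):
        self.goals.append(Goal(description, success))
        return self    # allow chaining several goals

anna = Persona("Anna", "freelance accountant, invoices 20 clients/month")
anna.wants("get this month's invoices out",
           success="all 20 clients invoiced before the 1st")

print(anna.goals[0].description)  # get this month's invoices out
```

Note that the goal says nothing about screens or buttons; that is exactly the point, since the workflow is what the designer should be free to re-invent.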

Published by:

Jeff Sutherland on Scrum – if you follow the flight plan you will be taken down

Video

Coding Management Software Engineering

Jeff Sutherland explains how Scrum originated.

Key to the success: Make work visible

Every morning, there’s a bullet coming at you with your project’s name on it. If you follow the plan, you will be taken down. Most of the project managers don’t get out of the way. 84% of IT projects are failures.

In the daily meeting we need to debate which item in the sprint backlog to implement next: which component should we touch to cause the biggest impact in the system, so that a new capability emerges? The minimal change that pushes the capability forward.

The whole team needs to know the architecture of the system, and they all need to argue about where to touch the system to systematically produce the feature in the shortest time possible.

Conway’s Law: the communication structure of the organization is reflected in the system architecture – you need to create an object-oriented organization.

Jeff Sutherland

Jeff Sutherland on Scrum

Published by:

Tee.do new frontend released

Software Engineering Start-up

Today we released a new front-end for our Lean Task Management Tool Tee.do! The new front-end utilizes the latest and most modern technologies such as AngularJS, Twitter Bootstrap, HTML5 and semantic mark-up, and it is tested using the Jasmine BDD test suite. Check it out at

http://tee.do

There are some new features, too:

  • There are two new detail views. First, the “Detail beside the overview” opens on the right side of the display every time you click on a task.
  • We moved the editing of the task description and other information to the detail view (previously they were direct-manipulated on the task itself, but not anymore).
  • If you double-click on a task, a full-screen detail opens.
  • The add-pane is always visible and hovers at the bottom of the page.
  • As a bug fix, the description field is now much larger, so you can actually read & edit it easily.
  • As a bug fix, you can now read and edit the Test Plan, too.

The idea of Tee.do is to provide a pure Lean task and work management tool-set for agile and lean organizations!

Published by:

Editor Fascism to promote Pair Programming

Coding Software Engineering

At PHZ.fi we have recently been listing our software engineering practices to get a better overview of how we should improve our Extreme Programming process adoption. In 2008 we managed to reach a near-perfect XP process by having 28.5/29 practices in use, evaluated by a bi-weekly self-assessment. This week I added six more practices to the list; they were previously not listed or regarded as practices, even though they were actively used.

Editor Fascism to promote Pair Programming

This is the latest addition to our process description, but it was actually taken into use already in 2005. The idea of Editor Fascism is to force all developers to use the same development environment, and especially the same text editor. When we started to use XP in 2004, the main obstacle to adopting pair programming was that each and every developer used his own editor of choice. For example, we had coders who preferred vim, others who were emacs fanatics; I like pico/nano and Textpad (the only editor I know of that can open a 1GB file in a second without crashing). Today we have people using Aptana Studio, Netbeans, etc. Anyway, the picture is clear: it is difficult to pass the keyboard to your pair if he doesn’t know vim commands, or if the vim coder doesn’t know Netbeans shortcuts. While it would be nice to learn to use all the editors, I thought we would get a better ROI by investing in the standardization of routine tasks, so that we can focus our learning energy on more complicated, value-adding activities (such as test automation).

Our Editor Fascism currently means that all office development machines used for Pair Programming should have Eclipse installed and properly configured with all required plugins. We should have the debugger working, and a common configuration loaded with coding-convention settings, auto-format, templates, common keyboard shortcuts, etc. All programming must be done using Eclipse; there are no alternatives. Period.

If you want to use another editor, you can’t. That’s it. And if you don’t like that, well, that’s why we call the practice Editor Fascism :)

In 2005 we quickly learned that this was a very effective way to promote pair programming; at the very least it abolished most of the technical and practical obstacles. Secondly, I have noticed as a manager that the productivity of the team has simultaneously increased, since everybody is using an advanced IDE instead of a basic text editor. Recently I have also been thinking about Zero Quality Control, and Eclipse is very good at providing the quickest possible feedback cycle to prevent defects, via immediate in-line syntax error warnings. With a plain text editor it takes a minute or two to get the same feedback from the compiler, the browser or the server, which grinds down development productivity on the small scale.

Published by:

Jidoka Error Recovery

Software Engineering Usability

Recently I’ve been working on two legacy projects that contain a substantial amount of stinking code. Actually, the reason I’ve been assigned the projects might be that the code bases have become unmaintainable for the previous developers (who seemingly were selected for their low cost and, consequently, low skill). However, there is never a project you couldn’t learn something from, notwithstanding the otherwise poor coding practices in use.

On the second project I have been wondering about the overall error management philosophy in use. Although the code is full of duplication, long methods and security vulnerabilities, the error management seems to have been written by someone who wants to prevent errors from being communicated to the user until the last possible moment. Although under the hood there might be (and are) multiple exceptions arising, for example from missing server connectivity or erroneous SQL, the system uses a multi-layered try-catch structure to prevent any errors from being shown directly to the user (though with the most serious bugs even that is not enough). It makes me wonder about the organizational culture of a software company that pushes its low-skilled coders to hone their error management to the extreme rather than keeping up the quality in the first place…

Anyway, since the approach to error management in this project is so different (I’ve always happily thrown errors at the users), I thought there might be something to learn from it. From the user experience (UX) point of view, things actually seem to go rather smoothly, since the system almost never seems to have any problems, at least on the surface. As you learn the system more deeply, you start to notice that the results produced are not quite right, but it takes months and deep domain-level expertise to notice the problems. I thought that this would generally be a better approach to error management than just throwing everything at the user, who might not know (unless he is a programmer and a system admin) what to do about the errors. In fact, the error messages I’ve traditionally written are more related to debugging than to giving the user useful information on how to recover from the problem.

In the end there are many non-debugging-related problems that might need user involvement. Typically these kinds of exceptions relate to the external connections and environment of the software rather than its internal operations (where you need debugging messages). For example, your Internet connection or database might be down, which cannot be fixed by the developers or the software alone. What, then, would be a better approach to error messaging than the extremes of either throwing errors directly at the user, or suppressing them in all cases and giving no information about the problems?

Jidoka 23 Steps of Autonomation

Shigeo Shingo has described 23 stages of autonomation, i.e. how far a system can manage errors on its own. On the first level the system does not detect or react to any errors, but needs a human operator to constantly monitor it for irregularities. On the highest level of automation, a system can both detect and fix errors by itself, continuing operations and minimizing the need for human involvement.

Quote from Wikipedia: “Jeffrey Liker and David Meier indicate that Jidoka or ‘the decision to stop and fix problems as they occur rather than pushing them down the line to be resolved later’ is a large part of the difference between the effectiveness of Toyota and other companies who have tried to adopt Lean Manufacturing. Autonomation, therefore can be said to be a key element in successful Lean Manufacturing implementations.”

Thus it seems that a better way to manage errors would be Jidoka-style error recovery. For a computer program, detecting problems is usually quite easy, using a try-catch statement. The difference comes from what to do next. The traditional options are to pass the error forward to the next level (the user), to suppress it, or to log it for debugging and for sysadmins to fix later.

Shingo suggests that, when feasible and cost-effective, the system should try to repair itself and recover from the detected error. One bug that I’ve recently been fixing relates exactly to this. It occurs only in the rare situations when the client software loses its connection to the server. The try-catch statements detect the situation, and the recovery process includes passing the input back to the user. The problem is that although by quick inspection the return value “looks” correct, it is missing vital added-value information provided by the server. In addition, the recovery process introduces a (duplication) bug. Thus the recovery process is both erroneous and recovering wrongly. Initially, when fixing the problem, I thought it would be enough just to fix the bug I was assigned to fix. However, when writing automated test cases, I noticed that since the recovery process was also malfunctioning, a different approach was needed than “seemingly recovering” while not actually recovering from the missing server connection.

How could the missing server connection be remedied? I was thinking of a few approaches. Firstly, the client-server connection doesn’t need to be synchronous; an asynchronous queue and messaging system could actually handle the recovery better. A monitoring system should be set up to notify the system administrators of missing DB or server connectivity, or other environmental problems. The system could queue (and not block) the messages until the environment has been restored. The particular situation where the issue arises is actually development work while commuting without an Internet connection; such development-environment issues could also be remedied by using mock services that simulate an operational server.
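A minimal sketch of this Jidoka-style recovery idea: on a failed send, do not show the raw error to the user; queue the message for later delivery, and escalate to the admins only if that recovery step itself fails. All the names here (`send`, `flush`, `notify_admins`) are my own illustration, not the project’s actual code:

```python
import queue

# Messages waiting for connectivity to return; a bounded queue so that
# the fallback path (admin notification) can actually trigger.
outbox = queue.Queue(maxsize=100)

def notify_admins(message):
    print("delivery failed, admin attention needed:", message)

def send(message, transport):
    """Try to deliver; recover from connection errors by queueing
    instead of passing the exception on to the user."""
    try:
        transport(message)
    except ConnectionError:
        try:
            outbox.put_nowait(message)   # recovery: deliver later
        except queue.Full:
            notify_admins(message)       # recovery failed: escalate

def flush(transport):
    """Called by a monitor once connectivity has been restored."""
    while not outbox.empty():
        transport(outbox.get_nowait())
```

With a transport that raises `ConnectionError`, `send()` silently queues the message; a later `flush()` with a working transport delivers it, and the user never sees the outage unless the queue itself overflows.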

Facebook has actually built an automatic remedy system for its infrastructure, called FBAR.

Conclusion

The original idea for the automatic recovery came from refactoring legacy code with automated tests, so we support using the approach in all projects – automated testing surfaces issues that are otherwise easily bypassed. I also find the idea of automatic recovery important from both the User Experience and the error-proofing (Six Sigma) points of view. When you catch an exception, do not pass it forward or suppress it, but initiate a recovery process that tries to fix the situation. The remedy process can, for example, involve an asynchronous messaging system, monitoring, or mocks. The users and admins should be notified only when the recovery process also fails.

Published by:

Understanding the WIP Limit and Impediments in Scrum

Software Engineering

Scrum and Agile software development are actually based on Lean production management and queueing theory, and thus have a mathematically proven basis. Understanding the basic queueing-model concepts leads to a better understanding of Scrum practices. Historically, Lean manufacturing was based on Frederick Taylor’s work on Scientific Management in the 1910s, Henry Ford’s mass production, and the Toyota Production System of the 60s. In Computer Science the Lean principles seem to lag manufacturing by 20 years, but now it’s time to wipe the dust off the 80s Lean books (like The Goal).

Why should WIP be limited?

The key adaptation that agile software engineering methods make to traditional Lean is the introduction of Iterations or Sprints to reduce the Work In Progress (WIP). While in manufacturing the work pieces are constant and numerous, in software engineering task sizes can vary wildly from half an hour to several months. The idea of iterations is to detect too-large tasks and split them into smaller, more manageable pieces. Scrum uses estimation methods like Planning Poker and the Sprint Planning Meeting to detect stories that are too large to fit in one sprint; a too-large story should be split into two smaller ones. The key concept that is often missed is Project Velocity, the number of tasks completed in the previous sprint. If you got 6 stories done previously, you should WIP-limit the backlog for the next sprint to 6 stories (or story points). If the practices of WIP-limiting the backlog by Project Velocity and splitting too-large tasks are taken lightly, the danger is that the team fails to deliver anything at all (I’ve actually seen it happen).
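The velocity-based WIP limit on the backlog can be sketched in a few lines of Python (the function and the example data are illustrative, not from any real tool):

```python
def plan_sprint(backlog, velocity):
    """Take stories from a prioritized backlog until last sprint's
    velocity (in story points) would be exceeded."""
    planned, points = [], 0
    for story, estimate in backlog:      # backlog is sorted by priority
        if points + estimate > velocity:
            break                        # WIP limit reached
        planned.append(story)
        points += estimate
    return planned

# Last sprint completed 6 points, so commit to at most 6 points now.
backlog = [("login", 3), ("search", 2), ("export", 1), ("reports", 5)]
print(plan_sprint(backlog, velocity=6))  # ['login', 'search', 'export']
```

The oversized "reports" story does not fit; in Scrum terms, that is the signal to split it into smaller stories rather than to overcommit.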

The Optimum WIP Limit = 1

A mathematical result known as Little’s Law applies to Scrum WIP limitation, and it suggests that the optimum WIP limit is 1 task per coder (or tester, or analyst):

L = λ W, where

L = WIP (number of tasks in a system)
λ = Project Velocity (arrival rate of tasks or throughput)
W = average time to complete one task (cycle time)

It’s maybe more intuitive to relabel the formula as

Cycle_Time = WIP / Velocity

This has some profound implications, since we can see that

  • if WIP = 4, the Cycle Time to complete one task is Cycle_Time = 4 / Velocity.
  • however, if we limit the WIP = 1, the Cycle time is Cycle_Time = 1 / Velocity, which is four times smaller than when WIP=4.

Little’s Law says that the optimum WIP is 1 if you want tasks completed in the fastest possible time. In other words, you should do only one task at a time for maximum performance; multi-tasking makes each individual task slower to complete. If you have two coders, the WIP limit can be 2. Having more than one coder leads to interesting increases in working efficiency that can be modeled by the Erlang blocking formula (I’ll write more on this later).
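The arithmetic above is easy to check with a toy Python illustration (the velocity figure is made up):

```python
def cycle_time(wip, velocity):
    """Little's Law rearranged: W = L / lambda,
    i.e. Cycle_Time = WIP / Velocity."""
    return wip / velocity

velocity = 2.0  # tasks completed per day (made-up figure)
print(cycle_time(4, velocity))  # 2.0 days per task when juggling 4 tasks
print(cycle_time(1, velocity))  # 0.5 days per task when single-tasking
```

The throughput is the same in both cases; what WIP = 1 buys you is that each individual task flows through four times faster.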

Impediments are not WIP

Another concept that is widely misunderstood (or not used) is the Impediment: a blocker that prevents you from proceeding with a task. It has to be said that almost no task management software supports managing blockers properly, except our pure Lean task management application http://tee.do , which was specifically designed to support the Lean process. For proper management of the WIP limit and for maximum performance, impediments must be removed from the WIP queue and placed in a separate queue, or state. When a coder stumbles into a blocker (e.g. lack of specification, servers being down, etc.), he should tag the task as an Impediment and start working on the next highest-priority task on the TODO backlog.

The Scrum Master’s key top-priority task is actually the daily fight to resolve blockers as quickly as possible: getting the customer to provide more accurate specifications, buying licences, fixing the version control, etc. This is actually the same process as managing the Critical Path in traditional project management, except that the problems are not analyzed in advance but reacted to as they occur (maybe there would be room for risk management planning in agile, too).

By separating WIP tasks from blocked tasks, the WIP limit can be adhered to and the maximum performance of the team maintained. When an impediment is resolved, the task should be placed back on top of the TODO backlog as the highest priority, so that the task currently in WIP can still be completed in minimum cycle time. In some situations blocked tasks are of such high priority that they need to be worked on immediately when the impediment is resolved, but this reduces total efficiency and increases cycle time.
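This separation of blocked tasks from WIP can be sketched as a small state model. The class and method names below are my own hypothetical illustration, not the tee.do API:

```python
# Hypothetical sketch: blocked tasks leave the WIP queue so the
# WIP limit (here, 1 task) keeps flowing.
class Board:
    def __init__(self):
        self.todo, self.wip, self.blocked = [], [], []

    def start_next(self):
        """Pull the highest-priority TODO task into WIP (WIP limit 1)."""
        if not self.wip and self.todo:
            self.wip.append(self.todo.pop(0))

    def block(self, task, reason):
        """Tag the task as an Impediment and free the WIP slot."""
        self.wip.remove(task)
        self.blocked.append((task, reason))
        self.start_next()                  # keep working: pull next task

    def resolve(self, task):
        """A resolved impediment goes back on top of the TODO backlog."""
        for item in list(self.blocked):
            if item[0] == task:
                self.blocked.remove(item)
                self.todo.insert(0, task)  # highest priority

board = Board()
board.todo = ["specify API", "build UI"]
board.start_next()
board.block("specify API", "waiting for customer")
print(board.wip)   # ['build UI']
board.resolve("specify API")
print(board.todo)  # ['specify API']
```

Note that `resolve` does not push the task straight back into WIP: it waits at the top of TODO so the task currently in progress can finish in minimum cycle time.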

Reducing Waste by Releasing Often

The WIP limit is actually one key method of reducing waste in Total Quality Management. The idea is to minimize the working capital tied up in unfinished work. In manufacturing this means reducing the size of the warehouse; in coding it means releasing often and using methods like Continuous Integration. A release cycle of one year means paying the coders’ salaries for 12 months without having the system in productive use (a waste that can amount to hundreds of thousands or more). At the other end of the spectrum is Kanban, releasing upgrades to the software several times per day, immediately when a task is completed and tested (reducing the waste to only tens or hundreds of EUR/USD).

Summary

To understand the practices of the Agile ways, it is beneficial to know the basic principles of queueing theory. Little’s Law suggests that you should limit your work in progress to one task at a time per coder: to achieve maximum productivity, do only one task at a time. It is also crucial to understand that if the progress of a task is blocked, it should be removed from the WIP list, and the Scrum Master should resolve the impediment as quickly as possible. The maximum total efficiency of a software engineering process can be achieved by applying the same principle on the large scale too, by releasing each feature immediately once it has been completed, even several times per day.

Published by: