Wednesday, December 29, 2010

Cognitive Biases in Agile Process, Agile Leaders, Agile Practitioners, and Agile User Group Participants

Recently I have been studying cognitive biases. I soon began to wonder: if a person or group of people holds certain biases and composes some method or process, are those biases passed on to the process, and are they evident in it?

For reference I will use Wikipedia's list of cognitive biases.

Also, I will use Wikipedia's entry for Agile Software Development and the Agile Manifesto.

Of course anything I report will be affected by my own personal biases, as with any report, but this should not cause one to avoid the task; rather, it should lead one to recognize the reality that human bias manifests itself.

Anchoring – the common human tendency to rely too heavily, or "anchor," on one trait or piece of information when making decisions.

The Agile Leaders stated their opinion that software development had become heavily anchored to processes, tools, comprehensive documentation, contract negotiation, and following a plan.

Agile Leaders recognized the value of these "anchors" but proposed that such biases may actually have adverse effects in software development.

Agile processes reflect the belief of the Agile Leaders, manifesting a shift from these traditional anchors by placing individuals, working software, customer collaboration, and response to change on one side of the balance and the traditional anchors on the other, with the scale tipped in favor of the new agile values.

Bandwagon effect – the tendency to do (or believe) things because many other people do (or believe) the same. Related to groupthink and herd behavior.

In my opinion the majority of participants in any software process, traditional or agile, are those on the bandwagon. Bandwagon participation may be justified by an authority bias, in that one will say, "Almost everyone in the industry is following this process, and the process has been defined and recommended by several industry leaders." This opinion coincides with my experience observing company efforts to adopt agile processes.

Bias blind spot – the tendency to see oneself as less biased than other people.

I believe that many agile practitioners and agile user group participants suffer from the blind spot bias.

Confirmation bias – the tendency to search for or interpret information in a way that confirms one's preconceptions.

Many arguments, contentions, and heated debates between different schools of thought concerning software development have their premises built upon evidence gathered specifically to confirm one's preconceptions. If someone finds a success story from a group that executed a waterfall approach to software development, does that make the success transferable? The same goes for an agile shop.

Distinction bias – the tendency to view two options as more dissimilar when evaluating them simultaneously than when evaluating them separately.

During the early years of the agile reformation many tried to contrast the new agile methods with traditional methods by showing them to be very dissimilar. In my opinion the similarities of all software processes are greater than their dissimilarities.

Hyperbolic discounting – the tendency for people to have a stronger preference for more immediate payoffs relative to later payoffs, where the tendency increases the closer to the present both payoffs are.

This bias is particularly interesting in the Agile community and deserves in-depth treatment on its own. Briefly, though, I notice that agile practitioners weigh immediate "everything" as preferred. For those who recognize such behavior as a bias, it can cause conflict during planning meetings and discussions.

Illusion of control – the tendency to overestimate one's degree of influence over other external events.

Traditional software process seemed to suffer greatly from the illusion of control. For me, the recognition of this bias was one of the main motivators to try to figure out a better way to develop software. Big Upfront Design wasn't working on large projects. Therefore I began to lay the groundwork for alternative approaches and to identify methods and practices that seemed to contribute to the issues. I remember the CCB, the Change Control Board, where all changes had to go through committee. When I started reading about Extreme Programming I recognized some common concerns and was further intrigued. (Initially I didn't see how XP worked together and started to write a paper against the process, but as I gave real effort to creating real examples of how XP principles didn't apply, I soon found that they did.)

Irrational escalation – the phenomenon where people justify increased investment in a decision, based on the cumulative prior investment, despite new evidence suggesting that the decision was probably wrong.

With heavy processes I have experienced this irrational escalation bias many times. "We have hundreds of pages of documentation and we just can't throw out all of that valuable work!"

I see the same with code: "We have tens of thousands of lines of code and we cannot afford any kind of change to that base."

Agile processes have offered many approaches to the avoidance of this bias: refactoring, test first, continuous builds, and many others.

Neglect of probability – the tendency to completely disregard probability when making a decision under uncertainty.

Agile processes speak directly to uncertainty and probability. "You Ain't Gonna Need It" (YAGNI) is an excellent example: it weighs the low probability that speculative functionality will ever be needed against the certain cost of building it now.

Normalcy bias – the refusal to plan for, or react to, a disaster which has never happened before.

Sometimes agile practitioners are accused of the normalcy bias. The mantra "Do the simplest thing that works" leads people to believe that there is no planning for disaster. Some people may choose not to plan for disaster, but I do not know of any software process that says not to be concerned with the potential for disaster.

Outcome bias – the tendency to judge a decision by its eventual outcome instead of based on the quality of the decision at the time it was made.

I believe that agile processes are outcome biased and that the Agile Leaders recognized this bias and therefore recommend short delivery cycles.

Semmelweis reflex – the tendency to reject new evidence that contradicts an established paradigm.

I believe that the adoption of any process may be affected by the Semmelweis reflex. Just as groups that practice traditional software processes have had difficulty accepting agile processes, groups that practice agile processes will have difficulty accepting some other process.

Status quo bias – the tendency to like things to stay relatively the same (see also loss aversion, endowment effect, and system justification).

Any individual or group of people may have the status quo bias. Sometimes persons dealing with others who are rejecting some proposed change immediately label the others as suffering from status quo bias. I didn't know what the bias of labeling others with the most commonly known biases is called, but it definitely exists. This labeling may close communication paths and make change more difficult. (I have found it: the fundamental attribution error, also called correspondence bias or attribution effect.)

Forward Bias - the tendency to create models based on past data which are validated only against that past data.

This bias is tied to estimation. Someday I will have to give thought to "yesterday's weather" as compared to forward bias and the gambler's fallacy.

Primacy effect – the tendency to weigh initial events more than subsequent events.

Traditional processes often suffer from the primacy effect. Agile processes are not immune from the primacy effect either.

In the context of source code, the primacy effect is very real. The abstract idea of code mass may be a result of the primacy effect.

Disregard of regression toward the mean – the tendency to expect extreme performance to continue.

I feel that some advocates of new processes intentionally disregard the regression toward the mean. I have particularly noticed this in various implementations of SCRUM, where the idea is that performance increases should be expected over several months as the practitioner improves in their application of SCRUM.

Stereotyping – expecting a member of a group to have certain characteristics without having actual information about that individual.

I feel that this happens too often in the areas of "swarming" and "cross-functional teams".

Conclusion

I believe that biases are in full effect in the Leadership of any group and that those biases will be reflected in their proposed methods and processes.

Agile Leadership recognized various biases reflected in traditional processes and addressed some of those biases.

Agile practitioners and Agile user group participants manifest their particular biases as individuals and other biases associated with groups.

Wednesday, September 29, 2010

What is "Software Seasoning"?



I have been reading an interview with the late Hiromu Naruse entitled "What is 'Automotive Seasoning'?". Naruse was known as a "meister of automobile manufacturing". Naruse compares the inception, creation, and completion of an automobile to that of preparing a culinary dish. For me the analogy is clear and meaningful. I love automobiles and I drive as many different types as possible. It doesn't matter if the automobile is a Honda Civic or a BMW 3 series, I enjoy "tasting" each vehicle and judging if the automobile was satisfying in its role. 


Is there flavor to software products? I believe so. I will follow the flow of Naruse's interview and compare it to software. 


The performance of the software, the minimum system requirements, the list of features, and other "specs" of the software are simply the ingredients. Having all of the "right" features does not determine the quality of the software or whether it will impress the users. The goal of "software seasoning" is to provide the customer with the optimal product. Just as there are many different types of cars made for many different purposes, and many different dishes, there are many different types of software: text editors, games, shells, windowing environments.


The flavor of the software should be adjusted to its specific characteristics. What are software characteristics? 


Two important ones are "flow" and "use". 


The software will have a purpose for which it is used and the work will progress with a certain flow. Complex software will have "inner" activities that will have their own flow. In that sense software can be similar to an entire meal instead of just a single dish. Each "inner" activity would be like a course of the meal. Refactoring software is not "seasoning" software. The seasoning of software is centered around how the user feels about the product. The fine tuning or seasoning of software results in true user satisfaction. 


A software user's experience can be changed greatly by moving an item in the "flow" or positioning a control in a different location. Naruse describes how the quality of an automobile's ride is related to the "longitudinal G-force". The quality of the software user's experience can be enhanced by such well-known techniques as using multiple threads to complete the initialization of the software, giving the user the ability to interact with portions of the software sooner while other portions are still loading. I relate this to how the courses of a meal are delivered. While you are eating the soup the salad is being prepared. If you ordered your meal and then had to wait until all the courses were prepared before any course was delivered, the overall experience would be tainted by the initial long wait. Everyone knows about using multiple threads for initialization, and since we all know about that practice it makes an excellent example. When the technique first came into practice, those that used it had a more seasoned product than those that did not. Now the practice is common to all and therefore not interesting anymore. It is up to you to find new ways to season your software.
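
As a minimal sketch of that staged-initialization idea (the component names here are hypothetical, invented only for illustration): the essential piece loads first so the user can begin working, while heavier pieces finish on background threads.

using System;
using System.Threading.Tasks;

class AppStartup
{
    static void Main( string[] args )
    {
        // Small and fast: the user can begin working immediately.
        LoadCoreEditor( );

        // Heavier pieces continue loading on background threads (.NET 4 tasks).
        Task spellCheck = Task.Factory.StartNew( ( ) => LoadSpellChecker( ) );
        Task templates = Task.Factory.StartNew( ( ) => LoadTemplateGallery( ) );

        Console.WriteLine( "Ready for input while the extras load..." );

        // In a real UI these would simply complete in the background.
        Task.WaitAll( spellCheck, templates );
    }

    static void LoadCoreEditor( ) { }
    static void LoadSpellChecker( ) { }
    static void LoadTemplateGallery( ) { }
}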


Naruse makes an interesting observation, "most people cannot really tell the difference between high-end brand clothes and relatively inexpensive clothes simply by looking at them, but when worn, the differences become apparent." 


Looking at feature lists and screenshots is not sufficient to determine the flavor of a software product. Usage allows the difference to become apparent. 


Naruse also points out that we get tired of food that tastes too good. That is interesting. Do people get tired of software that is too good? I can think of examples where this is the case. I will leave that as an exercise for you to see if you agree. 


True flavor comes out after years of use. This is an interesting point. Software changes quickly and often very drastically. Naruse says, "It has been commonly said that people get tired of a beauty after three days. With a car, as in the case of my dear wife, the true flavor comes out after years of being together, through thick and thin. As with one’s spouse, it is the odd imperfection that gives a car its unique character and appeal." 


If you have ever participated in a major change to a well-established software product, you know there is often a quick and vocal expression of dissatisfaction with the new flavor of the software. Removing all imperfections does not create quality. I like McDonald's fries because of their particular "over" saltiness. I cannot sit down and eat McDonald's fries all day long because they are too salty for that. But they are "just right too salty" for certain occasions! I also like the fries from "Five Guys" and from "In N Out", but if you combined all three of these fries to make the perfect french fry, do you think you would succeed? In other words, there are times when I like the salty quality of McDonald's fries.


Naruse, again, makes another very interesting observation: "When adding seasoning, it is necessary to determine one’s own flavor. Even if you were to conduct a survey and ask customers what kinds of flavor they want, you wouldn’t find the answer there. Rather, there are two possible questions that you could ask customers. Does it taste good or bad? Or, do you want to eat it again or not? This is because customers are not professionals, and if you increase or decrease the salt according to customer requests, the flavor will gradually become peculiar. There is no sense in seeking a middle of the road taste that practically no-one would dislike." This is interesting because many software processes hold that continuous customer input is the only way to arrive at a quality product. I have studied the development of the iPod and found that the product was kept secret and did not use continuous customer involvement. Are there lessons here? Can a software process be developed that will produce software with qualities that appeal to users, such quality that the user will spend their money to own the product? It is arguable that all software processes try to lay some claim in that area.

Naruse states in the interview, "At one point, there was an attempt to quantify my know-how and create a manual. In the end, however, it didn’t turn out well. This is because know-how is not the same as knowledge. Results such as 'in this type of situation, I used this kind of countermeasure' are no more than solutions for specific problems. What is important is asking how the solution was reached, or why something was done the way it was. This is what we call technique or craftsmanship. Craftsmanship is not handed down through education. Things that are learnt from others passively will never be useful. What is necessary is 'nurturing.' In other words, you will not learn unless you feel that you must do something and want to do something and have the desire to learn and to take from others. Craftsmanship is handed down in implicit knowledge." 


This too is very interesting when considering software processes, the teaching of a particular software process, and the expectation of a particular quality result from the process. Is your method of instructing developers on how to write quality software more like a manual of knowledge or more like a nurturing system of sharing know-how? 


Naruse states that the racetrack creates the flavor of cars. "Races are the best forum for handing down craftsmanship and nurturing human resources. Unexpected things happen all the time and things that must be done out of necessity occur constantly. It is necessary to skillfully and accurately solve problems with limited time and tools. These types of things do not happen within a computer, but happen right before our eyes. It is under these extreme conditions that we focus entirely on winning the race and work as hard as we possibly can. The word “can’t” does not exist at the racetrack. This type of experience builds our character, and builds cars. Both the drivers and engineers focus their five senses to engage in a dialogue with the car under the extreme conditions of the race. It is through this dialogue that the perfect flavor becomes visible" - Naruse. 


Do you develop software at the racetrack or in the shop? Did you notice that dialog is the key? Dialog between professionals, between craftsmen, who bring skill to the situation. The approach is based on "go and see". See it in use. Push it in use. Break it in use. Go and see; don't imagine it, don't talk about it, don't have meetings to talk about it, but go and see.


Finally, there has to be someone who is responsible for the "flavor" of the software. Until that one person gives the okay, the software cannot be sold. This statement might not be readily accepted or understood by most. Some might quickly argue that Naruse is Japanese and "they" are different. Maybe Japanese are different from Americans, and Americans are different from Europeans, and maybe they are not. I feel that we are more alike than different. At least those of us who have come to recognize the difference between a worker and a craftsman, a stone cutter versus a cathedral builder. There can be democracy in a software development team: the freedom to fine-tune your work area, your work process, and the things for which you have responsibility. But ultimately the flavor being expressed will be that of one chef. If you are a cook but want to act like a chef you might become frustrated. If you want to be a chef then be one. Do not complain if you are a cook and treated as one.

The software that many work on will have no particular taste and will require no particular seasoning. But those lucky developers who work on software that is valued and sought after, that brings some modicum of satisfaction to the user, should be aware of the need for "flavor", work together like the racing team through dialog to hand down craftsmanship, and through "going" and "seeing" let the correct flavor be made manifest.


Tuesday, August 10, 2010

Covariance and Contravariance and Generic Delegates

With .NET 4, generic delegates can now use the modifiers "in" and "out".

"Out" is associated with return values and covariance.
"In" is associated with input parameters(method arguments) and contravariance.

I use the following classes and relationships throughout this blog post.


class A { }
class B : A { }
class C : B { }

Object "is a" Object => true.
Object "is a" A => false.
Object "is a" B => false.
Object "is a" C => false.

A "is a" Object => true.
A "is a" A => true.
A "is a" B => false.
A "is a" C => false.

B "is a" Object => true.
B "is a" A => true.
B "is a" B => true.
B "is a" C => false.

C "is a" Object => true.
C "is a" A => true.
C "is a" B => true.
C "is a" C => true.


Covariance


//-----------------------------------------------
public delegate R DCovariant<out R>( );

//-----------------------------------------------
//Methods that match the Covariant Signature

public static object CovariantObjectMethod( )
{
    return new object( );
}

public static A CovariantAMethod( )
{
    return new A( );
}

public static B CovariantBMethod( )
{
    return new B( );
}

public static C CovariantCMethod( )
{
    return new C( );
}


Delegate "DCovariant" is an example of using "out" to specify a covariant delegate. Covariance is concerned with the return type of the delegate.

A covariant delegate can be assigned any method that matches the signature of the delegate and whose return type satisfies the "is a" relationship with the return type of the delegate.


DCovariant<object> dCovObj = CovariantObjectMethod;
DCovariant<A> dCovA = CovariantAMethod;
DCovariant<B> dCovB = CovariantBMethod;
DCovariant<C> dCovC = CovariantCMethod;

dCovObj = CovariantAMethod;
dCovObj = CovariantBMethod;
dCovObj = CovariantCMethod;

//dCovA = CovariantObjectMethod; //wrong return type.
dCovA = CovariantBMethod;
dCovA = CovariantCMethod;

//dCovB = CovariantObjectMethod; //wrong return type.
//dCovB = CovariantAMethod; //wrong return type.
dCovB = CovariantCMethod;

//dCovC = CovariantObjectMethod; //wrong return type.
//dCovC = CovariantAMethod; //wrong return type.
//dCovC = CovariantBMethod; //wrong return type.


Consider the declaration:
DCovariant<object> dCovObj

This delegate's return type is Object, so any method that matches the delegate signature and returns something that "is a" Object may be assigned to dCovObj.

DCovariant<A> dCovA can be assigned any method that matches the delegate signature and returns something that "is a" A.

DCovariant<B> dCovB can be assigned any method that matches the delegate signature and returns something that "is a" B.

DCovariant<C> dCovC can be assigned any method that matches the delegate signature and returns something that "is a" C.

Contravariance


//-----------------------------------------------
public delegate void DContravariant<in T>( T t );

//-----------------------------------------------
//Methods that match the Contravariant Signature

public static void ContravariantObjectMethod( Object o )
{
}

public static void ContravariantAMethod( A a )
{
}

public static void ContravariantBMethod( B b )
{
}

public static void ContravariantCMethod( C c )
{
}


Above, I declare a contravariant delegate using the "in" modifier and four methods that match the signature of the contravariant delegate.

Below I have declared and assigned four DContravariant delegates. The types used are Object, A, B, and C.


//Contravariant
DContravariant<object> dContravarObj = ContravariantObjectMethod;
DContravariant<A> dContravarA = ContravariantAMethod;
DContravariant<B> dContravarB = ContravariantBMethod;
DContravariant<C> dContravarC = ContravariantCMethod;

Below I have declared and instantiated an object of each type; I then invoke each delegate with each instance.

Object o = new Object( );
A a = new A( );
B b = new B( );
C c = new C( );

dContravarObj( o );
dContravarObj( a );
dContravarObj( b );
dContravarObj( c );

//dContravarA( o ); //Argument 1: cannot convert from...
dContravarA( a );
dContravarA( b );
dContravarA( c );

//dContravarB( o ); //Argument 1: cannot convert from...
//dContravarB( a ); //Argument 1: cannot convert from...
dContravarB( b );
dContravarB( c );

//dContravarC( o ); //Argument 1: cannot convert from...
//dContravarC( a ); //Argument 1: cannot convert from...
//dContravarC( b ); //Argument 1: cannot convert from...
dContravarC( c );

Notice that there are compile time errors where the input argument could not be converted to the proper type.

dContravarObj can accept all of the variables as arguments because they all satisfy the "is a" relationship with Object.

dContravarA can accept arguments of type A, B, and C.
dContravarB can accept arguments of type B and C.
dContravarC can accept arguments of type C only.

Contravariance and the use of "in" play their role when assigning methods with less derived parameter types to the delegate. Below are the results.


//dContravarObj = ContravariantAMethod; //no overload matches delegate...
//dContravarObj = ContravariantBMethod; //no overload matches delegate...
//dContravarObj = ContravariantCMethod; //no overload matches delegate...

dContravarA = ContravariantObjectMethod;
//dContravarA = ContravariantBMethod; //no overload matches delegate...
//dContravarA = ContravariantCMethod; //no overload matches delegate...

dContravarB = ContravariantObjectMethod;
dContravarB = ContravariantAMethod;
//dContravarB = ContravariantCMethod; //no overload matches delegate...

dContravarC = ContravariantObjectMethod;
dContravarC = ContravariantAMethod;
dContravarC = ContravariantBMethod;



Notice that a method may be assigned to a delegate if the delegate's input argument satisfies the "is a" relationship with the method's input argument.

Finally, to tie this all together, I will use the most derived class, "C", and its corresponding delegate, and assign it the method whose parameter is the least derived class, "Object".


//Invocations
dContravarC = ContravariantObjectMethod;
//dContravarC( o ); //Argument 1: cannot convert from...
//dContravarC( a ); //Argument 1: cannot convert from...
//dContravarC( b ); //Argument 1: cannot convert from...
dContravarC( c );

Even though ContravariantObjectMethod can take any of our classes as input, the delegate signature for dContravarC is DContravariant<C>, and therefore the method invocation can only accept objects that satisfy the "is a" relationship with C.

To help illustrate this, I will do the same with dContravarB, since class B is in the "middle" of the derivation chain.

dContravarB = ContravariantObjectMethod;
//dContravarB( o ); //Argument 1: cannot convert from...
//dContravarB( a ); //Argument 1: cannot convert from...
dContravarB( b );
dContravarB( c );


Notice that the delegate invocation will only accept things that satisfy the "is a" relationship with class B, which is class B and class C.
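
For comparison, and this is my addition rather than part of the example above: the built-in Func and Action delegates in .NET 4 are declared with the same modifiers (Func<out TResult>, Action<in T>), so the same rules apply without defining custom delegates. Assuming the same A, B, and C classes:

Func<C> makeC = ( ) => new C( );
Func<A> makeA = makeC; // covariance: C "is a" A

Action<A> useA = ( A x ) => { };
Action<C> useC = useA; // contravariance: a consumer of A can consume C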

Following is the complete listing of the example code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace GenericDelegateModifiers
{
    class A { }
    class B : A { }
    class C : B { }

    class Program
    {
        //-----------------------------------------------
        public delegate R DCovariant<out R>( );

        //-----------------------------------------------
        //Methods that match the Covariant Signature

        public static object CovariantObjectMethod( )
        {
            return new object( );
        }

        public static A CovariantAMethod( )
        {
            return new A( );
        }

        public static B CovariantBMethod( )
        {
            return new B( );
        }

        public static C CovariantCMethod( )
        {
            return new C( );
        }

        //-----------------------------------------------
        public delegate void DContravariant<in T>( T t );

        //-----------------------------------------------
        //Methods that match the Contravariant Signature

        public static void ContravariantObjectMethod( Object o )
        {
        }

        public static void ContravariantAMethod( A a )
        {
        }

        public static void ContravariantBMethod( B b )
        {
        }

        public static void ContravariantCMethod( C c )
        {
        }

        //-----------------------------------------------

        static void Main( string[] args )
        {
            //-----------------------------------------------
            //Covariant
            DCovariant<object> dCovObj = CovariantObjectMethod;
            DCovariant<A> dCovA = CovariantAMethod;
            DCovariant<B> dCovB = CovariantBMethod;
            DCovariant<C> dCovC = CovariantCMethod;

            dCovObj = CovariantAMethod;
            dCovObj = CovariantBMethod;
            dCovObj = CovariantCMethod;

            //dCovA = CovariantObjectMethod; //wrong return type.
            dCovA = CovariantBMethod;
            dCovA = CovariantCMethod;

            //dCovB = CovariantObjectMethod; //wrong return type.
            //dCovB = CovariantAMethod; //wrong return type.
            dCovB = CovariantCMethod;

            //dCovC = CovariantObjectMethod; //wrong return type.
            //dCovC = CovariantAMethod; //wrong return type.
            //dCovC = CovariantBMethod; //wrong return type.

            //-----------------------------------------------
            //Contravariant
            DContravariant<object> dContravarObj = ContravariantObjectMethod;
            DContravariant<A> dContravarA = ContravariantAMethod;
            DContravariant<B> dContravarB = ContravariantBMethod;
            DContravariant<C> dContravarC = ContravariantCMethod;

            Object o = new Object( );
            A a = new A( );
            B b = new B( );
            C c = new C( );

            dContravarObj( o );
            dContravarObj( a );
            dContravarObj( b );
            dContravarObj( c );

            //dContravarA( o ); //Argument 1: cannot convert from...
            dContravarA( a );
            dContravarA( b );
            dContravarA( c );

            //dContravarB( o ); //Argument 1: cannot convert from...
            //dContravarB( a ); //Argument 1: cannot convert from...
            dContravarB( b );
            dContravarB( c );

            //dContravarC( o ); //Argument 1: cannot convert from...
            //dContravarC( a ); //Argument 1: cannot convert from...
            //dContravarC( b ); //Argument 1: cannot convert from...
            dContravarC( c );

            //dContravarObj = ContravariantAMethod; //no overload matches delegate...
            //dContravarObj = ContravariantBMethod; //no overload matches delegate...
            //dContravarObj = ContravariantCMethod; //no overload matches delegate...

            dContravarA = ContravariantObjectMethod;
            //dContravarA = ContravariantBMethod; //no overload matches delegate...
            //dContravarA = ContravariantCMethod; //no overload matches delegate...

            dContravarB = ContravariantObjectMethod;
            dContravarB = ContravariantAMethod;
            //dContravarB = ContravariantCMethod; //no overload matches delegate...

            dContravarC = ContravariantObjectMethod;
            dContravarC = ContravariantAMethod;
            dContravarC = ContravariantBMethod;

            //Invocations
            dContravarC = ContravariantObjectMethod;
            //dContravarC( o ); //Argument 1: cannot convert from...
            //dContravarC( a ); //Argument 1: cannot convert from...
            //dContravarC( b ); //Argument 1: cannot convert from...
            dContravarC( c );

            dContravarB = ContravariantObjectMethod;
            //dContravarB( o ); //Argument 1: cannot convert from...
            //dContravarB( a ); //Argument 1: cannot convert from...
            dContravarB( b );
            dContravarB( c );
        }
    }
}

Monday, August 09, 2010

Covariance and Contravariance and Generic Interfaces

With .NET 4, generic interfaces can now use the modifiers "in" and "out".

"Out" is associated with return values and covariance.
"In" is associated with input parameters (method arguments) and contravariance.

Covariance

// Covariant interface.
interface ICovariant<out R> { }

// Extending covariant interface.
interface IExtCovariant<out R> : ICovariant<R> { }

// Implementing covariant interface.
class SampleCovariant<R> : ICovariant<R> { }


The generic type parameter has "out" as a modifier. The "out" declaration can be used if the type parameter is used only as a return type of interface methods and not used as a type of method arguments.



ICovariant<Object> covObj = new SampleCovariant<Object>( );
ICovariant<String> covStr = new SampleCovariant<String>( );

//covStr = covObj; //Cannot implicitly convert type ICovariant<object> to ICovariant<string>. An explicit conversion exists (are you missing a cast?)

// You can assign covStr to covObj because the ICovariant interface is covariant.
covObj = covStr;

String "is a" Object => true
Object "is a" String => false

(see : Covariance and Contravariance for Delegates in C#)

Since a string is an object, the instance of SampleCovariant<String> can be assigned to a variable of type ICovariant<Object>.
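
The empty interface above demonstrates the assignment rule but nothing more. By analogy with the "better" contravariant example later in this post, here is a hypothetical covariant interface of my own (names invented for illustration) with a method that actually returns R:

interface IBetterCovariant<out R>
{
    R GetValue( );
}

class BetterCovariant<R> : IBetterCovariant<R>
{
    private readonly R stored;
    public BetterCovariant( R value ) { stored = value; }
    public R GetValue( ) { return stored; }
}

IBetterCovariant<String> makesStrings = new BetterCovariant<String>( "text" );
IBetterCovariant<Object> makesObjects = makesStrings; // covariant assignment
Object result = makesObjects.GetValue( ); // actually returns the String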

Contravariance


// Contravariant interface.
interface IContravariant<in A> { }

// Extending contravariant interface.
interface IExtContravariant<in A> : IContravariant<A> { }

// Implementing contravariant interface.
class SampleContravariant<A> : IContravariant<A> { }


The generic type parameter has "in" as a modifier. The "in" declaration can be used if the type parameter is used only as a type of method arguments and not used as a return type of interface methods.

String "is a" Object => true
Object "is a" String => false


IContravariant<Object> contraObj = new SampleContravariant<Object>( );
IContravariant<String> contraStr = new SampleContravariant<String>( );

//contraObj = contraStr; //Cannot implicitly convert IContravariant<string> to IContravariant<object>. An explicit conversion exists (are you missing a cast?)

// You can assign contraObj to contraStr because the IContravariant interface is contravariant.
contraStr = contraObj;


Since we are dealing with arguments (input parameters) and a string is an object you can assign an instance of IContravariant <Object> to IContravariant<String>.

As I am writing this I recognize that the above statement is not clear, even to me. That is because the example code is insufficient. I based the example code on MSDN code. So, let us continue now with a better example.


interface IBetterContravariant<in T>
{
    void MyMethod( T t );
}

class BetterContravariant<T> : IBetterContravariant<T>
{
    #region IBetterContravariant<T> Members

    public void MyMethod( T t )
    {
    }

    #endregion
}


This better example now has a method whose input parameter is of the generic type T, which the interface declares with "in" because it is contravariant.


IBetterContravariant<Object> betterContraObj = new BetterContravariant<Object>( );
IBetterContravariant<String> betterContraString = new BetterContravariant<String>( );

String s = "text";
Object o = new object( );

betterContraObj.MyMethod( s );
betterContraObj.MyMethod( o );

betterContraString.MyMethod( s );
//betterContraString.MyMethod( o ); //Error: Argument 1: cannot convert 'object' to 'string'

//betterContraObj = betterContraString; //Cannot implicitly convert... error

// You can assign betterContraObj to betterContraString because IBetterContravariant interface is contravariant
betterContraString = betterContraObj;

betterContraString.MyMethod( s );
//betterContraString.MyMethod( o ); //Error: Argument 1: cannot convert 'object' to 'string'


Notice that I have instantiated a String and an Object. Before betterContraString is assigned betterContraObj I invoke the "MyMethod" method on each instance of IBetterContravariant twice, once passing a String and then passing an Object.

Notice that betterContraObj.MyMethod accepts a String or an Object and there are no compile-time errors. This is because a String "is a" Object.

Notice that betterContraString.MyMethod accepts a String; however, it does not accept an Object. This is because an Object is NOT a String.

Since betterContraObj accepts Strings (as well as Objects) it can be assigned to betterContraString.

Notice in the example code that after betterContraString is assigned betterContraObj, betterContraString still only accepts Strings as input to MyMethod. This is because betterContraString is still an interface to IBetterContravariant<String>. The interface did not change because of the assignment.

Here is all of the example source code.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace GenericModifiers
{
    interface IDefault<R> { }
    class SampleDefault<R> : IDefault<R> { }

    //---------------------------------------------------------

    // Covariant interface.
    interface ICovariant<out R> { }

    // Extending covariant interface.
    interface IExtCovariant<out R> : ICovariant<R> { }

    // Implementing covariant interface.
    class SampleCovariant<R> : ICovariant<R> { }

    //---------------------------------------------------------

    // Contravariant interface.
    interface IContravariant<in A> { }

    // Extending contravariant interface.
    interface IExtContravariant<in A> : IContravariant<A> { }

    // Implementing contravariant interface.
    class SampleContravariant<A> : IContravariant<A> { }

    //---------------------------------------------------------

    //Better Contravariant interface example

    interface IBetterContravariant<in T>
    {
        void MyMethod( T t );
    }

    class BetterContravariant<T> : IBetterContravariant<T>
    {
        #region IBetterContravariant<T> Members

        public void MyMethod( T t )
        {
        }

        #endregion
    }

    //---------------------------------------------------------

    class Program
    {
        static void Main( string[] args )
        {
            IDefault<Object> dobj = new SampleDefault<Object>( );
            IDefault<String> dstr = new SampleDefault<String>( );

            //dstr = dobj; //Cannot implicitly convert type IDefault<object> to IDefault<string>. An explicit conversion exists (are you missing a cast?)
            //dobj = dstr; //Cannot implicitly convert type IDefault<string> to IDefault<object>. An explicit conversion exists (are you missing a cast?)

            //---------------------------------------------------------

            ICovariant<Object> covObj = new SampleCovariant<Object>( );
            ICovariant<String> covStr = new SampleCovariant<String>( );

            //covStr = covObj; //Cannot implicitly convert type ICovariant<object> to ICovariant<string>. An explicit conversion exists (are you missing a cast?)

            // You can assign covStr to covObj because the ICovariant interface is covariant.
            covObj = covStr;

            //---------------------------------------------------------
            IContravariant<Object> contraObj = new SampleContravariant<Object>( );
            IContravariant<String> contraStr = new SampleContravariant<String>( );

            //contraObj = contraStr; //Cannot implicitly convert IContravariant<string> to IContravariant<object>. An explicit conversion exists (are you missing a cast?)

            // You can assign contraObj to contraStr because the IContravariant interface is contravariant.
            contraStr = contraObj;

            //---------------------------------------------------------
            IBetterContravariant<Object> betterContraObj = new BetterContravariant<Object>( );
            IBetterContravariant<String> betterContraString = new BetterContravariant<String>( );

            String s = "text";
            Object o = new object( );

            betterContraObj.MyMethod( s );
            betterContraObj.MyMethod( o );

            betterContraString.MyMethod( s );
            //betterContraString.MyMethod( o ); //Error: Argument 1: cannot convert 'object' to 'string'

            //betterContraObj = betterContraString; //Cannot implicitly convert... error

            // You can assign betterContraObj to betterContraString because IBetterContravariant interface is contravariant
            betterContraString = betterContraObj;

            betterContraString.MyMethod( s );
            //betterContraString.MyMethod( o ); //Error: Argument 1: cannot convert 'object' to 'string'
        }
    }
}

Covariance and Contravariance for Delegates in C#

.NET 4 introduces new usages for covariance and contravariance.

From MSDN:

Covariance and contravariance provide a degree of flexibility when matching method signatures with delegate types. Covariance permits a method to have a more derived return type than what is defined in the delegate. Contravariance permits a method with parameter types that are less derived than in the delegate type.

What helps me understand this variance stuff is to base it around the "is a" relationship.

class A {}
class B : A{}
class C: B{}

A "is a" B => false
A "is a" C => false

B "is a" A => true
B "is a "C => false

C "is a" A => true
C "is a" B => true

Return type of delegate signatures (Covariance)

public delegate A HandlerMethodA();

The return type of the delegate is "A".

Speaking solely about the return type in this example, any method that returns something that "is a" A can be used as the delegate method.

public static A FirstHandler() { return null; }
public static B SecondHandler(){return null;}
public static C ThirdHandler(){return null;}

All three of these methods can be used as a delegate method for delegate HandlerMethodA.

public delegate B HandlerMethodB();

HandlerMethodB signature has the return type of B. Therefore only SecondHandler and ThirdHandler can be used as methods for the delegate HandlerMethodB.
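
A quick sketch of the assignments this implies (the variable names are mine):

HandlerMethodA handlerA = FirstHandler; // returns A: A "is a" A
handlerA = SecondHandler; // returns B: B "is a" A
handlerA = ThirdHandler; // returns C: C "is a" A

HandlerMethodB handlerB = SecondHandler; // returns B: B "is a" B
handlerB = ThirdHandler; // returns C: C "is a" B
//handlerB = FirstHandler; // error: A is not a B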


Parameter types of delegate signatures (Contravariance)

Using the same classes as above, A, B, and C, we define a new delegate:

public delegate void HandlerMethodC(C c);

public static void FirstHandler(A a) { }
public static void SecondHandler(B b){}
public static void ThirdHandler(C c){}

All three of these methods can be used as a delegate method for delegate HandlerMethodC.

The reason is that a variable of type C can be passed to any of these handlers.
The delegate signature parameter "is a" delegate method parameter.
C "is a" A is true.
B "is a" A is true.
A "is a" A is true.

Consider another delegate:
public delegate void HandlerMethodB(B b);

Only methods that can accept B as the parameter type can be assigned to the delegate HandlerMethodB.

FirstHandler and SecondHandler methods may be assigned to the delegate HandlerMethodB.
FirstHandler method can accept B types as well as A and C.
SecondHandler method can accept B types as well as C.


ThirdHandler cannot be assigned to delegate HandlerMethodB because there is no overload for ThirdHandler that matches the delegate.
ThirdHandler method can only accept C types.
Since the delegate signature is of type B, third handler cannot be used.

Regardless of which methods may be assigned to HandlerMethodC or HandlerMethodB the delegate will only accept variables that "are" of the type defined by the delegate's signature.

HandlerMethodC may only be invoked with a parameter of type C.
HandlerMethodB may be invoked with a parameter of type C or B.


I recommend reading:
http://blogs.msdn.com/b/ericlippert/archive/2009/11/30/what-s-the-difference-between-covariance-and-assignment-compatibility.aspx

http://geekswithblogs.net/Martinez/articles/covariance-contravariance-and-invariance-in-c-language.aspx


Here is the code I used to better understand covariance and contravariance for delegates in C#.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace VarianceExperiments
{
    class A
    {
    }

    class B : A
    {
    }

    class C : B
    {
    }

    class Program
    {
        public delegate A CovarianceHandlerA( );
        public delegate B CovarianceHandlerB( );
        public delegate C CovarianceHandlerC( );

        public delegate void ContravarianceHandlerA( A a );
        public delegate void ContravarianceHandlerB( B b );
        public delegate void ContravarianceHandlerC( C c );

        public static A CoHandlerA( ) { return new A( ); }
        public static B CoHandlerB( ) { return new B( ); }
        public static C CoHandlerC( ) { return new C( ); }

        public static void ContraHandlerA( A a ) { }
        public static void ContraHandlerB( B b ) { }
        public static void ContraHandlerC( C c ) { }

        static void Main( string[] args )
        {
            A a = new A( );
            B b = new B( );
            C c = new C( );

            Console.WriteLine( "A is a B => {0}", ( a is B ) );
            Console.WriteLine( "A is a C => {0}", ( a is C ) );

            Console.WriteLine( "B is a A => {0}", ( b is A ) );
            Console.WriteLine( "B is a C => {0}", ( b is C ) );

            Console.WriteLine( "C is a A => {0}", ( c is A ) );
            Console.WriteLine( "C is a B => {0}", ( c is B ) );

            CovarianceHandlerA coHandler = CoHandlerA;
            A coResult = coHandler( );
            Console.WriteLine( "coResult is a A => {0}, is a B => {1}, is a C => {2}",
                ( coResult is A ), ( coResult is B ), ( coResult is C ) );

            coHandler = CoHandlerB;
            coResult = coHandler( );
            Console.WriteLine( "coResult is a A => {0}, is a B => {1}, is a C => {2}",
                ( coResult is A ), ( coResult is B ), ( coResult is C ) );

            coHandler = CoHandlerC;
            coResult = coHandler( );
            Console.WriteLine( "coResult is a A => {0}, is a B => {1}, is a C => {2}",
                ( coResult is A ), ( coResult is B ), ( coResult is C ) );

            //---------------------------------------------------------------

            ContravarianceHandlerC contraHandlerC = ContraHandlerA;
            //contraHandlerC( a ); //Error: Argument 1: cannot convert A to C
            //contraHandlerC( b ); //Error: Argument 1: cannot convert B to C
            contraHandlerC( c );

            contraHandlerC = ContraHandlerB;
            //contraHandlerC( a ); //Error: Argument 1: cannot convert A to C
            //contraHandlerC( b ); //Error: Argument 1: cannot convert B to C
            contraHandlerC( c );

            contraHandlerC = ContraHandlerC;
            //contraHandlerC( a ); //Error: Argument 1: cannot convert A to C
            //contraHandlerC( b ); //Error: Argument 1: cannot convert B to C
            contraHandlerC( c );

            //---------------------------------------------------------------

            ContravarianceHandlerB contraHandlerB = ContraHandlerA;
            //contraHandlerB( a ); //Error: Argument 1: cannot convert A to B
            contraHandlerB( b );
            contraHandlerB( c );

            contraHandlerB = ContraHandlerB;
            //contraHandlerB( a ); //Error: Argument 1: cannot convert A to B
            contraHandlerB( b );
            contraHandlerB( c );

            //contraHandlerB = ContraHandlerC; //No overload for ContraHandlerC matches delegate ContravarianceHandlerB
        }
    }
}

Friday, July 30, 2010

WIP and the Developer

Work in Progress (WIP) at the individual developer's level is the point of view of this blog post.

WIP for an individual developer (for this blog) is defined as any prioritized item in the backlog/work queue for the developer. The backlog works as any simple priority queue: take the next item with the highest priority. (Starvation of items is acceptable!) A sketch of such a queue follows.
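
Here is a minimal sketch of such a backlog (the class and member names are mine, for illustration only):

using System;
using System.Collections.Generic;
using System.Linq;

// A developer's backlog as a simple priority queue:
// always take the highest-priority item next.
// Low-priority items may never be taken (starvation is acceptable).
class Backlog
{
    private readonly List<KeyValuePair<int, string>> items =
        new List<KeyValuePair<int, string>>( );

    public void Add( int priority, string task )
    {
        items.Add( new KeyValuePair<int, string>( priority, task ) );
    }

    public string TakeNext( )
    {
        KeyValuePair<int, string> next = items.OrderByDescending( i => i.Key ).First( );
        items.Remove( next );
        return next.Value;
    }
}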

First, a manufacturing analogy.

Suppose you are on an assembly line where you attach three widgets to a gadget. You take the gadget from a bin (your backlog) and line up "widget 1", insert screw, and tighten, then line up "widget 2", insert screw, and tighten, and finally line up "widget 3", insert screw, and tighten, and then place the widgetfied gadget into a completion bin (inventory).

Could you gain performance if the process was changed to this:
Take a gadget from the bin, line up "widget 1", insert screw, and tighten, and place it in a bin. Do this for all gadgets. Then take a gadget (now with widget 1 attached) from the bin, line up "widget 2", insert screw, and tighten, and place it in a bin. Do this for each gadget until widget 2 is attached to all of them, then do the same for widget 3 and place the results in the inventory bin.

Of course the answer depends. It depends on many things.

If all three widgets use the same screw, and thus the same screw driver, then it might be faster to stay with the original approach.

If all three widgets use different screws then the following should be measured:
- How long does it take to switch screw drivers and select proper screw?
- How long does it take to reach into the bin and extract a gadget plus how long does it take to place a gadget into a completion bin?
- How long does it take to switch position to work with different screw driver, screws, and bin (backlog)?

With these measurements one can decide which approach is faster.

Back to software development.

If the developer's backlog contains work that is related then the developer doesn't need to retool. For instance, if the developer works on a specific feature set, the domain is well defined, and the developer works with a small number of tools.

If the developer has many areas of responsibility, front end, middle tier, database layer, web services, etc., the developer has to mentally retool to work in these different areas.

In this situation I suggest considering multiple queues for the single developer. The prioritization process is more complex, but the waste of retooling could be addressed. It may not be possible, but it should be considered. I wouldn't go too crazy with this. The reason is that if the code is being developed like a spike, then the domain knowledge and flow from top to bottom is continuous, and that is good.

Switching context on features is expensive as well. I personally find it more expensive than switching context on tools.

If the developer's queue is made up of features (that are not subdivided so finely that it says, make UI change, make middle tier change, and make database change) then the developer can work on the feature from start to finish, keeping the domain context fresh and active in his mind. I prefer this approach.

If two features are related, like making a horizontal hourglass chart and a vertical hourglass chart, then the only difference is layout. The developer should look at the items in his queue, take "make horizontal hourglass chart" and "make vertical hourglass chart", and change them to "make hourglass chart", "vertical layout for hourglass chart", and "horizontal layout for hourglass chart".

I have been studying the idea that limiting the number of items in the work queue (the WIP) can increase throughput. This is known as Limiting WIP. It is suggested that the time to manage items in the queue is costly and therefore if the items are limited then the cost to manage them is limited.

The cost to get something in the queue for a developer consists of:
- Identifying a customer need
- Describing it as a feature
- Estimating its cost of delivery (time, effort, etc.)
- Prioritizing it for a release
- Placing it into the developer's queue.

Once it is in the developer's queue I do not know of any significant cost for the developer to manage the queue. So, limiting WIP for a developer seems unnecessary to me. However, if items are thrown into a developer's queue that have not been through the above steps, and the burden falls on the developer to define, prioritize, or perform other tasks, then there is cost to the developer.

If the developer's queue is in constant flux there could be reason to limit WIP. If Product can't make up its mind on direction, and one day feature A is priority one, the next day feature B, the next a whole new and never-heard-of feature C, and then back to A, then there is cost for the developer. (Are process constraints the right way to address indecisiveness?) In this case a developer's queue could be limited to five items, or one iteration's worth of tasks, and once the iteration has started the queue cannot be altered. But this is process trying to mitigate a more serious problem of indecisiveness. Granted, it should be mitigated for short-term help, but it is not the solution. Maybe the limited WIP could help gather metrics of cost to show managers how expensive it is to jump from one high-priority task to another, back and forth.

But in a stable system, limiting WIP at the developer level doesn't seem advantageous to me yet. Maybe there are those that have thousands of items in the backlog. That would be a pain to manage.

Limiting WIP at the product level does seem reasonable to me. Imagine you have thought up enough features for three product releases of a desktop application that is released annually.

If all of those features are placed in a product queue and defined as follows:
- Identifying a customer need
- Describing it as a feature
- Estimating its cost of delivery (time, effort, etc.)
- Prioritizing it for a release

Features that do not make it into the current release need not, and should not, be fully defined, because things will change, and when it is time to place such a feature in the release queue much of that work will have been wasted. That is why "Agile" methods recommend using a high-level description, one sentence, something to capture the thought and cause the proper conversation to happen at a future date.

Therefore limiting WIP at the release queue seems very reasonable to me. The limit is this, only fully prepare an item for release if it is in the current release.

Software features are perishable goods. Imagine a software feature as an apple. If you worked at an apple-peeling station and your bin of apples had enough in it to keep you busy all year, I wouldn't want any of your apples after two or three weeks of work!

Thursday, July 29, 2010

Work In Progress (WIP) and Little's Law

I recently made a small writeup concerning Work In Progress (WIP). I described that throughput is the result of the amount of work to be done divided by the time it takes to do the work. I threw the writeup out yesterday when I was pointed to Little's law, which is exactly what I was defining.

www.factoryphysics.com/Principle/LittlesLaw.htm

(Note that I have had experience with line-based work since I was a child. I grew up on a dairy farm and we went from a simple stanchion barn to what is called a double herringbone with 4 stalls on each side. This was useful for two workers, but when there was only one it was too much, and we removed 1 stall from each side so that it was a double 3.)

I am still reading about Little's law, but this much I have noticed:

The key to Little's law, Inventory = Throughput × Flow Time, is consistency and lack of variability in the units of measure.
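
A quick worked example, with numbers I made up for illustration: if a team completes 4 items per week (throughput) and each item spends 3 weeks in process (flow time), then on average

Inventory = Throughput × Flow Time = 4 items/week × 3 weeks = 12 items

are in progress at any moment. The arithmetic only holds when "item" means the same thing everywhere in the line.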

Note that even using "cost" as the common unit is still difficult in software development.

Cost is unknown or varies.
Time is unknown or varies.
Complexity is unknown or varies.
Congestion is unknown or varies.
Bottlenecks are unknown or vary.

All of the above issues have been concerns of every software process and software estimation technique. But note that even the best techniques still give estimations.

Predictability is becoming (or maybe already is) the main selling point for software process. This is because if I were going to attempt to use Little's Law for software development I would have to bring software development into the realm of manufacturing and to do this I need predictability. In other words, I can do things to the software process to make it seem to fit better and then jump to the conclusion that if it fits well enough then Little's law still must hold.

If I could get every developer on my team to divide any task into equal chunks of work then I could apply Little's law. If any task, no matter the size or difficulty, can be divided, redivided, and re-redivided until it is a small and consistent chunk, then I can apply Little's law. If the cost of subdividing large features is too much, then I can argue that there should be no large features. I could argue that "no large features" really means that all features can be delivered incrementally and therefore there is no need for any large features.

All of that arguing takes you down, not only a very narrow path, but one that is not necessarily true.

For instance, releasing a product into a market that is well established requires large feature development. If I were going to break into the word processing market my first release of the software could only have plain text, no wrapping, etc., and then I would incrementally release wrapping, fonts, until finally the product has a feature set that is comparable to the competition.

The "total incremental" delivery approach implies that all features can be evolved. (I would like to see that proof.)

Now, can there be things learned from Little's law? Certainly. Maybe we could have a discussion about that and find ways to use it to improve software development.

For instance, if you find you are enhancing existing software and roll-out is done regularly and consistently, then you may be in a situation where you should subdivide large features and get your development down to the smallest reasonable "chunk size" possible.

I am doing further investigation based on this:
"Reducing WIP in a line without making any other changes will also reduce throughput."

Geoff

p.s.
(Additional Research)
www.shmula.com/2407/lean-and-theory-of-constraints-either-or

Thursday, July 22, 2010

Agile and "People before Process"

As I have watched this thing called "Agile" develop, I have always been looking for things to creep in that may violate foundational aspects of the Agile process.

Okay, I do more than watch, I try to direct where I feel direction is needed. It might not be wanted but it is what I "feel" is needed. One thing that took me a while to learn is that my opinion and observation is just as good as anyone's. I tended to defer too much because I didn't understand that some people are just "word" factories continuously spewing forth informational pollution. Maybe I have become a polluter.

Agile has been described and re-described and defended over and over. This is a refinement-type process. It is natural.

A recent description of an Agile team is "a team whose members have all of the skills necessary to complete the task." If the team doesn't have exactly the right members then it has failed this Agile qualification, and therefore it is not Agile and any failures cannot be attributed to Agile.

Well, I am going to make up a new software process right here and right now. It is called "Superior Software Development Process" or SSDP.

Superior Software Development Process requires omnipotent and omniscient members. SSDP guarantees success. If you have mere mortals on your team you cannot use SSDP.

People before process. I think that is important. I recognize that people are imperfect. Sometimes imperfect people luck up and do better than expected, and sometimes they choose incorrectly due to lack of experience and really mess things up.

An Agile team is made up of imperfect people with limited skill sets who are willing to learn and with new information and experience continuously refine estimates and improve code quality, product quality, and user satisfaction.

Agile recognizes this is a journey, that this is a growth process, that the end cannot be seen but we can guess what is coming and accept what actually comes.

End of Iteration is not the "final exam"

The end of a development iteration should not be considered the final exam.

During a college semester, a given class had many assignments to be turned in, quizzes, mid-term exams, more assignments and quizzes, and then the final exam.

When software development organizations adopt an iterative development process, I have observed that the developers tend to overreact to the iteration aspects of iterative development and forget about the full Product Plan.

New iterative teams seem to overreact to anything that might interfere with their "well-defined" iteration. If you get to the end of your iteration (and I am thinking about one- to two-week iterations) and you got less done, or more done, than planned, it is okay. The end of an iteration should not be viewed as a "final exam" and should not carry that type of weight.

I believe that developers are afraid they will be judged on the results of the end of an iteration in the same manner they would be judged on the results of a product release. This should not be the case.

I believe that managers are afraid as well, and somehow feel they need to show that they can accurately predict software delivery after ten or fewer iterations of a release schedule that is made up of fifty or more iterations.

I know of "agile" managers that refuse to add something to an iteration even if the developer got all of the other tasks done and there are two more days left. Is velocity really that precious? Is the manager's ability to sell up the idea that this manager can accurately predict and estimate software to such a fine degree really what is important?

There is an entire Project/Product plan. Features and enhancements are gathered. These are divided into releases that could cover several versions of the product over several years. Releases are divided into iterations. Iterations are quizzes.

Maybe there should be some type of "mid-term" exam where the current set of functionality is deployed to a test environment or a test installer, just to see where things are and that we are not forgetting something (like securing licenses to redistribute 3rd-party components).

Lighten up: the end-of-iteration stress is artificial and not needed.

Resources versus Teams

Scenario

Suppose you find yourself at a software company that has 80 or more developers. The developers are assigned to teams, which in turn are aligned to specific products, features, and business needs. Over the past ten years or so the company has hired and fired, and people have moved around and found a team or product area that they like.

The company is at a stage where it wants to deliver several initiatives that it feels will excite customers and frustrate competition. Some of the new products are defined to pull together data and functionality across teams, because unifying product functionality and leveraging established value is the obvious move.

The release schedule and product descriptions are presented to development. Teams will be required to integrate their systems. Development raises concerns because of the cost of integrating systems, team communication, scheduling, dependencies, and work that must be done on the existing systems in order to implement specific enhancements.

Response

Product Management is alarmed by the concerns of Development.

Why is it that the software doesn't readily integrate? "We" have been working together for ten years. Are the Architects doing their job?

Why is communication going to be costly? Where is the professionalism that is needed at this time? Why are we silo'ed?

Possible Reaction by Product Personnel

Product Management looks at the developers and recognizes them as highly skilled employees who are concerned with the big picture and with company-wide goals.

We must break down these walls and tear down these silos. "What if we move developers around as needed, you know, if product 'X' needs integration with product 'Y' we move the people we need to get the job done and align the reporting chain accordingly." That is being agile.

My Remarks

Be very careful when viewing developers, who are people, as resources. Be very careful when discounting the usefulness of teams, which are groups of people. Be very careful thinking that all you have to do is get the right skill sets/resources together and that will almost guarantee success.

If your development organization seems silo'ed, did you ever stop to think that maybe "silo'ed" is just a negative adjective, and what you really have is development divided along product lines that runs efficiently and almost autonomously?

I argue that every team has some aspects of self-organization. Through the hiring, the firing, and the team transfers, teams settle into a state that is acceptable both to the team members and to management. Even if your team is run by a dictator, you can still quit, so anyone who remains on the team does so by choice.

I argue that any outward examination of a development organization may describe the organization as silo'ed. A well-defined organization will always have well-defined attributes that may be described as walls. Any skilled debater knows to use adjectives that are commonly viewed as negative to advocate their agenda. Possibly this silo'ed and walled development organization is really a self-organized team aligned specifically to product needs and technological needs.

Possible Solution

Listen to development's concerns and then give them a week to propose solutions. If development's concerns are not about implementation and delivery but are about the products and features themselves, then management may need to take the time to explain why these are the "right" products and features. This can be done with information from real customers.

If you are thinking, "Developers don't need to know why we want these products and features; they just need to do the job they were hired to do, which is write software and implement the things we demand," then you should reconsider what a developer really is. A developer is more than a coder. Developers use software all day, every day, and have valuable experience in recognizing good software.

Developers understand that when deciding a product schedule, costs have to be considered and some ideas will not make it to the table because of some unacceptable attribute. They know that some ideas for a product can be described easily and with few words, but just because everyone can understand an idea, thanks to common experience that allows for simple descriptions, doesn't mean the software will be simple, short, or easy to develop.

Listen to the concerns of the developers and then give them a week to come up with solutions. Developers are problem solvers. Let them solve the problem. If development comes back and says the product cannot be developed, then you have to ask why. It can't be developed because it will take longer than acceptable? Because the existing list of things to do is higher priority, or has been overlooked too often, and those things have to be done before anyone can even guess what additional functionality might cost in time and effort? Because the technologies you want to integrate will not integrate due to data mapping issues?

There are many reasons why things cannot be done now. An interesting question is, "If it can't be done now what does it take to get things so that we can do it in the future and when will that be?"

Listen, ask for solutions, and give time to come up with them.

Pet Peeve

I really cannot stand it when product people think up "cool" features and products and sell them up before selling them down.

As a developer I have said, "Wouldn't it be a sweet job for me to dream up really cool stuff that someone else has to develop." Work together.

Don't sell your boss on something and then find development saying it can't be done. Do you really want to go to your boss now and say that your team can't deliver what you sold him? Do you really want to press your developers just to save face with the boss? Don't get yourself into that situation.

Maybe the silo and the wall are with the product people and the development teams. Maybe "Product" has silo'ed all of the activities of product and feature definition. Ah, I am glad you noticed that I used the term silo'ed with all of its negative connotations. Maybe I am becoming a skilled debater.

Wednesday, June 30, 2010

Broaden your horizons

I think everyone should be well rounded.
That said, I will now share my latest poem.

Tree growing
Leaves green
Cow grazing
leaves green

Friday, June 04, 2010

Meetings about work is not work...

"Work can not be replaced. Talking about work is not work. Planning to do work is not work. Meetings about work is not work."
-Maverick Software Development

Sunday, May 16, 2010

Call me Wolfgang. 


Monday, April 19, 2010

UIView frame and bounds in GDB

I was trying to examine the frame property of a UILabel in the Xcode debugger (GDB). Since I only do iPhone app development on occasion, I tend to forget common tasks. I tried many different commands and none worked. I searched for posts on how to do this and found nothing specific.

So, here is how you do it:
(gdb) print (CGRect)[self.label frame]
$1 = {
  origin = {
    x = 20,
    y = 20
  },
  size = {
    width = 280,
    height = 21
  }
}

I have a View Controller that has a UILabel *label.

@interface TextViewController : UIViewController {
IBOutlet UILabel *label;
}

@property (nonatomic, retain) UILabel *label;

@end
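A note on why this works, plus the bounds half of the title: the (CGRect) cast is the important part, because GDB cannot infer the return type of an Objective-C message and so cannot print the struct without it. The same cast should work for bounds, and for messaging the ivar directly:

(gdb) print (CGRect)[self.label bounds]
(gdb) print (CGRect)[label frame]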

Tuesday, January 26, 2010

Warning to those about to re-invent the wheel

I have to warn those of you (or us) who find themselves re-inventing the wheel.

First, and we all know this: you should not re-invent the wheel unless it is absolutely necessary. Often there is a performance criterion that causes the wheel to be remade.

I have dealt with four major "new" wheels that I vividly recall.

1) Memory Manager
2) WinMac - a port of the Win32 API to the Macintosh
3) XML parser
4) The Gadget Library, a replacement for WinForms controls, layout, and UI containment

All of these have commonalities.
1) Performance was the driving reason
2) Replacing well known and highly used functionality
3) Complex code to give the performance increase
4) One or two developers owning the code


The Memory Manager

The new memory manager allowed a product whose core was indexing and retrieval to be extremely fast even on Intel 386-based machines. Without such speed the product could not have gained the user base that it ultimately maintained.

The growth of the company and the spreading use of the product allowed for the development of a Unix version. That is where I came into the picture. Since I was a Mac programmer I also did some Unix development; another way to read this is that I was a Win'tel hater. Youthful convictions run deeper than reason.

I looked at porting the memory manager to Unix and found that the memory management on Unix was faster, so there was no need for the port. However, I had to port the interface of the new memory manager so that I could reuse the "C" code of the product. So, a bit of warning: performance issues and the resulting optimizations may not, and most likely will not, be of value to cross-platform code.

Finally, the memory manager itself became the bottleneck, in two ways. First, as new versions of Windows and new compilers came out, the native memory manager became faster than the custom one. Second, since the memory manager was developed by one person, he could not keep it up to date; he was a bottleneck.

Eventually the memory manager was removed. Was it necessary? Yes, it was. The company would not have grown to the point where the memory manager became obsolete if the product hadn't sold well, allowing the company to stay in business and grow.

The memory manager should have been architected differently. The issue was that it had its own procedure calls, completely different from the standard library's. If the interface had been the same as the standard memory manager's, it would have been a simple swap. So, when replacing something like the memory manager, stick to the standard interfaces so that a compile-time or link-time switch selects the desired memory manager.
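A minimal sketch of what I mean, with made-up names (this is an illustration, not the original code):

/* alloc.h -- expose the standard allocator's contract so swapping
   implementations is a link-time decision, not an API of its own. */
#include <stddef.h>
void *xmalloc(size_t size);   /* same contract as malloc(size) */
void  xfree(void *ptr);       /* same contract as free(ptr)    */

/* alloc_stdlib.c -- the trivial version forwards to the standard
   library. alloc_custom.c would implement the same two functions
   with the tuned allocator; linking one file or the other picks
   the memory manager without touching any calling code. */
#include <stdlib.h>
void *xmalloc(size_t size) { return malloc(size); }
void  xfree(void *ptr)     { free(ptr); }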


WinMac - a port of the Win32 API to the Macintosh

A good friend of mine took on the task of porting a Win32 application to the Macintosh. He is an excellent programmer, and therefore there is no coding task that he will not consider as the correct solution to a problem. (I, on the other hand, will say, "Yes, it can be coded, but should it be coded?")

To make a long story short, he decided to get the Windows code to compile on the Mac without any modification to the code. Sounds good to management: one code base, and all new Windows functionality is immediately available on the Mac. In those days no one even considered whether it was legal; those types of legal issues had never been spoken of.

Well, he got behind, had no one to bounce ideas off of, and so they decided to hire someone to help. That is where I come in. The pay raise was around 15%; who could say no?

There were some big problems.

1) The WinMac library was a subset of Win32. It only contained a port of the Win32 calls that were made by the Windows product.

2) Macintosh System 7 (and 8 and 9, and those before) used an event-based model with one main event loop and cooperative multitasking. (I have dealt with crazy Mac programmers who coded up multiple event loops! Shame on them. SHAME.) Windows is a message-based UI with crazy message pumps and all of those message IDs, etc. (A sketch of the two models follows this list.)

3) No specification of Win32. Just run it, use the event spy, and figure it out. There was documentation on how to develop for Win32, but no specification of how Win32 itself behaved.

4) Bugs in Windows.
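To make problem 2 concrete, here is roughly what the two models look like; this is a from-memory sketch of the classic APIs, trimmed of all error handling and real work:

/* Classic Mac OS (Events.h): one cooperative event loop per app. */
EventRecord event;
for (;;) {
    if (WaitNextEvent(everyEvent, &event, 30, NULL)) {
        /* dispatch on event.what: mouseDown, keyDown, updateEvt, ... */
    }
}

/* Win32 (windows.h): a message pump, with each window class owning
   its own WndProc and some messages bypassing the queue entirely
   via SendMessage. */
MSG msg;
while (GetMessage(&msg, NULL, 0, 0)) {
    TranslateMessage(&msg);
    DispatchMessage(&msg);
}

Mapping one model onto the other means faking message queues, message ordering, and re-entrancy that the host OS never had.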

As we developed on the single code base, we quickly found ourselves downstream of the changes made to the Windows product. We would get the code compiling, and then a few days later get the code again to find that some developer had called some new Windows API that we hadn't ported yet. We ended up porting the vast majority of the entire Windows API.

Also, we found many bugs in Windows, notably in the order of messages. The documentation would state a certain series of messages, but the event spy showed at runtime that Windows did not do it that way. So we had to mimic even the bugs.

Finally, my friend got into a workplace political conflict with management and left me all alone.

Lesson learned: it is better to port ideas than code!


The XML parser

Once again performance was the issue. I do admit the developers I work with have a good vision for the future, and when XML was very young the idea of using it to make calls to servers over HTTP was leading edge. So, as SOAP was coming into existence, the company had moved far ahead. Standards emerged, and the team kept the code up to date as best they could. However, the XML used to test the custom XML parser was unique in that all attributes were in double quotes. No one, by habit or dumb luck, ever passed XML that had attributes in single quotes.

Well, that is where I come in. I had started using XML later and had used the libraries that came with the IDE. I spent several days trying to debug code that made an HTTP call passing XML to a server, and I kept getting exceptions. My XML was valid; I had checked it many times, had others look at it, etc. One of the "old timers" noticed the single quotes and said, "There's the problem."
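The bug class looks something like this; a made-up miniature for illustration, not the actual parser:

/* Hypothetical miniature of the bug: an attribute-value scanner
   that assumes every value is wrapped in double quotes. */
#include <string.h>

/* Returns a pointer to the start of the value, or NULL on failure. */
static const char *attr_value(const char *p)
{
    if (*p != '"')                      /* rejects attr='value'...    */
        return NULL;                    /* ...which is valid XML 1.0  */
    return strchr(p + 1, '"') ? p + 1 : NULL;
}

The XML 1.0 specification allows either quote character around attribute values, so name="x" and name='x' are equally well formed; a parser grown against a test corpus that only ever used one of them quietly encodes the corpus's habits as law.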

As the company grew rapidly, the new, younger developers started writing SOAP implementations, and now both worlds were in existence at the same time. At first SOAP was slower, and those who used it got chastised by the old timers for not stepping up and learning the proprietary solution. But eventually SOAP got the performance it needed.

The lesson learned here is that if you have a replacement for something that is governed by standards, then you have to keep up with the standards. The personnel working on the code are the bottleneck. Typically an update to a standard is significant, and it gets dumped upon you all at once. Therefore you find yourself picking and choosing, and therefore you are now noncompliant.



The Gadget Library, a replacement for WinForms controls, layout, and UI containment

This is my most recent experience. Performance was the issue, that and the inability to create thousands of Controls dynamically, to draw all of these controls to offscreen bitmaps, or to visually scale/zoom all of the controls.

Because I have replaced WinForms controls, completely replaced them, I find myself the bottleneck for getting in new functionality. Imagine just one guy owning the WinForms classes for Microsoft. Each bug, each new feature, each change is huge. The amount of code to keep in one's head is tremendous as well.

The issue is that UI components and their abilities are a commodity now. When I developed a gadget, it was coded to do just what was needed and nothing extra; YAGNI was the driving mantra. Users are unaware that the Gadget library exists; it all looks like Windows to them. So they say, "We would like it to do X." "X" would be easy enough if I were using WinForms controls. The users expect it, and management doesn't necessarily see the difficulty in asking for it. But it is starting to wear me out. Too much code, too much responsibility, too much complexity, just too dang much.

The lesson learned here is that when replacing a UI commodity, you should, in most cases, fully replace it. Fully means full functionality implemented, with all of the behavior of the original.

Final Thoughts

There are some other items of interest as well, based upon my experience. When I was an undergrad at BYU I took several upper-level classes, one of which was a GUI class taught by Dr. Olsen. I don't think that man ever liked me. In that class I wrote a GUI system on my little Mac SE. We were constrained and could only use the basic draw functions, like draw rect, draw string, etc. I made windows, controls, scroll bars, drop-down menus, and other GUI components from scratch.

I would go home for the summers and work on my father's dairy farm (which also included about 200 acres of alfalfa). I would purchase the computer science textbooks for the classes I would take in the fall, read them, and work through the end-of-chapter problems. During one summer I wrote a new Macintosh GUI system for which I developed a language for describing a UI. I wrote a parser and code generator for that language. This was around 1986 or so.

Then I helped write that WinMac monstrosity. With these experiences in developing GUI environments, I was at least able to write the Gadget library I use with WinForms development. The point is that the path I had taken, or better yet been herded down, gave me the knowledge to make the Gadget library, which is extremely fast and meets the performance requirements. The hard road, the big tasks, the complex assignments: all have helped make me capable of what I can do today.


Geoff