Thursday, February 28, 2008
Ron Davison's Systems Thinking Webcasts
If you are interested in systems thinking, you might want to watch Ron Davison's webcasts. I haven't viewed the whole series yet, but what I have seen is interesting.
Wednesday, February 27, 2008
How Organizations Change, Part 3: Drive Out Fear
I have just released part three in the How Organizations Change series. I decided to split the material into more digestible chunks, so I discuss only one of the root causes that make it difficult for organizations to learn and adapt: fear.
The webcast contains material from a ZDNet Australia interview with Lloyd Taylor, VP of Technical Operations at LinkedIn. I would like to thank Brian Haverty, Editorial Director of CNet Australia, for permission to use the interview. The full interview is available at http://www.zdnet.com.au/insight/soa/LinkedIn-Lloyd-Taylor-VP-of-Technical-Operations/0,139023731,339285616,00.htm
The webcast also contains an excerpt from the A Day with Dr. Russell L. Ackoff conference at JudgeLink. I would like to thank Dr. Ackoff for his kind permission to use the material in my webcast. Dr. Ackoff's talk at the conference was inspired, to say the least. You can view it all at http://www.judgelink.org/Presentations/GirlsLink/index.html.
Wednesday, February 20, 2008
Time Sheets Are Lame!
Speaking of measurements that do not work, Jeff Sutherland has written an interesting article about time sheets in software development. Good stuff. Go have a look.
Agile Productivity Metrics Again
Ken Judy posted a thoughtful reply to my post commenting on his post about productivity metrics. Judy writes:
"Just to be clear, my objection is not that agile should not be justified by hard numbers but that I haven't seen a metric for productivity gain specifically that both stood systematic scrutiny and was economically feasible for the average business to collect."
If you have an andon (a board with sticky notes representing units of work) set up, it is easy for the ScrumMaster (or the project manager, if you do not use Scrum) to record in a spreadsheet when each sticky note is moved. This takes the ScrumMaster a few minutes every day. (Or every other day. I would not recommend measuring less frequently, because if you do, you will miss information about inventory build-up, and slow down responses to problems.) A minimal code sketch of this bookkeeping follows the chart list below.
From the raw data the spreadsheet can produce:
- A burn-down chart, the usual way of visualizing progress in Scrum projects
- A cumulative flow chart, showing the build-up of inventory in each process stage. This is a very valuable tool for finding process bottlenecks
- A Throughput chart, where Throughput is defined in terms of goal units per time unit. A goal unit may be a Story Point or a Function Point, or even a Story or Use Case. (Story Points and Function Points are a little more uniform in size, so they work better.) To be useful, the Throughput chart must have an upper and a lower statistical control limit. Without that, the chart is just garbage.
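Here is a minimal sketch, assuming the raw data is nothing more than a log of sticky-note moves (day, story, story points, stage moved to), of how the three series above can be derived. The stage names, dates, and point values are invented for illustration; the arithmetic is the same thing the spreadsheet does.

from collections import Counter, defaultdict
from datetime import date, timedelta

STAGES = ["backlog", "in_progress", "test", "done"]

# One record per sticky-note move: (day, story id, story points, stage moved to).
moves = [
    (date(2008, 2, 4), "S1", 3, "in_progress"),
    (date(2008, 2, 5), "S2", 5, "in_progress"),
    (date(2008, 2, 6), "S1", 3, "test"),
    (date(2008, 2, 7), "S1", 3, "done"),
    (date(2008, 2, 8), "S2", 5, "test"),
    (date(2008, 2, 11), "S2", 5, "done"),
]

def daily_stage_counts(moves, stages, first_day, last_day):
    """Cumulative flow data: how many stories sit in each stage, day by day."""
    by_day = defaultdict(list)
    for day, story, points, stage in moves:
        by_day[day].append((story, stage))
    current, rows, day = {}, [], first_day
    while day <= last_day:
        for story, stage in by_day.get(day, []):
            current[story] = stage
        counts = Counter(current.values())
        rows.append((day, [counts.get(s, 0) for s in stages]))
        day += timedelta(days=1)
    return rows

def burn_down(total_points, moves):
    """Remaining story points after each recorded completion (the usual burn-down)."""
    series, remaining = [], total_points
    for day, story, points, stage in moves:
        if stage == "done":
            remaining -= points
            series.append((day, remaining))
    return series

def throughput(moves, period_days=7):
    """Goal units (story points here) reaching 'done' per period."""
    buckets = Counter()
    for day, story, points, stage in moves:
        if stage == "done":
            buckets[day.toordinal() // period_days] += points
    return [points for _, points in sorted(buckets.items())]

print(daily_stage_counts(moves, STAGES, date(2008, 2, 4), date(2008, 2, 11)))
print(burn_down(8, moves))
print(throughput(moves))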
The more specialization and the more process stages you have, the more important the cumulative flow chart becomes. I won't go into details here, but see David Anderson's book and Reinertsen's Managing the Design Factory. This chart is useful for pinpointing the Capacity Constrained Resource (CCR) in the project, which is a prerequisite for effective improvement efforts. It is also useful when judging the impact of events on the project, because project velocity is determined by CCR velocity. (Bear in mind that the CCR can and does shift.)
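As a rough illustration of reading the cumulative flow data, the sketch below flags the stage whose inventory trend rises fastest, since the queue in front of the CCR tends to grow. The stage names and numbers are made up; on a real project the growing queue should be confirmed by observation, precisely because the CCR shifts.

def rising_trend(series):
    """Average day-to-day change in a list of inventory counts."""
    deltas = [b - a for a, b in zip(series, series[1:])]
    return sum(deltas) / len(deltas) if deltas else 0.0

def likely_ccr(wip_by_stage):
    """wip_by_stage maps stage name to inventory counts, one per day."""
    return max(wip_by_stage, key=lambda stage: rising_trend(wip_by_stage[stage]))

if __name__ == "__main__":
    wip = {
        "analysis": [2, 2, 3, 2, 2],
        "development": [3, 3, 4, 3, 4],
        "test": [1, 3, 5, 7, 9],  # inventory piling up in front of test
    }
    print(likely_ccr(wip))  # -> "test"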
Both of the charts discussed above measure Design-In-Process (Inventory, in TOC terms), but velocity can be derived from them. There is a catch though: as Judy points out, there are unknown measurement errors. In addition, velocity varies, a lot, for a multitude of reasons.
The throughput chart shows velocity. If that were all there was to it, it would be a less than useful tool. Fortunately, there is more: the statistical control limits. They show (if you are using 3-sigma limits) the upper and lower bounds of the velocity with roughly 99.7% probability. (A sketch of how to compute such limits follows the list below.)
You can do a lot with this information:
- If there are measurement points outside the upper and lower control limits, the development process is out of statistical control. That means you have a problem the company management, not the project team, is responsible for fixing.
- When you take actions to reduce uncertainty, the distance between the upper and lower control limit will change. Thus, you can evaluate how effective your risk management is. A narrow band means the process is more predictable than if the band is wider. This is important when, for example, predicting the project end date.
- You can prove productivity improvements. If you have a stable process, and then make a process change (shorter iterations for example), and productivity rises above the upper control limit, then you have a real productivity improvement. (Or someone is gaming the system. However, if someone is, it will most likely show up in other statistics.)
- You can evaluate the effect of various interventions, because you know how big a change must be to be statistically significant.
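Here is a minimal sketch of computing such limits, using the standard individuals (XmR) chart formula: the limits are the mean plus or minus 2.66 times the average moving range. The velocity numbers are invented; the point is only the arithmetic and the out-of-control check from the first bullet above.

def xmr_limits(values):
    """3-sigma natural process limits for an individuals (XmR) chart."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    ucl = mean + 2.66 * mr_bar
    lcl = max(0.0, mean - 2.66 * mr_bar)  # velocity cannot go below zero
    return lcl, mean, ucl

def out_of_control(values, lcl, ucl):
    """Points outside the limits signal a problem the system owners must address."""
    return [(i, v) for i, v in enumerate(values) if v < lcl or v > ucl]

if __name__ == "__main__":
    velocity = [21, 18, 24, 19, 23, 20, 35]  # story points per iteration
    lcl, mean, ucl = xmr_limits(velocity[:-1])  # baseline from the stable period
    print(lcl, mean, ucl)                       # roughly 9.7, 20.8, 32.0
    print(out_of_control(velocity, lcl, ucl))   # [(6, 35)] -> investigate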
So, the data is feasible to collect. There is no additional overhead compared to what ScrumMasters already do, because the new information is derived from the same data they use to create burn-down charts. It is just processed a little bit differently.
I would also say the information is extremely useful. However, I agree with Judy that productivity information on its own does not tell the whole story. For example, a feature may have a negative business value, so producing it faster means the customer will lose money faster. Also, a set of features that individually are considered valuable, may have a negative business value when considered as a set. This is usually known as "featuritis".
Using a productivity measurement without understanding it is a recipe for disaster. I agree with Judy there. The position I am advocating is that using it with understanding can bring great benefit.
Judy also writes:
"The problem with justifying an agile adoption based on revenue gains is there are so many other considerations that attempts to credit any single factor become dubious."
This is both true and false. It is true because that is the way it is in most companies. Nobody understands the system, so nobody can really tell which factors have an effect or not. Attributing success, or failure, to agile under such circumstances is bad politics, not good management.
On the other hand, the statement is false because it is quite possible to figure out which factor, or factors, limit the performance of a company. If the constraint is the software development process, then implementing agile will help. (Assuming it is done correctly, of course.) If the software development process is not the constraint, implementing agile will not help. Note that symptoms often arise far from the constraint itself. For example, a problem in development may show up in marketing, or vice versa. (Figuring out such causal connections is an important part of what I do for a living.)
The reason it is possible to figure out what the constraint is, is that companies are tightly coupled systems. In a tightly coupled system, the constraint can be determined. Much of the time it is even quite easy to do so. The real trouble begins after that, when you try to fix the problem.
The method I primarily use to find and fix constraints is the Theory of Constraints (TOC). There are other methods around.
Judy finishes with:
"If someone can propose a relevant metric that is economical for a small to medium size business to collect, that can be measured over time in small enough units to show increased performance due to specific process changes, and doesn't create more problems than it solves, I will be happy to consider it."
I can do that. So can any decent TOC practitioner or systems thinker. There are a few catches though:
- Measurements must be tailored to the system goal. Very few organizations are exactly alike in terms of goals, intermediate objectives, root problems, and constraints. Therefore, measurements must be tailored to fit each specific organization.
- Organizations change over time. When objectives or internal constraints change, measurement systems must also change.
- The environment changes over time. This means external constraints may appear, or disappear. For this reason too, measurement systems must change over time.
There is no "best practice" set of measurements for software development. What you measure must be determined by your goals, and by the system under measurement. Once this is understood, measurements can be tailored to be what they are supposed to be: a tool set for problem solving.
Measuring is like anything else, it is very difficult if you haven't learned how to do it. A prerequisite for measuring complex systems, like software development teams and organizations, is understanding the system. To do that, you need to know a bit about systems thinking. You do not have to be the world's greatest expert, but you need to be well versed in the basics.
The first thing to do if you want to evaluate a measurement effort is to ask for the systems map the measurements are derived from. The presence of such a map does not prove the presence of a good measurement system. However, its absence virtually guarantees the measurement system is dysfunctional in some way.
In 1992 Kaplan and Norton introduced the balanced scorecard system for creating measurements. It didn't work very well, precisely because there was no way to connect measurements to strategic objectives. In 2001, they rectified the problem by introducing strategy maps. I do not use this method myself, so I haven't evaluated it. It seems to be on the right track though. Unfortunately, most people who design balanced scorecards use the earlier, flawed method. Go figure...
I use Intermediate Objective Maps, which are part of The Logical Thinking Process, a TOC application for systems synthesis and analysis. An alternative is Strategy & Tactics Trees. However, S&T is currently poorly documented, and there are only a handful of people who can do them well.
It is also possible to use a combination of Current Reality Trees and Future Reality Trees to figure out what to measure. That is what I did before learning to use IO Maps.
So, IO Maps, S&T Trees, CRT+FRT, and the revised version of balanced scorecards, can be used to figure out what to measure.
As far as I know, none of these tools are part of any agile method. Not even FDD uses them, despite strong ties to TOC. Consequently, few agile practitioners have come into contact with the tools and the knowledge base for creating good measurements.
Consequently, the difficulty of making useful measurements is perceived to be greater than it really is. Tailoring a measurement system to fit an organization is a skill that can be learned. It is just not part of the agile repertoire, yet. I hope it will be.
Oh, in closing, a good measurement system must be able to measure itself. That is, if a measure does not work as intended, it must show up as an inconsistency between different measures. Otherwise, mistakes in the measurement system are very hard to catch. Fortunately, this can usually be accomplished fairly easily.
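As a hypothetical illustration of such a self-check, the sketch below derives the same quantity, completed story points per iteration, from two independent records and flags any disagreement. The record formats are made up; a mismatch means either a recording mistake or an unrecorded event such as a scope change, and either way it is worth investigating.

def velocity_from_burn_down(remaining_points):
    """Velocity per iteration, derived from a burn-down series of remaining points."""
    return [a - b for a, b in zip(remaining_points, remaining_points[1:])]

def velocity_from_done_log(done_log, iterations):
    """Velocity per iteration, derived from a log of (iteration, points completed)."""
    totals = [0] * iterations
    for iteration, points in done_log:
        totals[iteration] += points
    return totals

def inconsistencies(a, b, tolerance=0):
    """Indices where the two independently derived series disagree."""
    return [i for i, (x, y) in enumerate(zip(a, b)) if abs(x - y) > tolerance]

if __name__ == "__main__":
    burn_down = [100, 88, 75, 75, 60]
    done_log = [(0, 12), (1, 13), (2, 0), (3, 10)]  # 5 points went unrecorded here
    v1 = velocity_from_burn_down(burn_down)   # [12, 13, 0, 15]
    v2 = velocity_from_done_log(done_log, 4)  # [12, 13, 0, 10]
    print(inconsistencies(v1, v2))            # [3] -> investigate iteration 4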
Tuesday, February 19, 2008
Justify Agile Based On Productivity!
In a recent article Ken Judy takes the stand that agile software development should not be adopted on the grounds of higher productivity. The reason for that, Judy claims, is that there are better ways to justify adopting agile than with hard numbers.
I can sympathize, because I have worked in my share of software development projects where the measurements did more harm than good. Nevertheless, I believe Judy is wrong in this instance. Most organizations measure the wrong thing. That does not imply that measuring is bad in itself.
Judy is correct in stating that measurements drive behavior. He is also correct in stating that in most software development projects, measurements have unintended side effects. In many cases these side effects are quite nasty.
The problem is not with the idea of measuring, the problem is that how to design measurement systems, and how to use them effectively, is poorly understood.
To begin with, it is useless to measure unless we know the purpose of our measurements. To do that, we need a clear picture of what we are trying to accomplish. In other words, we must know the goal of the system under consideration. (System here is the project team and other stakeholders, not the software.)
If we do not know why we are measuring something, we are likely to get the unintended side effects that Judy describes. We must also be aware of the assumptions we make, or we may be misled into measuring something we should not.
Take the infamous Lines Of Code (LOC) measure. It rests on several assumptions:
* There is a linear relationship between LOC and productivity. Productivity is the amount of functionality per time unit.
* There is a linear relationship between productivity and Throughput. (Throughput is revenue minus totally variable cost).
* Different programmers will use the same number of lines of code to implement a specific piece of functionality.
* What one programmer does, does not affect any other programmer. For example, when one programmer gets a high LOC measure by skipping writing documentation, or writing long, convoluted spaghetti code, this does not have any measurable effect on the productivity of other programmers.
All the assumptions above are wrong, and can be proven wrong quite easily. To prove it, though, you do need to measure.
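As a small, self-contained illustration of why the third assumption fails, here are two functionally identical Python functions. Style alone changes the line count severalfold, with no difference whatsoever in delivered functionality.

def sum_of_even_squares_terse(numbers):
    return sum(n * n for n in numbers if n % 2 == 0)

def sum_of_even_squares_verbose(numbers):
    # Same behavior as above, spelled out step by step.
    result = 0
    for number in numbers:
        remainder = number % 2
        if remainder == 0:
            square = number * number
            result = result + square
    return result

assert sum_of_even_squares_terse(range(10)) == sum_of_even_squares_verbose(range(10)) == 120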
The LOC measure is the result of a flawed idea of how software development works. Attacking the LOC measure is not very useful, unless the root causes are also addressed. Otherwise, all that happens is that we make the same mistake again, either with some other measurement, or by not measuring at all.
For example, Judy lists several reasons for using agile:
"We sought improved customer satisfaction, reduced risk, improved quality, incremental delivery, and innovation. We obtained other benefits including: great recruiting and retention, rapid professional development, high employee engagement."
This raises a couple of questions:
- Do these objectives bring the company closer to its goal?
- Are these objectives sufficient?
- Is there anything in the list subject to misinterpretation?
- Are there any conflicts between these objectives?
- How do you know you are moving closer to the objectives unless you measure them?
Obviously, customer satisfaction is important. Should we always strive to improve it? Most of the time, yes. (Especially in the software industry, where most products compete by sucking less, rather than being better.) Not always though. Beyond a certain point, increased customer satisfaction will not increase sales. Something else will limit the organization's ability to sell its software.
Microsoft is a good example. Windows sales are limited by the number of personal computers in the world. Yes, other systems, like Linux and MacOS do have a market share. However, the number of people using Linux and MacOS is considerably smaller than the number of people who do not own a personal computer at all.
Quality improvement is also double-edged. In at least one Toyota plant, improvement efforts have backfired. Employees have a quota of problems to fix each month. There are very few real problems to find, so they commit minor acts of sabotage in order to fill their quota. The improvement system has become the problem.
The point is that even though a high degree of customer satisfaction, and high quality, are very good goals to have, even they can backfire if taken out of context.
Next question, are the objectives listed sufficient, or are other things required? For example, is innovation a good thing in itself? I'd say not. Commodore was an innovative company. Commodore killed itself, partly because management did not understand how to take advantage of their innovative capability. To make use of innovative capability, a company must be good at strategic planning, tactical planning, and execution. Commodore sucked at all three.
Likewise, getting and retaining highly skilled people is not enough. An agile company must also invest in maintaining and developing the skills of its employees. Skills become less valuable over time. COBOL programmers know this.
Misinterpretation? Quality stands out. What high quality is depends on whom you ask. Ask a developer, and it probably has to do with code quality. (BTW, code quality is measurable.) Ask a user, and quality has to do with how the software enables the user to achieve her goals. These goals may extend far beyond the actual use of the software.
While on the subject of misinterpretation, what does "great recruitment" mean? A company may very well find exactly the kind of people it is looking for, but unless they are the kind of people who further the company goal, the company will be in a worse situation than ever. If you do the wrong thing right, you become wronger. (See Martin Fowler's post about questionable recruitment strategies. See also his follow-up. BTW, Fowler is wrong about it not being possible to measure programmer productivity. It is possible, and it has been done. A much better question to consider is this: is measuring individual productivity useful? In the vast majority of cases, it isn't. That would be a topic for another post though.)
Are there any conflicting goals in the list? How about "reduced risk" and "innovation"? When you innovate, you do new, untried things. This increases risk.
Risk can be reduced by doing only what has worked before, and sticking to solving the same kind of problem over and over. That is the antithesis of innovation.
The answer is not to reduce risk, but to manage risk. Risk management and innovation are compatible. Risk reduction and innovation are not. (No, I'll resist the temptation to delve deeply into risk management, and the difference between managing risks and reducing them.)
And this seems rather obvious: setting different objectives does not obviate the need to measure. If you do not measure, how do you know you are moving closer to your objectives? You don't!
So, if you set customer satisfaction as an objective, but do not measure it, how do you know how satisfied your customers are?
Lest I forget, Judy is rightly concerned about uncertainty in measurements. For example, Function Point and Story Point measurements carry a great deal of uncertainty. However this does not make such measurements useless. Measurements are always imprecise to some degree. Try measuring exactly when a train arrives at a train station. You can't. If you can come up with an exact number, you aren't measuring, you are counting.
For a measurement to be useful, there must be two values: the mean value, and the degree to which individual measurement points differ from the mean. (Eli Schragenheim made this point very well in a Clarke Ching podcast recently.)
Consequently, for velocity/productivity measurements to be useful, it is necessary to know the boundaries of variation, i.e. the upper and lower control limit. Six Sigma people know this. Agilists need to learn. (Me, I'm working on it. Slow going, but necessary.)
Learning is the essence of agile. Remember the manifesto: "We are uncovering better ways..."
In conclusion, the intermediate objectives of agile do lead to improved return on investment. What we need to do, is to prove it. To do that, we need to measure.
Thursday, February 14, 2008
I Have Turned Comment Moderation On
My blog has drawn attention from spammers lately. Therefore I have turned comment moderation on. I hope this will help.
Monday, February 11, 2008
Reward Failure
I won't blog much the next couple of days because I am working on the third and final (OK, probably final) part of the How Organizations Change webcasts. One of the things needed to solve the problems discussed in part two is to stamp out fear by rewarding failure.
By sheer coincidence, Falkayn posted a short article on the same topic today. He also linked to a ZDNET interview with Lloyd Taylor, VP of Technical Operations at LinkedIn. Here is a brief excerpt from the interview:
Dan Farber: Now, it looks like you have spent an entire career innovating, how do you create that culture of innovation in these different places that you go, how do you inspire people to kind of break all the rules?
Lloyd Taylor: The culture needs to reward failure. That is the answer.
Friday, February 08, 2008
How Organizations Change, Part 2: Obstacles to Learning (Webcast)
I have just released a new webcast in the How Organizations Change series. This is the second part, and it is titled Obstacles to Learning.
Like any proper second part (compare with Star Wars: The Empire Strikes Back), this presentation is all about problems. The third and final part will present solutions, and thus be a lot more upbeat.
You will notice a change in how the video looks, compared to earlier videos. This is because I have switched from iMovie to Final Cut Express. I like iMovie, but over the next few months I plan to do some things that require a bit more horsepower.
Prefer Design Skills (Management Pattern)
Too busy to blog much at the moment, but if you want to read something thoughtful, I recommend that you read the Prefer Design Skills article by Martin Fowler.
Fowler is one of the deep thinkers in software development. He wrote the book on refactoring. Refactoring is nowadays one of the basic techniques of agile software development. Refactoring is also a cornerstone in design methods like Kent Beck's Test-Driven Development (TDD) and Behavior-Driven Development (BDD).
Done correctly, refactoring can totally transform the economics of maintaining and developing code. In conjunction with a few other things, that is.
Prefer Design Skills, is one of those things, and it is often overlooked. The reason, I believe, is that Prefer Design Skills is a management pattern. In many organizations there is an unfortunate disconnect between what software developers do, and what management does. For example, companies invest vast amounts of money in object oriented tools and languages every year. These tools are intended to support developers with broad design skills, and a very particular mind set.
Unfortunately, when it comes to hiring and training, most companies focus almost exclusively on skill in specific tools. As a result, they hire, and create, code monkeys, rather than software designers.
The Prefer Design Skills pattern says that companies should prefer hiring people who are designers, with broad design skills, and a designer mind set.
Though this is a guiding pattern for hiring and training software developers, the pattern is also applicable to the hiring and training of many other types of knowledge workers. For example, it can be successfully applied to hiring managers and management consultants.
Questions for you to think about:
- What, specifically, are the "broad design skills" Martin Fowler writes about? How do you recognize a person that has them?
- What are the analogue for managers and management consultants? What "broad design skills" do they need?
Monday, February 04, 2008
Clarke Ching Interviews Eli Schragenheim
This might be old news to most of you, but Clarke Ching recently published two podcasts with an interview with Eli Schragenheim:
Eli is a respected TOC management expert, with several published books to his credit. One of my favorites is Management Dilemmas.
Sunday, February 03, 2008
Blog Improvement Project: NPS Survey
I recently published an article about Net Promoter Score (NPS). Over the next few months, I will run an experiment. I am going to use NPS to improve your satisfaction with my blog.
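For readers unfamiliar with the arithmetic, here is a minimal sketch of the standard NPS calculation: respondents scoring 9-10 are promoters, 0-6 are detractors, and the score is the percentage of promoters minus the percentage of detractors. The sample answers below are invented purely to show the calculation.

def net_promoter_score(scores):
    """Standard NPS: percent promoters (9-10) minus percent detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

if __name__ == "__main__":
    answers = [10, 9, 8, 7, 6, 9, 3, 10]
    print(net_promoter_score(answers))  # 4 promoters, 2 detractors out of 8 -> 25.0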
Therefore I have two questions for you:
- On a scale of 0-10, how likely are you to recommend my blog on your blog, or recommend it to friends and colleagues?
- What is the main reason for giving the score you did?
I am using Google Analytics to check how many visits I get, and what the most popular articles are. In six months I will write an article about the results: Did reader satisfaction improve? If so, did increased reader satisfaction result in more readers?
Some of you have blogs of your own, and are interested in process improvement, so why not do the same thing? It would be interesting to try improving (and expanding the readership of) a group of blogs. In six months or so, we could all do write-ups of our efforts and publish linked articles.