I have a theory.
Agile and Lean development practices are not magic, and they are not revolutionary. It doesn't really matter which flavor you choose. What makes Agile and Lean work is that they provide a recipe for following what the management gurus have been saying for the past century: if you want to increase productivity, give regular, actionable feedback.
As an example, take a podcast from Manager Tools on Over Assigning and Delegating Work, which breaks down Peter Drucker's advice on how much work to assign to employees.
The key to doing this well as an organization is for employees at all levels to develop excellent skills at prioritizing work. There is always more work than can be done; the goal, in each day, week, and month, is to do what is most valuable for the company at that time.
To me, this explains the power of the retrospective. Holding these regularly (and not "regular" the way performance reviews are regular: once a year, under duress) is how you gather actionable feedback on how well you prioritized your work and goals. A very basic question to ask in each retrospective: was what you did valuable? Was it the most valuable?
Saturday, July 2, 2011
Tuesday, February 8, 2011
Practice makes Perfect... thoughts on software testing & professionalism
Which camp are you in? Should we overlook Christina Aguilera's Super Bowl faux pas as a moment of human fallibility? Should we strip her of her citizenship and deport her to outer Mongolia for disrespecting the national anthem?
I think the whole affair provides us a good opportunity to reflect on the standards of professionalism. As a friend of mine pointed out, it was an easy mistake to make, could have happened to anybody, and she doesn't deserve to be picked to death for it. This is true. On the other hand, in this context she was more than just anybody; she was supposed to be a professional. Being professional means you don't make 'easy' mistakes. It means that when the eyes of a nation are on you, you perform flawlessly, disregarding any but the most unforeseeable events.
Professionals are held to a higher standard. We expect them to practice, to drill, to repeat their performance hundreds of times. We expect them to accept criticism, to seek feedback, and to work out the kinks. As an audience, we don't accept seeing a beta-test on game day.
Which camp are you in?
Do you let project timelines drive shipping untested code to your customers? Do you shrug your shoulders at each new reported bug and say 'well I'm only human'?
Do you use automated testing so that each feature is practiced hundreds of times to work out all the kinks? Do you look at each new bug and ask "How could I have missed that?", and then take action to ensure it can never happen again?
Athletes and performers who "practice, practice, practice" are rewarded by defect free performances. Software developers who "test, test, test" are rewarded by defect free code. In either case, mistakes do and will still happen. But why waste your audience's respect and patience on the easy ones?
Sunday, January 30, 2011
Code, like food, has a MeatCake stage
What does your legacy code base look like? If you swap out "refrigerator" for "source control", does the following sketch from George Carlin sound eerily like your team's refactoring discussions?
"Perhaps the worst thing that can happen is to reach into the refrigerator and come out with something that you cannot identify...at all. You literally do not know what it is! Could be meat...could be cake. Usually, at a time like this, I'll bluff:
'Honey, is this good?'
'Well, what is it?'
'I don't know...I've never seen anything like it. It looks like...MEATCAKE!'
'Well, smell it!'
'(sniff)-ah, (sniff)-ah...it has absolutely no smell whatsoever!'
'It's good! Somebody is saving it. It'll turn up in something.'"
The hilariously on-target post by William Woody at http://chaosinmotion.com/blog/?p=622 shows just how quickly our best intentions go astray. We try to predict the future and we end up with meatcake code: interfaces and frameworks that do nothing except get in the way of developing new features.
Friday, December 31, 2010
Look to the code before blaming your new global resources.
I hear it all the time: "Our offshore team members are having a heck of a time meeting our project goals for time and quality." But are you too quick to blame the new people when it's old code that is actually at fault?
Consider the following little history:
An application I had inherited used EHCache to reduce load on the database by storing application defaults. It was optimized to preload all caches at server startup. As a result, the code had always been a little slow to start up, what with 62 separate objects, each configured as its own independent cache loader.
The real pain began once the company added new global resources to the project. The offshore team members were consistently missing deadlines, and the quality, well... it left us 'underwhelmed'. I'm embarrassed to say this state of affairs lasted six months, until one day, while adopting some agile team practices, I chanced upon the real root of the problem.
Turns out this DB-chatty load process could take up to ONE HOUR to load all 62 caches for the offshore team! Suffice it to say, it's probably unreasonable to expect a highly productive developer when that developer can hope, at best, to confirm only 4-5 changes per day...
New Lessons to Live By?
* Make time to sit with new team members as they integrate with your code; new eyes will reveal old problems!
* Demos don't lie. I'm sure the offshore team had mentioned "things ran slow"; too bad we had accepted "slow" on the US side.
* Test network latency early and often; it is a hurdle you can't just throw hardware at!
* Code needs unit tests. Unit tests would have forced us to design for DB independence, improving both US and offshore productivity for incremental changes to the 'legacy code'.
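One mitigation for that startup cost, beyond the lessons above, is to load cache entries lazily instead of preloading everything at server startup. This is a minimal sketch, not the application's actual code; the class and names are invented for illustration, with a plain `Function` standing in for the database call:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative lazy-loading cache: instead of preloading all 62 caches at
// startup, each entry is fetched from the (slow, remote) database only on
// first use, so a developer over a high-latency link pays only for the
// defaults their change actually touches.
class LazyCache<K, V> {
    private final Map<K, V> entries = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // stands in for the DB lookup

    LazyCache(Function<K, V> loader) {
        this.loader = loader;
    }

    V get(K key) {
        // computeIfAbsent calls the loader once per key, then serves from memory
        return entries.computeIfAbsent(key, loader);
    }
}
```

Because the loader is just a function, a unit test can substitute an in-memory stub and never touch the database at all, which is exactly the DB independence the last lesson argues for.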
Saturday, October 9, 2010
Developer Speak for Project Managers - Translating the Percent Done
As a developer, I've tried a variety of ways to convert my innate sense of 'doneness' into a percentage for the weekly status report: the Wild-Ass Guess, percent of hours spent vs. time estimated, re-estimating time to completion. In the end, I've decided that once the project is in development, time or percent estimates are ultimately useless. Or, as I read recently: "Nobody will remember when a project started or finished, but they'll be reminded of every mistake you made, daily, for the life of the product."
In my experience, developers by and large do try to be truthful in their estimates. The problem is that code is a human endeavor: there will be misses, there will be changes, there will be unforeseen problems. If you're a PM measured on the success of the project outcome (rather than success at project process), what you need is a handy way to convert standard developer status-speak into an objective measure of progress.
Developer Speak: "I'm 90-95% done with that feature"
Translation: "What's done is done, I'm not touching this code until QA finds a bug. That missing 5% is my notifying the PM that I expect them to find something, they always find something. I may also have taken a few shortcuts that will slow down development next cycle."
How to respond: Ask for a demo, today. Developers, by and large, take pride in their work; having them demo a job well done is not only a useful check, it's a chance to show interest and appreciation for all that hard work. If the demo fails to meet your expectations, isn't it better you know right now, and not weeks down the line?
Developer Speak: "I'm 75-85% done with that feature"
Translation: "I'm confident that the code I have works, for everything I know about. I'm putting this on the shelf to see if I missed anything."
How to respond: Treat this as a request for help: assign someone for an informal demo and code review. Ask to be informed of the findings, along with a revised estimate to complete the feature. This is where developers feel most uncertain; there's always more than one way to solve a problem, and they've still got open questions about whether they're on the right one. A peer review can quickly clarify the options and may save time by cutting short attempts to over-engineer a solution.
Developer Speak: "I'm 25-35% done with that feature"
Translation: "I think I understand how this is going to work, and my first attempts at code compile."
How to respond: Ward off problems early. Ask for a test case review. The project-crippling problems start here, and the lack of a test plan, or an inadequate one, is your surest warning sign that a developer is out of their depth and will benefit from guidance and support from senior team members.
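To make that test case review concrete, here is a hypothetical sketch of what a reviewable test plan can look like at the 25-35% stage. The feature (a quantity-discount calculator) and its rules are invented for illustration; the reviewable part is the list of edge cases written down as executable checks, not the implementation itself.

```java
// Hypothetical feature under review. Invented rule: 10% off for orders of
// 10 or more units; negative input is rejected.
class DiscountCalculator {
    static double price(int quantity, double unitPrice) {
        if (quantity < 0 || unitPrice < 0) {
            throw new IllegalArgumentException("negative input");
        }
        double total = quantity * unitPrice;
        return quantity >= 10 ? total * 0.9 : total;
    }

    // The test plan a reviewer would look over: boundary, below-boundary,
    // zero, and invalid input.
    static void runTestPlan() {
        check(Math.abs(price(10, 2.0) - 18.0) < 1e-9, "discount applies at 10 units");
        check(Math.abs(price(9, 2.0) - 18.0) < 1e-9, "no discount below 10 units");
        check(price(0, 2.0) == 0.0, "zero quantity costs nothing");
        boolean rejected = false;
        try {
            price(-1, 2.0);
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        check(rejected, "negative quantity is rejected");
    }

    static void check(boolean condition, String caseName) {
        if (!condition) {
            throw new AssertionError("failed: " + caseName);
        }
    }
}
```

A reviewer scanning `runTestPlan` can spot a missing edge case in seconds, long before the missing case becomes a production bug.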
What gets measured gets done. You can reward your team for generating numbers that give a temporary false sense of success, or you can reward your team for producing quality code that meets business needs. It's all in what you choose to hear.
Sunday, March 21, 2010
Get A Clue From The Clueless
"Why is my <manager>|<project>|<pointy-haired boss> so clueless?"
If you had a dime for every time you found yourself asking this question, would you have retired by now? It's the catch-22 for a developer: you have tons of code to write and test, but also tons of interruptions and meetings to talk about when you'll be able to finish your work. Worse yet, they're so out of touch with technology that they offer no help if you do run into a real problem, am I right? Completely clueless, proof positive of the Peter Principle in action.
Or are we developers missing something, blinded by our brilliant technical skills?
Early in my career my father gave this advice: "Don't dismiss anybody. They have their job for some reason, be it a skill, knowledge they hold, or a personality trait. Find out what it is they know that you don't and learn from it."
If we take a step back and apply a little logic, that clueless manager is likely higher in the food chain than you, which means that at some point they've had to impress somebody to get promoted. They get to keep that job only so long as they continue to fulfill the business's needs. These facts would contradict our original hypothesis, perhaps those clueless managers aren't so clueless after all?
The reality is a manager has to be aware of and answer to the business reality. It's never just a matter of "will it work?" but "when will it work and at what cost?". No matter how good an idea seems on paper, ultimately it's got to look good in a project plan to be worth money from a project sponsor. Businesses survive because of Profitable Success.
So, a challenge to you if you are struggling with a clueless manager: for the next few months, put aside your assumptions about what your boss needs to hear and instead open up and really listen to what they are asking for. For example, you can start with the following exercises:
1) Switch from a pull to a push status model. If you don't know already, ask your boss what day of the week they report the project status up the chain. Then prepare your status report to them a day earlier and send it (put the reminder in your calendar!). Be ready to clarify anything they have questions on.
I've found that after a few weeks, as my boss learned to trust my report (and, more importantly, as I learned how to phrase things to get my boss's attention when needed), those annoying interruptions to explain the current project state dropped off to almost nothing.
2) Use your boss's words back at them: if your boss talks in percentages, report percentages; if your boss likes hours, talk in hours. Whatever your internal project clock is, figure out a formula to convert to their measure and stick to it consistently.
My company requires percentages. My boss likes hours. I only trust completed test cases. My solution was a spreadsheet that tracked hours worked, subtracted that from hours estimated, and generated total hours remaining and a percentage complete. If my test case success rate matched these raw numbers, I knew the project was on track. When my test case percentage started slipping, I adjusted the raw numbers down and, more importantly, communicated how I would catch up with the plan. Turns out my boss is not a complete ogre, and she offered up solutions other than overtime to get things back on track. (It helps that she's being measured on keeping week-to-week consistency. Who knew?)
3) Temper your innate optimism and supreme confidence in your god-like technical skills; leave room for a little self-doubt. Chances are your manager has seen all kinds of ways that projects fail: technical, political, financial. Chances are also that your manager is where they are because they've survived those various disasters and earned credit for saving what they could. Ask your boss for war stories. Turns out that while technology moves forward, the fundamental problems facing software development haven't. Your boss has likely moved through all kinds of fads, vaporware, and religions, and has likely survived by paying attention to some eternal truths rather than limiting their career to the buzzword of the day.
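The spreadsheet arithmetic from exercise 2 can be sketched in code. The field names and the "trust the test cases" adjustment are my own illustration, not a standard formula; the point is one consistent conversion from raw hours to both measures:

```java
// Status conversion sketch: raw hours drive both the percentage my company
// requires and the hours-remaining figure my boss prefers, sanity-checked
// against the test case pass rate.
class StatusReport {
    final double hoursEstimated;
    final double hoursWorked;

    StatusReport(double hoursEstimated, double hoursWorked) {
        this.hoursEstimated = hoursEstimated;
        this.hoursWorked = hoursWorked;
    }

    // What my boss wants to hear.
    double hoursRemaining() {
        return Math.max(0.0, hoursEstimated - hoursWorked);
    }

    // What my company requires on the status report.
    double percentComplete() {
        return Math.min(100.0, 100.0 * hoursWorked / hoursEstimated);
    }

    // The only number I really trust: if the test case pass rate lags the
    // raw hours, report the lower figure and plan how to catch up.
    double adjustedPercent(double testCasePassPercent) {
        return Math.min(percentComplete(), testCasePassPercent);
    }
}
```

For a 40-hour estimate with 30 hours logged, this reports 10 hours remaining and 75% complete; if only 60% of test cases pass, the adjusted figure drops to 60%, and that gap becomes the conversation with the boss.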
Wednesday, March 11, 2009
Thoughts on Furniture Police...
Yesterday, Neal Ford presented "On the Lam from the Furniture Police" as the keynote to the Atlanta DevNexus conference. In it, he commented on all the ways that corporate workplaces inhibit 'knowledge worker' productivity. One of the key points that stuck in my head was his examples of how far companies will go to maintain 'standards', no matter how ridiculous the enforcement of those standards becomes.
The talk brought to mind a few quotes from Peter F. Drucker:
"Efficiency is doing things right, effectiveness is doing the right things."
"There is nothing so useless as doing efficiently that which should not be done at all."
"What gets measured gets managed."
Corporations can measure, and therefore manage, the tangible: IDE software, the source repository, the types of UML diagrams used in code documentation. Good developers do appear to share some common philosophies, for example DRY, separation of concerns, and testability. But what inevitably happens when companies try to measure these things? We lose the spirit of the art in order to adhere to the letter of the law, defined by an inflexible but measurable standard.
You know good code when you see it. You know a good developer when you work with them. A system of peer review is the only reliable way to measure the true effectiveness of knowledge workers. It is also the only way to provide meaningful feedback that drives increased effectiveness.
How do you foster a culture that standardizes on effective tool choices for the problem to be solved? Does coding style at your workplace measure reusability and extensibility, or are you simply reformatting files to ensure 'proper' parenthesis placement? Who gives the measure of your work? Is it your manager or your peers? Do you find yourself going to ridiculous lengths to meet those expectations?