How to handle big stories / tasks


How to handle big stories / tasks

lcestari
Hi guys,

I created this thread so everybody can discuss $subject: share your experiences with big tasks/stories on lightblue or even other projects. How did you handle them? Would you do something differently if you could do it again? Would you try to split the work up front to make it easier to review, or to plan and forecast the results better?

A related presentation on the big picture of the technical debt that can accumulate during a project: http://www.slideshare.net/kka7/operational-costs-of-technical-debt

Re: How to handle big stories / tasks

jewzaam
Administrator
Luan, do you have some ideas around this you'd like to add?

I think with net-new development on new applications with new processes, teams, infrastructure, etc. (aka lightblue), it's a bit of trial and error.  We each bring our own ideas of how things could go based on previous experience, but those come from different contexts.  A challenge is to be flexible.  The way we found that works for us now is to have a feature branch where everything for something big is developed, and then have 1 or 2 people review everything at one time.  Issues get created from those reviews, and the changes can then be tracked as pull requests.  I guess tooling makes a difference as well.  In previous work this wouldn't have worked, because review processes were much more process-driven; they were not supported by the tools the team had adopted.  But a few things I see carrying forward with anything we do:

* Everything gets reviewed, with some exceptions for minor tweaks to documentation or configuration like travis-ci.

* Unit test the things that make sense to a level that is maintainable.  The more that can be done in a unit test the better, since there's (in theory) less that will be found in integration testing.

* Automate as much as you can.  From building out infrastructure to deploying and running integration tests, aim for CI/CD.  Any time a person has to spend doing this is time not being spent on higher value work.

* Be ready to change processes and tooling if they don't work.  Especially important with new work.

* Transparency and availability of information is critical, as much as legal and security constraints (if any) allow.  This means documentation on why the product exists, what it does, how it works, the processes for development / review / deployment, etc.
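The feature-branch flow described above can be sketched with plain git.  This is just an illustration, not the project's actual history: the repo, branch, and commit names here are all made up, and a throwaway repo is created so the commands are self-contained.

```shell
# Throwaway demo repo so the commands below run anywhere (hypothetical example)
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "dev@example.com"   # local identity so commits work in a fresh env
git config user.name  "Dev"
main=$(git symbolic-ref --short HEAD)     # default branch name varies (master/main)
git commit -q --allow-empty -m "initial commit"

# Long-lived feature branch where everything for the big story is developed
git checkout -q -b feature/big-rework
git commit -q --allow-empty -m "big rework, part 1"

# Review happens on the whole branch at once; each issue found becomes its own
# branch / pull request back against the feature branch
git checkout -q -b fix/review-issue-1 feature/big-rework
git commit -q --allow-empty -m "address review issue 1"
git checkout -q feature/big-rework
git merge -q --no-ff fix/review-issue-1 -m "merge review fix"

# Once review is done, the whole feature merges back in one step
git checkout -q "$main"
git merge -q --no-ff feature/big-rework -m "merge big rework"
git log --oneline "$main"
```

Using `--no-ff` keeps an explicit merge commit for the feature and for each review fix, so the "one big review, then tracked pull requests" history stays visible in the log.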

Probably more, but that's what's off the top of my head at this time.