
Tuesday, 6 May 2014

Happy customer cocktail recipe

Ingredients

A bit of Kanban, Scrum retrospectives, XP practices, an electronic board, and Jenkins.


Recipe 


1 - Take a bit of Kanban, Scrum retrospectives and XP practices, and mix well.


2 - Set up your board. (You will need an electronic board, so choose between JIRA Agile, VersionOne, LeanKit, etc. Remember, it's not about how big it is, but how you use it.)

3 - Set up Jenkins. You will need to build the following pipelines:

The following jobs run after every commit.

Continuous build job: this is triggered every time there is a commit. It runs the unit and acceptance tests against trunk (see the sketch after this list).

Sonar job: if the above passed, then it's time to check the quality of the committed code. You can use Sonar and the Sonar plugin for Jenkins.
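The continuous build job needs a way to tell fast unit tests from slower acceptance tests. Here is a minimal sketch, assuming JUnit 4 categories (the test class and the marker interface are invented for illustration), of how tests can be tagged so the job can run the two suites in separate steps:

import org.junit.Test;
import org.junit.experimental.categories.Category;

// Hypothetical marker interface used to tag the slow acceptance tests.
interface AcceptanceTests {}

public class OrderServiceTest {

    @Test
    public void totalOfAnEmptyOrderIsZero() {
        // fast, isolated unit test: cheap enough to run on every commit
    }

    @Test
    @Category(AcceptanceTests.class)
    public void anOrderCanBePlacedEndToEnd() {
        // slower acceptance test: the tag lets the build run it separately
    }
}

The Maven Surefire plugin can then include or exclude the tagged tests through its groups and excludedGroups parameters, so the same codebase feeds both steps of the job.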

The following jobs run every night; each one depends on the success of the previous.

Continuous build job: yes, again; there is no point in running the following jobs if this one fails.

Release job: this creates a production release (a Maven release, or whatever your company policy about releases is) out of your trunk (svn) or master (git) branch.


Deploy to dev job: this job automatically deploys to the dev environments.

Dev system tests job: this triggers your integration tests against the dev environments.

Deploy to perf job: this job automatically deploys to the performance environments.

Performance tests job: this runs your JMeter tests against perf using the Jenkins Performance plugin.


4 - Each business story will need to follow this process:

Pair: if the story is not trivial, do pair programming. It will speed up development and dramatically reduce the number of bugs.

Distill the story: sit down with your pair, QA and the Product Owner, and write down all the acceptance criteria. Ask questions here and try to cover every possible scenario.

ATDD: write the acceptance tests first. They will not compile at first because you do not have any implementation, so write the interfaces you need in order to make the project compile. Run the tests; they will still fail (see the sketch after this list).

TDD: now write the test for the most external part of your code (implement the leaves first). Run the test. Implement the code to make the test pass. Test more. Implement more. Continue until the story is fully tested and implemented.

ATDD: re-run your existing acceptance tests. They should pass now.

Review: submit the code for review. Once the code has been reviewed, you can commit.

Sonar: wait for Sonar to tell you how good you and your pair are.
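To make the ATDD and TDD steps concrete, here is a minimal sketch in JUnit 4; the FareCalculator story, its class and the fare numbers are all invented for illustration:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// ATDD step: the acceptance test is written first, straight from the
// acceptance criteria agreed with the Product Owner (hypothetical story).
public class ReturnFareAcceptanceTest {

    @Test
    public void aReturnJourneyCostsTwiceTheSingleFare() {
        FareCalculator calculator = new FareCalculator();
        assertEquals(2 * calculator.singleFare("zone1", "zone2"),
                     calculator.returnFare("zone1", "zone2"));
    }
}

// The skeleton written only so the project compiles: the tests run but
// fail, which is exactly the starting point described above.
class FareCalculator {

    int singleFare(String from, String to) {
        throw new UnsupportedOperationException("not implemented yet");
    }

    int returnFare(String from, String to) {
        throw new UnsupportedOperationException("not implemented yet");
    }
}

// TDD step: unit tests drive out the implementation one small behaviour
// at a time, until the acceptance test above finally passes.
class FareCalculatorTest {

    @Test
    public void singleFareIsLookedUpForAPairOfZones() {
        assertEquals(230, new FareCalculator().singleFare("zone1", "zone2"));
    }
}

At this point every test fails, which is exactly where the process above says you should be before writing the real implementation.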

5 - After the Jenkins nightly pipeline has completed successfully, the QA team can pick up the release and deploy it to their QA environment. QA will run manual, automated and exploratory tests, and if bugs are found, developers need to act immediately.

6 - When QA does not find bugs, the release can be deployed into UAT, where the Product Owners can verify and accept the stories.

7 - Once the POs are happy, the Ops team can deploy to Preprod in order to test the deployment scripts and the deployment instructions.

8 - Ops can deploy the release to production.

  
Happy customers can celebrate now.

 







Wednesday, 15 May 2013

When do you think a daily stand-up is too crowded?

Recently, looking at the daily stand-ups in my company has made me smile a bit.



I think too many people = chaos and dispersion.
A proper, efficient stand-up should have four pigs and occasionally two or three chickens (in Scrum terms: committed team members and interested observers).

This is my opinion; what is yours?

Thursday, 20 September 2012

Review Checklist


One good and necessary practice when developing software is code review.

Code review is an excellent process for reducing bugs in the code and raising the quality of the software you are working on.

Basically, it means that before committing your changes to the repository, you ask another developer to look at your code and check that everything looks fine.

But what is "everything"?

Well, this can vary with each company's standards, but the following can be a good starting point:

OVERALL DESIGN
(Is the current design the best one? Is it scalable and maintainable? Do the algorithms follow best practice in order to get the best performance? etc.)
FUNCTIONALITY
(Does the code do what it is supposed to do? Are we covering all the possible cases?)
CODE READABILITY
(Is the code EASY to understand? etc.)
UNIT TESTS
(Are the tests covering all possible scenarios? Are the tests reusable? If the developed code does not have 100% code coverage, why not?)
ACCEPTANCE CRITERIA
(What are the acceptance criteria, and if the criteria haven't been expressed as an automated test, why not?)
ERROR HANDLING
(Do we really need checked exceptions? etc. See the sketch after this list.)
WARNINGS
(Are the Eclipse/IntelliJ warnings acceptable?)
LOG
(Is the log useful for finding issues? Is the log communicating what is necessary? etc.)
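On the checked exceptions question above, one common answer is to catch the checked exception at the boundary and wrap it in an unchecked one, so that callers are not forced to handle a problem they cannot fix. A minimal sketch (the ConfigRepository class is invented for illustration):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Hypothetical repository: instead of declaring "throws IOException" and
// forcing every caller to handle it, it translates the checked exception
// into an unchecked one that still carries the original cause.
class ConfigRepository {

    String readConfig(String path) {
        try {
            return new String(Files.readAllBytes(Paths.get(path)));
        } catch (IOException e) {
            throw new RuntimeException("Could not read config file: " + path, e);
        }
    }
}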

Print this list and stick it on each developer's desk. It will help during the review process.