Sunday, February 10, 2013

Continuous Delivery

  1. Design
  2. Develop
  3. Test
  4. Release
  5. Design
  6. Develop
  7. Test
  8. Release
  9. Design
  10. Develop
  11. Test
  12. Release
  13. etc.

...this loop is continuous delivery. If you are only looping Design/Develop/Test without the Release piece, you may be doing continuous integration, but continuous delivery entails having the Release step in the cycle to ensure the users are living with the new code instead of being "protected" from it. This was the subject of a talk by Jimmy Bogard at the Dallas Day of Dot Net convention yesterday. In this model something is "done" when it is in production. This model throws away the following distinctions:

  1. done
  2. done, done
  3. done, done, done

...in which the first means that the developers are done, the second means that the product owners have signed off, and the three-peat means that the feature has made it into production. The distinctions between the three versions of done stem from a realization that it may not be easy to get from one variety of done to another, and if you need those distinctions, that itself may highlight a problem. Ideally, it shouldn't be painful to get something into production. Jimmy asserted that Facebook has new hires develop a feature and get it into production on their first day on the job as part of its onboarding process. The longer a feature is delayed from being "done," the longer it goes unproven and the more painful getting it to done becomes. If a hurdle in getting a feature into production makes a team hesitant to push, that suggests a problem. Really, any pain point suggests a problem. "If something hurts, do it more often" was a rule Jimmy reiterated: it is only by doing painful things more often that a team deals with the pain. Don't hide from pain. This is the premise behind the third of the seven rules Jimmy offered for a repeatable, reliable process for releasing software to eliminate the need for the three versions of done:

  1. automate almost everything
  2. keep almost everything in source control
  3. bring the pain forward
  4. build quality in
  5. done means released
  6. everyone is responsible
  7. continuous improvement

(The "almost" in the second rule means that configuration specifics are exempt, and the final rule demands a culture in which individuals recognize the need to fix a problem when something goes wrong.)

About half of the talk was on the continuous integration (automate almost everything) technologies. There was some stuff I was familiar with, such as Tarantino, and other stuff I was less familiar with. I have been in TFSland for the last four stints of employment and have not yet used Git for source control. If I were using Git, when I made a commit and pushed via a PowerShell command line (for example: git push origin master), I could kick off a process at TeamCity, a build server, which could then deploy to a production environment via a Psake script, provided the build didn't break due to code that wouldn't compile or a failing test. I've not used TeamCity or PowerShell yet either. The CI stuff is still, to me, something neat that I hope someone else in the room can do. (I'm still, and may always be, a junior developer.) Azure can also watch a Git repository and kick off a push to another environment upon a change. The demo's Init task will wipe clean and recreate a database, which you will want to do before running every integration test that interfaces with the database.
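
I can at least sketch what an Init task like that does. This stand-in uses Python and SQLite rather than Psake, and the Users table is something I made up for illustration:

```python
import sqlite3

# Hypothetical schema; the real one would come from your SQL scripts.
SCHEMA = "CREATE TABLE Users (Id INTEGER PRIMARY KEY, Name TEXT NOT NULL)"

def init_database(path):
    """Wipe clean and recreate the database, as the Init task does."""
    connection = sqlite3.connect(path)
    connection.execute("DROP TABLE IF EXISTS Users")  # wipe...
    connection.execute(SCHEMA)                        # ...and recreate
    connection.commit()
    return connection

# Run this before every integration test that touches the database:
connection = init_database(":memory:")
print(connection.execute("SELECT COUNT(*) FROM Users").fetchone()[0])  # 0
```

The point is that every integration test starts from the same known-empty state instead of inheriting whatever the last test left behind.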

You will also want to have test-specific/test-friendly data on hand when running integration tests. This is likely a different dataset than the data you might hold for just browsing the site or showing it off in a demo. No problem. You can have both. Psake configurations allow for different preparations in different environments. Use Psake to prep data one way for testing and another for demos.
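
Psake would select the data-prep steps via its properties; the underlying idea can be sketched generically. The environments and datasets here are invented:

```python
# Hypothetical seed datasets keyed by environment; Psake properties would
# select between data-prep steps in much the same way.
SEED_DATA = {
    "test": [("Alice", 0)],                              # minimal, test-friendly
    "demo": [("Alice", 12), ("Bob", 7), ("Carol", 3)],   # richer, demo-worthy
}

def prepare_data(environment):
    """Return the seed rows for the given environment, or fail loudly."""
    rows = SEED_DATA.get(environment)
    if rows is None:
        raise ValueError("no dataset for environment: " + environment)
    return rows

print(len(prepare_data("test")))  # 1
print(len(prepare_data("demo")))  # 3
```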

I am used to having a src folder for source code and a lib folder for .dlls and other supporting stuff at the root of what I might check into source control for any one application. Jimmy had five folders: beyond lib, the src folder was renamed to code, and there were also a build folder, a LatestVersion folder, and a psakev4 folder, which I suppose is for Psake scripts. I think LatestVersion was what the name implied, while build was a temporary build target with contents perpetually being destroyed and recreated. Jimmy added a new field to a database schema via Tarantino as part of an example of a new feature and then rolled it out. When rolling such a change out to production you have to ask how you will handle backward compatibility. Will rows of data without the field...

  1. become illegitimate?
  2. remain legitimate without the data point?
  3. be backfilled with a dummy value?
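
Whichever answer you pick, the migration script has to encode it. Here is a sketch of the dummy-value option using SQLite, with an invented Users table standing in for the real schema:

```python
import sqlite3

connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE Users (Id INTEGER PRIMARY KEY, Name TEXT)")
connection.execute("INSERT INTO Users (Name) VALUES ('Alice'), ('Bob')")

# The delta script: add the new field, then backfill existing rows with a
# dummy value so the old data stays legitimate.
connection.execute("ALTER TABLE Users ADD COLUMN MiddleName TEXT")
connection.execute("UPDATE Users SET MiddleName = '' WHERE MiddleName IS NULL")

for row in connection.execute("SELECT Name, MiddleName FROM Users"):
    print(row)  # every pre-existing row now carries the dummy value
```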

Tarantino's note-keeping for SQL scripts makes it easy to deploy only the deltas (the differences between two versions) of the SQL scripts that make up the schema, in the name of making surgical updates to an existing schema without dropping all of the tables and starting fresh.
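
The pattern, as I understand it, is to record each script that has run and then apply only the unrecorded ones on deploy. A toy version in Python with SQLite (Tarantino's actual bookkeeping table and script names will differ):

```python
import sqlite3

connection = sqlite3.connect(":memory:")
# Ledger of scripts that have already run (table name invented).
connection.execute("CREATE TABLE AppliedScripts (Name TEXT PRIMARY KEY)")

scripts = {
    "0001_create_users.sql": "CREATE TABLE Users (Id INTEGER PRIMARY KEY)",
    "0002_add_middlename.sql": "ALTER TABLE Users ADD COLUMN MiddleName TEXT",
}

def deploy(connection, scripts):
    """Apply only the deltas: scripts not yet recorded in the ledger."""
    applied = {row[0] for row in connection.execute("SELECT Name FROM AppliedScripts")}
    ran = []
    for name in sorted(scripts):
        if name not in applied:
            connection.execute(scripts[name])
            connection.execute("INSERT INTO AppliedScripts (Name) VALUES (?)", (name,))
            ran.append(name)
    return ran

print(deploy(connection, scripts))  # first deploy runs both scripts
print(deploy(connection, scripts))  # second deploy finds no deltas to run
```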
