Is There Value in DevOps?

What is DevOps? Wikipedia defines DevOps as “…a software development method that stresses communication, collaboration (information sharing and web service usage), integration, automation and measurement between software developers and Information Technology (IT) professionals.” [1]

I would simplify that definition to be more along the lines of “an IT cultural philosophy that ensures that the operations and development teams are jointly engaged in the full development and maintenance lifecycles for the common purpose of providing a high-quality, innovative, and maintainable solution to their customer.”

There are many well-written and passionate views about DevOps, both positive and negative.  The Agile Admin blog draws some interesting comparisons between DevOps and similar concepts in the Agile Manifesto.  In their posting, they define DevOps in terms of values, principles, methods, practices, and tools.  In the introduction, they also formulate a DevOps definition of their own: “DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support.” [2]  I highly encourage readers to review the referenced blog post for an excellent backdrop to this one.

In a posting of his own, Jeff Knupp discusses the trouble with expecting a single resource to move from role to role as they navigate the DevOps responsibilities.  He cleverly uses a totem pole analogy to rank those responsibilities: “Developer is at the top, followed by sysadmin and DBA. QA teams, “operations” people, release coordinators and the like are at the bottom of the totem pole.” [3]  I also highly recommend reading his post in more detail.

When I provided my definition of DevOps, I intentionally used teams rather than individuals to describe the actors.  I believe that the DevOps team and its joint desire to find solutions that mitigate risk while pressing forward with new innovations, platforms, and processes is probably the single most critical predictor of success or failure for a DevOps culture.  A successful DevOps organization always takes the view that no single person can do all things without assistance from both development and operational experts.  How many developers do we know that hold an information security certification in their repository of skills?  How many security experts hold development certifications?  I encourage us to look back at how valuable we tend to find those team leaders who know enough about both security and development to bridge the communication and priority gaps between those two heads-down experts.

At the end of my explanation above, I realize that I strayed slightly from my concept of teams and mentioned leaders.  To illustrate in more detail the importance of DevOps leaders within the DevOps culture, let me provide a non-IT analogy…

One day, I am driving to work and I hear a noise coming from my car. I don’t know anything about cars other than how to drive them from point A to point B and fill them with fuel, so I call my auto mechanic for advice.  I describe the issue to my mechanic in layman’s terms: “My car goes clunk when I drive in the bitter cold of winter.”  My mechanic listens to my car’s symptoms and applies his expertise in general car repair to build a list of possible causes for the noise.  As part of his investigation, he looks for obvious issues like my muffler dragging on the ground, loose components, over-sized tires, etc. that he knows how to fix on any model of car.  However, at some point he might find that he needs to engage with experts to see if more significant and specialized issues could be causing the problem – in this case, the manufacturer of my car.  The manufacturer has noted a unique but related defect they have seen only when the shocks that came with their car are used in very low-temperature climates.  The auto mechanic turns that specialized information into general options for me to consider to solve the problem and make my car work best in the climate where I use it.

My car no longer clunks in cold weather because the team of actors involved in this analogy 1) could communicate with each other, 2) understood enough about the car as a whole to find specialists as needed, and 3) were willing and able to work together to find the best possible solution to the problem.

Consider how this whole model of interaction would have failed if any one of the actors in the analogy had failed to communicate or to respect the others’ scope of knowledge, strengths, and weaknesses.  What would have happened if my mechanic had ignored the report of clunking and assumed it was just my imagination?  How many parts would the mechanic have had to throw at the problem without the manufacturer’s specific expertise?  What if the manufacturer didn’t care enough about the other actors to research issues found in real use of their product and communicate possible solutions to mechanics?  Now…replace me with the software product owner, the mechanic with an operations team member, and the manufacturer with a development team member.  Do we see that level of joint ownership, cooperation, and interaction in our non-DevOps world?

I challenge the reader to see this as the sweet spot for DevOps in the IT world.  We already know that development, QA, product, and other traditionally Agile-based team members work best when all members are open and trusting of each other in delivering on the common effort.  I assert that adding operations team members to represent the needs of the systems, support, monitoring, and other traditionally operational areas only adds to the probability of success for the product in both its features and its maintainability.

[1] Wikipedia – DevOps: http://en.wikipedia.org/wiki/DevOps
[2] The Agile Admin – What Is DevOps?: http://theagileadmin.com/what-is-devops/
[3] Jeff Knupp – How ‘DevOps’ is Killing the Developer: http://jeffknupp.com/blog/2014/04/15/how-devops-is-killing-the-developer/

The Design Review – Only the Strong Survive

Like a 10-year-old heading to the dentist, the design review stage of a project is one of those areas that will induce the heebie-jeebies in even the most iron-stomached software engineer or designer.  You’ve spent time with your users and business analysts.  You’ve diagrammed all of the ins and outs of your solution.  Now you have to present your design to others and convince them why your design is the best approach to solving the problem within the constraints of your project.  Let the stomach aches begin!

When considering your upcoming design review, there are a few questions that you need to ask yourself.

  1. What is the target audience for this review?
  2. Has the audience spent the needed time reviewing your documentation?
  3. Are you prepared with answers to those reviewers that want to provide alternatives?

What is the target audience for this review?

For a successful design review you must understand your audience.  A strong designer will understand the technical level, responsibilities, and motivations of their design audience.  For example, if you are presenting your design to other senior technical leads within your immediate organization, you will want to structure your presentation around a much deeper level of technical review than if you are presenting to a group of architects.  Each type of audience will bring its own challenges.

Keep in mind that there is no design so complete that at least one reviewer can’t claim an opportunity for improvement.  Suggestions for changes to your design are occasionally motivated by technical philosophy rather than by the merits of the design itself.  A well-prepared presenter will use their understanding of the audience to work through these situations.  As a rough example, a reviewer may believe that object-oriented design is the answer to every problem, no matter the complexity of indirection and/or misdirection that object-oriented design introduces.  That person will likely challenge your design on its use of data encapsulation, inheritance, and state if they are not all-encompassing.  The more you are prepared to justify your design decisions in those low-level technical areas, the better you will be able to respond.  Just remember that a design review is not about finding a “different design” but about finding the “best design” that meets the goals of your project.

Has the audience spent the needed time reviewing your documentation?

The single most time-draining failure in a design review is an audience that has not reviewed your documentation before the meeting.  No one wants to sit in a two-hour meeting listening to you read through your design document line by line.  Heck, I don’t even like having to read through my own designs line by line.  To ensure your audience is prepared, make the effort to give each participant the time to complete the review.  I also like to elicit a response from each participant before my design meetings confirming that they have read the design and are coming to the meeting to ask questions about the design itself.

At the beginning of the presentation, remind the participants that the review meeting is an opportunity for them to ask questions about the design’s approach to solving the problem.  Good meeting management by the presenter is critical in preventing the meeting from turning into a document read-along.  If your project schedule and company culture allow, stopping the review and rescheduling when it becomes obvious that the majority of participants have not done the review is sometimes a good peer-pressure trick to ensure the participants understand their role in the effort.

Are you prepared with answers to those reviewers that want to provide alternatives?

“Defence is our best attack” – Jay Weatherill

The most effective tool a designer has in their arsenal is the investigation of alternative solutions.  The designer must spend the time needed to explore the alternatives that may come out of a design review before the review occurs.  As quick on their feet as many designers believe they are, I’ve seen situations where a designer answered a technical challenge with a partially correct answer and the reviewers who actually knew the correct answer began to doubt the designer’s credibility.  If you don’t know the feasibility of an alternative solution – just say so and take it as a follow-up action.

Remember that the design review is not about you.  The review is about the proposed solution – not the person presenting it.  Assume your reviewers are looking for the best solution to the problem – not attacking your ability to design it.  In the end – if your design is well thought out and you’ve considered the possible weaknesses of the approach – your design will not only be improved by the combination of minds reviewing it but will usually enjoy more support from those who accepted it.

I’ll provide a personal example on this topic.  I had worked on an application through two years of design, development, and maintenance.  The application had been reviewed over and over by at least 15 different architects, senior developers, and designers over that period.  A new person joined the group and, within about 45 minutes of reviewing the design, spotted a hole in our solution that had been leaking memory for the entire two years.  We had been lucky enough to load maintenance releases often enough not to have seen it – but sure as heck – it was there.  It never ceases to amaze me how a single person can sometimes find something that the larger group never recognized.

In summary, embrace these design reviews.  Conducting a series of reviews taken in a positive manner, and demonstrating a solid understanding of risk analysis and thoughtfulness, has propelled many developers into higher-level roles and responsibilities.  Look at each design review as an opportunity to learn something new.  I assert that if you do – not only will you gain personally through your learning – but the designs you produce will be more solid and less prone to failure…and everyone likes to support that kind of software.

Version Control of API and Other Documentation

In his post “Version Control Your API Documentation with Github,” Kin Lane discusses an approach for using Github to version API documentation.  The post references using a single public repository to store and version the API documents alongside the APIs.  I agree with his thoughts and position.  However, I’d like to extend Kin’s thoughts to a practical problem that appears when that solution is misunderstood and implemented at the root source-control level.

The problem:  Assume that you have the repository structure below and you ask your novice developer to “check in” their API documents, database scripts, etc. related to this project (basically anything that is not really needed to “run” the code).
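For illustration, a default Maven quickstart archetype produces a layout roughly like this (project and package names are placeholders):

    my-app
    |-- pom.xml
    `-- src
        |-- main
        |   |-- java
        |   |   `-- com/example/app/App.java
        |   `-- resources
        `-- test
            `-- java
                `-- com/example/app/AppTest.java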

The concern: In this project structure (a Maven Archetype default), there is a strong risk that developers will place their API documentation into the resources folder.  However, since anything placed in that location is generally packaged with the deployable unit, your repositories (source control, Maven repositories, deployment folders, etc.) could suddenly see a swelling of the storage needed to support the effort.

This storage concern is not critical if your project is small and your documentation minor.  However, small projects that allow bad habits tend to carry those habits into the development organization as a whole on larger projects if they are not watched closely.  After all – who do you staff your projects with other than a team of developers that have “already done it,” just on smaller efforts?

In a large project, you might suddenly see gigabytes of documents, diagrams, database schemas, etc. showing up in this resources folder.  Assuming that the project might undergo hundreds of releases, branches, tags, and forks – this storage suddenly becomes an issue.

Imagine if your developers placed 2 gigabytes of “documentation stuff” into the project’s resources folder and requested that your build system check out a “fresh” copy of the project once every hour so that it can execute a build and package after checkout (an insane requirement – but I digress).  That is over 48 GB of data flowing back and forth between the source control system and your build servers each day.  Extending that assumption further – if you have more than one project at your company with that requirement – you might get a call from your LAN team.

The recommendation: Use strategies such as the one Kin mentions (along with the options below) to move your documents into a documentation repository and/or a repository folder that is not automatically packaged with the release.

Option #1 (Recommended): Use a dedicated repository for documentation and link it to the code via technologies such as Github.
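For example, if both repositories live in Git, a submodule is one lightweight way to make the docs available in a working copy without bloating the code repository (the URLs and folder name below are made up for the sketch):

    # Link a separate docs repository into the project checkout
    # without copying its history into the code repository.
    git submodule add https://github.com/example/my-app-docs.git docs

    # A plain clone pulls the code only; the docs come down on demand.
    git clone https://github.com/example/my-app.git
    cd my-app
    git submodule update --init docs

The nice property here is that an hourly build-server checkout never pays the documentation transfer cost unless the build explicitly asks for it.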

Option #2: Place the documents into a folder outside of the packaging process.  For example, place your documents into a folder at the same level as the root folder of the project.  You will still face the checkout issues – but at least you are not deploying your compiled package with a large set of documents.
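As a sketch (folder names are only an example), the documents simply sit outside anything the default Maven packaging copies into the artifact:

    repo-root
    |-- docs                <-- checked out with the repo, never packaged
    |   |-- api
    |   `-- database
    `-- my-app
        |-- pom.xml
        `-- src
            `-- main
                |-- java
                `-- resources   <-- everything here lands in the jar/war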

Option #3: If you can afford it, invest in commercial toolsets.  Since I come from a SOA background, examples I know of include WebSphere Service Registry and Repository, HP SOA Systinet, and offerings from Software AG.

Designing Long Term Solutions in an Iterative World

In this world of cost cutting and temporary resourcing, the importance of looking at the long-term strategy when defining a solution’s design can get lost under the pressures of implementation deadlines.  How many times has a team been asked to shorten their design time in order to “get on” with the development effort and get something out to the market?  As a designer, how many times have you been told to worry about the fine-grained details later – aka when we work on the low-level designs for the developers?

Take an honest look at your project resources.  During requirements gathering we have technical leads, business analysts, product owners, architects, project managers, and many others reviewing every single word in the requirements.  However, when the requirements are done, the team drops to a few technical leads and possibly some architects trying to push out a design with enough information to get the development effort started.  Who is inspecting every flow, component, and rule in the design?  Who is making sure that the design meets the basics of solid design fundamentals?

A respected VP and director I know once told me that few things are more important to a great solution than a great design.  The analogy that was given is roughly repeated below:

“A good design is like a blueprint for a house.  Many designers can design a three-bedroom house, since the same blueprints usually work, with a few tweaks, on similar houses.  They know how to put in the plumbing, build a standard foundation, and where to put the doors.  However, I assert that the blueprint must be evaluated against the long-term needs for the final solution.  If the blueprint is for a three-bedroom, single-story house, then you should never expect that house to turn into a five-story apartment building just because of a desire to make it one in the future.”

I’m a big fan of Agile and iterative development methodologies.  These methodologies have moved development toward greater flexibility and delivery capability in many ways.  However, there is a catch…as there usually is in all good things.  If there is no blueprint for the final condition (or at least a review that ensures that the final condition can be supported), then no number of iterations or sprints will ever be enough to reach the final goal.

Let’s go back to the blueprint analogy and explore some detailed examples that come out of it.

Let’s say that the requirements from the new property owners describe a single-story building that will house three people and provide those people security from robbers and weather.  Those owners know that at a later date they will be asking for something that will house 200 people, provide a gated security system, and contain a pool and tennis court.  However, since they really just want to get that single-story house out there as soon as possible and start collecting revenue from renters, they tell everyone that they will deal with those larger requirements “later.”

The designers look at the requirements and start building a blueprint for a house that has a couple of bathrooms, a single-story-rated foundation, and standard deadbolt locks on the doors.  Everyone likes the simple design, so the development teams start implementation and celebrate the completion of the single-story house with great fanfare.  The owners move renters in and praise everyone for the modest income they are seeing.

Later, the owners meet with the designers and ask them to add a few more stories and 20 more apartments to the building so that they can get more tenants in and make more money on an already-winning income source.  After some tears and anger – the designers agree to give the owners what they want.  The designers decide that they will need to kick out the current tenants so that they can build a stronger foundation.  Then they realize they need to change the entire security model to include key-card access and large gates to keep strangers out and the tenants secure.  Finally, they are surprised to find that the city’s current residential sewer line will never support the new needs, so they design a new commercial-grade sewer system for the property.

The construction crews that worked on the single-story house are livid that they have to redo so much of their original work.  However, at least it is paying work!  They don’t have a lot of time to completely rebuild the foundation, so they decide that maybe they’ll just add some more concrete and hope that it will hold the weight.  They deliver an updated building and great fanfare is made of the new delivery.  Then the city inspectors are sent in.

The city inspectors find that the foundation is sinking under the weight of the new floors.  The sewer line could not be run to the area because it is in a residential zone, and commercial lines cannot be run onto the property unless millions are spent on rezoning and new city infrastructure.  Finally, after much finger-pointing and lost money, the effort is shut down due to permit violations, zoning violations, and budget overruns.

So why spend so much time on the analogy above?  Because this cycle is seen over and over again in the software design process most teams work within.  How many times have we implemented software only to find that the initial authentication mechanisms were not adequate and we needed to implement complex single sign-on features using expensive third-party systems?  How many times have we seen software that was not thread-safe or highly scalable because the development team thought they were building a 1 TPS solution vs. a 1,000 TPS solution?  For the operations folks – how many times has a team asked you to place a piece of software on your low-transaction network only to find out that it was really supposed to be running across a highly redundant and highly scalable network?

How do we keep this from happening?

  1. Project owners need to insist that the teams (including themselves) do not get into a cycle of “we’ll figure out that complex feature later.”
  2. Designers need to understand the end state of their designs and implement solutions that allow migration from the starting iterations to the final condition without major foundation rework and redesign.
  3. Finance needs to give the design team the resources and budget to understand and design for the end-point solution.
  4. Reward those teams that save time, rework, and funds by having a solid plan for how to get from state A to state G.
  5. Re-train those who feel that the measure of a good design is how many fancy window coverings, cool colors of walls and carpet, and awesome garden tubs they can put into that single-story house instead of focusing on the long-term goals.

The ideas above sound easy to do.  However, I am always amazed at the number of stories I hear at gatherings of development managers, designers, and technical leads across industries where the same comment is made over and over: “If I had only known that they wanted it to be like that, I would have done everything differently.”  That said, over-engineering is another challenge for good designs – but I will cover that in future posts.

In the end, give your design teams the tools and information they need to create a design that meets the true needs of the solution, and they will amaze you with the results.  Trust me – a few extra weeks in design can save thousands in development rework.

Coding Standards: A Beginning

While working on a proposal for a new open source incubator project, it came as no surprise that the topic of which coding standards to use rose to the top of my task list as code-formatting arguments were raised.  In a flash of inspiration, I immediately provided the standard quick and concise answer: “Let’s use the Oracle Java Coding Conventions.”  Suddenly, the sun burned brighter and the birds took up in song at the brilliance of my efficient answer.  Later in the day, when I had more time to consider the ramifications of my earlier answer, I pondered whether I had been too simplistic in my view of what coding standards mean to me, my project, and the information technology industry as a whole.

So…let’s be honest with ourselves here.  When push comes to shove, we do what we need to do to get the product out to market. How often do we tell ourselves “who really cares if I used 5 or 10 spaces of indent?” and “why does anyone care that all of my variables contain a single letter?”  We know that true “format” issues don’t really matter to anyone other than the most critical of code reviewers.  Also, we always tell ourselves that we will go back and fix all those little shortcuts we took (no comments, missing JavaDoc statements, commented-out code, etc.) just as soon as we have a little more time.  Besides, we all know that badly “formatted” code runs in production just as well as “formatted” code…right?

However, as I found some free time to myself (aka the holiday period), I wondered if perhaps some things defined in high-quality coding standards are a little more complicated than pure formatting.  An example of one of those items is found below.

STRUCTURE GUIDELINE – “Avoid nested method calls at all costs unless they are completely guaranteed not to throw a NullPointerException.”

Example #1

this.theInstance.theMethod().getMap().get("key");

In the above example, there is a good possibility that this efficiently written single line of code will throw a NullPointerException back to the caller.  Code reviewers generally see samples where this exception-prone code is wrapped (usually after the fact), as the example below shows.

Example #2

  1. try {
  2.      return this.theInstance.theMethod().getMap().get("key");
  3. } catch (NullPointerException npe) {
  4.      log.error(npe.getMessage(), npe);
  5.      return null;
  6. }

When the NullPointerException from the code above is inspected, the stack trace will tell you the line number that caused the exception (line 2), but it cannot tell you whether the null reference on that line was theInstance, the result of theMethod(), or the result of getMap().  Suddenly, we begin to realize that perhaps high-quality coding standards can help us write more “reliable” code.
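To see why this guideline earns its keep, here is a minimal sketch (the Service type and the method names are hypothetical stand-ins for the example above) in which the chain is unwound so that a NullPointerException’s line number identifies exactly which reference was null:

    import java.util.Map;

    public String lookupKey() {
        // Each dereference sits on its own line, so the stack trace
        // points at the exact null reference instead of the whole chain.
        Service service = this.theInstance.theMethod(); // NPE here -> theInstance was null
        Map<String, String> map = service.getMap();     // NPE here -> theMethod() returned null
        return map.get("key");                          // NPE here -> getMap() returned null
    }

From there, each intermediate result can be null-checked however your standard dictates – something that is much harder to do cleanly on a single nested line.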

In summary, delve deeper into the coding standards available in the community, and consider whether your projects should use style-checking tools such as Checkstyle (my current preference) in their efforts.  It worked for me, and hopefully it will work for you too.
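As a rough illustration of where a tool like Checkstyle fits, a configuration is just an XML list of the checks you want enforced.  The sketch below picks two modules that speak to the shortcuts mentioned earlier (verify module names and properties against your Checkstyle version before relying on them):

    <?xml version="1.0"?>
    <!DOCTYPE module PUBLIC
        "-//Puppy Crawl//DTD Check Configuration 1.3//EN"
        "http://www.puppycrawl.com/dtds/configuration_1_3.dtd">
    <!-- Checker is the root; TreeWalker hosts the per-file checks. -->
    <module name="Checker">
      <module name="TreeWalker">
        <!-- Flag methods that are missing JavaDoc. -->
        <module name="JavadocMethod"/>
        <!-- Require local variable names longer than one letter. -->
        <module name="LocalVariableName">
          <property name="format" value="^[a-z][a-zA-Z0-9]+$"/>
        </module>
      </module>
    </module>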

Review of Enterprise Data Workflows with Cascading

Enterprise Data Workflows with Cascading by Paco Nathan (O’Reilly Media) is a great summarization of the Cascading API.  Paco spends a sufficient amount of time providing a solid overview of Cascading, along with an explanation of related extensions such as Pattern and Lingual. The test cases provided allow a novice user to quickly understand the basics of Cascading, though some of the test cases follow the same flows as the Cascading online documentation site.

Enterprise Data Workflows with Cascading is a great resource for beginning users that need to come up to speed quickly on Cascading.  The book works the reader through an evolution of exercises – from setting up Hadoop and loading files into it, to using different types of joins, and finally to integration points with different languages and a larger case study based on the City of Palo Alto Open Data.

I’d recommend Enterprise Data Workflows with Cascading as a good entry point and a base to build upon as the reader gains more experience.

Review: Placing the Suspect Behind the Keyboard: Using Digital Forensics and Investigative Techniques to Identify Cybercrime Suspects

I wanted to take a look at a computer-based topic not normally in my programming domain and chose Placing the Suspect Behind the Keyboard: Using Digital Forensics and Investigative Techniques to Identify Cybercrime Suspects by Brett Shavers (O’Reilly Media).

As a former police officer, I found some of the discussions around generic evidence preservation to be slightly difficult to stay engaged with.  However, as a whole, Placing the Suspect Behind the Keyboard did not disappoint my desire to see what digital forensics was all about.  After reading this book, the reader should have a solid foundation to start delving into both the investigative and technical areas of a digital forensic investigator.

Placing the Suspect Behind the Keyboard takes the reader through a step-by-step process to ensure that digital investigations and interviews are carried out in a manner that preserves the integrity of both your evidence and your suspect’s involvement.  Shavers reminds us throughout the book that it is not just about finding critical evidence on the digital device – but also about ensuring that you can place the suspect “behind the keyboard” while those actions were occurring.  With excellent references back to sources to keep you on track, Placing the Suspect Behind the Keyboard keeps the reader in line with well-established investigative procedures.  In addition, Shavers covers how to appropriately present your evidence to different types of audiences – something that is more challenging than most assume.

I highly recommend this book to a person just getting into digital forensics or looking to take their technical knowledge to the next level.  While not a highly technical book, it is a great introduction to the digital forensics field.

Disclaimer: I received a free electronic copy of this book as part of the O’Reilly Blogger Program

Review: Metasploit – The Penetration Tester’s Guide

Metasploit: The Penetration Tester’s Guide by David Kennedy, Jim O’Gorman, Devon Kearns, and Mati Aharoni (O’Reilly Media) is very detailed and extremely valuable in demonstrating how penetration testing can be done using Metasploit.  It has the great side benefit of teaching the general methods and processes a pentester will go through during the testing cycle (the PTES methodology).

The initial chapters introduce the reader to the PTES methodology and to Metasploit as a testing product.  As the chapters progress, the authors push the reader deeper and deeper into the Metasploit product’s features, along with how to use those features to complete the penetration-test processes.  In the appendix, the authors provide instructions on how to configure test environments that can support your exploits without sending the Feds to your front door.

Overall, this book is a good resource for people that have good technical skills in Ruby and are comfortable in a Linux environment, and that want to understand penetration testing and the Metasploit product.

Disclaimer: I received a free electronic copy of this book as part of the O’Reilly Blogger Program

Review: Managing for People Who Hate Managing: Be a Success by Being Yourself

After reading Managing for People Who Hate Managing: Be a Success by Being Yourself by Devora Zack, I was pleasantly surprised to find that the author had found a way to convince me to look differently at my “management” role.  In this relatively short book (133 e-book pages on my Nook), Zack guides the reader through several strategies for taking the “you” that earned that last promotion into the manager that you truly want to be – all without changing who you are.

In a writing style that is both informal and lighthearted, Zack works through many of the topics that we have all heard in management training sessions and management tomes – but never really connected with.  The conversation with the reader is in a style that turns complicated management strategies into manageable (pardon the pun) examples, stories, and thought-provoking quotes.  After reading Managing for People Who Hate Managing, I find myself refreshed and ready to work with the thinkers and feelers that are always all around us – while still remaining true to myself.

I highly recommend this book to new and experienced managers alike – and even to a few employees wondering why their managers act the way they do.  Sometimes, all a manager needs is a reminder of how we got here in the first place, and Zack brings that to us in her book.

Review: MongoDB Applied Design Patterns

MongoDB Applied Design Patterns by Rick Copeland (O’Reilly Media) is another in a series of patterns books that I would highly recommend for the implementer that knows the technology but wants to discover whether a previous solution (via a pattern) has been found for their problem.

As to the audience, MongoDB Applied Design Patterns is not for the MongoDB “Hello World”-level implementer.  There is a basic assumption that the reader understands how to implement basic MongoDB solutions and is now ready to implement more advanced solutions using established patterns.  The use cases in the second half of the book were probably the most useful for me.  They not only gave me ideas for how to solve certain patterns of problems, but also allowed me to explore features of MongoDB that I had not had a chance to explore.

I appreciated that Copeland explores several important topics at the beginning of the book.  For example, he takes time to explain how important it is for MongoDB solutions to consider optimization at the beginning of the design rather than afterwards.  Along with those lessons, Copeland provides detailed examples of the “why” instead of leaving the reader to just trust his statements.

I’d highly recommend MongoDB Applied Design Patterns to the implementer that is ready to move beyond the basics into highly viable and scalable solutions, using MongoDB patterns as their initial template.

Disclaimer: I received a free electronic copy of this book as part of the O’Reilly Blogger Program