Review: Learning Rails 3

Review by Jason Armstrong of Learning Rails 3


Learning Rails 3 by Simon St. Laurent, Edd Dumbill, and Eric J. Gruber (O’Reilly Media) is a great opening guide for developers who are new to Ruby on Rails development.  The book does assume some basic background from the reader (as stated in the preface).  The reader should know HTML (hand-written, not just HTML via WYSIWYG tools) along with Ruby in order to truly understand the concepts presented in this book, and the authors provide an appendix to help with the Ruby ramp-up.  Finally, a general programming background will help the reader follow the concepts being presented.

Like all technology books, the authors had to write the title against the version of Rails that was available at the time.  However, I feel that the authors have provided a solid foundation that supports the reader’s independent advancement as they iterate through newer versions of the technology.  The authors also warn about potential problems and points of confusion readers may experience.  Too few authors are willing to commit to these types of warnings, and I appreciate those who do provide them.  After all, no technology is perfect in all ways.

While the Model View Controller (MVC) specialist in me kept screaming about some of the early discussions in the book, the authors actually found ways to meet the fundamentals of MVC while keeping the concepts maintainable and manageable.  I don’t fault them for their approach, since the flow of the book leads the developer to those fundamentals as they progress.  In fact, it was actually refreshing to see the MVC concepts explained in a way that would reach all developers – not just the purists.

Overall, I recommend this book to the type of reader described above.  As the authors state in their preface, you will not be a Rails guru after reading it; but you will be a lot closer to it than you were before.

Disclaimer: I received a free electronic copy of this book as part of the O’Reilly Blogger Program.

Version Control of API and Other Documentation

In his post “Version Control Your API Documentation with Github,” Kin Lane discusses an approach for using Github to version API documentation.  The post recommends using a single public repository to store and version the API documents alongside the APIs.  I agree with his thoughts and position.  However, I’d like to extend Kin’s thoughts to a practical problem that appears when that solution is misunderstood and implemented at the root source-control level.

The problem:  Assume that you have the repository structure below and you ask a novice developer to “check in” the API documents, database scripts, and so on related to this project (basically anything that is not actually needed to run the code).
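For reference, the standard Maven project layout looks roughly like this (a sketch; `my-app` is a placeholder name):

```
my-app/
├── pom.xml
└── src/
    ├── main/
    │   ├── java/
    │   └── resources/      <-- anything here is packaged with the artifact
    └── test/
        ├── java/
        └── resources/
```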

The concern: In this project structure (a Maven archetype default), there is a strong risk that developers will place their API documentation into the resources folder.  However, since anything placed in this location is generally packaged with the deployable unit, your repositories (source control, Maven repositories, deployment folders, etc.) could suddenly see a swelling of the storage needed to support the effort.

This storage concern is not a critical issue if your project is small and your documentation very minor.  However, small projects that allow bad habits have a tendency to spread those habits across a development organization’s larger projects if they are not watched closely.  After all – who do you staff your projects with other than developers who have “already done it,” just on smaller efforts?

In a large project, you might suddenly see gigabytes of documents, diagrams, database schemas, etc. showing up in this resources folder.  Assuming the project undergoes hundreds of releases, branches, tags, and forks – this storage suddenly becomes an issue.

Imagine that your developers placed 2 gigabytes of “documentation stuff” into the project’s resources folder and that your build system is required to check out a fresh copy of the project once every hour so that it can build and package after checkout (an insane requirement – but I digress).  That is 48 GB of data flowing back and forth between the source control system and your build servers each day.  Extending that assumption further – if you have more than one project at your company with that requirement – you might get a call from your LAN team.

The recommendation: Use strategies such as Kin mentions (along with the options below) to move your documents into a documentation repository and/or repository folder that is not automatically packaged with the release.

Option #1 (Recommended): Use a dedicated repository for documentation and hook it to the code via technologies such as Github.
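One way to do that hook-up, assuming the documentation lives in its own Git repository, is a Git submodule; a sketch (the `project-docs` repository name and URL are hypothetical):

```shell
# Inside the code repository: attach the docs repository as a submodule.
# Only a pointer is stored in the code repo, not the documents themselves.
git submodule add https://github.com/example-org/project-docs.git docs
git commit -m "Link documentation repository as a submodule"

# Collaborators (and build servers) who actually want the documents
# must fetch them explicitly; everyone else gets a lightweight checkout:
git submodule update --init docs
```

Because the submodule content is fetched on demand, an hourly build checkout no longer drags the documentation across the network.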

Option #2: Place the documents into a folder outside of the packaging process.  For example, place your documents into a folder at the same level as the project’s source folder.  You will still face the checkout cost – but at least you are not shipping a large set of documents with your compiled package.
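Under this option, the layout might look like the following sketch (the `docs` folder name is an assumption; the key point is that it sits outside `src/main/resources`):

```
my-app/
├── pom.xml
├── docs/               <-- API documents, diagrams, schemas; never packaged
└── src/
    └── main/
        ├── java/
        └── resources/  <-- only files the running code actually needs
```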

Option #3: If you can afford it, invest in commercial toolsets.  Coming from an SOA background, I think of offerings such as WebSphere Service Registry and Repository, HP SOA Systinet, and Software AG.

Review: Version Control with Git

Already having a background in advanced usage of ClearCase, CVS, and SVN, I picked up Version Control with Git by Jon Loeliger and Matthew McCullough (O’Reilly Media) to understand how Git could help me solve some of the feature challenges I had been working through with other VCSs.  This book certainly delivered on my expectations.

The authors work through the process of setting up and configuring Git step by step.  In addition, they spend a great deal of time delving into the more important topics required to work with Git as a power user.  The examples were useful, and the diagrams were effective at conveying the points needed.  There is no doubt that the authors understand Git.  The time they take explaining why to “do something” is important in moving the reader from a simple user of Git toward becoming a power user.  The “Submodule Best Practices” chapter was helpful in solving some of my current challenges, while the “Tips, Tricks, and Techniques” chapter gave me some quick wins.

While there are many ways to solve the same problem when using any VCS, I felt the authors worked hard to provide an honest and open view of their approaches.  I highly recommend Version Control with Git to the reader who wants to understand more about a VCS like Git than simply a small number of quick commands via an IDE.  While this book is not a definitive reference for all things Git, it provides a solid foundation that sets the reader on the right path as they learn more about Git’s inner workings.

Disclaimer: I received a free electronic copy of this book as part of the O’Reilly Blogger Program.

SOA Governance: Control the Chaos

With the growing number of implementations in the development community based on the SOA (Service-Oriented Architecture) paradigm, I see many different governance mechanisms stated in terms of “must,” “shall,” and “only” once someone has begun an implementation of SOA. In this post, I will discuss some of the observations from my 8+ years of working in the SOA paradigm and provide some advice on how to better manage one of the stickiest areas of SOA implementation – governance.

SOA governance is probably the most misunderstood area of a SOA implementation in our organizations and projects. Even the community contributors to Wikipedia struggle to provide a single, concise definition of what SOA governance “is.” There are many academic reasons for this struggle, but the overriding one is that governance means different things at different levels of an IT organization. To the technical manager, governance is controlling which tools, resources, and processes their development team will utilize. To the developer, governance is controlling how they create a service, how they integrate it with other services, and generally how services should be built. To the architect, governance is about controlling which specific services are built, their interactions, the domain of responsibility they satisfy, and how well they can be reused.

With all of these competing pulls on a SOA governance process, a person can quickly see why there are so many flavors of governance implementations around the community. When the development of SOA implementations is outsourced to remote organizations, the puzzle of governance becomes even more complicated, since motivations such as finances, domain control, and experience-to-cost trade-offs arise. I will not be able to distill all of these complicated issues into a single model everyone can follow in every implementation (hence why this is an issue in the first place), but based on my experiences with SOA governance, I will try to provide some simple recommendations that might make implementing governance easier for all of these different types of teams.

Understand SOA Governance’s level of control.

The first mistake an organization can make in implementing SOA governance is to try to control the lowest level of decisions being made in the implementation of SOA services. For example, defining every development tool a team must use in the implementation of a service is a recipe for disaster. I am not saying there should be no guidelines around tooling – only that there must be a sense of reasonableness and flexibility to prevent teams from needing to “force” a tool to do something it was not designed to do well. For example, in a presentation at a JUG meeting, an architecture member presented their “Toolset Recommendation for SOA Implementations.” It looked something like this.

It was explained to the room that the developers were advised that if they did not use these (and only these) tools in their service development, their implementation would not be approved by the governance board and hence would not be eligible for deployment. I can’t fault them for attempting to resolve a major issue in most development communities: toolset control and toolset domain knowledge. After all, how many times have we all been on a maintenance effort where we find that the previous team used an obscure library that takes us days to figure out how to use (much less find a download or instructions for in the community)? However, while their approach was a valiant attempt at resolving these common issues, I believe it causes a condition I like to term “over-control.” In my experience, any approach that causes over-control will generally fail.

As an example of over-control: what if a service implementation contains a requirement to provide session support? In Apache CXF, session support is not provided out of the box due to thread-safety risks. While this condition can be mitigated through some tricks of the trade, a different service framework might have made this implementation easier to develop successfully. However, in the case above, the SOA governance “standard” required the use of a single service framework. Therefore, the team is quickly out of compliance with governance for picking a different service framework, and either fails to implement or implements potentially buggy workarounds to conform to the governance standard.

How do you resolve this conflict? Create a toolset board that contains your brightest and most reasonably vocal senior developers. That board should provide multiple recommended toolset options for each framework area. For example, the board should use their past experiences and combined knowledge to define recommended toolsets, documenting each tool’s “sweet spots” and “limitations.” This information should be readily available to project teams as they begin lower-level design. However, the most important feature of this process is to allow project teams to challenge or add new toolsets to the standard through the board. If such a case is accepted, the organization as a whole benefits from the experiences of its project teams, the governance standards stay current and flexible, and project teams don’t implement a toolset with known issues already identified by other project teams. With this approach, the governance document for approved toolsets would look like the below (simplified view).
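A simplified sketch of such a governance document might look like the following (the row contents are hypothetical illustrations, with the CXF limitation taken from the session-support example above):

```
Framework area    | Recommended options | Sweet spots             | Limitations
------------------+---------------------+-------------------------+------------------------------
Service framework | Apache CXF, plus    | JAX-WS/JAX-RS services  | No out-of-the-box session
                  | board-approved      |                         | support (thread-safety risks)
                  | alternatives        |                         |
```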

With these types of toolset control mechanisms implemented in SOA governance, we get the best tools chosen for each area of our implementations while growing our organizations and development communities. In the end, we have achieved the needs of each of our organizational areas: the development managers get a list of tools that ensures core competency in their teams, the developers get flexibility and some experience-based advice, and the architects get their reuse and standardization.

Coding Standards: A Beginning

While working on a proposal for a new open source incubator project, it came as no surprise that the topic of which coding standards we should use rose to the top of my task list as code-formatting arguments broke out.  In a flash of inspiration, I immediately provided the standard quick and concise answer:  “Let’s use the Oracle Java Coding Conventions standard.”  Suddenly, the sun burned brighter and the birds took up in song at the brilliance of my efficient answer.  Later in the day, when I had more time to consider the ramifications of my earlier answer, I pondered whether I had been too simplistic in my view of what coding standards mean to me, my project, and the information technology industry as a whole.

So… let’s be honest with ourselves here.  When push comes to shove, we do what we need to do to get the product out to market. How often do we tell ourselves, “Who really cares if I used 5 or 10 spaces of indent?” and “Why does anyone care that all of my variables are single letters?”  We know that pure “format” issues don’t really matter to anyone other than the most critical of code reviewers.  Also, we always tell ourselves that we will go back and fix all those little shortcuts we took (missing comments, missing JavaDoc, commented-out code, etc.) just as soon as we have a little more time.  Besides, we all know that badly “formatted” code runs in production just as well as nicely “formatted” code… right?

However, as I found some free time to myself (aka the holiday period), I wondered whether some of the items defined in high-quality coding standards are a little more complicated than pure formatting.  An example of one of those items is found below.

STRUCTURE GUIDELINE – “Avoid nested method calls at all costs unless they are completely guaranteed not to throw a NullPointerException.”

Example #1

this.theInstance.theMethod().getMap().get("key");

In the above example, there is a good possibility that this efficiently written single line of code will throw a NullPointerException to the caller.  Code reviewers generally see samples where this exception-prone code is wrapped (usually later), as the example below shows.

Example #2

  1. try {
  2.     this.theInstance.theMethod().getMap().get("key");
  3. } catch (NullPointerException npe) {
  4.     log.error(npe.getMessage(), npe);
  5. }

When the NullPointerException from the code above is inspected, the stack trace will tell you the line that threw the exception (line 2), but it cannot tell you whether the null reference on that line was theInstance, theMethod(), or getMap().  Suddenly, we begin to realize that perhaps high-quality coding standards can help us write more “reliable” code.
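One way to honor the guideline without scattering try/catch blocks is to make each hop of the chain explicit.  The sketch below uses Optional (Java 8+); the theInstance/theMethod()/getMap() names are the hypothetical ones from the examples above, with stand-in classes so the snippet is self-contained:

```java
import java.util.Map;
import java.util.Optional;

public class SafeLookup {

    // Stand-ins for the hypothetical collaborators in the examples above.
    static class Inner {
        Map<String, String> getMap() { return null; }  // simulate a broken link
    }
    static class Holder {
        Inner theMethod() { return new Inner(); }
    }

    private final Holder theInstance = new Holder();

    // Each hop of the former chain is individually null-checked, so a missing
    // link yields an empty result instead of an ambiguous NullPointerException.
    public String lookup(String key) {
        return Optional.ofNullable(theInstance)
                .map(Holder::theMethod)
                .map(Inner::getMap)
                .map(m -> m.get(key))
                .orElse(null);
    }
}
```

Even when getMap() returns null (as the stand-in does here), lookup() returns null cleanly rather than throwing, and a reviewer can see at a glance which hop is allowed to be absent.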

In summary, delve deeper into the coding standards available in the community and consider whether your projects should enforce them with tools such as Checkstyle (my current preference).  It worked for me, and hopefully it will work for you too.
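As a starting point, a minimal Checkstyle configuration touching the themes above might look like this (a sketch; these are standard Checkstyle module names, but which checks you enable is a team decision):

```xml
<?xml version="1.0"?>
<!DOCTYPE module PUBLIC
    "-//Checkstyle//DTD Checkstyle Configuration 1.3//EN"
    "https://checkstyle.org/dtds/configuration_1_3.dtd">
<module name="Checker">
  <module name="TreeWalker">
    <module name="Indentation"/>        <!-- consistent indentation -->
    <module name="LocalVariableName"/>  <!-- discourages single-letter names -->
    <module name="JavadocMethod"/>      <!-- checks method Javadoc -->
  </module>
</module>
```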