Paying down some (technical) debt

A couple of weeks back we dedicated a full sprint to tackling some technical debt.

If you’ve read any of our previous tech blogs you’ll know that we take pride in putting the needs of our customers first, which means we usually have a full-on focus on delivering the features and improvements that our customers ask for and deserve.

We always take a balanced view with our technical delivery, but sometimes certain things fall by the wayside, not because they’re not important but simply because there are only so many hours in the day!

Having hit some key milestones recently, we thought it was time to dedicate a couple of days to tackling technical debt.

We track all of our technical debt and review this at each planning session, so it was pretty easy for us to put together a backlog for the upcoming technical debt week.

This is what we achieved.

Making CI Great Again

We’d been having some issues with our CI builds recently: a few flaky specs, and builds that seemed to be slowing down, which was reducing our productivity.

This meant a lot of wasted time waiting for our builds to pass, which at times became a blocker to shipping new features.

Containerising the app

One of our longer-term objectives is to move over to a fully containerised infrastructure. The reasons for this are many, but one of the main ones is that we want to improve the development/test/production parity of our environments.
In layman’s terms, we want the app running in as similar a setup as possible everywhere, to avoid nasty surprises as our application makes its way out to production.
I’d been chipping away at containerising our application over the preceding weeks and had been able to put together an image to run the app and our builds. Containerising the application reduced our build times by pre-installing and caching dependencies, things like Gems or JS packages, so we no longer need to install them on every job as we’d previously been doing. The eventual aim is for this image to form the basis of our deployment infrastructure.
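
For a flavour of how the caching works, the image is built along these lines – copy the dependency manifests first so Docker can reuse those layers until they actually change. This is a simplified sketch; the base image, versions and paths are illustrative rather than our exact setup.

# Simplified sketch of a build image that caches dependencies
# (base image and versions are illustrative, not our exact setup).
FROM ruby:2.6

WORKDIR /app

# Copy only the dependency manifests first so these layers stay cached
# until the Gemfile or package.json actually change.
COPY Gemfile Gemfile.lock ./
RUN bundle install --jobs 4

# Assumes Node and Yarn are already present in the base image.
COPY package.json yarn.lock ./
RUN yarn install

# Finally copy the application code itself.
COPY . .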

Headless Testing

What I hadn’t managed to get around to was how to run our system (or feature) tests using this newly optimised container image, because running these required additional dependencies we wouldn’t necessarily want in our deployment image – like headless Chrome.

Most importantly, we wanted to avoid having different custom images floating about that we would have to maintain.

Instead, we opted to run headless Chrome in a separate container that our build container uses to drive a browser. Handily, the Selenium project provides a ready-built headless Chrome image.

We’ve used this image as an additional service in our GitLab CI setup and configured Capybara to make use of it. This way we don’t need to pollute the application container that we run our builds on with additional dependencies, like Chrome, that are only used for testing.
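
For illustration, the setup looks roughly like this – the image names, service alias and paths are assumptions rather than our exact config. In .gitlab-ci.yml the Selenium image runs as a service alongside the build container:

rspec:
  image: registry.example.com/our-app/build:latest
  services:
    - name: selenium/standalone-chrome
      alias: chrome
  script:
    - bundle exec rspec

and Capybara is then pointed at that service as a remote Selenium driver:

# spec/support/capybara.rb – a sketch; hostnames and ports are assumptions.
Capybara.register_driver :remote_chrome do |app|
  Capybara::Selenium::Driver.new(
    app,
    browser: :remote,
    url: "http://chrome:4444/wd/hub",
    desired_capabilities: :chrome
  )
end

Capybara.javascript_driver = :remote_chrome
# The app under test also needs to be reachable from the Chrome container,
# e.g. by binding Capybara's test server to 0.0.0.0 and setting Capybara.app_host.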

Documentation

It’s fair to say that our documentation had been lacking, and what we did have was spread around in different systems and locations. We’d noticed this had become a slight issue for us at times as we tried to recall things like which features have been switched on for which users, and why certain design decisions were taken on particular features.

To ensure we can scale as a team it’s important that we don’t have an over-reliance on particular people (or their bad memories – talking about me here). In addition, we have a new developer on board, so we thought this would be an ideal opportunity to create the new documentation and test-drive how well it works during onboarding.

Wiki Structure and Documentation Standards

We ran a workshop and decided on an overall structure for our wiki, as well as a standard format for how we document our projects and particular features.

The idea is that the wiki can be a one-stop shop and a living history of what has happened over time and why. We’ll continue to backfill it with content over the next few weeks, and we’ll be adding documentation considerations to our definition of done.

Infrastructure and workflow diagrams

We did have some diagrams, but these were created in a rather expensive piece of software that was really overkill for what we required. We decided to recreate them using the widely used PlantUML, which generates diagrams from a simple markup language. It’s fair to say that the diagrams PlantUML creates are not always a work of art, but they are clear, simple and fast to produce, and very apt for us developers seeing as we can create them using a markup language (-;

We used a couple of different sprite packages, such as plantuml-icon-font-sprites and AWS-PlantUML, to get some nice icons.
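
For a flavour of the markup itself, a simple deployment sequence chart only takes a few lines – the steps below are illustrative rather than our actual pipeline:

@startuml
actor Developer
Developer -> GitLab : push branch / open merge request
GitLab -> CI : run build and tests
CI -> Staging : deploy on merge
Staging -> Production : promote release
@enduml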

We now have an up to date infrastructure diagram as well as workflow and deployment sequence charts to help describe how we work and how we ship code through our test environments and eventually to production.

These are very easy to maintain and don’t require any expensive software licenses!

Additional Backups

Don’t worry, of course we have backups! However, our current backups are managed by our DevOps service, cloud66.

We identified this as a single point of failure: in the very unlikely event that cloud66 suffered a catastrophic failure, it would be difficult to recover our data in a timely fashion.

To remedy this we’ve created an additional backup that is fully under our control, meaning that we now have even more redundancy to recover our data.

Better Test Data 

Sometimes it’s useful to be able to debug using production-like data in development or in a test environment, especially if you’re trying to track down a live issue.

We hadn’t been able to do this because it goes against our data protection policies, and also because our data is encrypted, so production data would be unusable in any other environment anyway.

We’ve created some data scripts that allow us to create a production-like database in our development and test environments without using any personally identifiable information from production. This should allow us to test the application using more realistic data and catch problems much earlier.
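
The scripts themselves are tied to our schema, but the general shape is a seed task that generates realistic volumes of fake records rather than copying real ones – something along these lines, where the model name and the use of the Faker gem are illustrative assumptions:

# lib/tasks/realistic_seed.rake – a hypothetical sketch, not our actual script.
namespace :db do
  desc "Populate development/test with production-like (but fake) data"
  task realistic_seed: :environment do
    abort "Never run this against production!" if Rails.env.production?

    5_000.times do
      Candidate.create!(
        name:  Faker::Name.name,                # generated, never copied from production
        email: Faker::Internet.unique.email,
        city:  Faker::Address.city
      )
    end
  end
end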

React (hash) Router

Up until recently our React components had been standalone and isolated; it was possible to navigate to them using traditional HTTP routing and page loads.

Our ambition is to gradually move to more of a Single Page Application style client, allowing us to fundamentally split the application into a React frontend and a (Rails) API backend.

We’ve started to build multi-part components and need to be able to drop our users directly into a sub-component, rather than the default root component, when they follow links within the application (such as our upcoming notifications).

As an interim workaround we had been using a React tabs component and simply appending a tab number to links; this would then open the component on the correct tab as per the number in the URL.

Whilst this worked, it was far from optimal, and we always felt that React Router was where we would eventually need to go.

Hash Router

Because our application is currently a combination of Rails views rendering traditional HTML/CSS and React, we needed to consider the implications of using React Router alongside Rails routes.

One such implication is that Rails would interpret React routes as Rails routes, try to render the wrong resource, and end up raising errors.

Imagine we have a Rails application with the following routes.

get '/posts', to: 'posts#index'

get '/posts/:slug', to: 'posts#show'

A common pattern in Rails routing is to append resource identifiers to the URL path, as in "/posts/:slug" above. Here Rails expects to find some identifier after the "/posts/" part of the path that it will use to look up the resource, e.g. "/posts/how-to-hire-the-best" would translate to looking up an article with a slug of 'how-to-hire-the-best'.
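
On the Rails side that lookup typically looks something like the sketch below (the model and column names are assumptions):

# app/controllers/posts_controller.rb – illustrative only.
class PostsController < ApplicationController
  def show
    # Whatever follows /posts/ in the path arrives here as params[:slug].
    @post = Post.find_by!(slug: params[:slug])
  end
end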

With React Router, however, we might want to do something different, like "/posts/mine" to show only your own posts. In this case we want two things to happen:

  1. We want Rails to render the posts index page and send our React component to the browser.
  2. We want our React client to filter the posts to the current user.

What would actually happen is that Rails would try to render the posts show page, looking for an article with a slug of "mine". This is Rails following its routes correctly, exactly as it’s been told to!

Luckily there is a way around this: we can use an anchor, or hash, link. There’s nothing new about this – people have been using anchor links for years to link to specific parts of a page, so it makes sense to use them to navigate to a specific part of your client. The browser never sends the hash fragment to the server, so Rails ignores everything after it, allowing us to create URLs like "/posts/#mine" which tell Rails to render the posts index and tell the React client to render the MyPosts sub-component.
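
Wired up with React Router’s HashRouter, the client side looks roughly like the sketch below – it assumes react-router-dom v4/v5, and the component names and file paths are stand-ins rather than our real ones:

// app/javascript/components/PostsApp.jsx – a sketch; names and paths are illustrative.
import React from 'react';
import { HashRouter, Switch, Route } from 'react-router-dom';
// Stand-ins for our real components, defined elsewhere.
import Posts from './Posts';
import MyPosts from './MyPosts';

// hashType="noslash" gives URLs like /posts/#mine rather than /posts/#/mine.
const PostsApp = () => (
  <HashRouter hashType="noslash">
    <Switch>
      {/* /posts/#mine – just the current user's posts */}
      <Route path="/mine" component={MyPosts} />
      {/* /posts/ – the default root component */}
      <Route path="/" component={Posts} />
    </Switch>
  </HashRouter>
);

export default PostsApp;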

Rounding Up

It was great to spend a full week getting through some of our technical debt – obviously we would have liked to have got through a lot more!

This is something that has proved very useful and we’ll definitely be doing another one soon.


Originally published 30th May 2019