Coding principles 7: Avoid “just in case” code

This is the 7th part of a series about 67 Bricks’s coding principles. The previous posts are: 1, 2, 3, 4, 5 and 6.

The principle

Aim to write code for the present and for your knowledge of the near future, rather than on assumptions about what might happen

If any of these principles are to be taken in moderation and with pinches of salt, it’s this one, so let’s start with a caveat. I’m not saying “don’t plan ahead” or “don’t consider how the codebase might change over time”. However, it is important not to code based on guesses or assumptions. That is, not to code “just in case”.

That’s the general point, but let’s be more specific:

Don’t write code that you don’t know you need

The common adage here is YAGNI: You Aren’t Gonna Need It. It can be tempting to build a feature because it seems likely that it will be useful in the future. However, there are various downsides to this. The first is simply that we might be wrong, in which case we’ve spent time and effort on something that isn’t useful rather than on something that is. That non-useful code will still require maintenance and add to the overall size and complexity of the codebase unless or until the decision is made to remove it again, incurring more time and effort – what Martin Fowler calls the “cost of carry”.

Even if it does turn out to be a useful feature, building it before understanding the full requirements is risky. It’s likely we made incorrect assumptions that will still require correction later. Plus we spent that time building the unknown thing that could have been spent on something with a more immediate priority.

Don’t optimise code prematurely

Optimising code for speed or memory consumption or any other measure often comes with trade-offs in terms of readability and maintainability – not to mention that it takes development time. So heed the very common advice to avoid premature optimisation.

This isn’t to say code should never be optimised. If you have identified a performance problem in some part of an application, you absolutely should track down the source of the issue and improve it. But you should be sure there is a problem to fix first.
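As a sketch of what “be sure there is a problem” can look like in practice, here’s a hypothetical before-and-after comparison using Python’s timeit module. The function names and data are illustrative, not from any real project:

```python
import timeit

def join_with_concat(items):
    # Suspected hotspot: repeated string concatenation
    # (historically quadratic, though modern CPython often optimises it)
    out = ""
    for item in items:
        out += item + ","
    return out

def join_with_join(items):
    # Candidate optimisation: build the string in a single join pass
    return "".join(item + "," for item in items)

items = [str(i) for i in range(10_000)]

# Measure before deciding the "optimisation" is worth its cost
concat_time = timeit.timeit(lambda: join_with_concat(items), number=100)
join_time = timeit.timeit(lambda: join_with_join(items), number=100)
print(f"concat: {concat_time:.3f}s, join: {join_time:.3f}s")
```

The point isn’t the specific numbers; it’s that the decision to optimise follows the measurement rather than preceding it.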

This also isn’t to say you should entirely disregard performance while writing code. There is a sensible middle ground between caring too much and too little about things like performance. A good general rule is that there are likely more important features of your code to care about in the first instance, like readability, maintainability and testability.

Don’t generalise code prematurely

Generalising code for reuse is an extremely important tool in the toolbox of creating a scalable and maintainable application. However, not all code needs to be reused. There can be a temptation to assume that a general and reusable solution is automatically the best one. But if you don’t know whether it will ever need to be reused, then it’s often better to stick to solving exactly and only the problem at hand.

As with optimisation, there are often trade-offs with generalising or genericising a class or function. Often it makes the code more abstract and harder to read or to reason about. It’s also easy to make incorrect assumptions about the right way to genericise or abstract code if you try to do it too early. After all, duplication is far cheaper than the wrong abstraction.

One common adage around this is the Rule of Three, which boils down to the advice that you should only try to make a reusable function once the need has come up three times. Before that, you should just write code for the case or cases that you have.

This is an area where caveats abound. There are cases where the costs of reuse are low enough, and the costs of duplication high enough, that it makes sense to genericise earlier. Nevertheless, the Rule of Three is a good rule of thumb to keep in mind and to only break when you’re convinced you have a good reason.
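To make the Rule of Three concrete, here’s a hypothetical Python sketch. Imagine two call sites had each duplicated a little price-formatting logic; only when a third appeared was a shared helper extracted:

```python
def format_price(amount_pence, currency="£"):
    # Shared helper, extracted once the same pattern had come up three times
    pounds, pence = divmod(amount_pence, 100)
    return f"{currency}{pounds}.{pence:02d}"

# The three call sites that justified the extraction:
basket_total = format_price(1250)    # "£12.50"
delivery_fee = format_price(499)     # "£4.99"
invoice_line = format_price(100000)  # "£1000.00"
print(basket_total, delivery_fee, invoice_line)
```

With only one or two call sites, the inline duplication would have been cheaper than guessing at the right signature for the helper.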

Don’t keep unused code around

Sometimes a class or function may become vestigial – in that it is no longer called by anything in the codebase – perhaps following a refactor or the removal of a feature. There can be a temptation to leave that code in place regardless – after all, it’s good code that serves a purpose even if it’s not currently a necessary purpose. We could leave it just in case the need comes back.

This is generally a bad idea. Uncalled code adds further clutter to a codebase that may already be intimidatingly large for a new starter or for someone trying to find their way around. But worse, uncalled code still requires maintenance. It likely relies on other functions, so it acts as deadweight, artificially inflating how widely those functions are used and making them harder to refactor or change. Once again, the YAGNI principle applies.

The same goes for leaving commented-out code in place just in case. A commented-out block is likely simply to become out of date as the codebase evolves around it and leaves it behind.

If you’re ever tempted to leave some unused or commented-out code in place, remember that (I hope!) you use Git or an equivalent version control system, so that code can always be retrieved from the history if it’s needed. But let’s be honest: it probably won’t be.

Of course this advice needs to be modified if you’re writing a code library where the intent is to provide a range of classes or functions to other developers. In that case parts of your public API may not be called by any other code in the project. But the advice still applies to any code that isn’t part of the public API of the library.


Ultimately there are costs and benefits associated with every choice we make when writing an application. It’s tempting to only see the benefits of “just in case” code – more functionality, more optimised code, more flexible code – so it’s important to be cognizant of the costs too: things like time, readability, maintainability.

There are times when those benefits will outweigh the costs and also times when they won’t. The most important thing is to make sure we’re fully aware of what costs and benefits we’re weighing when we make these choices and to be aware of what assumptions we’re making in the process.

There are times also when it’s beneficial to try harder than usual to predict the future or to err more on the side of “better safe than sorry”. Data storage decisions can be especially hard to change later for example. In cases where you’re acquiring data from another source or from user behaviour, if you haven’t been storing some piece of information that a future feature requires, you may never be able to go back and fill it in for historical records. So some reasonable assumptions about the future may need to be made here. What “reasonable” means is unfortunately the kind of question that requires experienced judgement.

There are difficult balances to strike here. It’s sensible to plan for unexpected changes of requirements and to design your codebase so that you keep your options open for the unknowns of the future. But that flexibility has to be balanced against the other more immediate concerns we keep coming back to – readability, maintainability, testability. Always examine your assumptions and try to take account of all the costs and benefits when weighing these decisions.

References

YAGNI

The Rule of Three

The Wrong Abstraction

Wishing upon a star

When I was a boy, many years ago…

…I used to like building polyhedron models out of card. Here is a photo of me, probably around 1987ish, spray-painting a compound of ten tetrahedra. Evidently the photographer had partially obscured the aperture with their finger, but there were no digital cameras back in those days, you only found out your mistakes when you had the picture developed!

The author, as a boy, spray-painting a compound of 10 tetrahedra.
The artist at work

Making a model out of card was a long and laborious process. First, armed with a protractor and ruler, one had to draw the net on card, making sure there were tabs in the right place to glue the thing together. Then it had to be cut out, and the intended folds scored very carefully with a craft knife (you need them to fold cleanly, but if you cut through the card you’ll have to start again). Finally, it had to be glued together to give it three-dimensional form, a process which got increasingly fiddly as access to the interior became harder.

Continue reading “Wishing upon a star”

Coding principles 6: Don’t reinvent the wheel

This is the 6th part of a series about 67 Bricks’s coding principles. The previous posts are: 1, 2, 3, 4 and 5.

The principle

Where reasonable, lean on reputable, existing libraries rather than writing your own solution for a problem that has already been solved.

There are a number of well known edicts in software engineering that relate to this principle. “Reinventing the wheel” describes writing your own software to solve a problem that has already been solved adequately. “Not Invented Here” (NIH) syndrome describes a tendency to believe that only self-authored code is valid and therefore to avoid using any third party software or libraries.

I’ve certainly been guilty of reinventing the wheel – sometimes because of NIH syndrome and sometimes because it didn’t occur to me to check whether the wheel had already been invented. I don’t think I’m alone in this.

XKCD image: 'We don't want to reinvent the wheel, so every day we google image search "wheel", and whatever object comes up, that's what we attach to our vehicles. Sure external dependencies carry risks, but so far they've all been pretty good wheels.'
https://xkcd.com/2140/

At 67 Bricks, we encourage the use of libraries and third party software in our solutions because we recognise that our real goal is to create valuable products for our customers. And we can do that more effectively when we lean on existing solutions to solved problems.

It’s worth considering that the authors of a library are likely to have dedicated considerable time to their solution and to have encountered and fixed problems you haven’t thought of yet. Problems, big and small – from dropdown components to search engines – have endless complexities and edge cases. If we tie ourselves up on these details, we are not working on solving the real problem of the system we’re working on.

Jeff Bezos said a company should “focus on what makes your beer taste better” as an analogy to explain why a software company might use AWS rather than managing servers etc in house. This is a useful way to look at writing software. What makes our “beer taste better” is understanding our customers’ problems and designing systems and products that solve them. We are not directly making our “beer taste better” if we’re spending time reimplementing an HTTP server or a frontend component library.

However, it’s important to evaluate libraries before relying on them. Consider questions like: How actively is it developed and maintained? Is it compatible with other tools in use? Does it have issues or bugs that might make it difficult to work with or cause problems for the larger project? Is its license compatible with the project? Is it accessible? Your colleagues and the community will have thoughts, recommendations and advice, so don’t be afraid to ask.

The infamous left-pad incident highlighted the inherent vulnerability that comes with relying on third-party libraries. The specific issues with NPM that allowed it to be such a problem have since been addressed, but nevertheless it’s worth bearing this warning in mind when bringing in any dependency. We always want to make sensible choices about when to rely on a library and when not to. There’s a balance to find here – sometimes it weighs in favour of choosing the library despite that vulnerability and sometimes it doesn’t.

As with so many decisions, it comes down to a cost-benefit analysis. But it’s important to be realistic about all the costs. There are potential costs of using a library: it may be badly supported or become deprecated, it may have or develop a security hole – or worse, have a security hole put in it maliciously. But there are potential costs of not using a library too, primarily time. Can you afford to spend hours, days or weeks implementing and then maintaining some piece of your system that has already been implemented – perhaps better – by someone else?

So I would advise using libraries liberally where they add value and allow you to concentrate on solving more valuable problems, but remember to evaluate them adequately. Ultimately, aim for a balance between not reinventing the wheel and not bringing in a new library for every little problem you find, however small.

On jspawnhelper and automatic updates

I had just settled down to write some ScalaCheck tests when suddenly my peace and harmony were shattered by the news that the “download PDF” functionality on one of our sites had recently ceased functioning, and it would be quite nice if it could be restored to good working order. And so begins our tale.

Continue reading “On jspawnhelper and automatic updates”

Pair programming with Cursor: A modern twist on collaborative coding

Imagine having a coding partner available 24/7—tireless, fast, and, while not flawless, always ready to help. With AI tools like Cursor, this vision of “pair programming” is now closer to reality, offering a fresh approach for solo developers and teams alike.

While AI development tools are becoming increasingly widespread, software developers often remain sceptical: Can an AI really help us write high-quality code? From my experience, using AI-powered tools reminds me of pair programming with a human. If you are reluctant to give them a go, I think approaching these tools from this perspective may help overcome this barrier, and ultimately will help you get the most out of them.


Why try AI pair programming?

Traditional pair programming has long been praised for its impact on code quality, error reduction, and team culture. Two developers work together at the same workstation, with one acting as the “driver” who writes code and the other as the “observer” who reviews each line. While this approach fosters learning and collaboration, it also requires two people on one task, which can be costly in terms of time and resources.

AI tools offer a practical, low-cost alternative to traditional pair programming. They provide instant feedback, useful suggestions, and a second set of “eyes” on your code—all without needing a human counterpart. While AI isn’t perfect, the back-and-forth interaction can feel surprisingly similar to a human pairing session, and it is beneficial to treat it this way.

How Cursor mimics traditional pair programming

The workflow will feel familiar to anyone who’s done pair programming before. Here’s how it typically unfolds:

  1. Starting the Task: You prompt Cursor to tackle a specific coding problem—perhaps generating a new function, designing a user interface, or drafting test cases.
  2. Iterative Refinement: Cursor generates an initial solution, which you review and adjust. You may request changes, fixes, or refactoring to refine its response.
  3. Moving Towards a Solution: Through this iterative feedback loop, Cursor refines the code until you reach a satisfactory result.

This conversational back-and-forth mimics the dynamic of traditional pair programming.  Approaching development tasks in this way encourages you to think about what you are really trying to achieve, rather than getting sucked into the details of the implementation at the outset.

Using Cursor as your code reviewer

One powerful feature of Cursor is its ability to act as a reviewer, in a sense reversing this workflow. Rather than asking Cursor to write code from scratch, you can ask it to review code you’ve written, offering suggestions for improvement or optimization. This can be particularly helpful when tackling coding tasks you are less comfortable with, such as frontend development or infrastructure as code. The AI can often spot potential improvements you might overlook, augmenting your knowledge in these areas.

This reverse workflow can feel like having an unbiased reviewer who you can call on without worrying about taking up their time. Cursor may suggest alternative approaches, flag potential pitfalls, or highlight areas for optimisation if you ask.

An example: Building a search interface for a GraphRAG API

To illustrate how Cursor works in practice, let’s look at a recent project where I used it to build a simple front-end for a GraphRAG search API. The goal was to create an interface that could display the search responses, and pull out references to supporting documents. Here’s a step-by-step breakdown of how I approached the task:

  • Defining the Extraction Task: The tricky part of this task was writing a set of regular expressions to identify and extract references to documents embedded in the API’s text responses. I started by prompting Cursor to parse the response, giving it a concrete example of the data I wanted to isolate. For instance, in a response snippet like:

    “… as a result, Nvidia stocks are set to rise [Data: Reports (328); Entities (2542, 431); Relationships (372, 5709)]”

I explained that I needed to capture the document references, like “Reports (328)” or “Entities (2542, 431)”, from the text.

  • Testing with Cursor’s Help: Once Cursor had generated an initial implementation, we worked together to write a set of tests. These tests were essential to validate that the references were extracted correctly, capturing the different formats that might show up in the API’s responses.
  • Reviewing and Identifying Edge Cases: I reviewed the extraction results on a wider set of data to identify any edge cases that the initial implementation had missed. 
  • Refining the Solution: I added these edge cases as additional test cases, and asked Cursor to amend the code to account for the variations. With each iteration, it refined the parser incrementally to handle the new scenarios.
  • Removing duplication: As the test suite grew, Cursor was able to make suggestions to simplify the tests and remove duplicated code.
  • Finalising the Implementation: Once the solution was passing all tests, I committed the code. With the references reliably extracted, I moved on to the next step: displaying the referenced data in the frontend. “Cursor, please display the references as cards below the search result …”

You can see in this example the natural back-and-forth process with Cursor, similar to pair programming with a human partner. By iteratively refining the regular expressions, testing thoroughly, and addressing edge cases, my AI partner helped turn a tricky data extraction task into a manageable and enjoyable workflow.
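For illustration, the extraction step described above might look something like this in Python. This is my own reconstruction of the idea, not the project’s actual code, and the regexes cover only the formats shown in the sample snippet:

```python
import re

# Matches a whole data block like "[Data: Reports (328); Entities (2542, 431)]"
DATA_BLOCK = re.compile(r"\[Data:\s*([^\]]+)\]")
# Within a block, matches one reference such as "Reports (328)"
REFERENCE = re.compile(r"([A-Za-z]+)\s*\(([\d,\s]+)\)")

def extract_references(text):
    """Return (kind, ids) pairs for each reference embedded in a response."""
    refs = []
    for block in DATA_BLOCK.findall(text):
        for kind, ids in REFERENCE.findall(block):
            refs.append((kind, [int(i) for i in ids.split(",")]))
    return refs

snippet = ("... as a result, Nvidia stocks are set to rise "
           "[Data: Reports (328); Entities (2542, 431); Relationships (372, 5709)]")
print(extract_references(snippet))
# [('Reports', [328]), ('Entities', [2542, 431]), ('Relationships', [372, 5709])]
```

Each edge case found while reviewing the wider data set would become another test against extract_references before the regexes were adjusted.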

Benefits of AI pair programming

AI pair programming offers many of the same benefits as a human partner in pair programming. For example:

  • Conversational Flow: Just as you’d ask a human partner for feedback, you interact through questions, prompts, and iterative requests, creating a “conversation” with your AI partner.
  • AI as the Driver: The AI partner generates solutions based on your prompts, while you guide it toward the right path.
  • Expecting Mistakes: Like a human partner, the AI will make mistakes. Together, you can refine and improve the code, iterating to align its output with your vision.
  • Incremental Development: Working with AI encourages an incremental approach. You can request small sections of code, review each piece, and adjust as needed—fostering a step-by-step workflow that ensures quality.
  • Broader Perspective: With the AI handling the details, you’re free to step back and consider larger concerns, like UX, accessibility, and project-specific requirements.

Key Differences from Human Pair Programming

While developing with an AI tool shares many of the benefits of traditional pair programming, there are also significant differences:

  • Infinite Patience and Subservience: Unlike a human partner, the AI is endlessly patient and will take a backseat. You can ignore its suggestions without worrying about offence, conflict or having to take a long walk.
  • Freedom from Judgment: There’s no fear of embarrassment when asking a question that might seem “basic.” The assistance is provided without judgement, creating a comfortable environment to explore, learn, and iterate.
  • Knowledge Sharing and Mentorship: AI tools won’t spontaneously take you on a deep dive into the codebase, and lack the mentorship qualities that a human partner might offer. The explanations are often only as detailed as your requests.
  • Code Ownership and Accountability: When pairing with a human, ownership of the code is shared. With an AI partner, the responsibility for quality and accuracy ultimately remains with you. It’s important that these tools are used within a robust development process, with peer code review and testing in a CI pipeline.
  • Emotional Support and Empathy: A human partner can recognize signs of frustration, offer encouragement, or provide a sense of camaraderie that reduces burnout. An AI, whilst supportive in its own way, doesn’t provide emotional support 🙂.

Tips for effective AI pair programming with Cursor

Here are some best practices to help you get the most out of Cursor as your AI pair programming partner:

  • Use Precise Prompts: The clearer your requests, the more accurate Cursor’s responses. Providing relevant context from your codebase using the files and folders commands helps Cursor generate output aligned with your goals.
  • Handle Mistakes as Learning Moments: Cursor will make mistakes or miss the mark. Treat these errors as opportunities to refine your prompts and learn how best to interact with the AI.
  • Experiment with the Reverse Workflow: If you’re unsure about a piece of code, ask Cursor to review it. Its suggestions can help you catch issues early and improve overall code quality.
  • Know When Not to Use Cursor: Cursor may not be ideal for highly creative problem-solving or tasks that require complex decision-making. Use it as a support tool, but don’t rely on it for aspects that benefit from nuanced human judgement.

Final thoughts: Embracing AI as a pair programming partner

If you’re hesitant about using an AI-powered developer tool, thinking of it as a virtual pair programming partner may be just what you need to get started. Remember, these tools are in their infancy and are continually improving—just like all good developers. Embrace the imperfections and cut them some slack; after all, you would not expect perfection of your partner, or yourself. Like it or not, AI is here to stay, and it is a tool every professional developer needs to master.

AI assisted 3D printing

I’d like to share an anecdote which highlights why I find generative AI useful as a way of quickly being able to validate an idea or an approach.  I’ve recently been looking into replacing some small glass lampshades which I may have accidentally broken, and then broken some more whilst trying to glue them back together.  When they are not shattered and clumsily reassembled, this is what they look like:

Pristine lampshade, prior to impact with ground

Employing my usual strategy of scouring eBay and Etsy for suitable replacements didn’t turn up anything useful. I was able to determine that these glass lampshades were cut from a long tube of borosilicate glass, and based on this information I was able to find a supplier who could provide the glass and even cut it to the required length – which would avoid me injuring myself, but would cost £100.

Having recently been exposed to a series of TikToks by Scott Hanselman expounding the value of owning a 3D printer, I wondered if it would be possible to 3D print some replacements. I don’t have a 3D printer (yet), and I don’t know much about 3D printing, other than that Scott thinks it is cool, and it does look pretty cool. I thought it should be possible, but I didn’t really know how well it would turn out, and there were some non-functional requirements around thermal stability (i.e. is it going to melt?) that I was interested in bottoming out.

I found a number of companies online who will print STL files in a range of materials, so now all I needed to do was generate the STL file. A brief search on Reddit suggested that for this type of model, a parametric design tool would be most appropriate – so I downloaded OpenSCAD and set to work. I could have spent the next couple of hours learning the language and writing my scad file, but as I just wanted to see if it could be done as quickly as possible, I asked ChatGPT:

“Please can you help me create an openscad file for the following object: A hollow cylinder, with 20 circular cutouts around the inner edge. The cutouts should run the entire height of the cylinder.”

The response was encouraging:

// Parameters
$fn=100;                   // Setting high facet count for smooth curves
outer_radius = 40;         // Outer radius of the base circle
inner_radius = 35;         // Inner radius of the base circle
height = 100;              // Height of the extrusion
cutout_radius = 2;         // Radius of the circular cutouts
num_cutouts = 20;          // Number of cutouts

module cutout_pattern() {
    difference() {
        // Outer circle
        circle(r = outer_radius);
        
        // Inner circle to create hollow space
        circle(r = inner_radius);
        
        // Cutouts placed around the inner circle
        for (i = [0:num_cutouts-1]) {
            rotate([0, 0, i * 360 / num_cutouts]) {
                translate([inner_radius, 0, 0]) {
                    circle(r = cutout_radius);
                }
            }
        }
    }
}

// Extrude the 2D pattern to create a 3D object
linear_extrude(height = height, center = true) {
    cutout_pattern();
}

Tweaking the parameters so it was the correct size, and rendering it gave me:

Openscad rendering of AI designed lampshade

Not bad! I uploaded this to www.surfacescan.co.uk, and a couple of days later this arrived in the post:

3D printed lampshade in situ – it is the one on the right, I will probably replace them all

It cost about £3 to print, shipping was extra.  It looks pretty good in the lamp, and doesn’t melt with the 1.5W LED bulbs, so that’s my MVP! 

I did a fair amount of experimentation afterwards, trying my own designs, initially with ChatGPT and then increasingly without it, as I became more confident in the tooling and the benefit of using Generative AI over coding it myself began to diminish. I learnt why I should use $fa and $fs over globally setting $fn, and I watched a series of introductory videos on Autodesk Fusion because I wanted to experiment interactively when modelling – all safe in the knowledge that I was spending my time on something I knew worked end to end.

Honeycomb inside FTW, because Hexagons are the Bestigons

This is why Generative AI is so useful – I could definitely have written the OpenSCAD file myself, but not knowing how was a barrier. Being able to get from a textual description of a model to a visualisation in a matter of minutes, and to an actual object in days, allowed me to validate the approach really quickly without having to learn much at all. I think this is similar to product development – Generative AI may or may not be part of the solution, but it can certainly help you get there quicker.

Coding principles 5: Code should be reviewed

This is the 5th part of a series about 67 Bricks’s coding principles. The previous posts are: 1, 2, 3 and 4.

The principle

All code should be reviewed before it is merged into the main branch. All project members, however junior, should be involved in code reviewing.

This is another – I hope – uncontroversial principle.

At 67 Bricks we generally use GitLab for our code hosting, using its merge request feature to signal that a change is ready to review and then to check the changes and leave comments for the author. All the other big Git hosting platforms have equivalent tools that are just as good, so there’s no excuse not to do code reviews when working in a team.

Code reviewing is beneficial for the quality of the codebase because a reviewer may spot edge cases, mistakes, issues or potential problems that the original author didn’t consider. They may also be able to suggest improvements, based on their own knowledge of the project or of the relevant technologies.

We all make mistakes, so having another pair of eyes looking over your work makes it more likely that those mistakes get noticed before they cause a real problem. We also all have different knowledge, experiences, strengths and weaknesses, so code reviewing is a way of bringing the experience of a second person to bear on a problem.

Another benefit is that the reviewer comes to the code at some distance from the detailed problems the developer had to wrangle with, and this can be useful when seeing the complete set of changes as a whole. This is distance that the author will probably gain themselves over the next days and weeks, but it’s useful to have it immediately from another person.

Knowing that your code will be reviewed also encourages you to be more thorough. This is just human behaviour, especially when we are busy and keen to be seen to be making progress.

Slightly less obviously, code reviewing also has benefits for the reviewer because it exposes them to areas of the codebase they may not have worked on before and encourages them to engage with, and constructively criticise, others’ code. This gives them the opportunity to be exposed to and learn from others’ approaches.

And I should emphasise constructive criticism. When it works well, code reviewing can lead to a closer team built on trust. When we, as reviewers, suggest changes, we need to do so without implying criticism. And when receiving review comments, we need to understand that the health of the codebase (and the project) is more important than our egos.

As much as it takes effort to do a good, thorough code review, the benefit is huge. I’m sure I can’t be the only person who has been guilty of waving through a review with minimal attention – perhaps because I know the author is sensible and writes good code – only to find later that it caused some bug that I could have prevented if I’d engaged with it a bit more. Skimping on the effort to review properly is a false economy because the mistakes you miss will just need to be fixed later, leading to more reviews.

Generally speaking we at 67 Bricks think only one person need review each change, but there may be cases where it makes sense for more than one person to be involved, for example to get the input of a particular subject matter expert.

I don’t think anyone would pretend that reviewing code changes is their favourite part of the job, but there are things we can do when putting our code up for review that make everyone’s lives easier.

  • we can aim to keep merge requests small and focussed (spoiler alert: this is the focus of a future principle)
  • we can provide any necessary context, descriptions and (if applicable) screenshots when opening the merge request to give the reviewer the best chance of understanding what they’re looking at
  • we can aim to keep each commit focussed on a single, meaningful change rather than lumping lots of unrelated changes in each commit. This makes it easier to review commit by commit, which can be preferable in some cases
  • we can be available to answer the reviewer’s questions. It can even be helpful to quickly walk a reviewer through your change so they fully understand the context and your intentions before fully reviewing it themselves

Code reviews can unfortunately become a bottleneck in the development process, where a number of changes sit unreviewed and become stale while the team works on other things, so it’s worth trying to keep on top of them. It often works to have a team policy that, upon finishing a piece of work, members should review at least one open merge request before picking up something new.

Code reviewing generally isn’t what gets anyone up in the morning, but it’s immeasurably valuable for the overall quality of the codebase. And slacking on it is likely to lead to costlier problems later on, so it’s worth trying to do well.

Resources

https://leanpub.com/whattolookforinacodereview

https://conventionalcomments.org/

Coding principles 4: Test at multiple levels

This is the 4th part of a series about 67 Bricks’s coding principles. The previous posts are: 1, 2 and 3.

The principle

Test at multiple levels

I don’t think it’s controversial to say that tests are A Good Thing.

Functionality should be well tested so that we can be confident that it works correctly. Tests at different levels bring different benefits and should be combined to provide a high level of confidence in the software’s quality.

A rough common rule of thumb is that there should be:

  • lots of unit tests
    • these focus on individual units of code
    • they should be small, focused and quick to run
  • slightly fewer integration tests
    • these focus on testing multiple units together, potentially including external systems like databases
    • they tend to be a bit slower to run and more involved to set up
  • fewer again end-to-end tests
    • these test the whole stack
    • they generally test from the point of view of an end user or client of the system, so they might run via a headless browser or via a REST API
    • they tend to be comparatively slow and complex so they should be used sparingly and where they add real value
  • a small number of smoke tests
    • these are very basic end-to-end tests that can be run post-deployment or at regular intervals to check that the service is healthy

There is much that sensible people can disagree on in the above, like where the line sits between unit and integration tests, how much value there is in mocking in unit tests, and much more. But I think the broader point that there is value in having tests at multiple levels stands.

By writing good tests at multiple levels, and running them often, it is possible to have a high level of confidence that a piece of software is in good working order.

Well written tests can bring a huge number of benefits, some of which are perhaps less obvious than others.

Tests verify that a piece of functionality works as intended

This is perhaps the most obvious benefit of tests: they test that some code does what you think it does.

While at 67 Bricks we are fairly agnostic to TDD (you’re welcome to use it, but you don’t have to), we do advocate for interleaving writing code with writing tests, rather than writing all the code first and leaving the tests till the end. Writing tests throughout the process can be hugely helpful in writing code that does what you intend it to with minimal bugs.

Tests encourage good, clean, modular code

It is a good rule of thumb that if a unit of code is hard to test, it’s probably an indication of a problem that you should fix. If it’s hard to test, perhaps it’s too tightly coupled or it’s making some undue assumptions or it has a confusing interface or it’s relying on hard-to-reason-about side effects… Wherever the difficulty springs from, the fact that it’s hard to test is a useful warning sign.

Tests act as specifications of behaviour

Each of your tests can act as an encapsulated description of how this unit of code is intended to act in particular circumstances. This is great as a way of documenting the developers’ intentions. If I come to a method and find myself wondering what to expect if null is passed into it, then my life will be made a lot easier if there’s a corresponding test like

it('throws an error when null is passed in', () => {
  // ingestArticle is a hypothetical function under test
  expect(() => ingestArticle(null)).toThrow();
});

This example uses jest, a popular testing framework in the JavaScript/TypeScript world that allows you to write very spec-friendly, descriptive test names. In languages or frameworks that require you to use function names rather than strings for test names, I advocate making those function names as long and descriptive as possible, like

void throwsAnErrorWhenNullIsPassedIn()

Tests act as examples of how to use units of code

Related to the above point, they also act as written examples to future developers of how to use or consume the module under test.

Tests guard against regressions and other unintended changes

One of the most valuable things about adding tests as new features develop is that they remain in the codebase indefinitely as guards against unintended behaviour changes in the future. When working on a large system – particularly when developers come and go over time – it’s invaluable to be able to get instant feedback that a change you’ve made has caused a test to fail. This is especially true if that test is descriptively named, as recommended above, because it will help you understand what to do to fix the failure. Perhaps your change has had unintended consequences, or perhaps the test simply needs updating based on your change – a well named and well written test will help you make that call.

For this reason, it can sometimes be useful to test fairly trivial things that wouldn’t be worth testing if the only benefit were checking that your code works. Sometimes it’s valuable to simply enshrine something in a test to prevent accidental changes to important behaviour.

Tests help you refactor with confidence

When refactoring, tests are indispensable. If the code is well covered by good tests, and those tests pass after you’ve finished the refactor, you can have a high degree of confidence that you haven’t inadvertently changed any behaviour.

Resources

https://martinfowler.com/articles/practical-test-pyramid.html

Coding principles 3: Favour simplicity over complexity

This is the 3rd part of a series about 67 Bricks’s coding principles. The previous posts are: 1 and 2.

The principle

Aim for simplicity over complexity. This applies to everything from overarching architectural decisions down to function implementations.

This principle is a close cousin of the previous one – aim for clear, readable code – but emphasises one particular aspect of what makes code clear and readable: simplicity.

Simpler solutions tend to be easier to implement, to maintain, to reason about and to discuss with colleagues and clients.

It can be tempting to think that for software to be good or valuable it must be complicated. There can be an allure to complexity, I think partly because we tend to equate hard work with good work. So if we write something labyrinthine and hard to understand, it’s tempting to think it must also be good. But this is a false instinct when it comes to software. In code, hard does not equal good. In general complexity for its own sake should be avoided. It’s important to remember that there’s absolutely nothing wrong with a simple solution if it does what’s needed.

There’s also value in getting a simple solution working quickly so that it can be demoed, reviewed and discussed early compared to labouring for a long time over a complex solution that might not be correct. Something we emphasise a lot working at 67 Bricks is the value of iteration in the agile process. It can be extremely powerful to implement a basic version of a feature, site or application so that stakeholders can see and play with it and then give feedback rather than trying to discuss an abstract idea. Here, simplicity really shines because often getting a simple thing in front of a stakeholder in a week can be a lot more valuable than getting a complicated thing in front of them in a month.

This principle applies at every level at which we work, from designing your architectural infrastructure, down through designing the architecture of each module in your system, down to writing individual functions, frontend components and tests. At every level, if you can achieve what you need with fewer moving parts, simpler abstractions and fewer layers of indirection, the maintainability of your whole system will benefit.

Of course there are caveats here. Some code has to be complicated because it’s modelling complicated business logic. Sometimes there must be layers of abstraction and indirection because the problem requires it. This principle is not an argument that code should never be complicated, because sometimes it is unavoidable. Instead, it is an argument that simplicity is a valuable goal in itself and should be favoured where possible.

Another factor that makes this principle deceptively tricky is that it is the system (the architecture, the application, the class etc) that should be simple, not necessarily each individual code change. A complex system can very quickly emerge from a number of simple changes. Equally, a complicated refactor may leave the larger system simpler. It’s important to see the wood for the trees here. What’s important isn’t necessarily the simplicity of an individual code change, but the simplicity of the system that results from it.

There’s also subjectivity here: what does “simple” really mean when talking about code? A good example of an overcomplicated solution is the FizzBuzz Enterprise Edition repo – a satirical implementation of the basic FizzBuzz code challenge using an exaggerated Enterprise Java approach, with layers of abstraction via factories, visitors and strategies. However, all the patterns in use there do have their purpose. In another context, a factory class can simplify rather than obfuscate. But it’s important not to bring in extra complexity or indirection before it’s necessary.
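For contrast, the entire FizzBuzz problem fits comfortably in a few lines. A sketch of the simple version:

```javascript
// FizzBuzz without factories, visitors or strategies: multiples of 3
// become "Fizz", multiples of 5 become "Buzz", multiples of both
// become "FizzBuzz", and everything else is the number itself.
function fizzBuzz(n) {
  if (n % 15 === 0) return 'FizzBuzz';
  if (n % 3 === 0) return 'Fizz';
  if (n % 5 === 0) return 'Buzz';
  return String(n);
}
```

If the requirements genuinely grew to need pluggable rules, the abstractions could be introduced then; until that point, the simple version does everything that’s needed.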

Resources

The Wrong Abstraction – Sandi Metz

The Grug Brained Developer

Simplicity is An Advantage but Sadly Complexity Sells Better

Coding principles 2: Prioritise readability

This is the 2nd part of a series about 67 Bricks’s coding principles. The first post, containing the introduction and first principle is here.

The principle

Aim for clear, readable code. Write clear, readable comments where necessary

You should make it a priority that your work be readable and understandable to those who might come to it after you. This means you should aim to write code that is as clear and unambiguous as possible. You should do this by:

  • using clear variable, function and class names
  • avoiding confusing, ambiguous or unnecessarily complicated logic
  • adhering to the conventions and idioms of the language or technology you’re using

What can’t be made clear through code alone should be explained in comments.

Comments should focus on “why” (or “why not” explanations) far more than “how” explanations. This is particularly true if there is some historical context to a coding decision that might not be clear to someone maintaining the code in the future.

Note however that just like code, comments must be maintained and can become stale or misleading if they don’t evolve with the code, so use them carefully and only where they add value.

It is important to recognise that your code will be read far more times than it is written, and it will be maintained by people who don’t know everything you knew when you wrote it, possibly including your future self. Aim to be kind to your future self and others by writing code that conveys as much information and relevant context as possible.

I expect we’ve all had the experience of coming to a piece of code and struggling to understand it, only to realise it was you who wrote it a few months or weeks (or even days?) ago. We should learn from this occasional experience and aim to identify what we could have changed about the code the first time that would have prevented it. Better variable names? More comments? More comprehensive tests?

“You’re not going to run out of ink,” is something a colleague once commented on a pull request of mine to say that I could clarify the purpose of a variable by giving it a longer, more descriptive name. I think that’s a point worth remembering. Use as many characters as you need to make the job of the next person easier.

Of course, there’s some subjectivity here. What you see as obscure, someone else might see as entirely clear and vice versa. And certainly there’s an element of experience in how easily one can read and understand any code. The point really is to make sure that at least a thought is spared for the person who comes to the code next.

Examples

Here is an example that does not follow this principle:

const a = getArticles('2020-01-01');
a && process(a);

This example is unclear because it uses meaningless variable names and somewhat ambiguous method names. For example, it’s not clear without reading further into each method what they do – what does the date string parameter mean in getArticles? It also uses a technique for conditionally executing a method that is likely to confuse someone trying to scan this code quickly.

Now, here’s an example that attempts to follow the principle:

// The client is only interested in articles published after 1st Jan
// 2020. Older articles are managed by a different system.
// See <ticket number>
const minDate = '2020-01-01';

const articlesResult = getArticlesSince(minDate);
if (articlesResult) {
  ingestArticles(articlesResult);
}

It provides a comment to explain the “why” of the hardcoded date, including relevant context; it uses much more meaningful names for variables and functions; and it uses a more standard, idiomatic pattern for conditionally executing a method.

Resources

Naming is Hard: Let’s Do Better (Kate Gregory, YouTube)