Silly Scala tricks – the Ternary operator

Scala doesn’t use the ternary operator:

  test ? iftrue : iffalse

preferring to use if:

  if (test) iftrue else iffalse

Dan and I (Inigo) were on a long train journey together and decided to see if we could use Scala’s flexible syntax and support for domain-specific languages to implement our own ternary.

Our best shot is:

// Holds both branches unevaluated, so only the chosen one ever runs
class TernaryResults[T](l: => T, r: => T) {
  def left = l
  def right = r
}

// ?= ends in '=', so it gets assignment-like (lowest) precedence and
// binds after ⫶ has built the TernaryResults
implicit class Ternary[T](q: Boolean) {
  def ?=(results: TernaryResults[T]): T = if (q) results.left else results.right
}

// ⫶ (U+2AF6) stands in for ':', which Scala has already claimed
implicit class TernaryColon[T](left: => T) {
  def ⫶(right: => T) = new TernaryResults(left, right)
}

(1+1 == 2) ?= "Maths works!" ⫶ "Maths is broken"

This is pretty close – it’s abusing Unicode support to have a “colon-like” operator since colon is already taken, and it needs to use ?= rather than ? so the operator precedence works correctly (operators ending in = get assignment-like, lowest precedence), but it’s recognizably a ternary operator, and it evaluates lazily.
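
To see the laziness in action, here’s a quick check – a sketch, where shout is a throwaway helper defined only for this illustration:

def shout(msg: String): String = { println(s"evaluating: $msg"); msg }

// Only the branch selected by the test is evaluated, as with a real ternary
(1 + 1 == 2) ?= shout("Maths works!") ⫶ shout("Maths is broken")
// prints "evaluating: Maths works!" – the right-hand branch never runs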

VPN and Git clients

In this week’s developer meeting, we discussed whether we wanted to set up an office VPN, and then the merits of various Git clients.

At present, we use SSH and port forwarding to get access to office machines and services from outside. This is a bit tricky to set up, and each developer has to configure forwarding for each internal server independently. Simon made the case that a full VPN would make connections to the office simpler, and would also make it easier to reach those of our external servers where access is restricted by IP address. After some discussion of the various VPN options, we decided that Joe would set up OpenVPN next week.

After that, we discussed Git clients. We don’t have a single standard Git client, so we compared the strengths and weaknesses of the ones we do use:

  • TortoiseGit – Windows only. Generally good: fairly simple to use, but can do complicated things too. It has good Windows Explorer right-click integration – the only client discussed that does – and this is very useful.
  • IntelliJ IDEA/Rider integrated client – very good for projects in the languages it supports, because it has language-specific diffing/merging and pre-commit inspections. It is also convenient that it’s integrated into the IDE. It’s a bit heavyweight for quickly checking something in an unfamiliar repo.
  • Gitk + command line – Gitk is a Git tree visualization tool; you use the command line for actual changes. It is good on Linux in conjunction with the command line, and can be set up to use alternate diff/merge tools, such as the IDEA differ. A major advantage is that the command line can do everything, so users of other tools will always have to switch to it anyway for complex tasks.
  • GitKraken – pretty, starts quickly, and asks for an SSH password only once per invocation. We weren’t sure of the extent to which it can do complicated things, but it has some mechanism for undoing changes. We thought it was not as good as IDEA for Java/Scala projects, but good for other languages. It runs on Windows, Linux, and OS X.
  • SourceTree – its most distinctive feature is a UI for staging individual hunks and lines within a file – but the app is unstable, copes badly with changes made to the working copy outside the app, and is sluggish on large repositories. It is okay for browsing history, but searching is limited. Broadly not recommended.
  • GitHub – a simple client which, despite the name, can be used for any Git repository, not just ones hosted on GitHub. It’s too limited to be a developer’s main client, but might be useful for non-developers.

We didn’t come to any firm conclusions about which tool was best.

Developer meeting – Gitlab

This week we talked about Gitlab, and our migration from using Sourcerepo for Git hosting to using our own self-hosted Gitlab on Amazon EC2.

We’re switching to Gitlab because it has a better UI, has a REST API for creating and managing projects, and has some nice features like protected branches and snippets.

Our migration process was to write some Scala code that used Selenium to scrape our existing list of projects from Sourcerepo, and then used the Gitlab API to create new projects in Gitlab. It also migrated users and transferred issues from our Trac issue management system into the Gitlab issue management.
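
To give a flavour of the API side, here’s a minimal sketch of creating a project over REST – assuming the current GitLab v4 API and the scalaj-http client, with placeholder values for the host name and token (our actual migration code was more involved):

import scalaj.http.Http

// Create one project via the GitLab REST API; returns the HTTP status code
val token = sys.env("GITLAB_TOKEN")

def createProject(name: String): Int = {
  val response = Http("https://gitlab.example.com/api/v4/projects")
    .header("PRIVATE-TOKEN", token)   // a personal access token
    .postForm(Seq("name" -> name))
    .asString
  response.code                       // 201 on success
}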

We discussed using the Gitlab “merge request” process in place of our existing Git Flow approach to code review. We didn’t reach any firm conclusions, but we’re likely to discuss this again in a future developer meeting.

We discussed other uses for the API:

  • It potentially allows for better integration with our Jenkins continuous integration server. We would need to write code for this. There is existing Jenkins code that will scan a GitHub account, and create a Jenkins build for every project it finds containing a “Jenkinsfile” pipeline configuration – we could use this as a basis. This would mean that we didn’t need to configure Jenkins projects explicitly, just create a new repo based on our standard project template and the Jenkins project would be created automatically.
  • In addition to the normal Gitlab backup process, we can use the API to maintain a local copy of all the Git repositories on the server. The API lets us enumerate all of the repositories and create directories for them; a script can then check out the latest contents of each on a schedule (a sketch follows below).
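
Sketchwise, the per-repository step of that backup might look like this in Scala – the API enumeration and JSON parsing are elided, and git is assumed to be on the path:

import java.io.File
import scala.sys.process._

// Keep a bare mirror of one repository up to date: clone it the first
// time, then refresh it on every scheduled run thereafter.
def mirror(repoUrl: String, dir: File): Unit =
  if (dir.exists()) Process(Seq("git", "remote", "update"), dir).!
  else Seq("git", "clone", "--mirror", repoUrl, dir.getPath).!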

Ansible and other orchestration tools

In our developer meeting, Richard B talked about orchestration tools, reporting back on his evaluation of Chef, Puppet, Ansible and Salt.

The full details are in the slides he produced, attached below. The high-level summary is that he thinks Ansible is the most appropriate for our purposes. Ansible playbooks let us set up repeatable scripts for provisioning a server – so the server setup instructions that we currently write by hand can be replaced by an executable and testable script. The main downsides for us are that Amazon Linux doesn’t work well with Ansible, and that we will need to create modules from scratch for some of the software we use, such as MarkLogic.

Richard has set up one of our current projects to work with Ansible, and we will experiment with using Ansible for new projects.

Slides for the Ansible dev talk (PDF)

Developer meeting – streams and Vagrant

Rhys talked about Java and Scala streams. With Java 8, you can iterate over a tree of files using a Java Stream. In Scala 2.11, Java streams aren’t compatible with Scala’s collections. However, all of this will change in Scala 2.12, which brings interoperability between Scala and Java streams. Rhys also talked about how Scala 2.12 fully supports Java 8, and requires it. This includes using Java 8 lambdas directly, so the compiler no longer needs to create an interface wrapper and a separate class file for every lambda.
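
As a concrete illustration of the file-tree case, here’s a minimal sketch using java.nio.file.Files.walk from Scala 2.11 – converting the Java stream’s iterator rather than the stream itself:

import java.nio.file.{Files, Paths}
import scala.collection.JavaConverters._

// Walk a directory tree with a Java 8 stream; on Scala 2.11 there is no
// direct Java/Scala stream interoperability, but the iterator converts fine
Files.walk(Paths.get("."))
  .iterator().asScala
  .filter(p => Files.isRegularFile(p))
  .foreach(println)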

Simon talked about Vagrant – a tool for managing the lifecycle of virtual machines. It uses a “box” as a base image, and a “provider” such as AWS, Docker, or VirtualBox to run it. A Vagrantfile represents a base box plus a layer of configuration on top of it – doing things like installing everything your virtual machine needs. Vagrant base boxes can be stored anywhere URL-accessible, although there are public repositories for them. You can make Vagrant automatically set up port-forwarding to your local machine. To keep performance good, it’s important to bake the content that doesn’t change into the base box. Packer is a separate tool that produces something closer to a deployable production version of your system, where Vagrant is more dev-focussed. Vagrant Share allows you to forward connections from elsewhere to your local Vagrant instance. Anyone currently juggling a number of virtual machines (e.g. via VirtualBox) should definitely look at Vagrant, and everyone else should consider it.

Jumble sale developer meeting

At our developer meeting this week, we had a “jumble sale” theme, where several of us talked briefly about interesting technologies we’d been working with recently.

Inigo talked about Runnertrack and Heroku. Runnertrack is an app he’s written that tracks runners in marathons, such as the London Marathon. Heroku is a very convenient hosting option for applications like this, which run for brief periods with low traffic. It’s easy to set up, and applications can be deployed by pushing to a Git repo.

Dan discussed Project Euler – he’s been working through the (currently 546) Euler problems and has just completed the first 100. The early ones are all easy; the later ones are more challenging, though the maths is not hugely complicated. You can get statistics for how many problems have been completed, and in what languages. Dan is using Scala. It’s a fun thing to do if you like maths and programming.
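
As a flavour of the early problems, problem 1 – summing the natural numbers below 1000 that are multiples of 3 or 5 – is close to a one-liner in Scala:

// Project Euler problem 1: sum the multiples of 3 or 5 below 1000
val answer = (1 until 1000).filter(n => n % 3 == 0 || n % 5 == 0).sum
// answer: Int = 233168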

Chris talked about serverless cloud computing. We’re using Amazon AWS for managing our servers – but that still means there’s a server somewhere that needs to be managed, have security updates applied, apps installed, and so on. However, if you’re using one of Amazon’s more specific services, like Route 53 or the Simple Queue Service, it’s just a service – API Gateway is another example. Amazon Lambda is a mechanism for running code without managing a server. A serverless app has a single-page webapp hosted as HTML in S3, which triggers code running on Lambda, using data from DynamoDB. The major disadvantage is that it’s entirely tied in to Amazon.

Simon talked about BioJS. It has a good set of visualizations, such as tree structures, heatmaps of interactions, graph plots, and so on. Some of these are very life-science specific, but some are more widely applicable. The BioJS group is also trying to promote good practice in writing biological code – such as version control, common structure, and having demonstrations. Because of this, the BioJS site is essentially a list of references to other GitHub repos.

Nikolay talked about parsing Word documents. One of our clients has supplied us with documents without any real structure, because they’ve been converted from PDF. We’ve written a set of macros to add appropriate structure and reformat those documents, using Word search and replace. Word search and replace is surprisingly powerful – it can match and replace formatting and styles. Annoyingly, there are two forms of search-and-replace available in Word.

Rhys talked about mocking in Specs2 tests for Scala – in particular, capturing how many times a given call has been made, and with what arguments. The “capture” method makes this easy: it captures the calls made to the mock, and you can then retrieve the actual argument values from the captor.
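
A minimal sketch of how this looks, assuming a made-up Greeter trait; capture, there were and two are the Specs2 Mockito DSL:

import org.specs2.mutable.Specification
import org.specs2.mock.Mockito

// Greeter is defined here purely for illustration
trait Greeter { def greet(name: String): Unit }

class GreeterSpec extends Specification with Mockito {
  "a captor" should {
    "record the arguments passed to a mock" in {
      val greeter = mock[Greeter]
      greeter.greet("Alice")
      greeter.greet("Bob")

      val name = capture[String]
      there were two(greeter).greet(name)  // verifies the call count...
      name.value must_== "Bob"             // ...and the last captured value
    }
  }
}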

Loic talked about units, and using strong typing in Scala to attach units to your numeric quantities. There is a Squants library that defines a number of standard units, and the type system can then apply dimensional analysis to check that you are performing legal operations on them – not adding a distance to a time, for example.
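
A minimal sketch, assuming the Squants library is on the classpath:

import squants.space.Kilometers
import squants.time.Hours

// The types carry the units, so dimensional analysis happens at compile time
val distance = Kilometers(42.195)
val time     = Hours(3.5)

val speed = distance / time    // a Velocity
// distance + time             // does not compile: Length + Time is illegal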

Command line parsing with Scala dynamic and macros

In our developer meeting this week, we talked about using the Scala features of “dynamic” and macros to simplify code for command-line parsing.

The problem we were trying to solve comes from a recent project in which we have lots of separate command-line apps, with a lot of overlap between their arguments, but where each app also needs its own specific arguments and different default parameters.
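
As a taster of the Dynamic half of this – not our actual implementation; ParsedArgs and its argument names are made up for illustration:

import scala.language.dynamics

// Field access on a Dynamic object compiles into selectDynamic calls,
// so args.inputDir becomes args.selectDynamic("inputDir")
class ParsedArgs(values: Map[String, String]) extends Dynamic {
  def selectDynamic(name: String): String =
    values.getOrElse(name, sys.error(s"missing argument: --$name"))
}

val args = new ParsedArgs(Map("inputDir" -> "/data", "mode" -> "fast"))
println(args.inputDir)  // "/data"
println(args.mode)      // "fast"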

Developer meeting – mobile development

In our developer meeting, we discussed mobile development. We’re not primarily mobile developers, but modern web applications need to work effectively on mobile devices, and we have done some mobile-specific projects.

We talked about who cares about mobile browsers and responsive design – and came to the conclusion that everyone does. It’s not just people actively using a mobile device: small viewports also turn up when displaying on projectors at client meetings, in docked browser windows, and on Chromebooks and other very small laptops.

We mostly use a responsive design CSS framework like Bootstrap, which does a lot for us, but not everything. The “bootlint” tool checks whether we are using Bootstrap classes effectively, and Chrome and Firefox can simulate mobile devices in their developer tools. Testing on real mobile devices is also useful – but a bit of a pain for C# projects, because you have to configure the embedded IIS to allow access from outside the computer it’s running on, which isn’t the default.

We’ve had to do work in the past to fix the display of items that appear on “hover” – e.g. making them appear on click instead – but this can conflict with existing click behaviour. Right-click is effectively unavailable too. Gestures are available, but not widely used.

Testing with Selenium

Inigo talked about testing with Selenium in our developer meeting.

The detail is in the slides – but the high-level overview is that Selenium is a useful tool for producing automated tests of web application behaviour, but it can be hard to write solid tests that aren’t flaky and timing-dependent. It helps a lot to take an API approach: write a class for each page in the web application and the operations it exposes, and then separately write the tests to exercise that API. This leads to much cleaner and easier-to-maintain code than writing tests directly against the pages.
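
A minimal sketch of the page-class approach, with hypothetical LoginPage and HomePage classes written against the Selenium Java API:

import org.openqa.selenium.{By, WebDriver}

// Tests talk to this API rather than to raw selectors, and each
// operation returns the page you land on
class LoginPage(driver: WebDriver) {
  def logIn(user: String, password: String): HomePage = {
    driver.findElement(By.id("username")).sendKeys(user)
    driver.findElement(By.id("password")).sendKeys(password)
    driver.findElement(By.id("submit")).click()
    new HomePage(driver)
  }
}

class HomePage(driver: WebDriver) {
  def welcomeMessage: String =
    driver.findElement(By.cssSelector(".welcome")).getText
}

// A test then reads as operations, not selectors:
//   new LoginPage(driver).logIn("alice", "secret").welcomeMessage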

Dev meeting Selenium slides

Agile on the Beach conference report

Chris was recently at the Agile on the Beach 2015 conference, and this Friday he reported back to the rest of the dev team on some of the talks that he went to there.

The death of continuous integration

Steve Smith talked about “the death of continuous integration” – CI is a cultural thing, not just Jenkins. The key questions are whether everyone commits something to trunk every day, and whether problems in the build get fixed quickly. He described two build models – synchronous and asynchronous: in the former you wait for the build to complete before continuing to work; in the latter you carry on working while the build runs, and switch back if it breaks.

He discussed three branching models:

* “Long lived feature branches” – these are generally a nightmare.
* “Short-lived feature branches” – fine if everything works as it should, but reviewers aren’t always available immediately, and you might need to switch context back to an old branch if reviewing takes a while. There’s also less of an incentive to fix the build, because it doesn’t immediately affect you. It’s also more painful to make a change that affects many, many files.
* “Trunk-based development” – apparently Google do this. Everything is committed to trunk, all the time, there are no separate code reviews, and half-developed features are disabled via feature toggles. Large-scale changes can be handled by putting the change behind an interface, and then switching the implementation behind that interface.

We discussed these models. Several people expressed scepticism about trunk-based development – in their experience, committing directly to trunk has generally led to long pauses between commits, rather than shorter development cycles. We use short-lived feature branches – and while we do sometimes have problems with huge changes, we don’t typically have a problem with people not dealing with broken builds (because our culture is that the build should be fixed quickly).

Testing in Production

In a talk about testing in production by Unruly, they described a process in which teams are responsible for specific features from end to end. This encourages developers to make their code more robust, because the dev team are the people responsible for it in production, and will be woken by pagers if it goes wrong.

They test their code in production, because it is too much work to maintain an exact mirror of the live environment; they have no QA environments. Half-developed features are kept out of production via “feature toggles” and “data toggles”. They have lots of monitoring on their live servers – they monitor things that are of value to the company, like “are we making money?” rather than “have the servers gone down?”. “Monitoring-driven development” means first writing a monitor that checks your new feature, and then developing the code for it – similar to test-driven development, but taken further. They also use a chaos-monkey-style approach for testing – badly behaved client code, load testing with extreme events – because if someone is going to break the live environment, it might as well be them. They also use mob programming.

We discussed this – many of these approaches seem interesting and appropriate for a company whose income is granular, comes from a very large number of clients, and responds immediately to changes. They seem less relevant to more traditional companies.

Management 3.0

Pia-Maria Thorens spoke about delegation poker. Chris showed a set of “delegation poker” cards, which illustrate the continuum of delegation between “the manager makes the decision on his own” and “the team makes the decision entirely without the manager”.

We discussed this as an interesting way of thinking about management and delegation.