A new conversational Artificial Intelligence service - ChatGPT - has been making waves this week. I tried using it for a small coding project and was impressed. I think it has some profound implications for the software industry.

(W)here’s the code

Of course it would be daft not to make the code discussed available, so here it is, with builds published for:

  • darwin-amd64
  • darwin-arm64
  • linux-amd64
  • windows-amd64

Background

I’m a Platform Engineer. I sometimes describe platform engineering in tech as like providing the foundations and utilities for a building - water, electricity, telecoms, etc. You don’t think about what goes into making those resources instantly available at any moment; you just expect them to be there when you need them, and often only think about them when they go wrong. Just like these utilities, however, your house will (typically) be uninhabitable without them, and without foundations it will not stand for long. Again, just like these utilities, once your organisation hits a certain size you can’t simply leave it to the public supplier: you need to involve yourself to ensure you’re covered for your usage requirements, for when something breaks, and so on - think of supplying power to a hospital or large factory, for instance.

As a platform engineer, I don’t focus on application software. I do write some, but it’s not a discipline where I would usually expect that to be the main thing. I enjoy writing applications in GoLang and have done some professional work and various personal studies with it. There is limited time in a busy life, however, and one of the things that is important to me about developing in a professional environment, but missing from private study, is working with other people. I can’t emphasise enough how important this is - even the best people, left on their own, can develop all sorts of strange perspectives and practices when it comes to tech. Bouncing ideas around with other people and having multiple people review or work on a thing (not necessarily at the same time!) is key to ensuring consistent progress and a robust result.

Earlier this year GoLang introduced a major new feature - Generics. It isn’t really important here to go into detail about what that is, but it encouraged me to take another stab at semi-formal study. I purchased an online GoLang course recorded by a university professor and made a start. Predictably (for me) it started well. I thought I understood what was being presented and I seemed to be getting through the exercises and worked examples - great! I then wanted a bit more time to consolidate before moving on to the next topic, so I tried adding features to a worked example. Some trivial stuff was easy enough, but anything more than that sent me down a rabbit warren. I realised that the stuff I’d ’learned’ in the course had not really stuck, or had not been meaningful. I decided to try a different tack which has worked well for me in the past: tackling a small, well-defined personal project in my own time, based on what I had covered in the course so far.

The personal project

One of the topics in the online course was a card-shuffling application. I’m sure this is a familiar example for many student coders. I thought it might be interesting to extend this from dealing conventional playing cards to dealing Tarot cards. I’m discussing this here as a purely technical challenge! There are a number of important differences between the two types of card deck to accommodate, sketched below. I also wanted to use a ‘proper’ random number generator rather than the pseudo-random version most study examples use.
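
To give a flavour of those differences, here is a minimal sketch rather than the project’s actual code - the Card struct and buildDeck function are just illustrative names I’m using for this article. A Tarot deck has 22 named Major Arcana plus 56 Minor Arcana in four suits of fourteen ranks, so the simple rank-plus-suit model from a playing-card example no longer fits cleanly:

package main

import "fmt"

// Card is an illustrative type, not the project's own. A Tarot deck has
// 22 Major Arcana (named trumps) plus 56 Minor Arcana in four suits of
// fourteen ranks, so a plain rank-plus-suit struct no longer fits cleanly.
type Card struct {
    Name  string // e.g. "The Fool" or "Knight of Cups"
    Rank  string // empty for Major Arcana
    Suit  string // empty for Major Arcana
    Major bool
}

func buildDeck() []Card {
    majors := []string{"The Fool", "The Magician", "The High Priestess"} // ...22 in the full list
    ranks := []string{"Ace", "Two", "Three", "Four", "Five", "Six", "Seven",
        "Eight", "Nine", "Ten", "Page", "Knight", "Queen", "King"}
    suits := []string{"Wands", "Cups", "Swords", "Pentacles"}

    var deck []Card
    for _, name := range majors {
        deck = append(deck, Card{Name: name, Major: true})
    }
    for _, suit := range suits {
        for _, rank := range ranks {
            deck = append(deck, Card{Name: rank + " of " + suit, Rank: rank, Suit: suit})
        }
    }
    return deck
}

func main() {
    fmt.Println(len(buildDeck()), "cards") // 78 once the Major Arcana list is completed
}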

Progress was slow but steady. I found myself coming up against some common issues with my research - some generic, and some specific to Go:

  • I find that with any new thing in tech, a lot of the writing about it is from people doing their first study in that area. They are often describing very basic stuff very basically, without considering alternative approaches or having done much exploration or discussion with others about it. Often there are fundamental points missed.
  • It can be extremely hard to find intermediate-level information - something that’s not super basic but explains things beyond that level accessibly. Often this will be something that is ‘obvious’ for a production use case but that people haven’t written about - it’s too advanced for the beginner try-hards and too basic for the experts.
  • GoLang has a lot of orthodoxy - by design it’s an opinionated language. Sometimes this gets taken too far (IMO):
    • short names. Yes, it’s all very idiomatic to call your variables a, b, c, etc., and you don’t spend long thinking up names, but it makes the code hard to read. To quote Martin Fowler:

    “Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”

    • GoLang has a focus on speed and efficiency. This is good, except when it comes at the expense of the job in hand. The (pseudo-)random number generator is a great example of this. Sure, it’s very fast, but you aren’t going to perceive that speed or efficiency over something that really tries to be random on your own workstation. If you want something that genuinely tries to be random, because that’s important in your use case, then you can struggle to find examples; and no, it doesn’t work in quite the same way, and yes, there are some non-obvious gotchas (see the sketch after this list).
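
To make that contrast concrete, here is a minimal sketch (not my project’s code) of what ‘really trying to be random’ looks like in Go: a Fisher-Yates shuffle driven by crypto/rand rather than math/rand. The generic shuffle helper is an illustrative name of mine, not something from the standard library:

package main

import (
    "crypto/rand"
    "fmt"
    "math/big"
)

// shuffle is a Fisher-Yates shuffle driven by crypto/rand instead of
// math/rand. The gotchas referred to above: crypto/rand has no Intn or
// Shuffle helpers, so each index is drawn via big.Int, and every draw
// can return an error that the caller has to decide how to handle.
func shuffle[T any](items []T) error {
    for i := len(items) - 1; i > 0; i-- {
        n, err := rand.Int(rand.Reader, big.NewInt(int64(i+1)))
        if err != nil {
            return err // the OS entropy source failed; there is no sensible fallback
        }
        j := int(n.Int64())
        items[i], items[j] = items[j], items[i]
    }
    return nil
}

func main() {
    cards := []string{"The Fool", "The Magician", "The High Priestess", "The Empress"}
    if err := shuffle(cards); err != nil {
        panic(err)
    }
    fmt.Println(cards)
}

The error return is the main practical difference: math/rand’s shuffle cannot fail, but a read from the system’s entropy source can, so you have to decide what to do about it.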

I soon reached the point where I seemed to be ‘almost there’, but it was taking me a long time to understand why some particular bit wasn’t working on each occasion. There weren’t accessible examples I could refer to and there wasn’t really anyone I could ask for input. Progress was slowing. I then changed projects at work before changing employers, and this application was left languishing.

Enter the AI

Like everyone else, I heard about ChatGPT and decided to try it out with some toy examples. It was very impressive - certainly the coding examples were better than 95% of what I have seen or produced live-coding at interview. I tried it on some of the bits of my personal project from recent months that I was stuck on, and it worked very well. Not only did ChatGPT ‘solve’ the challenges, it explained in each case what the problem it was tackling was, what it was doing instead, and why. I could ask for an alternative solution or modify the question to iterate on the answer. It was a joy to progress so quickly after months in the doldrums. Predictably, some of my errors had been (typographically) tiny - it was infuriating to see that I had been so very close but not quite there in some cases. I was now able to move forward and implement things like interactive user prompts (sketched below) and other improvements.
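
By ‘interactive user prompts’ I mean something along these lines - again a minimal sketch rather than the project’s actual code; the promptForCount helper and its wording are made up for this article:

package main

import (
    "bufio"
    "fmt"
    "os"
    "strconv"
    "strings"
)

// promptForCount keeps asking until the user enters a number between 1 and max.
func promptForCount(max int) int {
    reader := bufio.NewReader(os.Stdin)
    for {
        fmt.Printf("How many cards would you like dealt (1-%d)? ", max)
        line, err := reader.ReadString('\n')
        if n, convErr := strconv.Atoi(strings.TrimSpace(line)); convErr == nil && n >= 1 && n <= max {
            return n
        }
        if err != nil {
            // stdin closed or unreadable; fall back rather than loop forever
            fmt.Println("\nNo usable input, dealing one card.")
            return 1
        }
        fmt.Println("Please enter a whole number in range.")
    }
}

func main() {
    fmt.Println("Dealing", promptForCount(78), "cards...")
}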

It soon became apparent that context was important: the more context I could give ChatGPT, the better it could fit the offered solutions. I tried presenting my ‘whole’ (half-finished) program. A couple of fundamental changes were suggested, with persuasive reasons, but I finally had something complete that worked! ChatGPT kept some core parts of what I had done myself, rather than changing it beyond all recognition, and maintained a similar style to my work, e.g. with longer-style naming, as referred to above. Predictably for a card-shuffling program, it wanted to use the pseudo-random number generator for the shuffling functions, but I asked ChatGPT to use the ‘proper’ one instead and almost immediately it made that iteration. There was then some significant dialogue and tweaking to refine elements of the user interface and the handling of exceptional cases.

The Takeaway

  • Context presented, dialogue, and interactive iteration are important.
  • A clear and well understood goal is certainly helpful.
  • Speed of iteration is very fast (compared to a human).
  • Design choices became far more important than implementation choices.

I am delighted and, just like many other people, I think this is an inflection point and that we will be seeing a lot more of this in this space (and others!). For the sceptics who want to claim that I ‘cheated’ - people always say this when an easier way of doing something comes along. You might as well say that I ‘cheated’ because I used a computer to write a computer program… oh… wait… I didn’t write my own IDE, linter or compiler collection either, let alone a host operating system…

We shall be seeing a lot more of approaches like this in the coming years, and a lot less ‘FizzBuzz’ and ‘Caesar cipher’ in tech interviews - let alone the expectation that people actually remember and type in system commands like this one from the Kubernetes kubectl cheat sheet:

# Check which nodes are ready
JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
 && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"

Yes, the people functioning merely as a mechanical interface will likely find their services less in demand in this field, just like all of those Flight Engineers who sat in the third seat of the cockpit in the early days of jet airliners but have now been eliminated from the crew. As ever, the people who are able, adaptable, ready and willing to move up the stack will be fine.

Update

I wrote a follow-up article about implementing tests.