
This is a follow-up to an article I published in June 2023, ‘Writing my own (Python) study course with ChatGPT - part 1’. It discusses a more structured and disciplined approach, with reflective practice and constructive critique as part of learning and improvement.

What changed?

I had been searching for exercises, sometimes at random. Whilst discussing the first article (above) during a job interview, an interviewer suggested that Code Golf might be a good alternative to the sometimes complex and advanced challenges given in ‘Advent of Code’. I would have to agree that many of the Code Golf challenges are a better fit for cultivating familiarity with decomposing problems, language syntax and so on. They are certainly much more concise in what is required, and generally quick enough to tackle in a relatively short time. The brevity is a key strength: anything that takes hours and hours will only be tackled occasionally or rarely, whereas shorter exercises make it possible to tackle something most days. The other great thing with Code Golf is that it validates solutions as either passing or not.

What’s progressed?

At this point I have gone through a number of challenges on Code Golf - in no particular order, and typically not trying to golf the solutions. For anyone who is interested, there are notes in the README accompanying each solution, presented in the order in which they were attempted.

I find that I have more enthusiasm for problems that seem meaningful to me, with a clear real-world application, e.g. calculating check digits for ISBN numbers, or Morse encoding and decoding. Pure math theorems are of less interest to me. Early on, in some cases I found that ChatGPT raced ahead to a solution just from my early questions, e.g. collatz and diamonds, but I was able to moderate this going forward, e.g. Intersection and Leap years. Whilst some problems are frankly too esoteric for me to want to attempt myself, despite being interested in what a solution would look like, e.g. Brainfuck, I have now been able, with practice, discipline and perseverance, to attempt a significant number unassisted that would previously have been daunting.
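As an illustration of the kind of problem I mean, here is a minimal sketch of an ISBN-10 check digit calculation (my own hedged reconstruction for this article, not the solution I actually submitted):

    def isbn10_check_digit(first_nine: str) -> str:
        # Each digit is weighted 10 down to 2; the check digit makes
        # the weighted sum divisible by 11 ('X' stands for 10).
        total = sum(int(d) * w for d, w in zip(first_nine, range(10, 1, -1)))
        check = (11 - total % 11) % 11
        return "X" if check == 10 else str(check)

    print(isbn10_check_digit("030640615"))  # -> 2, i.e. ISBN 0-306-40615-2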

In a few cases, I still haven’t known where to start, and here ChatGPT has been valuable, e.g. maze. The trouble is that, approaching this sort of challenge as a naive student, I am reliant on useful documentation - either primitives that I can readily assemble, or higher-level tutorials or examples that I can repurpose. The ‘Maze’ challenge is simply too advanced to fit into this category. A successful example of repurposing would be my solution for ordinal numbers, where after much messing about with guides to grammatical rules and trying various string splitting and comparison techniques, I changed tack and partly based my solution on a JavaScript example at Stack Overflow. Where such direct or composable examples don’t exist or aren’t accessible, I feel it is better to have an explanation available than for something to be an off-limits closed book.
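For the curious, the repurposed approach boiled down to something like the following (a sketch from memory, assuming the standard English suffix rules rather than reproducing my exact code):

    def ordinal(n: int) -> str:
        # 11, 12 and 13 always take 'th', regardless of their final
        # digit; otherwise the last digit decides the suffix.
        if 11 <= n % 100 <= 13:
            suffix = "th"
        else:
            suffix = {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
        return f"{n}{suffix}"

    print([ordinal(i) for i in (1, 2, 3, 4, 11, 21, 112)])
    # ['1st', '2nd', '3rd', '4th', '11th', '21st', '112th']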

Feedback and improvement from and with ChatGPT

Any good teacher should give feedback on your approach, and this is an incredibly challenging area for anyone working on their own. Beyond my own direct experience, I have seen countless examples in my career of clever people doing clever things in a way that clearly showed a lack of perspective and team dialogue. The maxim ‘it’s only a clever hack if you wrote it’ is true enough, but sadly it seems all too easy and common to become seduced by favoured tools and patterns and follow them beyond the point of reason in the absence of appropriate dialogue or alternative suggestions (a post for another day). Obviously at this point I am not building grand things as part of my studies, but beyond full solutions for baffling problems and syntax checking, ChatGPT is excellent for critique and feedback - something that, as far as I am aware, would not otherwise be available in the absence of a human interlocutor. I have written above and previously about giving ChatGPT feedback, but asking it to improve on my work has been instructive.

With Zodiac-signs I came up with a passing solution and thought ‘good enough’, but when I asked ChatGPT it was clear that there was considerable room for improvement. It took a lot of dialogue with ChatGPT to get a working answer - the initial offering was extremely concise but broken - and being able to validate performance on Code Golf was extremely valuable. Once I did have a working ChatGPT solution, I was able to explore with it how this improved on my initial code. In summary (full version in README), ChatGPT referred to the following (a sketch of the result follows the list):

  • Revising my dictionary structure to cover only the date ranges that can actually occur rather than all possible values. Yes, it’s true that we don’t need to consider the ‘zeroth of January’, but does it matter if the code can handle that when we aren’t otherwise validating dates? I didn’t really think this improved anything in this context
  • Moving from filtering dictionary comprehensions inside for loops to a simpler max() and generator expression - obviously far clearer and more computationally efficient
  • Removing intermediate variables. These are clearly rookie practices on my part but also something that can readily be learned and applied as a general approach
  • Moving from an if-else statement deciding what to print, which adds extra lines, to a single print statement with a conditional expression, reducing the line count. This was not a feature I was previously aware of, and it isn’t that the original solution is ‘wrong’, but it’s a great example of a question that I did not know to ask!
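To make those points concrete, here is a hedged flavour of the final shape (illustrative only - the boundary dates below are approximate placeholders, not a verified astrological table or my exact submission):

    # Sign start dates as (month, day) tuples; approximate placeholders.
    SIGNS = {
        (1, 20): "Aquarius", (2, 19): "Pisces", (3, 21): "Aries",
        (4, 20): "Taurus", (5, 21): "Gemini", (6, 21): "Cancer",
        (7, 23): "Leo", (8, 23): "Virgo", (9, 23): "Libra",
        (10, 23): "Scorpio", (11, 22): "Sagittarius", (12, 22): "Capricorn",
    }

    def sign(month: int, day: int) -> str:
        # max() over a generator expression picks the latest boundary
        # on or before the query date, replacing loops and intermediate
        # variables; early January wraps to Capricorn via the default.
        return SIGNS[max((s for s in SIGNS if s <= (month, day)),
                         default=(12, 22))]

    # A single print with a conditional expression replaces an if-else:
    month, day = 1, 5
    print(sign(month, day) if 1 <= month <= 12 else "invalid")  # Capricorn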

There have been other occasions where I would not even have known to ask the question, and relevant suggestions from ChatGPT proved enormously valuable, e.g. with Arrows I was not previously familiar with the unpacking operator. There are also the classic typo-type problems, like the wrong amount of indentation.
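For anyone who hasn’t met it, the unpacking operator works along these lines (a generic illustration, not the Arrows code itself):

    grid = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

    # * unpacks each row as a separate argument, so zip() receives
    # three lists and pairs up their columns - a one-line transpose.
    print(list(zip(*grid)))       # [(1, 4, 7), (2, 5, 8), (3, 6, 9)]

    # The same operator splats an iterable into print's arguments:
    print(*range(5), sep=" -> ")  # 0 -> 1 -> 2 -> 3 -> 4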

Beyond Code Golf and improving efficiency

There were several occasions where I asked ChatGPT if my working code could be improved, and often it could, e.g. isbn; leap years.
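To give a flavour of the leap years case, here is a hedged sketch of the kind of improvement involved (illustrative, not my exact code): collapsing a multi-branch function into a single boolean expression covering the same Gregorian rules.

    def is_leap_verbose(year: int) -> bool:
        # A verbose, step-by-step style.
        if year % 400 == 0:
            return True
        if year % 100 == 0:
            return False
        return year % 4 == 0

    def is_leap(year: int) -> bool:
        # One boolean expression encoding the same rules.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    assert all(is_leap(y) == is_leap_verbose(y) for y in range(1, 3000))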

By the end of August I had largely exhausted the Code Golf challenges that interested me and was attempting most challenges without AI assistance. I retried the first challenge I had taken on - 99-bottles-again - this time without referring to previous code or ChatGPT, and found that my solution seemed ‘as good’ as ChatGPT’s previous one.

Clearly I needed to cast my net wider - which brought me to HackerRank. Obviously I had heard of it, but I had thought of it more as a venue for grinding and interview testing than for ‘real world’ learning. I decided to attempt Project Euler #1: Multiples of 3 and 5. I came up with a solution mostly myself - with some minor difficulty because I had missed the double counting (numbers divisible by both 3 and 5), which ChatGPT pointed out. On sorting this, I got ‘time limit exceeded’ when I attempted to submit the code. I again referred to ChatGPT and, in an echo of the issues described previously for ‘Zodiac-Signs’, as would be obvious to many, ChatGPT advised:

The issue with your code is that it’s not optimized for large inputs. The time complexity of your function is O(n) due to the three for loops that iterate up to n. When n is very large, this can result in a “time limit exceeded” error on platforms like HackerRank.

You can optimize your code by using a mathematical approach.

Here ChatGPT gave an optimized version, with a time complexity of O(1) for each test case, together with a full explanation.
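The mathematics is the arithmetic series sum: the multiples of k below n total k * m * (m + 1) / 2, where m = (n - 1) // k, and inclusion-exclusion removes the double-counted multiples of 15. A sketch along those lines (my reconstruction of the idea, not ChatGPT’s verbatim output):

    def sum_of_multiples(n: int) -> int:
        # Sum of all multiples of 3 or 5 below n, in O(1) time.
        def s(k: int) -> int:
            # Arithmetic series: k + 2k + ... + mk = k * m * (m + 1) / 2
            m = (n - 1) // k
            return k * m * (m + 1) // 2
        # Inclusion-exclusion: multiples of 15 were counted twice.
        return s(3) + s(5) - s(15)

    print(sum_of_multiples(10))    # 23 (3 + 5 + 6 + 9)
    print(sum_of_multiples(1000))  # 233168, the Project Euler #1 answer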

Takeaways and next steps

It’s clear that the next learning point is not just to improve familiarity and fluency, but to produce cleaner and more efficient code. I expect that ChatGPT will continue to be an excellent tuition partner and constructive critic. A number of key points have already become clear with this self-guided tuition approach:

  • Clear, concise challenges are valuable for ensuring frequent practice and steady progress
  • Portals with testing to validate solution compliance are invaluable
  • Judicious use of ChatGPT continues to be enormously valuable - notably:
    • Giving an answer tailored to the specific question
    • Giving relevant suggestions that may not have occurred to me otherwise
    • Being able to critique a solution that ‘does work’, with detail on why
  • Self-discipline is needed to avoid ‘autopiloting’ complete answers, although this does sometimes still happen

I wrote previously about the importance of clear problem statements, a clear vision of what ‘good’ looks like, and being able to find relevant expertise or advice for the odd ‘gap’ and this continues to be true.

At this point I am not expecting a ‘Part 3’ for this series. Unless there is some key new finding with general application to be discovered, I think the key elements have been covered. I feel that I have made, and continue to make, significant progress in a way that I always struggled to with traditional print and recorded video media. I would recommend this approach as an option for anyone looking for a personally matched coding tutorial program.