ai or nay?

You work in software for a while, and you get used to big buzzwords: Blockchain, Big Data, Write Once, Run Anywhere (I was a kid when I heard that one!). Today’s era is no exception, with everyone racing to add “AI” to their LinkedIn profile and to their company’s page, in the hopes that some of the investor money sloshing about the place will land in their lap.

And working as a software engineer, you either get told that you’ll soon be obsolete because “all the coding will be done by AI”, or get recommended tools that “make the development experience so much better!”. It’s kind of a hot/cold thing, and there’s an awful lot of hype.

The Truth Is Out There!

Of course, between the attempts to attract VC money and the breathless “singularity any day now” pronouncements, there has to be the truth, and that’s what I want to talk about. I used the GitHub Copilot free trial when it came out, and after hearing it talked about for so long I decided to fork over for a monthly subscription and see what I thought. I’ve had it for about a month on personal projects, and this is my experience.

Copilot and VSCode – My Tech Stack

So primarily, I use Visual Studio Code for personal projects. I’ve tried numerous other IDEs, but in terms of flexibility and plugins I always come back to Microsoft’s editor, as do many other developers.

Naturally, that means I gravitate towards Copilot rather than competitors such as Cursor: all you need to do is subscribe to Copilot, download the appropriate plugins (GitHub Copilot and GitHub Copilot Chat) and log in, and it’s all set up. I can’t speak for the experience with Visual Studio on Windows, but I’d imagine Microsoft makes it just as easy.

Copilot is the plugin that works in the code editor, offering code completion above and beyond what you get from IntelliSense.

The Old and the New

This code completion does take some getting used to, and in some cases it gets way ahead of itself and offers to complete your entire function. I tend to look at the suggestion, and if it makes sense I’ll accept the extra help in filling out boilerplate.

Sometimes it’s not very helpful when I have something specific in mind, but when I’m just trying to grind through, say, the setup for a Vulkan device that I’ve written a thousand times, I’m quite glad of the help. I’ll let Copilot figure out what I’m thinking, do the bits that I would either copy-paste from old code or grudgingly type out, and then fix whatever it has come up with. It’s not perfect, but for code I’ve written a thousand times and for whatever reason have to type again, it’s good enough.

Copilot Chat can be quite interesting too.

Finally, someone else that understands Vulkan!

Copilot Chat can explain functions quite well. The best explanations I’ve seen still come from the ChatGPT‑4 models, but if I’ve forked and downloaded a project I’ve never seen before, Copilot Chat is a handy tool for understanding a new codebase, even if the answers are occasionally a bit off. In the above image it’s doing a good job of explaining a Vulkan command buffer allocation function I wrote a while ago; on code you don’t know, it’s usually good at giving the gist.

But it has other uses too. A few nights ago, having forgotten to check the bin calendar and breathlessly hauled my bins out, I decided to write a website scraper to pull the bin collection details from the Fife council website and put them in my calendar, and churned out a few hundred lines of rather trashy Golang code.
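The scraper itself was nothing fancy. The Fife council markup isn’t shown here, so as a rough sketch, assume collection dates appear in the page as lines like “Blue bin: Tuesday 14 May” (an invented format for illustration); extracting them in Go then comes down to a regular expression over the fetched page body:

```go
package main

import (
	"fmt"
	"regexp"
)

// binLine matches lines of the hypothetical form "Blue bin: Tuesday 14 May".
// The real council site's markup will differ; this pattern is a stand-in.
var binLine = regexp.MustCompile(`(\w+ bin): (\w+ \d{1,2} \w+)`)

// extractBinDates pulls a map of bin type -> collection date out of a page body.
func extractBinDates(page string) map[string]string {
	dates := make(map[string]string)
	for _, m := range binLine.FindAllStringSubmatch(page, -1) {
		dates[m[1]] = m[2]
	}
	return dates
}

func main() {
	// In the real tool the page would come from an HTTP GET of the council
	// site; here a literal stands in so the sketch is self-contained.
	page := "<li>Blue bin: Tuesday 14 May</li><li>Green bin: Friday 17 May</li>"
	fmt.Println(extractBinDates(page))
}
```

From there it’s just a matter of writing the dates out as calendar entries.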

I got the project working, but the code quality was, shall we say, not great, and I soon went into optimisation mode to rewrite it into something better. Most of the session actually turned into a discussion with Copilot Chat in my Code window, and although some of the suggestions were weird, it spotted some refactors that were obvious in retrospect. I went with them, pulling a load of hard-coded variables out into structs that made my code look a lot nicer. I could have done it myself, but mulling a problem over with an AI chat and then being presented with a neat code sample that solves it is quite handy.
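The structs refactor was the kind of thing you’d sketch like this (the field names and values below are invented for illustration, not the actual project’s): instead of literals scattered through the code, one config struct holds them all, and every call site takes the struct.

```go
package main

import "fmt"

// scraperConfig gathers values that were previously hard-coded literals
// sprinkled throughout the scraper. Field names here are illustrative.
type scraperConfig struct {
	BaseURL      string
	CalendarName string
	Postcode     string
}

// defaultConfig returns the one place those values now live.
func defaultConfig() scraperConfig {
	return scraperConfig{
		BaseURL:      "https://example.org/bins", // hypothetical endpoint
		CalendarName: "Bin Collections",
		Postcode:     "KY1 1AA", // placeholder postcode
	}
}

// describe shows a call site taking the config rather than magic strings.
func describe(c scraperConfig) string {
	return fmt.Sprintf("scraping %s for %q (%s)", c.BaseURL, c.CalendarName, c.Postcode)
}

func main() {
	fmt.Println(describe(defaultConfig()))
}
```

It’s a trivial change, but it’s exactly the sort of mechanical tidying that’s pleasant to delegate.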

Copilot sometimes offers suggestions that are off; with Vulkan code I’ve shaken my head at it and gone “just no” more than once, but its suggestions are helpful more often than not. While I initially felt somewhat antagonistic towards it (if VC bros are shouting about something on X, it will get my back up), it’s found a useful place in my life, and that’s really what matters.

Towards the Plateau of Usefulness

“Artificial Intelligence” is a term that I’ve come to loathe. It’s not, and probably never will be, “intelligent” in the way that human beings are, and most of the hype-laden language surrounding it comes from people who are either a) lying or b) nursing a God complex. But moving away from the hype, I’ve had experiences with LLMs (large language models), including Copilot, that are genuinely useful, and here’s where I think they fit in.

ChatGPT has saved me countless hours in my YouTube addiction… I mean, viewing, for instance. Before the rise of ChatGPT and the YouTube Summarise extension, I regularly found myself watching hours and hours of videos on topics as diverse as the history of Sonic the Hedgehog 3, why cats do strange things, and of course, countless coding videos.

Most of them were junk, but now I can just export a video to ChatGPT and get a five-bullet-point summary that confirms that yes, it really is junk and I should move on. It’s cut things down by a lot!

Copilot is another useful product, and strikes me as a slightly more able IntelliSense, providing sometimes-useful completions of boilerplate code and spotting refactoring opportunities I might have missed. Overall it’s been useful, and I think LLMs are on their way to becoming another technology that inhabits the Plateau of Usefulness: something that saves time as just another tool in our lives, rather than becoming our machine overlords.

You’re Just A Chicken for KFC!

is what, I imagine, some might say. I’ve seen countless videos from “no-code AI” startups where an announcer shows something like a five-year-old girl on voice chat whipping up a version of Breakout in JavaScript to play herself, with the voiceover smugly noting “we’ve made coding into child’s play!”.

To which I’d respond that it’s possible to edit together a really nice video, leaving out all the bits where the girl’s dad most likely stepped in to help, and that we only see snippets of the code: what was produced was most likely unmaintainable junk that, while entertaining to a little girl, would make for a very bad MVP. Studies have suggested as much, observing that code quality drops when developers lean too heavily on AI assistants.

While many businesses do get by on abominable code, that code mostly leads to a poor-quality product that will be rejected by the market unless improved. And given that LLMs need to be trained on good-quality, human-written code to do what they do, if we rely purely on LLM-written output from no-code tools in the future, then even those apps will break down from dogfooding badly written LLM code that is no longer usable.

There are also a ton of legal issues, such as plagiarism of proprietary code in public repos (more common than you’d think), and I guarantee you at least one company will accidentally open-source its codebase when safeguards fail to stop GPLv3-licensed code from showing up in it. The problems of going full-AI are manifold, and anyone who airily dismisses them has a bridge to sell you.

I think human software engineers are going to be around for a while, and once the excitement dies down, LLMs will form a useful part of our toolchain.

So, AI or Nay?

To summarise, I’ve mostly had a positive experience with LLM coding tools. Left to their own devices they may turn out absolute junk, but used for code they’ve been overall time-savers for me, as have other uses such as the YouTube Summarise extension.

There is the worry about corporate capture – do we really want Microsoft or ${INSERT_STARTUP_HERE} to become “the programming company” that we can’t develop without? In that respect, I have faith in the progress of hardware and open source.

While llama.cpp, a popular open-source LLM runtime, brings my GeForce RTX 2070 laptop to a crawl after a short conversation, newer GPUs and CPUs emerging onto the market will include the onboard hardware to run these models more effectively. Rather than having to call out to OpenAI’s servers (which is what most startups are doing!), we’ll be able to run the equivalent of Copilot locally, and customise it to our own codebases and weird side-project code!

So my answer, AI or nay? It is indeed AI (aye), but don’t buy into the breathless hype. Keep your feet on the ground and view it as a tool that can help make your life better as a developer, rather than the Second Coming/Armageddon (delete as applicable) of Computing!
