Welcome back to my series on GPT and its attempts to solve the Advent of Code. For those of you who missed the first two columns, GPT is a cutting-edge artificial intelligence1 that I’ve been using to tackle the challenges posed by this popular coding puzzle. You can find the previous posts here. Today, we’re tackling day three, and GPT has its work cut out for it. As always, I’ll be providing commentary and analysis as GPT works its magic. Let’s dive in and see how it fares on this task.
As always, the original problem is available on the Advent of Code website, so I won’t reproduce it here.
First we prime it to let it know what we will be doing.
Hi, we’re doing advent of code together. I need you to produce code or use any common command line tools which will solve the problems I give you. The problems will always include reading input from a file and producing an answer.
Then we provide the full text of the advent of code day 3 problem.
And we’re off to a bumpy start: GPT seems to have misunderstood the problem and provides a grep-based solution that would merely filter out duplicate lines.
No worries, we try again, and this time we’re back to trusty python. Except it’s not that trusty, because GPT forgot to initialize a variable: NameError: name 'total' is not defined. We tell it about the problem, and it fixes it. Now the code runs and produces a number.
Which I enter into the Advent of Code website, only to be told:
That’s not the right answer; your answer is too high.
Which I relay back to GPT. But now it’s lost, because it answers:
That’s strange! It’s possible that there is an error in my solution, or that I misunderstood the problem. Can you please provide more information about the input and expected output, as well as the specific error you are encountering? That would help me understand the problem better and provide a more accurate solution.
So, I take a look at the code. It turns out that on some lines an item appears multiple times in the first compartment and then also in the second, and GPT counted every occurrence instead of counting the item once. In GPT’s defence, the instructions are not at all clear on this case. But the worked example counts each shared item only once, so we tell GPT.
And ta-da! We have a working solution that produces the right answer. Another gold star for GPT.
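GPT’s actual code isn’t reproduced in this post, but a minimal sketch of a corrected part-one solution might look like the following. The double-counting bug disappears naturally if you intersect the two compartments as sets. (The filename input.txt and the function names are my own assumptions, not GPT’s output.)

```python
def priority(item: str) -> int:
    """Map a-z to 1-26 and A-Z to 27-52, per the puzzle rules."""
    if item.islower():
        return ord(item) - ord("a") + 1
    return ord(item) - ord("A") + 27


def part_one(lines) -> int:
    total = 0
    for line in lines:
        line = line.strip()
        half = len(line) // 2
        # Intersecting sets counts each shared item exactly once per
        # rucksack, however often it repeats in either compartment.
        shared = set(line[:half]) & set(line[half:])
        total += sum(priority(item) for item in shared)
    return total


# To solve the puzzle (path is an assumption):
#   with open("input.txt") as f:
#       print(part_one(f))
```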
Onwards to part two. Now we need to find the common item in each group of three lines (one group per trio of elves).
This took a few tries for GPT. The first working solution returned the wrong answer, and when told it was wrong, GPT went down the path of dividing each rucksack into three compartments. I assume the fact that there were three lines per group somehow confused it.
Here I decided to help it by pointing out its mistake instead of just retrying (after all, I don’t have all day or unlimited patience). Subsequently, it returned another solution, but again it tried counting the common item multiple times instead of just once. So, I just told it to count it once. And we have our solution, which gives us the sixth ⭐
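As before, GPT’s final code isn’t shown here, but a sketch of part two under the same assumptions (an input.txt with one rucksack per line, names of my choosing) could look like this. Grouping the lines three at a time and intersecting them as sets sidesteps both mistakes GPT made:

```python
def priority(item: str) -> int:
    """Map a-z to 1-26 and A-Z to 27-52, per the puzzle rules."""
    if item.islower():
        return ord(item) - ord("a") + 1
    return ord(item) - ord("A") + 27


def part_two(lines) -> int:
    lines = [line.strip() for line in lines]
    total = 0
    # Walk the rucksacks three at a time, one group per trio of elves.
    for i in range(0, len(lines), 3):
        a, b, c = lines[i : i + 3]
        # The puzzle guarantees exactly one item is common to all three,
        # so the intersection is a one-element set; pop() extracts it,
        # counting the badge once regardless of repeats.
        badge = (set(a) & set(b) & set(c)).pop()
        total += priority(badge)
    return total


# To solve the puzzle (path is an assumption):
#   with open("input.txt") as f:
#       print(part_two(f))
```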
GPT got the first part right quickly; the second part needed a bit more guidance. I’m still surprised how well it can “reason” about the problems and fix its mistakes when one points them out.
One interesting thing that other people have also noticed is that it occasionally refers to itself as “The Assistant”. I think that description fits the current state of it very well.
- This first paragraph was actually mostly written by GPT. It seems it thinks highly of itself :-) [return]