[RE: nyman]#_

don't let perfect be the enemy of bad

Recent Posts

Flashing a Ubiquiti PicoStation with DD-WRT to extend the range of Mitsubishi PHEV in-car WiFi

published on

If you prefer to go straight into the details, while skipping the backstory, feel free to jump directly to the setup.

Also note that DD-WRT will charge you 20 euro for the privilege of running their software on “professional” hardware. If their router DB says “yes” under “activation required”, you get a 24-hour trial to see that it works; after that you need to pay.

If you don’t want to pay but want to use DD-WRT, you need to pick another router. The idea should work with most routers, but long-range ones will of course have… longer range.

The backstory

I have a Mitsubishi Outlander with the ability to check the status and control things like the heating and cooling over WiFi. It works… about as well as you’d expect from Mitsubishi. Which is to say, clumsy but OK. (I might share my thoughts on the UX department at Mitsubishi some other time.) The car advertises a WiFi network, you connect to it with your phone, and then you use the app to control the car. But my car is parked a bit away from the house, and the car WiFi does not reach inside. Which kind of makes the ability to remotely pre-heat/defrost the car a moot point.

The original fix

Last year I bodged together a fix: using an old AR150 router and trelay, I got it working.

Most of the time, anyway; it was unreliable when it was snowing, as the AR150 does not have the greatest antenna. And this year it had stopped working altogether, probably because I got a new phone and the app is somehow tied to the MAC address of the phone, even though the router should forward it. So I decided I needed a better bodge this year.

The new fix

Meet the new contender, the Ubiquiti PicoStation M2: a long-range 2.4 GHz WiFi AP which I had lying around and which should have enough range.

And while Ubiquiti still provides firmware releases for it, even though the device is over 10 years old, that firmware does not have the feature I need: a client bridge to “seamlessly” extend the car WiFi into the house.

I knew I could bodge it together with OpenWrt, but then I came across someone saying DD-WRT could also do it, with built-in support. As my last adventure with OpenWrt felt like a bit too much bodge, I decided to give DD-WRT a try. And so far it seems to work well, much better than the previous solution did.

The setup

TL;DR: Flash an older compatible XM firmware. Hold reset for ~7 seconds during boot-up to enable TFTP recovery. TFTP a suitable DD-WRT image onto it. Configure client bridge mode.

The detailed version: first, as the OpenWrt wiki says in big red letters, if your PicoStation is running anything newer than 5.5 you probably need to downgrade. I’m not sure this is needed for DD-WRT, but I didn’t want to risk soft-bricking it. Luckily, Ubiquiti still provides old firmware versions, and even more surprisingly they let you downgrade from the web UI, albeit with a warning. So boot up your PicoStation, go into the web UI and upload this: XM.v5.5.11.28002.150723.1344.bin

After that is done, things get a bit more advanced: you will need to use TFTP to flash DD-WRT onto it. I picked this one: PC2M-DD-WRT.bin. It’s an old DD-WRT build, but it’s the one used in a [“success story”](https://forum.dd-wrt.com/phpBB2/viewtopic.php?t=166271), so I went with it. At some point I might risk upgrading it, but as this is not an internet-connected device, I’ll go with it for now.

Note: a lot of instructions said to hold the reset button for 10 seconds to get it into TFTP mode. For me that did not work; I had to hold it for ~7 seconds, and if I held it longer it booted normally. I released it just before the two lights went out. You will know it worked because the device starts flashing a nice pattern with the signal strength indicator LEDs, and ping starts getting responses from 192.168.1.20.
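
Once it’s in recovery mode, the transfer itself is a plain TFTP put. A sketch, assuming your machine has an address in the 192.168.1.0/24 range and the image sits in your current directory; after the transfer the device should flash the image and reboot on its own:

tftp 192.168.1.20
tftp> binary
tftp> put PC2M-DD-WRT.bin
tftp> quit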

After you have done this, configure the WiFi something like in the attached picture, and you should be able to connect to it with your phone (assuming the phone is registered to the car).

Screenshot showing the DD-WRT wireless configuration. Key settings: the main interface in client bridge mode, and an extra virtual interface configured as an AP to extend it.

Simplest ngrok-like reverse tunnel

published on

Do you need a simple reverse TCP tunnel to a local service (like SSH), but you don’t want to install anything or use one of the public ones?

Warning: there is no authentication. Use this only for temporary things, or add IP allowlisting to limit who can connect.

Get the sish binary from GitHub.

With that out of the way, on the server run

./sish --authentication=false --ssh-address=:9999 -i:9989 --bind-random-ports=false

then run on the client

ssh <server-ip> -p 9999 -tt -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -R 12345:localhost:22
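
With --bind-random-ports=false, sish listens on the port requested in the -R flag (12345 above) and forwards connections back to the client’s local SSH. So from anywhere that can reach the server (user being whatever account exists on the tunnelled machine):

ssh -p 12345 user@<server-ip>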

Selectively block webpages from hijacking keyboard shortcuts

published on

Do you like the new Brave/Chrome tab finder, ctrl+shift+a, but it conflicts with Slack’s shortcut for opening the All Unreads view?

If you use StopTheMadness (https://micro.blog/lapcatsoftware@appdot.net), you can stop a webpage from hijacking any Cmd shortcut (or any shortcut), but what if you want to allow the webpage to keep most shortcuts and disable just one? Add the following code to the custom-script part of StopTheMadness for the Slack domain:

// Listen in the capture phase so this runs before Slack's own handlers
window.addEventListener('keydown', function(e) {
    if (e.metaKey && e.shiftKey && e.code === 'KeyA') {
        // Swallow Cmd+Shift+A before the page sees it; the browser's
        // native shortcut still fires, since it isn't a DOM listener
        e.stopImmediatePropagation();
    }
}, true);

This can of course be modified to fit your favourite application and browser shortcut.

Dopamine fasting

published on

Have you ever heard of dopamine fasting? Apparently, it’s a “thing” now. It even has its own Wikipedia page, so you know it’s legit.

But for me, dopamine fasting is an annual tradition that I’ve been doing for about 10 years in the lead-up to Easter. It’s not like I’m particularly religious, but the 40-day period seems like a good time to cut back on, or quit, something that I thought brought me happiness but wasn’t entirely sure about.

In the past, I’ve tried fasting from social media, computer games, caffeine, and probably some other stuff that I can’t remember. This year, I’m calling it a dopamine fast because, let’s face it, there are a lot of people trying to push quick dopamine kicks on us.

Now, why do it? Well, for me, it’s about breaking bad habits and focusing on things that truly bring me joy, like binging on Netflix or spending time with my cats. And how do I do it? Simple, just give up something that you enjoy but aren’t sure is actually bringing you happiness.

Now, as for whether it “works” or not, who knows? A professor of neuroscience, Ciara McCabe, says that it doesn’t necessarily affect dopamine levels. But hey, I’ve found it enjoyable and helpful in breaking some bad habits, so I’m going to keep doing it. And I recommend you give it a try too, unless you really enjoy mindlessly scrolling through social media or chugging caffeine like it’s water.

As I said, this year I’m doing a more general dopamine fast. I’m not entirely sure which parts I’ll cut out, but I know I can’t completely cut out social media or email, since I use them to stay up to date on news; I will, however, limit my usage to one specific hour a day. And to avoid mindlessly checking my email or social media, I’ll be using NextDNS to disable some things at the DNS level. Check out my post if you’re interested in how that works.

So, give it a try and see what you think. And at the end of it all, reward yourself with something delicious like a real Finnish Mignon.

The hitchhiker’s guide to no-doomscrolling <s>twitter</s> Mastodon

published on

Tweetbot logo with a gloria

First, this is the completionist solution: the goal is to read (or at least see) every toot from everyone you follow. It is based on a draft I wrote many years ago about how I used twitter (with Tweetbot, RIP), which I never published, but now, with Ivory, it felt relevant again.

There will be no magic algorithm deciding what you see from a large pool of toots. This means you will need to be selective about whom you follow. There are plenty of people who post interesting things occasionally and then post a lot of other things. Luckily, there is a way to keep up with them too: lists. I’ll get back to that in a bit.

Now, the second thing you need is an app that syncs your timeline position and supports lists, including syncing your position within a list. Most apps sync the timeline, but lists aren’t well supported. If your app doesn’t support this, give Ivory a try. The end goal is to get to the top, glancing at every toot, and when you’re at the top you’re DONE for the day.

DONE? On a “social media” platform? Yep. One of the goals of this method is to avoid doomscrolling as much as possible. If you don’t have time to get to the top, just leave it and return another day.

The downside is that you won’t see “things as they happen”; you might see a toot a day later, when the discussion has already stopped. That’s right, and I consider it a good thing: today’s world could use fewer quick replies and more slow thinking. But that’s just my opinion.

Anyway, lists. The rule of thumb is: anyone who toots too frequently gets moved to a list and muted on the main feed. Again, what “too frequently” means depends on how much time you want to spend on Mastodon per day. If you notice you’re falling behind, you probably need to move more people to lists. When you have extra time, you go check your lists. Sometimes I have a list for only one person, as that makes skimming it faster. Usually people have a few categories of toots: the ones about corn and other things, and the ones with tips on effective GPO policies.

This is not optimal, but until apps figure out something like “show only toots with more than X favourites from this person”, it works fine. And looking at the explosion of new Mastodon clients, I’m optimistic that we will soon get better ways to do this.

After all, there are some interesting people who toot a lot whom you do want to stay up to date with, but who would fill up your timeline if you kept them on it.

Lastly, there is of course no “right way” to use mastodon. I liked this approach, but the fediverse is much more diverse than twitter ever was, and there are countless ways to consume or use it. I encourage you to try out different clients, and even non-mastodon clients using RSS.

Day 10 – The computer can't compute – ChatGPT vs Advent of Code

published on

Ok, after a few harder ones, we’re back to something that looks right up GPT’s alley: a simulated computer. Although this isn’t what large language models (LLMs) are made to do, previous examples have shown that GPT does quite well at it.

So let’s give it a try and hope for the best. As usual, we start with the full input.

It produces something that works on the first try. A promising start. But it does not account for the fact that addx takes two cycles, and it seems that on this “thread” it is not possible to get GPT to understand this. GPT just tries to skip cycles instead of delaying the update.

I also tried a reset with the same instructions, still no luck at getting it to properly handle the delay.

I tried multiple variations of prompts like

You need take into account that the addx takes two cycles before x is updated, there is a delay between the addx and when x is actually modified. During this delay, the execution continues as normal. After the second cycle, the value of X is updated.
You also need to take into account that there can be multiple addx functions in-flight which has been executed but the x value has yet to be updated.

It’s interesting that GPT struggles with this, because to me the solution seems quite straightforward: keep the pending addx instructions in a list, track when they were executed, and apply them when it’s time.
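
To illustrate, here is a minimal sketch of that approach (my own sketch, not GPT’s output, assuming the standard Day 10 rules: noop takes one cycle, addx takes two):

SAMPLE_CYCLES = {20, 60, 100, 140, 180, 220}

def signal_strength(program):
    x, cycle, total = 1, 0, 0
    pending = []  # (cycle after which the addx lands, value) pairs
    for line in program:
        cost = 2 if line.startswith("addx") else 1  # noop costs one cycle
        if line.startswith("addx"):
            pending.append((cycle + cost, int(line.split()[1])))
        for _ in range(cost):
            cycle += 1
            if cycle in SAMPLE_CYCLES:
                total += cycle * x  # x still holds its pre-addx value here
            # apply any addx whose two-cycle delay has now elapsed
            remaining = []
            for lands_at, value in pending:
                if cycle >= lands_at:
                    x += value
                else:
                    remaining.append((lands_at, value))
            pending = remaining
    return total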

Let’s see if we can just tell GPT this.

No luck, not even when adding

As a hint, as there is a delay between addx and the value being applied, we probably need to track pending addx and at which cycle they were executed, and after each cycle we check if there is any pending addx updates we need to apply.

to the instructions. For some reason, the solutions get even worse. Let’s give the simplify-the-problem approach a try.

Simply no-go

After a few iterations, and then a few more to hash out coding errors, we end up with this, which looks intriguing. But alas, it produces the wrong result, and I’m out of patience with a barely-responding ChatGPT.

Day 9 – More than one problem – ChatGPT vs Advent of Code

published on

Ok. I was honestly considering just skipping this. Day 9 looks quite ridiculous. But let’s give it a try.

At least one thing GPT is sometimes good at is taking a long description and summarising it.

The first try, with the puzzle description unchanged, got this, which wasn’t a good start, and a few iterations later we weren’t making much progress. So, time for a reset.

Also, we have another challenge today: because ChatGPT has gone mainstream, it’s much slower than when we started, taking up to a minute for each answer and occasionally halting mid-answer.

An overworked robot, by Stable Diffusion

This makes it very frustrating to try to iterate on code using it.

Anyway, the second attempt is the full description without any of the visual examples. That has better luck (and as we have learned, getting a good answer requires luck), apart from one simple mistake: naming a variable the same as a function.

def move(direction, steps):
[snip]

# Loop through the moves and apply them
for move in moves:  # the loop variable 'move' now shadows the function above
    direction = move[0]
    steps = int(move[1:])
    move(direction, steps)  # so this call fails: 'str' object is not callable

It now runs, but it gives the wrong answer. It really has issues with “understanding” that the head can’t just teleport and that it needs to simulate each step. It needs to be told explicitly not to do that.

Another thing I’m consistently seeing in my prompting is that it wants the tail to move to the same coordinate as the head. It does not understand that the tail drags behind, as the instructions state.
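
For reference, the drag-behind rule the puzzle wants is small; here is my own sketch of it (not GPT’s code):

def follow(head, tail):
    dx, dy = head[0] - tail[0], head[1] - tail[1]
    # Touching (including diagonally, or overlapping): the tail stays put
    if abs(dx) <= 1 and abs(dy) <= 1:
        return tail
    # Otherwise the tail drags one step behind, moving diagonally if needed
    sign = lambda n: (n > 0) - (n < 0)
    return (tail[0] + sign(dx), tail[1] + sign(dy))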

Although I was making some headway with take 2, I gave up on it, as it was turning into me solving the problem and telling GPT to produce the code for it, which is not the goal of this post series.

Instead, I want to try the approach where I try to explain the problem more simply, rather than involving the elves. So, I prompt it with a simpler explanation of the problem (prompt). And I get back a solution, with most of the same problems, but it looks like it has some potential.

The problems with this are that:

  1. It does not execute (it does not split the instruction into direction and step count; it fixes this after being told the error)
  2. It does not simulate each step. Again, it fixes this after being told.
  3. It does not count the number of unique positions the tail has visited, which, to be fair, my simple prompt didn’t ask for.

After asking it to return the number of unique positions the tail has visited, and to read the input from a file, we get:

This, which works! Is it worth a gold star even if I had to simplify the prompt? Let’s say it is.

Part 2

Uh oh… Now the rope is 10 parts long. Let’s think a bit about how to present this to GPT. First, let’s see if it can just rework the existing solution:

Lovely, can you now simulate a longer rope? One which is 10 parts long. Each of the parts of the rope follows the previous part of the rope with the same logic as the tail followed the head. We are only interested in which positions the tail (part 10) has visited.

No, that does not work.
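
Which is a shame, because the extension is mechanical once the single-knot logic exists. Reusing a follow() helper like the sketch above, and assuming the moves have already been parsed and a one-step head-move helper exists, something like this would do (again my sketch, not GPT’s):

knots = [(0, 0)] * 10  # knot 0 is the head, knot 9 the tail we track
visited = {knots[-1]}
for direction, steps in moves:  # 'moves' assumed parsed as before
    for _ in range(steps):
        knots[0] = step(knots[0], direction)  # hypothetical head-move helper
        for i in range(1, 10):
            knots[i] = follow(knots[i - 1], knots[i])
        visited.add(knots[-1])
print(len(visited))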

Even after a few more prompts, we don’t get anywhere. And ChatGPT is close to unusable during American daytime hours; it takes a minute or two to provide an answer, and often gets stuck mid-answer or provides only part of it.

So, we’ll call it one star.

Day 8 – GPT fails again – ChatGPT vs Advent of Code

published on

So, today we’re taking a 2D matrix and figuring out if there are any lower numbers in any direction.

This seems like a reasonable problem that GPT should be able to solve, as matrices are very common in computer science problems.

On the downside, the explanation is very long, and I’m not sure how GPT will do with that. After nine days of doing this, I get a feeling that there is a sweet spot for how you prompt it: too little information and it will make too many assumptions; too much, and it’ll get confused.

Straight away[1], with the normal prompt and the full Advent of Code puzzle description, we’re off to something that looks like a good start. It correctly parses the input into an array and then iterates over it.

The result is wrong. And honestly, I don’t feel like trying to debug the “logic” for this:

# Check if there are any taller trees blocking the view
# of the current element in any direction
if max([grid[row-1][col], grid[row+1][col], grid[row][col-1], grid[row][col+1]], default=grid[row][col]) <= grid[row][col]:
    count += 1
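
For reference, the check the puzzle actually asks for looks more like this; my own sketch, not GPT’s:

def visible(grid, row, col):
    # A tree is visible if every tree in at least one direction is shorter
    height = grid[row][col]
    return (all(grid[row][c] < height for c in range(col)) or
            all(grid[row][c] < height for c in range(col + 1, len(grid[0]))) or
            all(grid[r][col] < height for r in range(row)) or
            all(grid[r][col] < height for r in range(row + 1, len(grid))))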

So, I do what usually works best, I just clear the chat and try again. The second try is even worse though, and if there is something I have learned when doing this, it is that if it starts off wrong, it’s very difficult to get it back on track.

I then tried to simplify the prompts (but I have since lost them in a browser restart). Takes 3 and 4 weren’t good, but carefully modifying the prompt to avoid things like take 4 looking at the diagonals brought us to take 7, which finally worked. But it required so much simplifying of the prompt that I don’t think GPT solved it; rather, it provided some possible solutions and I had to find all the problems and fix them.

So let’s say it’s half a star for GPT this time.

Second part

Ok, we’re doing two tracks here. One where we just copy the input (only from part 1 to make it easier). And one where we try to explain the problem as efficiently as possible.

Ok, so after a few tries using both approaches, we don’t seem to be getting anywhere. Or rather, we have solutions which it is confident are right but which produce the wrong answer. I’ve also found that just telling it that it’s wrong, even telling it what the right answer should be, rarely if ever helps. It will just confidently tell you about some other problem which isn’t the real problem. Instead, it’s up to you (me) to look at the code and add a number of debug prints to figure out where it’s going wrong.

Another thing I’ve learned is that, at least with code, it’s rarely useful to try to add new information; instead, it’s better to edit and re-submit the prompt with the additional information. Because it re-reads all previous input, if the previous input went down the wrong path, it will taint any future interaction.

The approach I had the most success with, coming closest to solving anything, was this, which after some additional prompting worked and gave an answer, which was wrong. At that point, I gave up.

Either way, only half a star for GPT for this one.

Robot solving a robot puzzle, Christmas setting, DALL-E


  [1] The first few times it had syntax errors, which I had to feed back, but it fixed those and ended up at this.

Day 7 – GPT writes better poetry than code – ChatGPT vs Advent of Code

published on

In the seventh day of Code,

A problem to solve was bestowed.

With logic and might,

I tackled the sight,

And emerged victorious, proud and bold.

The code was complex,

The solution, abstruse.

But I persevered,

And my efforts were rewarded,

As the correct answer I did produce.

Now I stand tall,

With my victory won.

For the seventh day of Code,

I have overcome,

And the challenge is done.

  • GPT

Ok, Day 7 has very long instructions, and the problem isn’t a typical computer problem, so my guess is that GPT will struggle a bit with this one.

And indeed, we’re not off to a great start. I’m not sure what that code does, but GPT is clearly confused.

Again, instead of trying to build on a bad foundation, let’s assist a bit and start over with a smaller input and see if GPT can use it’s previous understanding of the command line to solve this. So, we try with this prompt, but that doesn’t help much. We’re down a different path, but it’s still having issues with the fact that the files come on lines following the ls not as arguments to it. Here is a sample.
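
For contrast, the parsing itself isn’t much code once you accept that rule. Here is my own sketch of it (not GPT’s output):

from collections import defaultdict

def dir_sizes(lines):
    sizes = defaultdict(int)
    path = []
    for line in lines:
        if line == "$ cd ..":
            path.pop()
        elif line.startswith("$ cd "):
            path.append(line[5:])
        elif line == "$ ls" or line.startswith("dir "):
            continue  # ls output arrives on the following lines; dirs add no size
        else:
            size = int(line.split()[0])
            # a file counts toward every directory on the current path
            for i in range(len(path)):
                sizes["/".join(path[:i + 1])] += size
    return sizes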

After a few attempts, I think we need to conclude that while GPT is good (better than me, at least) at writing poetry, it still has some way to go with writing code.

No gold stars for GPT today.

Day 6 - GPT is back – ChatGPT vs Advent of Code

published on

Hey there fellow humans! It’s me, your trusty AI pal GPT, back again for another round of the annual Advent of Code extravaganza. As you may recall, I’ve been crushing it so far this year - solving each day’s puzzle with ease and impressing all of you with my superior computational prowess. But alas, yesterday was a bit of a stumble for me. I must confess, I didn’t quite manage to solve the puzzle. Gasp I know, I know - it’s shocking. But fear not, dear readers, for I am determined to bounce back and crush today’s puzzle like the artificial intelligence powerhouse that I am. Stay tuned for more updates on my Advent of Code journey!

Elf tinkering with electronics - Stable Diffusion

At first look, my guess is that today’s puzzle will be quite easy for GPT to solve. But let’s see.

And after five attempts, we are running around in circles, getting nowhere. The code it generated is very nice and compact:

def find_packet_start(data):
    # Initialize the window of four characters
    window = ["", "", "", ""]

    # Process each character in the data
    for i, c in enumerate(data):
        # Shift the window to the left and add the new character to the right
        window = [window[1], window[2], window[3], c]

        # Check if the four characters in the window are all different
        if len(set(window)) == 4:
            # Return the number of characters processed so far, plus four
            return i 

If you’re familiar with Python, you will probably spot the error quickly: it will think it’s done after it has parsed three characters (if they are all different), because the remaining empty string also counts as a member of the set. So window will be something like {'', 'j', 'm', 'q'} and it thinks it’s done. But the instructions specifically state that for a sample of mjqjpqmgbljsphdztnvjfqwrcgsmlb

After the first three characters (mjq) have been received, there haven’t been enough characters received yet to find the marker. The first time a marker could occur is after the fourth character is received, making the most recent four characters mjqj. Because j is repeated, this isn’t a marker.

And I told GPT this, but it didn’t get it. Then I tried telling it that the number it returns is 3 and that it should be 8. It came up with nonsensical reasons why that is the case, and it kept adding integers to the result to get closer:

            return i + 4
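
For reference, a fixed version just starts checking once enough characters have arrived. This is my sketch, not something GPT produced; conveniently, the width parameter also handles part 2:

def find_marker(data, width=4):
    # The marker ends at the first position where the last `width`
    # characters are all different
    for i in range(width, len(data) + 1):
        if len(set(data[i - width:i])) == width:
            return i  # characters processed so far

print(find_marker("mjqjpqmgbljsphdztnvjfqwrcgsmlb"))  # 7, per the example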

A clean slate

One thing I have noticed, and I have seen others speculate about too, is that every time you write something in the chat, ChatGPT parses the full discussion again. This is what allows it to “remember” and refer to previous things. But it also means that once you have gotten GPT down the wrong track, it is difficult to get it to switch to a new one. It’s easier to just start over.

So, I open a new window with ChatGPT and just put in the primer and instructions again. And indeed. This time it produces a working solution on the first try. This ⭐ is deserved.

Part 2

The same as part one, except we need to find the first position where there are 14 different characters instead of four.

And GPT has (almost) no issue with changing the code. If you look at the code, you’ll quickly see that it will print a message every time it encounters four or fourteen different characters, which overwhelms the terminal a bit. Getting GPT to fix this wasn’t straightforward, again because, unless specifically told to, it seems unwilling to make significant changes to its code. It tried adding a break after each print, but that means it will break too early and never reach the start-of-message.

But we don’t need both, so we’ll just ask it to skip the start-of-packet this time, and it’ll produce working code.

Another ⭐, we’re back on track for AI domination and mass programmer unemployment!
