[RE: nyman]#_

don't let perfect be the enemy of bad

Recent Posts

I’ve been a happy user of micro.blog since forever, but I must say that when I saw the Ghost 6.0 release I was tempted to try it. So shiny.

Then I saw the system requirements and feature list and remembered that it’s not what I need or want.

(Also a micro blog post is still a blog post)

Vibe coding is great until it isn't.

Word of warning: this is mostly a rant or reflection. I’m not sure there is anything useful here, so feel free to skip this one :-)

The problem

If you have tried solving complex problems with any of the state-of-the-art models, you have probably noticed how LLMs have a tendency to work fine up until a point, and then break down completely. After that, they don’t seem to be able to recover.

Even if you tell them something is wrong, unless you give them the solution they will just offer their typical “You’re absolutely right! I see the issue now”, and then usually break things even more. And there is no going back, or at least I have never gotten one back. The only option is to back up far enough in the discussion and fork it, or reset the context if you’re coding.


It makes sense because of how an LLM works, but it’s still limiting and wastes a lot of time, because it’s hard to know where it took the wrong turn.

I’ve found this happens much more when dealing with more obscure things. My most recent example is logcheck, which is old but not very popular, and Claude Code got stuck multiple times when I was making my logcheck simulator.

This is an interesting effect, and the Illusion of Thinking paper discusses it as well:

Note that this model also achieves near-perfect accuracy when solving the Tower of Hanoi with N = 5, which requires 31 moves, while it fails to solve the River Crossing puzzle when N = 3, which has a solution of 11 moves. This likely suggests that examples of River Crossing with N > 2 are scarce on the web, meaning LRMs may not have frequently encountered or memorized such instances during training.

Possible solutions

One solution was mentioned above: reset the context and try prompting it another way. Unless the problem is actually too complex, it might work.

If the problem is “too complex”, you can try a bigger model if one is available. Otherwise you have to figure it out yourself, so you can break it down for the model.

Another option, if the problem is of medium complexity, is to just let it spend more cycles on it. To allow that, you probably need to set something up that it can iterate against. For me, when I was having issues getting regexes converted between POSIX and JavaScript, I told it to create a JS test script that it could run with node. That took me out of the loop, and it ran for a few minutes trying to brute-force the problem until it happened to come up with a working solution.
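
For what it’s worth, the harness itself can be trivial. Here is a minimal sketch of the idea, with a hypothetical test-regex.js that runs the converted patterns against a file of sample log lines and exits non-zero on the first mismatch:

#!/bin/sh
# Hypothetical wrapper: give the agent one command it can run on its own,
# so it can keep iterating until every converted pattern matches.
if node test-regex.js samples.log; then
    echo "all converted patterns match"
else
    echo "conversion still broken, iterate again" >&2
    exit 1
fi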

Do you care?

I don’t consider myself an expert on LLM-supported coding, but I’ve played around with it enough to have gained some experience. And one of the takeaways for me is that I would never use it for anything actually important.

Not because I don’t think it could do it; it probably could in many cases. But I’ve noticed that, for some reason, because the LLM produces so much so fast, I very quickly become disconnected from the solution. Previously, when I was writing code myself (which I’ll admit is quite rare nowadays), I cared about it being correct.

Maybe this is a luxury problem; looking at the state of software, many or most developers nowadays don’t care.

Which brings me back to this: if it’s important, and you care about it, don’t outsource it to a stochastic parrot.

logcheck for the Turris Omnia and other OpenWrt devices

logcheck is a really old collection of bash scripts that is surprisingly great for monitoring a *nix server.

It’s great because it’s really lightweight and easy to set up compared to most modern logging and alerting stacks.

It can do this because it works in reverse to how most logging tools work. Instead of trying to find the important stuff and alert on that, it just filters out everything “standard” and alerts on everything else.

On a normal, low-activity server like my personal one, the standard logs (excluding noisy stuff like web logs) are generally very uniform and boring. And the maintainers and contributors of logcheck have spent quite some time pre-writing filters for all the standard noise that applications put into their logs as part of normal operation.
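
The core of it is little more than an inverted grep. A minimal sketch of the idea (the paths are placeholders, not the real logcheck layout):

# Drop every line that matches a known-noise pattern; whatever survives is
# unusual enough to end up in the mail report.
grep -E -v -f /etc/logcheck/known-noise.rules /var/log/syslog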

I’d recommend that everyone who runs their own servers give it a try. The only annoying part is writing the ignore rules for the stuff that is not yet ignored, but I’m vibe coding a solution for that; more on it in another blog post.

I will now explain how to install it on OpenWrt, which is interesting and useful if you can’t dnf or apt-get it; if you can, do that instead. I’ll use the Turris Omnia as an example because I have one, but the instructions should work on all OpenWrt and similar devices. You just need bash, msmtp (or similar) and some cron.
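
On OpenWrt the prerequisites are usually just an opkg install away. A sketch, assuming the package names from the standard feeds (busybox normally already provides crond):

opkg update
opkg install bash msmtp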

Turris

I have a Turris Omnia, which is a nice router running a variant of OpenWrt. A long time ago the USB dongle I had in it started throwing lots of errors, and I was none the wiser until I happened to log in to it by coincidence and saw the errors in the log.

Nov 11 09:40:04 turris kernel: [525532.234506] BTRFS error (device sda): bdev /dev/sda errs: wr 709, rd 1629, flush 0, corrupt 0, gen 0

So, wanting some kind of generic alerting, and having good previous experience with logcheck, I thought I would try to get it installed. But logcheck is not in the normal OpenWrt package repository, so I had to go looking until I found a blog post by Glen Pitt-Pladdy, who had made a simple logcheck in bash that works on OpenWrt, back in 2011. Here is what I did.

Installing and configuring

Start with downloading mini_logcheck.sh

If you have an SMTP account, great; if not, you’ll need to create one. I prefer mailgun.com, but there are a lot of providers with free, low-volume or restricted tiers.

Modify /etc/msmtprc so it looks something like this. Replace the host with your SMTP host.

# Example for a system wide configuration file

# A system wide configuration file is optional.
# If it exists, it usually defines a default account.
# This allows msmtp to be used like /usr/sbin/sendmail.
account default

# The SMTP smarthost.
host smtp.eu.mailgun.org
tls on
tls_trust_file /etc/ssl/cert.pem
port 587
from turris@YOUR-DOMAIN
auth on
user <YOUR-SMTP-LOGIN>
password <YOUR-SMTP-PASSWORD>

# Construct envelope-from addresses of the form "user@oursite.example".
#auto_from on
#maildomain oursite.example

# Use TLS.
#tls on
#tls_trust_file /etc/ssl/certs/ca-certificates.crt

# Syslog logging with facility LOG_MAIL instead of the default LOG_USER.
syslog LOG_MAIL
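
Before wiring anything into cron, it’s worth checking that mail actually goes out. A quick test along these lines (the recipient is a placeholder):

# Send a one-off test message through the default account.
printf 'Subject: msmtp test from the router\n\nIt works.\n' | msmtp -a default you@example.com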

Create

mkdir /etc/logcheck.d.ignore/

Edit a file inside it to create your rules. Historically, rule files are split per process/daemon to keep them organised, but unless you have lots of rules I prefer to keep them in one file, like so.

Here is a sample to get you started.

/etc/logcheck.d.ignore/rules:

odhcpd\[[0-9]+\]: DHCPV6
cron\[[0-9]+\]: \(root\) CMD
kresd\[[0-9]+\]: $
kresd\[[0-9]+\]: > hints
kresd\[[0-9]+\]: \[result\] => true$
99-dhcp_host_domain_ng.py

Then edit/create

/etc/cron.d/logcheck with the following

MAILTO=""
36     *       *       *       *       root     /root/mini_logcheck > /dev/null

Testing rules in logcheck

The hardest thing, in my opinion, is writing correct rules; the regex dialect grep uses is fairly basic, and I generally need quite a bit of trial and error to get more complex rules to match.
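
My usual trial-and-error loop is just grep against the live log. A sketch, assuming OpenWrt’s logread and one of the sample rules above:

# See whether the candidate pattern actually matches the noisy lines;
# refine and repeat until it does (and matches nothing it shouldn't).
logread | grep -E 'odhcpd\[[0-9]+\]: DHCPV6'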

mini_logcheck does not support the test mode that the normal logcheck does, so I quickly hacked together a test_logcheck script based on mini_logcheck. It can be found here https://gist.github.com/gnyman/a4d7ad7c13113dd9c3fa74442e42c17c

It will test the rules and display any matching lines, so you can modify your rules and re-run the script to see whether they match.

Blaugust

This is another Blaugust post. It’s a draft I had lying around, but it has not received enough editing or spell checking to graduate from that status, so it’s still a #draft.

How much text can we fit into a QR code?

Many years ago, Mikko Hyppönen posted a thread on twitter[xcancel.com] on machine-readable codes like QR codes.

It was interesting and I went and made this one. I dare you to scan it. If you haven’t figured out what it is, try singing it. You can find the music for it here.

Either way, a while later, while reading the chapter on machine-readable codes in If It’s Smart, It’s Vulnerable by the same Mikko, I went down the machine-readable code rabbit hole again.

First, QR codes have an encoding called ALPHANUMERIC, which allows up to 4296 characters from a limited character set.

So I was curious: would the whole chapter on codes in the book fit into a QR code?

The answer is no: the chapter is ~4700 characters, ~400 too many. Also <alphanumeric>ALPHANUMERIC IS/NT VERY READABLE. NO NEWLINES AND ONLY UPPERCASE AND $%*+-./: ALLOWED</alphanumeric>

But wait

What about compression?

Yes! Even DEFLATE can do it, and there is a BASE45 encoding specifically for ALPHANUMERIC QR codes.

Now the whole chapter fits in ~3500 chars (or 3400 with bzip2).

And actually… the BASE45 is unnecessary. We can store binary directly in a QR code. A whopping 23648 bytes (~23 KiB) if we use the lowest error correction.
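
A rough sketch of that with the qrencode CLI, where -8 forces 8-bit (binary) mode, -l L picks the lowest error-correction level and -v 40 the largest symbol version. Whether truly binary input (NUL bytes and all) survives depends on the build, so treat it as illustrative:

# Compress the chapter and push the raw bytes into one maximum-size QR code.
bzip2 -9 -c chapter.txt > chapter.bz2
qrencode -8 -l L -v 40 -r chapter.bz2 -o chapter.png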

So I wonder if we could compress the whole book into one code?

Spoiler: The answer is no, and if you’re an expert on QR codes you know why and were already writing me an angry email to correct me. It’s actually not 23648 bytes, it’s bits. So a binary QR code can fit around ~3 KiB, and the text content of the book compresses to 111 KiB, so it will not fit. But if I had known that, I wouldn’t have continued down the rabbit hole, so let’s just keep seeing how much compressed text we can fit into a QR code.

The plaintext of “If It’s Smart, It’s Vulnerable” compresses to roughly 111 KiB with bzip2, which is ~5x too much (compared to what I believed a QR code could store).

How about more modern ones? Let’s try zstd and brotli. No… they actually turned out bigger: 123 KiB and 129 KiB respectively. Is there anything else out there?
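
For reference, the comparison is just a matter of piping the plaintext through each compressor and counting bytes (book.txt stands in for whatever plaintext dump you are working with):

bzip2 -9 -c book.txt | wc -c      # ~111 KiB
zstd -19 -c book.txt | wc -c      # ~123 KiB
brotli -q 11 -c book.txt | wc -c  # ~129 KiB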

Turns out yes: there are at least two long-running competitions for compressing pure text as much as possible, with little regard for speed or resource usage.

mattmahoney.net/dc/text.html and https://prize.hutter1.net

So let’s try the second best one from mattmahoney’s competition, cmix.

Ok, wow… that was slow; it took 10 minutes (vs <1 s for bzip2 --best). But it got us down to 88 KiB!

That’s nice, but not enough. We’d need ~30 QR codes (at ~2.9 KiB per QR), which is actually not that bad: a whole book in 30 QR codes.
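
As a sketch of what “a whole book in 30 QR codes” would look like, assuming the cmix output is in book.cmix (and with the same caveat about qrencode and arbitrary binary data):

# Cut the compressed book into ~2.9 KiB pieces and render one QR code per piece.
split -b 2900 book.cmix part_
for f in part_*; do
    qrencode -8 -l L -v 40 -r "$f" -o "$f.png"
done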

Thinking outside the box

So let’s leave the limitations of QR codes aside and look for any machine-readable code format that we can print and later scan back into data.

Then we will have no problem getting one book onto an A4 page.

Martin Monperrus had a great overview at monperrus.net/martin/store-data-paper (the link is dead but I linked to the archive).

We could use OPTAR, which can apparently store ~200 KiB of data per A4 page, so the whole compressed “If It’s Smart, It’s Vulnerable” (~88 to 120 KiB) would fit just fine on one page.

Or JAB Code, which I’m less sure about; it seems to store 4.6 KiB per “symbol” (square), but you can have more than one symbol. JAB Code does seem interesting: it was developed by the Fraunhofer Institute for Secure Information Technology and is nowadays an ISO standard, ISO/IEC 23634:2022.

If you want to read the details without paying, the BSI doesn’t paywall their standards.

Blaugust note

This is day three of Blaugust. Again, while this is mostly based on an old twitter thread of mine from 2022, little to no spell-checking has been done, so I’m marking it as a #draft.

AI and LLM's will give me work work, not less

(Let’s skip the discussion about whether LLMs are a net positive or negative. Let’s just look at what is happening.)

LLMs are increasingly being used to write code, lots of code. And according to a recent Veracode paper, they have a tendency to write insecure code.

The tl;dr of that paper is that (just like humans), unless they are told and trained to write secure code, they won’t.

No surprise there really, as a lot of example code out there is insecure. The old adage of bad data in, bad data out still holds.

This is probably bad news for society, but good news for anyone working in incident response and security overall.

And even if they could be made not to write obviously insecure code, they write a lot of code, and humans are lazy and won’t spend the time to understand it. As long as it works it will be shipped.

So we’re going to see a lot of new vulnerable code being pushed to production in the coming years. I think we’re going to see a whole methodology pop up around finding which kinds of mistakes are systemic for the AIs and then abusing them.

Isn’t there anything we can do about it? Not really. After all we have been trying to get humans to write secure code since the start. The incentives are just not there.

This is one of those things where, as much as we techies would want it to be, it’s not a technical problem; it’s a political one.

Blaugust note

This post is the third one from me for the Blaugust festival. It was conceived and written in less than an hour and not edited or proofed, so I’ll mark it as a #draft.

What’s the point of blogging for me?

Why did I join Blaugust to encourage myself to write more? I’m not sure.

Sometimes I think it’s because of some subconscious hope that a post of mine will go viral, but why? I assume it’s some built-in social drive to be “popular”, because that was once important for survival.

But I’ve seen many examples of how being popular on the internet is not a good thing. I would specifically like to thank Marcus Hutchins, aka MalwareTech, for regularly sharing examples of the downsides.

Which means that even if I want it on some level, I don’t on another. Ugh, internal conflicts are the second-worst kind of conflicts.

This is definitely something I will need to explore during the month. And luckily, or unluckily if you were here for the previous, somewhat serious tech blogging, I will need to write more about this.

I’ll just end with an old screenshot of something Marcus posted a long time ago, with a good example of how having a “hit post” will not be all roses and sunshine.

Screenshot from twitter where @malwaretechblog says: “You’d be surprised by the sheer quality difference between replies from followers and non-followers. When a tweet reaches outside the audience it was intended for, your mentions go to complete shit. Bigger your account, the more frequently it happens. 10 retweets: I’m learning a lot from experts chiming in. 100 retweets: there’s a good discussion going. 500 retweets: ‘Hi, I’m a carpenter from Ukraine and I’m here to explain British cyber security policy to you’”

Blaugust - Day 1 - What the?

Hi there! Yeah, you! My one trusty reader (honestly I don’t have any stats but I don’t think I have that many readers). Sorry if I surprised you by popping up in your feed again, I bet you assumed this was another dead blog.

But no, it is apparently Blaugust.

What? Yeah, I know, it sounds a bit like the sound a dog makes when it has eaten too much too fast. But it’s this great thing where people join and commit to blogging (more) during August. I’d like to thank Juha-Matti “Juhis” for introducing it to me; he also wrote the great explanation linked above. If you want to learn more, check it out.

Either way, as I’m a late starter and have not read any posts other than his, I’ll assume he is right when he says it’s customary to start with an introductory post.

Who am I?

My name is Gabriel. I’m not sure what I am but I work in infosec and have done so for many years. I’m old enough to have first gotten on the internet when you had to pay per minute, but I’ve been called an old man shouting at clouds since I was young.

Blaugust and I

My goal with Blaugust is just to write more, because practice makes better (perfect is optional). It’s something I’d like to do, but I have a tendency to set too high expectations for myself and end up doing nothing. This website’s subtitle even says “don’t let perfect be the enemy of bad” because I need to remind myself of that.

Things I hope to write about, in no particular order

  • information security
  • books which were important enough that I remember them
  • rants about the state of technology
  • technology tips and guidance
  • life philosophy

And probably some other things; this list was made up half an hour before the deadline for the first post.

If any of these seem interesting, you can follow this year’s attempts using RSS, on micro.blog, by following me on Mastodon, or the old-fashioned way: by typing in this address every day in the hope that there will be a fresh new post waiting for you.

Celebrating defenders

What is the main job of information security?

Is it to break things? Or to protect things?

I believe that most people would answer something along the lines of defending.

So if we agree that the end goal is to defend, why does it seem like infosec is mostly about the offensive side, and is this a problem?

This impression that offensive security gets more attention seems to be a common one. To confirm it’s not only me, I did a quick poll in a few infosec channels (Signal, Mastodon and Discord).

The question I posed was: In YOUR experience, which part of infosec receives more attention from the community itself?

And the answers were:

40: Offensive security (attacking/red team)
6: Defensive security (protecting/blue team)
8: Neither/Other/Can’t say
Total sample size/n: 54

So ¾ of people share the same opinion as me. This confirmed that I am not too off track with my thesis, although I wouldn’t read too much into this data because of the limited and homogeneous sample base.

This question also spawned some good discussions and questions. For example, is it the wrong question to ask? Isn’t red team just an extension of blue team? And yes, the question can be improved. And offensive security is often a part of defensive (sometimes called purple), but there are also large parts of offensive security which do not work with the defenders. Sometimes it’s neutral research, and other times the activity directly targets the defenders’ ability to defend.

Does offense really get more attention?

While my little poll confirms my impression that infosec seems to be more about offensive security, I think[1] most of the industry is actually focused on defense and most people work in defensive roles.

It might actually be more of a perception issue, which is interesting in itself. An interesting datapoint I found was that according to this pre-print InfoSec.pptx, the talks at the biggest infosec conferences were actually mostly defensive. While that paper only looks at a few conferences, it was still a surprise to me. I was under the impression that most talks at conferences were offensive. But that topic is a bit too big to go into here.

Is this a problem?

Yes? Maybe?… I don’t know.

At least there does seem to be a tension between what infosec is about and what gets the attention. This inconsistency seems like it would be detrimental in the long run.

Could it be one of the reasons why infosec is often such a thankless and stressful job? If you work in infosec and your work is 90% defense but you only get attention when there is a new attack, that might not be very motivating. Feeling that one’s work matters and getting appreciation for it is an important factor for being satisfied with one’s work.

A related issue is people who join the field coming in with the wrong expectations.

Why is this?

One reason is probably that it’s easier to break than build. That is true for most things except if you’re doing concrete sculptures. That means the initial exposure to infosec is often offensive.

The other reason is that hacks and vulnerabilities get into the news, while successful defenses are rarely newsworthy, because it is usually not clear what, if anything, was prevented.

Mikko Hyppönen put this well in his book “If it’s smart it’s vulnerable”:

When information security works flawlessly, it is invisible. And rarely is anyone thanked for stopping a disaster that didn’t happen.

What can we do about it?

One thing we should do is agree that the end goal of infosec is to defend. And celebrate the people who do it and help others do it.

We should make it more fun being a defender. Help build better tools to defend. Share them freely.

We should accept that it is currently harder to be a defender, and we should call that out.

Work together to share information to help defenders.

Juan Andres Guerrero-Saade, who knows much more about this than I do, recently had an interesting rant on the Three Buddy Problem[2] about how part of the industry/community treats IOCs as too valuable to share, which causes all kinds of issues. We can’t force everyone to share everything, but we should celebrate and support those who do.

We should try to make it easier to be a defender. And that also means don’t make it harder for defenders by immediately breaking or bypassing everything they come up with.

Breaking things is important

I’m not saying we should stop calling out issues. We don’t want to get sloppy with the defenses. But criminals have enough incentive already to build tools and find issues; is it really our job to help them?

So before you publish the next POC or publish info on how to bypass the latest detections, think about whether it helps the defenders or the attackers.

The topic of offensive security tooling and full disclosure is a deep rabbit hole. It will get its own blog post at some point, but in short, I think a lot of people underestimate how much POCs and red-team tooling hurt defenders.

Conclusion

In the end, maybe the imbalance is intrinsic and impossible to change. But I believe we can and should try harder.

Our world is inevitably moving towards more information technology. And personally, I do not feel information security is improving. Rather, it feels like the opposite is true and we are falling behind. Whether this has much to do with an internal conflict between offensive and defensive security, I can’t say.

But it’s time to make sure we celebrate defenders.

If you have thoughts or comments on this piece, comment on Mastodon or Bluesky, or send me an email.


  1. But a little disclaimer that I have not researched this, and I have been wrong before.
  2. https://securityconversations.fireside.fm/fixing-threat-actor-naming-mess at around 1h

Flashing a Ubiquiti PicoStation with DD-WRT to extend the range of the Mitsubishi PHEV in-car WiFi

If you prefer to go straight into the details, while skipping the backstory, feel free to jump directly to the setup.

Also note that DD-WRT will charge you 20 euros for the privilege of running their software on “professional” hardware. If their router DB says “yes” under “activation required”, you get a 24-hour trial to see that it works, and then you need to pay.

If you don’t want to pay, but want to use DD-WRT, you need to pick another router. The idea should work with most routers, but long-range ones will of course have… longer range.

The backstory

I have a Mitsubishi Outlander with the ability to check the status and control things like the heating and cooling over WiFi. It works… as well as you’d expect from Mitsubishi. Which is to say, clumsy but OK. (I might share my thoughts on the UX department at Mitsubishi some other time.) The car advertises a WiFi network; you connect to it with your phone and then use the app to check on and control the car. But my car is parked a bit away from the house, and the car WiFi does not reach inside. Which kind of makes the ability to remotely pre-heat/defrost the car a moot point.

The original fix

Last year I bodged something together: using an old AR150 router and trelay, I got it working.

Most of the time, anyway; it was unreliable when it was snowing, due to the AR150 not having the greatest antenna. But this year it had stopped working, probably because I got a new phone and the app is somehow tied to the MAC address of the phone, even though the router should forward it. So I decided I needed a better bodge this year.

The new fix

Meet the new contender, the Ubiquiti PicoStation M2: a long-range 2.4 GHz Wi-Fi AP which I had lying around and which should have enough range.

But while Ubiquiti still provides recent firmware releases even though it’s over 10 years old, that firmware does not have the feature I need: a client bridge to “seamlessly” extend the car WiFi into the house.

I knew I could bodge it together with OpenWrt, but I came across someone saying DD-WRT could also do it, with built-in support. As my last adventure with OpenWrt felt like a bit too much of a bodge, I decided to give DD-WRT a try. And so far it seems to work well, much better than the last solution did.

The setup

TL;DR: Flash the older compatible XM firmware. Hold reset for 7 seconds on boot-up to enable TFTP. TFTP a suitable DD-WRT image onto it. Configure client bridge mode.

The detailed version: first, as the OpenWrt wiki says in big red letters, if your PicoStation is running anything newer than 5.5 you probably need to downgrade. I’m not sure this is needed for DD-WRT, but I didn’t want to risk soft-bricking it. Luckily Ubiquiti still provides old firmware versions, and even more surprisingly they allow you to downgrade from the web UI, albeit with a warning. So boot up your PicoStation, go into the web UI and upload this: XM.v5.5.11.28002.150723.1344.bin

After that is done, things get a bit more advanced. You will need to use TFTP to flash DD-WRT onto it. I picked this one: PC2M-DD-WRT.bin. It’s an old DD-WRT build, but it’s the one used in a success story on the DD-WRT forum (https://forum.dd-wrt.com/phpBB2/viewtopic.php?t=166271), so I went with it. At some point I might risk upgrading it, but as this is not an internet-connected device I’ll leave it for now.

Note: a lot of instructions said to hold the reset button for 10 seconds to get it into TFTP mode. For me that did not work; I had to hold it for ~7 seconds, and if I held it longer it booted normally. I released it just before the two lights went out. You will know it worked because the signal strength indicator LEDs start flashing a nice pattern, and ping will start responding on 192.168.1.20.
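
For reference, the actual transfer can look like this with the common tftp-hpa client, assuming your machine has an address in the same 192.168.1.x range:

# Push the DD-WRT image to the PicoStation while it is in TFTP recovery mode.
tftp -m binary 192.168.1.20 -c put PC2M-DD-WRT.bin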

After you have done this, configure the WiFi something like in the attached picture, and you should be able to connect to it with your phone (assuming it’s registered to the car).

Screenshot showing the DD-WRT wireless configuration. Key settings: main interface in client bridge mode and an extra virtual interface configured as an AP to extend it.
