The other day I wanted to use noscript.it with my old iPhone 4S running iOS 6, but I was met with “could not establish a secure connection to the server”.
Turns out it was because I had, out of habit, configured the server with a “modern” list of TLS ciphers. And the poor old iOS 6 didn’t support any of them.
So, I went on a mission to ensure noscript.it works with as old devices as possible.
It turns out enabling TLS 1.0 and TLS 1.1 on Ubuntu 20.04 is a bit harder than I expected1. Luckily someone else solved it already.
So now, after using the old Mozilla SSL config and appending @SECLEVEL=1 to the cipher list, it works. Even on my vintage iPhone 3G. Hurray!
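For reference, this is roughly what the relevant nginx directives end up looking like with the old Mozilla cipher list plus @SECLEVEL=1 appended. This is a sketch, not my exact config: the cipher list is truncated here and yours will differ.

```nginx
# Re-enable legacy protocols (Ubuntu 20.04 disables TLS < 1.2 by default)
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

# The "old" Mozilla cipher list (truncated), with @SECLEVEL=1 appended so
# that OpenSSL actually allows the legacy ciphers again
ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:...:AES256-SHA:DES-CBC3-SHA:@SECLEVEL=1";
```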
But, I hear you say, isn’t this less secure? I mean now you only get a B on the Qualys SSL Report! Clearly this is bad!?
Let’s take a step back and think about what the score actually means. noscript.it automatically gets a B because it supports TLS 1.0. But let’s go one step further and assume we’re looking at a bank with a C2. A site gets a C if it supports SSLv3, meaning it is vulnerable to the SSLv3 POODLE3 attack. This is clearly bad for a bank!? Or is it? How likely is it that someone will successfully execute this attack, which requires the attacker to be able to intercept and modify the transmitted data? And compare that likelihood with the likelihood that someone will need to access the bank’s website from an old XP (pre-SP3) machine that only supports SSLv3. The second seems more likely to me.4
Okay, you say, but won’t keeping SSLv3 around make everyone vulnerable because of downgrade attacks? If that were the case, the risk calculation would be different. But luckily, we have TLS_FALLBACK_SCSV to avoid that. TLS_FALLBACK_SCSV ensures that modern clients and browsers won’t risk being fooled into downgrading their encryption.
So to wrap things up: don’t blindly trust the rating or certification. A site with an A++ is more secure than one with a C rating. But if you (or someone less fortunate) can’t access the site when you need it, it is a pretty useless site. Personally, from now on, unless the site needs5 absolute security, all my projects will optimise for compatibility rather than getting an A++. After all, it is much more likely that someone will try using a site from a Windows XP machine or an old Smart-TV than that someone is MITM-ing that person at that exact moment.
Please note though: don’t read this as an argument against doing things securely by default and following best practices. Rather, these are just some thoughts on this specific issue of TLS and SSL configuration. If you break with best practice, make sure you understand why it’s best practice to begin with, and what risks or weaknesses you introduce by not following it.
I needed to have someone transfer some files to me securely, but I had a few requirements.
Previously, I have used locked-down SSH keys and forced commands. Both are good solutions.
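For illustration, a locked-down entry in ~/.ssh/authorized_keys can look like the following. The key and comment are placeholders, and the restrict option requires OpenSSH 7.2 or newer.

```
# Disable pty, port forwarding, X11, etc. for this key, and force the
# built-in SFTP server instead of a shell
restrict,command="internal-sftp" ssh-ed25519 AAAAC3... uploader@example.com
```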
This time I ended up using a small sandboxed SSH environment in a Docker container with a mounted folder. The benefit compared to internal-sftp is that it gives the sender some flexibility in how he/she wants to transfer the files: scp, sftp and, specifically, rsync all work.
Warning: Docker containers are not secure sandboxes. The uploader can (by design) upload anything, and has shell access, so he/she can upload and execute any executable. Any kernel or Docker vulnerability could lead to an escape from the container. Don’t use this unless you trust the uploader.
In this case, I found a docker-image made specifically for a locked down ssh/scp/rsync environment.
First, create a folder, for example named upload, in the directory where you want the uploaded files to end up. Then run the following, remembering to change <USER> and <CHANGEME> to something else:
docker run --rm -it \
--name docker_ssh --hostname ssh \
-c 128 -m 256m \
-e PGID=1000 -e PUID=1000 \
-p 64822:64822 \
-v $PWD/upload:/home/<USER> -v $PWD:/etc/ssh \
-e CNTUSER=<USER> \
-e CNTPASS=<CHANGEME> \
-e ROOTPASS=$(openssl rand -base64 12) \
woahbase/alpine-ssh:x86_64 \
/bin/bash
And then get uploading!
For example,
scp -P 64822 test3.sh <USER>@<SERVER>:~/
or
rsync -e "ssh -p 64822" ./ <USER>@<SERVER>:~/
If you use 1Password Business in your organisation, you might be aware that you can get notifications and alerts for various events pushed to your Slack1.
This is quite useful, but I found the notifications quickly get overwhelmingly noisy, because a notification is generated every time anyone unlocks 1Password.
This is too bad, because mixed in with the notification spam about unlocks are notifications for when someone logs in from a new device or adds a new trusted device.2 To fix this, I did a little hack.
It consists of two parts: first, a Go bot built with slacker; second, reacji, a Slack app that automatically copies messages with a certain emoji to another channel.
The idea is that the Slack bot watches #security-spam for messages that contain “was added as a new device”. When it sees a matching message, it adds a 🔏 emoji to it, and reacji then copies it to #security-notifications.
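The matching itself is just a substring check; here is a minimal sketch in Go (the function name is mine, not taken from the sample):

```go
package main

import (
	"fmt"
	"strings"
)

// shouldFlag reports whether a 1Password notification is one we care
// about (a new trusted device) rather than routine unlock spam.
func shouldFlag(msg string) bool {
	return strings.Contains(msg, "was added as a new device")
}

func main() {
	fmt.Println(shouldFlag("iPhone 12 was added as a new device"))
	fmt.Println(shouldFlag("Alice unlocked 1Password"))
}
```

In the real bot, this check runs on every message event in #security-spam, and a match triggers the reaction API call.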
You can view a minimal go-bot sample here; figuring out how to install reacji and how to get and configure a bot token is outside the scope of this post. There are lots of good guides available. Just remember to keep the Slack bot’s permissions to a minimum.
Using these two parts, you can now mute #security-spam in Slack and still stay on top of when any team member (or someone more malicious) logs in to 1Password.
U2F and WebAuthn are the two most exciting developments in web authentication in the last 20 years.
The most common way to use them is with a hardware dongle like a YubiKey, which I never got around to doing. Instead, I relied on TOTP for my 2-factor authentication.
That was until I found SoftU2F and combined it with Safari-FIDO-U2F to get it working with Safari, which worked, most of the time.
With the release of Safari 14, Apple finally brought proper WebAuthn support to Safari1.
So now, you can quite easily get this experience without any additional hardware.
All you have to do is get the latest SoftU2F.pkg and install it.
Now you have two options. You can let SoftU2F store the key material in your keychain, which is the default; there, you authenticate by approving or rejecting a notification. Or you can use the slightly hidden option and store the key in the Secure Enclave Processor (SEP), aka TouchID. But be warned: while the keychain can be backed up and transferred, the SEP can’t2. So make sure you have backup authentication methods for when your Mac decides to stop working.
To use the SEP, you need to run the following command: /Applications/SoftU2F.app/Contents/MacOS/SoftU2F --enable-sep
You can find more documentation about the SEP implementation in the pull request.
All done!
Now you can enjoy having your own built-in FIDO2 key.