Authy, Google Auth., MS Auth., Other???

GregM

Member
Dec 6, 2020
I think I've been moved to use Authy, vs. Google's or Microsoft's authenticator apps, for the following reasons:
  • It syncs to all the other devices it's installed on.
  • Apps are also available for Windows and Linux desktops.
  • If I lose ALL devices (house fire or something equally bad), all I need is the backup password to recover functionality when I get my replacement devices.
Is there any reason I shouldn't use Authy, or is there a better app that I'm not aware of?

As far as I can tell Authy is free... is it likely to stay that way? Is there a chance that I'd get fully committed with all my accounts (including the customer accounts I look after), and then Authy would move to a subscription model, and the PIA factor of moving everything to another app would be too much to bear?

Thoughts?

Thanks.
 
Is there any reason I shouldn't use Authy
No, not really, except for lock-in. There is, by design, no way to export any data out of Authy should you ever want to leave. To do so, you would need to go to each account you used Authy with, deactivate the TOTP authentication, and then generate a new secret for your new platform.

Authy also has an Authy-specific authentication protocol that only it supports, as a further form of lock-in.

My suggestion is as follows: use a secure secret manager (or your password manager, which already holds your first factor, if you want all your eggs in one basket) and store the TOTP secret in it at the time of enrollment, so you don't need to export it later should you want to change platforms (or should Authy disappear, become paid, or the like).
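
For what it's worth, the "secret" in question is just the base32 string embedded in the otpauth:// provisioning URI that the enrollment QR code encodes. Here's a minimal sketch of pulling it out for archiving, using only Python's standard library; the example URI and account are made up for illustration:

```python
# Extract the shared secret from an otpauth:// provisioning URI
# (the Google key-URI format that TOTP enrollment QR codes encode).
from urllib.parse import urlparse, parse_qs, unquote

def extract_totp_secret(otpauth_uri: str) -> dict:
    """Return the label, base32 secret, and issuer from a TOTP provisioning URI."""
    parsed = urlparse(otpauth_uri)
    if parsed.scheme != "otpauth" or parsed.netloc != "totp":
        raise ValueError("not a TOTP provisioning URI")
    params = parse_qs(parsed.query)
    return {
        "label": unquote(parsed.path.lstrip("/")),
        "secret": params["secret"][0],  # the value worth archiving at enrollment
        "issuer": params.get("issuer", [""])[0],
    }

# Hypothetical example URI -- not a real account's secret:
uri = "otpauth://totp/Example:alice@example.com?secret=JBSWY3DPEHPK3PXP&issuer=Example"
print(extract_totp_secret(uri))
# {'label': 'Example:alice@example.com', 'secret': 'JBSWY3DPEHPK3PXP', 'issuer': 'Example'}
```

Save that secret string somewhere safe and you can re-enroll any TOTP app later without ever touching the site again.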
 
I have not come across a reason not to use Authy. As Paul said, the only downside is the inability to export the TOTP secrets. An alternative to electronically storing the TOTP secrets at the time of enrollment would be to print them out and store them in a safe place.

If you are already using Authy and have not stored the TOTP secrets, then at your convenience you could remove 2FA from your accounts one at a time, re-register 2FA, save the secrets this time, and register them in Authy again.

I have Authy installed on two devices, so am quite comfortable with the security and backup provided.
 
I chose Authy for the backup password, but didn't save the secrets. It will take me thirty minutes to switch to some other authenticator. For web sites that don't support TOTP, I use a Google Voice number dedicated to 2FA (not my main Google account).
For the Social Security Administration, I have given up. They mailed me a postal letter to switch to 2FA that expired before it arrived. They have all the technology; they are just stupid or extremely cautious, not sure which. More and more government sites are using 2FA, which is very good news.
 
I've been using Authy. I've never used the others outside of work, but so far I have no complaints or problems with Authy. I have it installed on my iPhone but almost always use it from my Apple Watch instead.
 
Authy brings convenience in being able to share your 2FA across devices, which helps if you replace a device or it gets lost or stolen.

It's a classic tradeoff between security and convenience.

Once backups are enabled, you'll be prompted to set a password that is used to create a secure key for encrypting all of your configured Authy 2FA account tokens. Once Authy encrypts your tokens, they can be synchronized to another device or Authy app instance. Synchronized tokens can then be unencrypted with the password you entered when enabling backups.

I don't see how they are encrypting this (the method and number of rounds aren't given), so this might be something that could be attacked.
 
don't see how they are encrypting

Looks to me like they're doing all the right things.
 
Some plucking done: 1,000 rounds is not bad, but @Steve pointed out LastPass using 5,000 back in SN-624 (https://www.grc.com/sn/sn-624.htm). Surely bending over so much for these low-oomph devices is ill-advised? After all, that SN episode is from 2017... In theory it does not need doing on every use, I expect, so time is not a pressure. Here's what they say:
  • Your password is then salted and run through a key derivation function called PBKDF2, which stands for Password-Based Key Derivation Function 2. PBKDF2 is a key stretching algorithm used to hash passwords in such a way that brute-force attacks are less effective. The details of how this is done are quite important:
    • We use a secure hash algorithm that is one of the strongest hash functions available. It's a one-way function – it cannot be decrypted back.
    • We use 1000 rounds. This number will increase as the low range Android phone’s processor power increases.
    • We salt the password before starting the 1000 rounds.
    • The salt is generated using a secure random value.
  • Using the derived key, each authenticator key is encrypted with the Advanced Encryption Standard (AES-256) in Cipher Block Chaining (CBC) mode, with a different initialization vector (IV) for each account. To make each message unique, an IV must be used in the first block.
    • If any Authenticator keys are 128 bits or less, we pad them using PKCS#5.
  • Only the encrypted result, salt, and IV are sent to Authy. The encryption/decryption key is never transmitted.
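
Reading their description back as code may make it clearer. Here's a sketch of that scheme under stated assumptions: I've guessed SHA-256 as the PBKDF2 hash and a 16-byte salt (their docs only say "a secure hash algorithm" and "a secure random value"), and I've used PKCS7 message padding, since their PKCS#5 remark is about short keys rather than the message. It uses the third-party "cryptography" package; this is an illustration, not Authy's actual code:

```python
# Sketch of the documented scheme: PBKDF2 over a salted backup password,
# then per-account AES-256-CBC with a fresh IV. Assumptions noted above.
import os, hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

def encrypt_token(backup_password: str, totp_secret: bytes):
    salt = os.urandom(16)                # "generated using a secure random value"
    key = hashlib.pbkdf2_hmac("sha256",  # assumed hash; the docs don't name one
                              backup_password.encode(), salt, 1000)  # "1000 rounds"
    iv = os.urandom(16)                  # "a different IV for each account"
    padder = padding.PKCS7(128).padder() # assumed message padding
    padded = padder.update(totp_secret) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = enc.update(padded) + enc.finalize()
    return ciphertext, salt, iv          # only these three ever leave the device
```

The 32-byte PBKDF2 output serves directly as the AES-256 key, which matches their claim that the encryption/decryption key is never transmitted.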
 
This number will increase as the low range Android phone’s processor power increases

Normally you store the number of rounds somewhere (if it's variable), so it's possible that they set a low default and then, as you use it, the device times how long derivation takes and scales the count up. (This is what SQRL does.)
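
A sketch of what that scale-as-you-go calibration could look like, assuming PBKDF2-SHA256; the 0.5-second target and probe size here are arbitrary illustrations, not anything Authy or SQRL actually uses:

```python
# Time a fixed PBKDF2 probe on the actual device, then scale the stored
# iteration count to hit a target duration.
import hashlib, time

def calibrate_rounds(target_seconds: float = 0.5, probe_rounds: int = 10_000) -> int:
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", b"probe-password", b"probe-salt-16byt", probe_rounds)
    elapsed = time.perf_counter() - start
    # Never go below some floor; store the result alongside the salt.
    return max(1_000, int(probe_rounds * target_seconds / elapsed))

print(calibrate_rounds())  # several hundred thousand on a typical desktop CPU
```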

Nevertheless, more rounds isn't much extra strength... What you need, as ever, is a long, strong password with good entropy.
 
Nevertheless, more rounds isn't much extra strength... What you need, as ever, is a long, strong password with good entropy.

Thank you! This is not surprising coming from Paul, but still encouraging as I've spent years trying to convince people of the folly of long key derivation in the face of exponential password strength.

An industry-minimum strength for key derivation is good, and memory-hard algorithms are even better, but adding multiple seconds' worth of rounds on low-power devices is just a waste of time and electricity that needlessly inconveniences the user. If you want to inconvenience your user, then do it the *right* way: include an algorithm in your password creation routine that rejects short and simplistic passwords. Heck, even run it through HIBP and reject any that are found there as well if you've got an Internet connection. It's the *password* that matters (ITPS).

Adding 1 random ASCII character to your password requires 95X more work to brute force, and that work applies only to an attacker, not to you; it's likely <8% more work for you to type the extra character. Extended key derivation is for suckers, and I can't help but feel that Steve beguiled himself into doing this with SQRL via EnScrypt. The 5-second default derivation time in SQRL is not the end of the world, but it's pointless, as it's a substantial delay that must be endured every single time you use the password. It makes users *feel* more secure, and that makes the hair stand up on the back of my neck. I'm *VERY* glad that Steve included a setting to adjust the derivation time in his SQRL Windows client; I use 1 second, which is the minimum.

To put this another way, >95% of passwords can be categorized as either easily guessed or not easily guessed. An easily guessed password cannot be protected even with a PBKDF2 delay of 60 seconds, while a difficult-to-guess password will be secure even with only 500 ms of PBKDF2 delay. Forcing users to wait 5 seconds during key derivation will add relevant protection for 1-5% of your users, yet 100% of users will lose 5 seconds of their lifetimes every single time they log in or decrypt the resource.

It doesn't really matter if Authy is using 1,000 rounds. 5,000 would *only* make brute force 5X harder. You'd have to bump it to 95,000 rounds just to simulate 1 extra random password character. Yes, I understand that most users aren't using random characters for their "master" passwords, but that's why you add inconvenience to password creation rather than the KDF; minimum requirements, though imperfect and arbitrary, are far better than extended key stretching.
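
The arithmetic behind those numbers, as a sanity check (assuming the 95 printable ASCII characters as the alphabet):

```python
# Brute-force cost scales as (alphabet ** length) * kdf_rounds, so one extra
# random printable-ASCII character buys the same factor as 95X more rounds.
ALPHABET = 95  # printable ASCII characters

def cost(length: int, rounds: int) -> int:
    return ALPHABET ** length * rounds

print(cost(10, 5_000) / cost(10, 1_000))    # 5.0  -> quintupling rounds: 5X harder
print(cost(11, 1_000) / cost(10, 1_000))    # 95.0 -> one extra character: 95X harder
print(cost(10, 95_000) == cost(11, 1_000))  # True -> 95,000 rounds ~ one character
```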


However...

The docs don't say if Authy is using PBKDF2 with SHA1 or SHA256/512, but hopefully the latter, so that's what I'll assume. If so, an Intel i7 from 2016 can do ~500,000 PBKDF2-SHA256 hashes in 1 second in *Python* (much more in C). I've found no present-day ARM metric, but I found a test where a Nexus One phone could do ~3,000 PBKDF2-SHA1 hashes per second. SHA256 is ~250% faster, so let's just assume that a modern phone can do 10,000 PBKDF2-SHA256 hashes per second. Anything less than ~1.5 seconds on the slowest devices is fine, so it seems very likely that Authy could use at least 10X more iterations than their docs claim. They should definitely bump this to 10,000 as an easy update and then implement Argon2id as a real upgrade.

Hardened key derivation was a great idea that is certainly necessary for password systems, but it should only be done to a degree that does not impose upon usability at all. If the user has to patiently wait for key derivation, then you're doing it wrong.

If readers want to see what happens when users and developers get suckered into key derivation strength and "secret" extra factors, take a look at VeraCrypt's ridiculous PIM system. That's a rant for another thread, though.
 
In actuality it would be better if they randomized the number of rounds (within a range), as this would act like an extra bit of SALT. (Usually the SALT is stored in the clear, as the number of rounds would be.) In the bizarre chance that two users had the same SALT but a different number of rounds, it would act as though they had different SALTs.
 
I disagree with this one

I did say:
In the bizarre chance that two users had the same SALT
So you really do agree with me. I guess what I was thinking was: if the user has a bad password but a good SALT and a known number of rounds, it's theoretically possible the attacker could work a kind of reverse rainbow table, where they take a known weak-password list, use predictable SALTs and iteration counts, and hope. In that bizarre case, maybe randomizing the number of rounds *might* be an extra protection... maybe...

But I agree that it was a very long shot.
 
I apologize, as I removed my post before seeing that you had replied. I edited that post numerous times and eventually decided that my argument was flawed. I still don't think unrecorded, random iteration counts are worthwhile, but I don't disagree with your post, as randomized iteration counts do affect the birthday attack in ways contrary to my argument. I have since altered my deleted post to the following:


The birthday attack helps us determine the likelihood of users sharing the same salt. 128-bit salts are often used with PBKDF2, so for a 128-bit salt there's a 0.0000000000001% chance of a collision after 820,000,000,000 randomly generated salts; an unlikely, but conceivable event. According to Wikipedia, this is equivalent to the uncorrectable bit error rate of a typical hard disk, and thus suggests that the risk of storage corruption outweighs the risk of a 128-bit salt collision up to 820 billion generations.

For a 256-bit salt, things get far more favorable, as the same 1x10^-15 chance of a birthday collision is reached only after 1.5x10^31 generated salts, so it's not terribly conceivable that two 256-bit salts can collide unless the underlying hardware and/or primitives are broken. That's 15,000,000,000,000,000,000,000,000,000,000 salts to reach a 0.0000000000001% chance of collision.

I guesstimate all BitCoin hashes ever performed to be ~1.3x10^28 as of this post. Each BitCoin round is a double hash of SHA256, so let's just assume BitCoin has performed 2.6x10^28 SHA256 hashes. By this measure, the world will need to continue mining BitCoin at today's rate for 1,140 years before reaching a 1*10^-15 chance of a birthday collision. The BitCoin hashing rate is likely to continue to grow, but it literally cannot grow exponentially for decades to come. While we can't predict what will happen, it seems that our 256-bit hashes have quite a lot of rounds left before random collisions become conceivable.
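
For anyone who wants to check those figures, the small-probability birthday approximation n ≈ sqrt(2·N·p) (N possible salt values, p the target collision probability) reproduces them:

```python
# Birthday-bound sanity check for the collision figures above.
from math import sqrt

def salts_for_collision_probability(bits: int, p: float) -> float:
    return sqrt(2 * (2 ** bits) * p)  # valid approximation for small p

p = 1e-15  # i.e. 0.0000000000001%
print(f"{salts_for_collision_probability(128, p):.3g}")  # ~8.25e+11 (820 billion)
print(f"{salts_for_collision_probability(256, p):.3g}")  # ~1.52e+31
```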


If 2 users have the same password and salt, differing iteration counts or other KDF parameters should indeed avoid a key collision. I don't think that random iteration counts are a good solution, as every authentication requires key validation for every iteration count within the accepted range until finding a match. Also, the birthday attack still suggests that the odds of sharing the same iteration count are extremely high relative to a salt collision. This is a bit like 2 unrelated people who share an extremely rare disease (only 2 cases ever known) also having the same birthday; the shared birthday is not as surprising as it seems, but the odds were still against it.
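
To make that validation cost concrete, here's a sketch of what verifying against an unrecorded iteration count looks like; the range is an arbitrary illustration:

```python
# With an unstored, randomized count, verification must try every count in
# the allowed range until a derived key matches the stored one.
import hashlib, hmac

def verify_unknown_rounds(password: bytes, salt: bytes, stored_key: bytes,
                          low: int = 1_000, high: int = 2_000) -> bool:
    for rounds in range(low, high + 1):
        candidate = hashlib.pbkdf2_hmac("sha256", password, salt, rounds)
        if hmac.compare_digest(candidate, stored_key):
            return True
    return False
# Worst case here is sum(1,000..2,000) ~ 1.5 million hash rounds per login,
# versus exactly one PBKDF2 run when the count is stored next to the salt.
```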

It remains my contention that there are no worthwhile iteration/parameter tricks to play with key derivation. You:

1. Use an acceleration-resistant hash function with fixed parameters that achieve some standardized minimum of resource and/or time consumption.
2. Go fishing. <g>
 
. . . An alternative to electronically storing the TOTP secrets at the time of enrollment would be to print them out and store them in a safe place.

If you are already using Authy and have not stored the TOTP secrets, then at your convenience you could remove 2FA from your accounts one at a time, re-register 2FA, save the secrets this time, and register them in Authy again . . .
Correct me if I am mistaken, but does printing out the QR code, or the numerical string associated with it, work for re-registering with the same or a different app?

I think I tried using both my wife's phone and mine when we used Google Authenticator to register a common family email, and we never got the same code presented. I think we even tried snapping the QR code at the same time, and we still got different codes. So I am not sure printing them out and saving them in a safe place would be of any use... unless we were doing something wrong that I am unaware of.

Comments?
 
work for re-registering with the same or a different app
Yes, it should work. The formula for generating a code is basically HMAC(SecretFromQRCode, CurrentTime rounded to 30-second intervals) -> convert the hash to a 6-digit code.

Accordingly, if the time is EXACTLY the same on both devices (within a margin of 20 seconds or so), then they both should generate the same codes. There is some slack in the verification protocol to allow for time "slop" and accept codes that are slightly old or slightly in the future (according to the server's clock).

You probably had two devices whose clocks didn't match closely enough, and thus you saw different codes. Had you written the codes down in sequence from both devices, you would probably have eventually seen one repeating the other.
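
For the curious, here's the standard RFC 6238 construction in full, which shows why identical secrets and clocks must yield identical codes (checked against the RFC's published test vector):

```python
# TOTP per RFC 6238: HMAC-SHA1 over a 30-second time counter, then
# "dynamic truncation" (RFC 4226) down to 6 digits.
import base64, hashlib, hmac, struct, time

def totp(base32_secret: str, unix_time=None, step: int = 30) -> str:
    key = base64.b32decode(base32_secret.upper())
    now = time.time() if unix_time is None else unix_time
    counter = struct.pack(">Q", int(now // step))      # 30-second window number
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

# RFC 6238's reference secret, base32-encoded; at T=59 the spec expects 287082:
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, unix_time=59))  # -> 287082
```

Two devices only disagree when their clocks fall into different 30-second windows, which is exactly the "slop" the servers allow for.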