If you have spent any time or effort building a Web site, you have had to deal with the password problem. The password problem has many layers, but the juiciest ones are these:

  1. People don't like passwords, so they will use the easiest password they can get away with.
  2. You cannot force anybody to use a proper password manager; only the dorkiest of users use them anyway because "proper password manager" is a dorky phrase just dripping with sarcasm and aspersion.
  3. Unless you turn over all responsibility for authentication to a third party, you are on the hook if one of your employees, contractors or family members loses a laptop or gets malware that exposes your userbase data.

The password problem is this: the more Web-ified, Internet-able or otherwise connected the world becomes, the more important it is for every person to have an unbreakable, unique, non-traceable identification.

It's perhaps the hardest problem in computer science that isn't addressed in any real way. How do you determine this person is the person who should have the authority to do what this person is doing?

The current done thing is to: enact password protocols; require secondary protocols to augment the password protocols; and enforce rigorous access control protocols. This is practiced semi-regularly, depending on how difficult it is to implement and how much the controlling administrator needs to limit their legal exposure.

The done thing should be: each person has full access to their account and data, and they can choose who accesses what from their data.

As it turns out, this is not trivial to implement, nor is it universally agreed upon as a solution.

The classic solution to this is various authoritative authentication services, either federated or centralized. OpenID is one version of the federated system, while examples of a central authority would be Microsoft's Live.com, Facebook or Twitter. OpenID services may or may not be included in some of the centralized authorities, so there is certainly some overlap. Centralized authorities, however, often include some level of personal data acquisition, and the rules that govern the redistribution of said data are opaque at best.

A few years ago, Steve Gibson proposed SQRL as an alternative. The idea has some merit, and is perhaps the best of a bad lot in many ways. Fundamentally, it offers selective, authoritative authentication that the user is somebody, though not necessarily a particular person. Its major failing, and its primary benefit, is that it puts the burden on the user to maintain an ID application. This is a benefit because this is where the responsibility should remain. The failing comes in because the user must have access to a piece of software, a mobile phone, or some other hardware or hardware/software combination for this to work.
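The mechanism that lets SQRL prove "you are somebody" without identifying a particular person can be sketched roughly as follows: a single secret master key deterministically derives a different key for each site, and logging in means answering the site's challenge with that derived key. This is only a sketch — real SQRL derives an Ed25519 keypair and signs the challenge, while this dependency-free version substitutes HMAC for the signature, and the domain names are made up for illustration:

```python
import hashlib
import hmac

def per_site_key(master_key: bytes, domain: str) -> bytes:
    """Derive a deterministic per-site key by HMAC-ing the site's
    domain with the user's master key. (SQRL proper uses the derived
    value as an Ed25519 seed; plain HMAC stands in here.)"""
    return hmac.new(master_key, domain.encode(), hashlib.sha256).digest()

def sign_challenge(site_key: bytes, challenge: bytes) -> bytes:
    """Answer the server's login challenge, proving possession of the
    per-site key. (SQRL signs with the Ed25519 private key; an HMAC
    tag illustrates the same prove-you-hold-the-key idea.)"""
    return hmac.new(site_key, challenge, hashlib.sha256).digest()

master = hashlib.sha256(b"the user's secret master key").digest()

# The same master key yields the same identity at the same site...
assert per_site_key(master, "travelblog.example") == per_site_key(master, "travelblog.example")

# ...but unlinkable identities across different sites.
assert per_site_key(master, "travelblog.example") != per_site_key(master, "bank.example")
```

The point of the derivation is that the site never learns the master key, and two sites cannot correlate the same user — which is exactly why losing access to the one app holding the master key locks you out everywhere.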

An example: say you are overseas, where your mobile phone does not work with the local carriers, and you are in an Internet cafe. You cannot log in to your travel blog to make an update without the SQRL app. You may have a SQRL app on your laptop, but you either did not bring the laptop, or you purposely didn't install a SQRL application on it in case it gets stolen. Your mobile phone likely has biometric and other security protections that a laptop does not, so this is not an insane idea. We'll probably never know how many data breaches are the result of somebody losing a laptop, but it's definitely greater than zero.

One solution to this would be to provide some kind of federated SQRL service in much the same mold as OpenID. When you are disconnected from your mobile phone and your laptop, you can still use a plain old username/password to access the SQRL functions you need as a user. The problem with this is manifold: your borrowed computer may be compromised, so now you've given your username/password to God knows who; one of the purposes of SQRL is to make the process as simple as serving a QR code or URL, and forcing a network transaction to a third-party broker breaks the concepts behind the protocol; federated services suffer from the "Heartbleed problem", wherein a failure to keep up with security updates by one actor may affect all of the others through simple neglect.

Closed systems do not suffer from these problems. This is not to say that they don't have their own set of problems, but lack of enforced compliance is not one of them. I've used intranet and VPN systems that enforce 2FA and strict password protocols to the point that ordinary applications do not work properly and changes must be made to my workflow. Whether this makes for a more secure environment rather than a more annoying environment depends entirely on who gets blamed for what when things go wrong.

Let's start from scratch. In the good old days, when being on the Internet meant you had an account on a Unix server at a university, a simple username and password were adequate. Computer time was expensive, so it benefitted the university to pay somebody to look at logs and ensure that multiple people were not using the same account, and that password guessing was limited to one guy at one addressable station hammering password attempts in a noticeable and ejection-worthy manner. Shitty passwords weren't a problem for purely physical reasons. Usernames were canonical, if ephemeral. Whether by design or happenstance, the Original Neckbeards hit on the correct thing.

Fast forward to the early- to mid-90s. This same protocol extended quite well to the dial-up public Internet phase. There were certainly exceptions--for example, AOL was a famous locus of prodigious spam--but your standard local ISP managed to keep shenanigans to a minimum in much the same way as a university computer lab proctor.

At some point, your email address became your canonical ID. There's no fixed point when this occurred, but it was in full swing by the late 90s/early 00s. Oh, sure, you may have a username/password, but if you forgot one or both, we'll let you know by sending it to your email account on file. This is, fundamentally, the same ring of trust that was previously occupied by wizened Unix admins and their junior acolytes in the computer lab. The failure here lies in the inherently untrustable nature of email accounts and the intractable instinct of email administrators who do not want to field tech support calls from people who can't get their email. Email became conflated with authentication, with no prior reason to trust it as authoritative. You were your email address because your easily discovered address and your easily guessable or recoverable password were considered, for no reason at all, proof.
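The "we'll send it to your email on file" ring of trust usually boils down to a reset-token flow like the one sketched below. This is a generic illustration, not any particular site's implementation — the user table, the one-hour window, and the function names are all invented for the example. Note where the trust lands: whoever can read the inbox owns the account.

```python
import hashlib
import secrets
import time

# In-memory stand-ins for a user table and a reset-token table.
USERS = {"alice": {"email": "alice@example.com"}}
RESET_TOKENS = {}  # sha256(token) -> (username, expiry timestamp)

def request_reset(username: str) -> str:
    """Mint a single-use token; a real system would email a link
    containing it. Only a hash of the token is stored, so a leaked
    database does not expose live reset links."""
    token = secrets.token_urlsafe(32)
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    RESET_TOKENS[token_hash] = (username, time.time() + 3600)  # 1-hour window
    return token  # would be embedded in the emailed URL

def redeem_reset(token: str):
    """Whoever presents the token is treated as the account owner --
    the entire ring of trust collapses onto the email inbox."""
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    entry = RESET_TOKENS.pop(token_hash, None)  # pop: single use
    if entry is None or entry[1] < time.time():
        return None
    return entry[0]

t = request_reset("alice")
assert redeem_reset(t) == "alice"   # token works once...
assert redeem_reset(t) is None      # ...and only once
```

Everything here is sound engineering around the token itself; the weak link is the unauthenticated hop where the token travels through email.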

Thus was born 2FA and a passel of protocols to deal with the inherent security problems of email.
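In practice that 2FA layer most often means TOTP codes from an authenticator app, and the whole mechanism (RFC 6238) fits in a few lines: HMAC the current 30-second time-step counter with a shared secret and truncate to six digits. A minimal stdlib-only sketch:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 the time-step counter with the shared
    secret, dynamically truncate (RFC 4226), keep the low `digits` digits."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((at if at is not None else time.time()) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation offset from the last nibble
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890", T=59s
assert totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8) == "94287082"
```

Note the irony relative to the essay's complaint: the shared secret is typically delivered during enrollment on the very session — and recoverable through the very email flow — that 2FA is supposed to shore up.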