Not addressing the Cryptography Lag risks more frequent and more severe data breaches, potentially costing companies millions of dollars.
But what is the cryptography lag? Why does it exist? And what can we do to catch up?
Defining the Cryptography Lag
We’re witnessing an explosion of companies in the AI space. These are all made possible by decades of artificial intelligence research, beginning at a summer workshop at Dartmouth in 1956. The same has not been true for cryptography, even though one of the four proposers of the AI workshop was Claude Shannon, who was also instrumental in founding modern cryptography.
This ‘lag’ between academic cryptography and the cryptography deployed in production systems is something we’re actively working to reduce at Evervault. There are, however, a few reasons why the lag has existed.
Quantitative reasons for the lag
1. Cost of encryption
Historically, using cryptography in an application meant sacrificing performance. But, with Moore’s Law, the cost of deploying encryption has fallen to the point where it is cheap and widely available, and clients can process more complex algorithms.
A decade or so ago, developers saw HTTPS as too expensive. Most services now run over HTTPS to ensure secure communication between client and server (encryption in transit), but as TLS terminates at the server, it doesn’t protect against a compromised server. We’ve designed Evervault so that plaintext data never touches your server, meaning plaintext data cannot be breached even if your server is compromised.
Moore’s Law has also made brute-force attacks easier, letting attackers go after longer encryption keys. But encryption has an answer to Moore’s Law:
“The sizes of encryption keys are measured in bits, and the difficulty of trying all possible keys grows exponentially with the number of bits used. Adding one bit to the key doubles the number of possible keys, adding ten increases it by a factor of more than a thousand.”
— Minimal Key Lengths for Symmetric Ciphers to Provide Adequate Commercial Security (1996)
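The arithmetic behind that quote is easy to check for yourself. The sketch below (illustrative only; the key sizes named in the comments are just familiar examples) shows how the keyspace grows as bits are added:

```python
# How key length affects brute-force difficulty: the number of
# possible keys grows exponentially with the number of bits.
def possible_keys(bits: int) -> int:
    """Number of distinct keys for a key of the given bit length."""
    return 2 ** bits

# Adding one bit doubles the number of possible keys...
assert possible_keys(129) == 2 * possible_keys(128)
# ...and adding ten bits multiplies it by 1024 (more than a thousand).
assert possible_keys(138) == 1024 * possible_keys(128)

for bits in (56, 128, 256):  # e.g. DES, AES-128, AES-256 key sizes
    print(f"{bits}-bit key: {possible_keys(bits):.2e} possible keys")
```

This is why a constant-factor speedup from Moore’s Law is answered by a constant *additive* increase in key length: doubling attacker compute power costs the defender only one extra bit.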
2. Practical limits
Some theoretical cryptography remains impractical because its computational cost is still too high. Two notable examples are fully homomorphic encryption (performing arbitrary computations on encrypted data while it remains encrypted, without needing a secret key) and program obfuscation (encrypting programs while keeping their functionality intact).
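To make the homomorphic idea concrete, here is a toy sketch using textbook RSA with tiny, deliberately insecure parameters (nothing like a real deployment). Textbook RSA is only *partially* homomorphic: ciphertexts can be multiplied, and the result decrypts to the product of the plaintexts. Fully homomorphic encryption generalizes this to arbitrary computation, which is what makes it so costly:

```python
# Toy demonstration of a homomorphic property in textbook RSA.
# WARNING: parameters are tiny and insecure; illustration only.
p, q = 61, 53          # toy primes
n = p * q              # public modulus: 3233
e = 17                 # public exponent

def encrypt(m: int) -> int:
    """Textbook RSA encryption: m^e mod n."""
    return pow(m, e, n)

a, b = 7, 3
# Multiplying two ciphertexts yields the encryption of the product:
#   Enc(a) * Enc(b) mod n == Enc(a * b mod n)
assert (encrypt(a) * encrypt(b)) % n == encrypt(a * b)
```

Supporting one operation (here, multiplication) is cheap; supporting *arbitrary* computation on ciphertexts is what pushes fully homomorphic schemes beyond today’s practical limits.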
3. Standardization and time
New mathematics and proprietary cryptography are red flags when deciding to use encryption products and services.
Time is the true test of cryptography: a scheme that survives decades of use and scrutiny has earned the community’s confidence. Time allows the cryptography community to rigorously test and study new schemes. Our cryptography schemes are outlined in our docs.
In this sense, the Cryptography Lag is necessary. If the lag did not exist, insecure cryptography would be put into production prematurely — making users and applications vulnerable. Cryptographic standardization exists to avoid the insecurity of untested systems. 
Qualitative reasons for the lag
1. Patenting cryptosystems
New cryptosystems are often patented. For example, Hellman, Diffie, and Merkle filed a patent for public-key cryptography. 
While patenting cryptosystems may contribute to the Cryptography Lag (since the inventors are granted a monopoly on the cryptosystem’s use), filing a patent has an important upside over keeping the design secret.
Under patent law, a monopoly is granted for a set time in exchange for disclosure to the public on how to make or practice the invention, i.e. the inventor must disclose to the public how their design works. In cryptography, public knowledge of how a cryptosystem works is essential.
Indeed, a corollary of Kerckhoffs’s Principle (that a cryptosystem should be secure even if everything about the system, except the key, is public knowledge) is that a developer should only use cryptosystems about which everything is known.
Encryption is a dual-use technology with both civil and national security applications and implications. As a result, there have historically been governmental restrictions on civil applications of encryption.
Recent examples include the EU Council’s resolution “Security through encryption and security despite encryption” and the introduction of the Lawful Access to Encrypted Data Act in the US Senate.
2. Ease of misuse
For a long time, encryption was hard to use (and cost too much) and easy to misuse. This meant that developers did not use it. We’ve designed Evervault to be easy to use and hard to misuse. You can see an overview of Evervault vs. in-house encryption here.
Removing the lag
Historically, the Cryptography Lag made sense: the cost of encryption was too high, the theoretical limits of cryptography were far beyond the practical limits, and cryptography was both too complex and too easy to misuse. But as the cost of encryption approaches zero, we’re headed toward a future where apps and developers never touch data in plaintext. Everything can be encrypted.
“To put everything online “in the cloud,” unencrypted, is to risk an Orwellian future.”
At Evervault, we’ve built the first encryption platform. We equip developers with easy-to-use tools, making it easy to encrypt, process, and share data without ever touching it in plaintext. If you’d like to help reduce the Cryptography Lag, we have curated a collection of seminal cryptographic papers. Even better, you can start encrypting today by creating an Evervault account.
 NIST (then the National Bureau of Standards) standardized the Data Encryption Standard (DES) in 1977, after a process that began in 1973. The Advanced Encryption Standard (AES) replaced DES in 2001, after a process beginning in 1997. On July 5, 2022, NIST announced the first four quantum-resistant cryptographic algorithms it had selected for standardization.
 Hellman, Diffie, and Merkle’s patent expired in 1997, and it may not have been valid in the first place: they publicly disclosed the idea of public-key cryptography more than a year before filing, which would have invalidated the patent.