I’ve been asked a few questions about the FFX modes of AES that NIST recently specified in their SP 800-38G, Recommendation for Block Cipher Modes of Operation: Methods for Format-Preserving Encryption. Here’s an attempt to answer some of them.
Isn’t FFX very slow?
No, not really. Someone even asked me if it used “extensive computing resources.”
The bottom line is that symmetric encryption is very, very fast. Two of our excellent QA engineers even wrote a blog post that talks about this very issue here. FF1 is slower than other modes of AES, but that probably doesn’t matter. Encrypting typical data with AES-CBC probably takes no more than a few thousand clock cycles, and fewer still if you use a highly optimized implementation, such as one that takes advantage of the hardware acceleration that the AES-NI instructions and similar technologies give you.
For typical data, about 80 percent of the time that AES-CBC takes is spent setting up the key and initialization vector that the encryption algorithm needs; only about 20 percent is the AES encryption itself. An FF1 encryption takes roughly the equivalent of about 10 AES-ECB calls, so it might take about two or three times as long as creating a random IV for use in AES-CBC. That’s still very fast.
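That “about 10 calls” figure comes from FF1’s structure: it is a ten-round Feistel network in which each round calls an AES-based pseudorandom function. The sketch below is not FF1 — it swaps in HMAC-SHA256 as a stand-in round function and handles only even-length decimal strings — but it illustrates why a format-preserving encryption costs roughly ten PRF calls while still mapping digits to digits of the same length.

```python
import hashlib
import hmac

def _round_value(key: bytes, round_no: int, half: str) -> int:
    """Stand-in round function (FF1 uses an AES-based PRF instead):
    HMAC-SHA256 of the round number and one half, as a big integer."""
    mac = hmac.new(key, f"{round_no}:{half}".encode(), hashlib.sha256).digest()
    return int.from_bytes(mac, "big")

def toy_fpe_encrypt(key: bytes, digits: str, rounds: int = 10) -> str:
    """Encrypt an even-length decimal string to a decimal string of the
    same length, using a ten-round balanced Feistel network."""
    assert digits.isdigit() and len(digits) % 2 == 0
    u = len(digits) // 2
    a, b = digits[:u], digits[u:]
    for i in range(rounds):
        # Mix the round value into the left half modulo 10**u, so the
        # result is still a u-digit decimal string (format preserved).
        c = (int(a) + _round_value(key, i, b)) % 10 ** u
        a, b = b, str(c).zfill(u)
    return a + b

def toy_fpe_decrypt(key: bytes, digits: str, rounds: int = 10) -> str:
    """Invert toy_fpe_encrypt by running the rounds backwards."""
    u = len(digits) // 2
    a, b = digits[:u], digits[u:]
    for i in reversed(range(rounds)):
        c = (int(b) - _round_value(key, i, a)) % 10 ** u
        a, b = str(c).zfill(u), a
    return a + b

# A 16-digit "card number" stays a 16-digit number after encryption.
key = b"example key"
ct = toy_fpe_encrypt(key, "4111111111111111")
assert ct.isdigit() and len(ct) == 16
assert toy_fpe_decrypt(key, ct) == "4111111111111111"
```

The work per encryption is dominated by the ten PRF calls in the loop, which is exactly the “roughly 10 AES calls” cost estimate above.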
And it’s so fast that it’s essentially lost in the noise when compared to the other parts of an encryption operation that enterprise software does: things like setting up a TLS connection, making a network connection or logging events in a database. If you’re just doing lots of FF1 encryption operations, you might notice a very minor use of computing resources. But if you look at the time required to fetch a key from a key server and log that event, you need to be doing many, many, many FF1 encryption operations before the time to do them becomes comparable, or even noticeable.
Does FFX only work with 128-bit AES keys?
No. A cursory look at the SP 800-38G document shows that that’s simply not the case. Any AES key is fine for use by the FFX modes: 128-bit, 192-bit or 256-bit. This puzzling question seems to come from people misinterpreting “128-bit block cipher” as meaning a cipher that uses a 128-bit key, when it actually means one that operates on 128-bit blocks. AES is a 128-bit block cipher, but it can use keys of 128, 192 or 256 bits.
Are the FFX modes limited in how many characters they can handle or the types of characters that they can handle?
No. The FF3 mode does have some significant limitations on the size of the plaintext (roughly 192 bits) and tweak (64 bits), but those limitations don’t apply to the FF1 mode, which can encrypt fields of up to 2^32, or a bit over 4 billion, characters. If that particular limitation becomes a problem, then you might want to look at your database schema. There are about 500,000 characters in a typical book, so that’s enough to encrypt about 8,000 books! Typical database fields aren’t 8,000 books long.
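A quick back-of-the-envelope check of that limit in Python, using the rough 500,000-characters-per-book figure from above:

```python
FF1_MAX_CHARS = 2 ** 32        # FF1's maximum input length, in characters
CHARS_PER_BOOK = 500_000       # rough per-book character count used above

print(FF1_MAX_CHARS)                    # 4294967296: a bit over 4 billion
print(FF1_MAX_CHARS // CHARS_PER_BOOK)  # 8589: about 8,000 books
```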
Another vendor tells me that they have a proprietary way to encrypt that’s more secure than the FFX modes. Should I believe them?
Almost certainly not. Since Auguste Kerckhoffs published La Cryptographie Militaire in 1883, we’ve known that keeping an encryption algorithm secret is a bad idea. This idea has survived over 130 years and is still widely accepted by the cryptographic community. Few ideas survive that long; this one has because it’s right, and ignoring it is probably a very bad idea, no matter how clever the designer of the secret algorithm is.
In fact, the generally accepted minimum level of review that a new encryption algorithm needs to undergo today is the publication of a peer-reviewed proof of its security. Without that, no new encryption idea should be taken seriously.
The reason that proofs are now required is that people make mistakes. Even the very smart guys who invent new encryption algorithms. And because they realize that, they don’t rely on their own judgement that a new approach is or is not secure. Instead, they rely on a rigorous, mathematical proof of security. That’s much less prone to error than even the smartest person.
So the fact that a new approach has not survived even minimal peer review should be a huge red flag: it probably holds surprises that its designer or designers don’t know about, and you should have a very, very compelling reason before accepting the huge risk that comes with using it to protect your data.
How important is the recognition of FFX by NIST?
The current gold standard for encryption and data protection is the US government’s FIPS 140-2, Security Requirements for Cryptographic Modules. And although many security professionals take great delight in debating exactly how useful or meaningful this standard is, the reality is that complying with it is required by many users of encryption. Many government agencies require the use of FIPS 140-2 validated encryption, and the FIPS 140-2 standard is referred to in standards and regulations that cover many other industries. Using FIPS 140-2 validated encryption lets you easily comply with all of these.
Using encryption that does not comply with applicable standards can leave you open to having your auditors declare that you’ve had a data breach and need to notify your customers of that fact, even though the data was encrypted. This is the ‘safe harbor’ provision that appears in the breach-notification sections of various data-privacy regulations, and it typically applies only to encryption that meets the relevant standard. So from a practical point of view, the publication of SP 800-38G and the ability to get implementations of the FFX modes FIPS 140-2 validated is of great importance.
Is FFX encryption the best/only way to protect sensitive data?
Not necessarily. Encryption, tokenization, masking or de-identification all have their own uses. Which one is the best really depends on many, many factors. But for cases where encryption turns out to be the best approach, the format-preserving capability of the FFX modes can give you an easy way to get lots of very useful capabilities. No matter the choice, the first rule of data security is don’t roll your own, and the second rule is don’t rely on people rolling their own to tell you it’s secure.
About the Author
Luther Martin, HPE Distinguished Technologist, is a frequent contributor to blogs and articles. Recent contributions include White-box Cryptography in the April ISSA journal, and Is the Need for Speed Real? and FFX Modes of the AES Encryption Algorithm Specified in NIST’s SP 800-38G on the Voltage.com blog.