There’s the bare minimum that’s required by law, then there’s actual security.
For the sake of this discussion, we can assume that Xinja is implementing the following:
- PCI DSS 3.2.1
- Full adoption of the entire standard, no compromises
- The procedure of how/when new standards are adopted
- Clear OAIC mandatory breach notification procedures
If you’re not, let us know so we can cancel our accounts before you lose your banking license.
You’ve taken an excellent first step by asking the community for feedback. There are so many areas to address from how you secure your endpoints (employee machines, server infrastructure, containers, compute instances), identity management, log and event correlation/analytics, front-end application security, how you handle internal secrets (passwords), and the way you deal with a data breach.
Even how you handle authentication for corporate users is essential. E.g., suppose you use a single shared secret for your corporate Wi-Fi, which is problematic, but you didn’t realise it was a problem and published the detail anyway. Someone in the community can then flag it with Xinja and say “hey, 802.1X-authenticate your users with EAP-TTLS via internal-CA-issued certificates plus 2SV using hardware tokens”, and you avoid a rookie mistake.
Here are some things that are top of mind so far.
Caveat emptor: neither I nor anyone else can know everything. The following is not an exhaustive list, just a starting point. It also hasn’t been exhaustively checked for errors, so feedback and corrections are welcome and entirely warranted, just like in security practice generally.
- Publish how you handle everything, which may include:
- Breach notification / rectification
- Details on the cyber kill chain if you are breached and PII has been exfiltrated or customer accounts defrauded
- The infosec community is split on whether you should publish the details of a bug or breach when you are 100% sure (i.e., whatever evidence you have would hold up to public scrutiny) that no data was harmed as a result.
- How can you be 100% sure? You can’t. Publish everything; be open.
- Encryption and hashing techniques in use
- Your code (see below)
- Have two sets of public documentation
- Set 1: Easy to understand for general consumption
- Set 2: Detailed information that forms the collateral you stand behind and can be publicly scrutinised for flaws
Adopt full transparency in every conceivable way you can; it won’t hurt your market advantage or weaken your security posture, and you should validate those claims with the public themselves, not against an internal stakeholder’s pride.
- Follow all their recommendations; don’t make compromises (unless legislation forces you to, and then publicly lobby for a change in the law)
- Also, review the attack vectors published by the security community (both black hat and white hat), and those leaked into the public domain via Wikileaks and other sources (e.g., you can guarantee every single leaked NSA tool has been iterated on and improved since the lid blew off the Five Eyes operation).
- Never put all your eggs in one basket
- A heterogeneous solution that may or may not use commercial products is better than reliance on, and lock-in with, a homogeneous one
- Treat every solution with a healthy level of scepticism, nobody can be 100% confident any given solution is infallible
- Consider publishing your code, both front and back, as open source
- Your code is not your secret sauce; the services you provide your customers are, and software features can always be copied whether you release your code or not, so you may as well develop it in the open in the first place (and make headlines the following day)
- A git repo open to public scrutiny both saves you $ on often-useless security “experts” and gives other developers the opportunity to issue pull requests where they’ve found bugs (whether or not those bugs impact security)
- All of the rest of this post is for naught if you incorrectly implement the protocol, standard, function, or whatever
- More eyes on the problem are better; even Google and now Microsoft (!!) develop significant products first in the open-source community, and it’s prevalent amongst other vendors too
Blockchain
There are several use cases here where blockchain would be useful:
- Personally identifying a customer instead of the ridiculously easy-to-bypass low bar of “name, address, DOB, mobile #” that is often adopted
- The key pair generated gives you a robust method of authentication that is not a password (but cannot be used in isolation; see below)
- Implementation for customer transactions could be used to detect fraud, validate peer-to-peer transfers, etc.
- An ICO could be a way to raise investment capital but also provide a natural investment entry point for customers rather than direct holdings on the ASX/NASDAQ
- If Australian banking regulations get in the way, then petition to change them where it makes sense to do so
Let’s assume public/private keys on a blockchain are too hard for every customer, and you revert to passwords, which are vulnerable for two significant reasons:
- Poor implementation by a provider
- Poor selection by the user
The first you can defend against with infosec best practice, but the second is more difficult.
The idiom “you can lead a horse to water” applies here, but you shouldn’t give up, since you can at least try to influence good user behaviour.
- Increase mandatory entropy for PINs, which means adopting 8-digit minimums, not 4-digit
- When customers complain that’s too hard to remember, have an easy-to-understand reference they can review so they appreciate why you’ve adopted 8 digits
- Check passwords against those obtained from previous data breaches; I suggest Troy Hunt’s publicly available Pwned Passwords service as (as far as I know) it’s the most extensive public database of breached passwords, at over 500 million passwords
- NIST recommends this as well
- Defend against rainbow table attacks by using a one-way hash with a large, unique salt per password (never store passwords in any reversible, i.e., encrypted, form)
- Implement key stretching as part of your hashing (key derivation) function
- PBKDF2 with SHA-512 digests, not least because task-specific hardware capable of attacking <512-bit SHA digests is readily available thanks to crypto-mining ASICs
- Adopt the infosec community’s minimum password complexity, which means a password that an entire C&C botnet has difficulty breaking, not what you personally think is hard
- This may mean a 6-word passphrase (or whatever minimum entropy is required at the time) is fine and won’t necessarily contain an uppercase letter, number, or special character (humans are incredibly predictable and often put those things in the same places; see password re-use databases for examples of all the passwords people consider to be good, but aren’t)
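The breach-check and storage points above can be sketched in a few lines. A minimal illustration in Python (the function names, 32-byte salt, and iteration count are my own assumptions, not Xinja specifics): query breach corpora via a k-anonymity prefix the way the Pwned Passwords range API works, then store only a salted, stretched PBKDF2-HMAC-SHA512 digest.

```python
import hashlib
import hmac
import os

# Step 1: breached-password check via k-anonymity, as the Pwned Passwords
# range API works: hash locally, send only the first 5 hex characters of the
# SHA-1, and match the returned suffixes on your own side.
def pwned_prefix_and_suffix(password):
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]  # send digest[:5]; compare suffixes locally

# Step 2: storage. PBKDF2-HMAC-SHA512 with a unique random salt per password
# (defeats rainbow tables) and a high iteration count (key stretching).
ITERATIONS = 600_000  # illustrative; tune to your hardware, revisit regularly

def hash_password(password):
    salt = os.urandom(32)
    key = hashlib.pbkdf2_hmac("sha512", password.encode("utf-8"), salt, ITERATIONS)
    return salt, key  # store both; the salt is not a secret

def verify_password(password, salt, key):
    candidate = hashlib.pbkdf2_hmac("sha512", password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(candidate, key)  # constant-time comparison
```

Note the constant-time comparison on verification; a naive `==` can leak timing information.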
MFA / 2SV (multi-factor authentication / 2-step verification)
- No SMS, ever, even as a backup. See the next topic.
- OTP provided by your app (i.e., a push notification that requires approval, similar to Apple’s and Google’s current best practice)
- Email approval required for new location/device logins in addition to the 2SV
- Allow users to use Google Authenticator and implement 8-digit codes (not 6). Google recently committed support for 6, 7, or 8 digits to the GitHub project. Use the max.
- FIDO U2F / FIDO2 adoption; perhaps even consider making it mandatory once a balance is over a certain amount
- See yubico.com
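The 6- vs 8-digit point is just a parameter of standard TOTP (RFC 6238). A minimal sketch in Python, using the SHA-1 variant Google Authenticator implements (function name and defaults are illustrative):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, digits=8, step=30, now=None):
    """RFC 6238 TOTP: HMAC the 30-second counter with the shared secret,
    dynamically truncate (RFC 4226), keep the last `digits` decimal digits."""
    if now is None:
        now = int(time.time())
    counter = struct.pack(">Q", now // step)        # 8-byte big-endian counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                         # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

An 8-digit code buys two extra digits of brute-force resistance at zero cost to the server; the algorithm is otherwise identical.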
SMS
- Don’t use it, even as an account recovery method, and do not accept it as a way to identify a customer’s account. It was never secure and is an easy attack vector for even the most inexperienced attacker.
- The two most common attack vectors are mobile number port-out scams and SIM jacking/swapping
- NIST no longer recommends SMS-based 2SV
PII (personally identifiable information)
- How it’s stored
- Whom it’s shared with
- Easy PII controls (deny, permit, completely and securely erase)
- Easy = not buried deep inside some obscure part of the app; even the most unsophisticated user should be able to handle it
- If you enable 3rd party integration, ensure your 3rd parties adopt the same rigorous security standards you do
- Don’t offshore PII
- Multi-cloud deployments are fine, but do not deploy into availability zones that aren’t within Australian borders (i.e., any zone where reaching it means a packet transits an undersea cable with a landing point that isn’t on Australian soil)
- Encrypt it at-rest, publish how it’s done
- Encrypt it in-flight, publish how it’s done
- Particularly important, as many businesses consider a physical optical link between two locations under their (or a trusted 3rd party’s) control to be secure. That’s a fallacy; optical fibre can be tapped and data siphoned.
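As one concrete illustration of enforcing encryption in flight (using Python’s ssl module purely as an example; the same policy applies at load balancers and internal service-to-service links), a client context that refuses anything below TLS 1.3 and always verifies the peer:

```python
import ssl

def strict_client_context():
    """Client-side TLS context that refuses anything older than TLS 1.3,
    requires a valid peer certificate, and checks the hostname."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # no TLS 1.2/1.1/1.0 fallback
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```

Publishing the equivalent of these few settings is exactly the kind of “how it’s done” documentation described above.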
Web/app access and in general
Adopt the latest and highest standards, iterate as new ones become available, immediately retire old ones, and have an easy, publicly published policy and method for how you do it
- TLS 1.3 for all HTTPS endpoints
- Elliptic curve (ECDSA) instead of RSA for certificates issued by public CAs and internally
- Key management where no single person holds all the parts to a “super user” password
- E.g., no single individual in the business should hold an admin/root or “superuser” password; it should be stored in a key store such as HashiCorp’s Vault
- Vault, and I’m sure other projects, ensures the “superuser” key is broken up between numerous people, with each person holding only one part. Typically you might have half a dozen people with key shares and require a quorum of, say, 4 of them to reconstruct it
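The quorum scheme Vault uses for its unseal keys is Shamir’s secret sharing: a random polynomial whose constant term is the secret, with each key holder given one point on the curve. A toy sketch in Python (the field prime and integer encoding are illustrative only; use a vetted implementation in production):

```python
import random

# Toy k-of-n Shamir secret sharing over a prime field: the scheme behind
# Vault's unseal keys. Any `threshold` shares reconstruct the secret; fewer
# reveal nothing about it.
PRIME = 2**127 - 1  # a Mersenne prime large enough for a demo secret

def split(secret, shares, threshold):
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME  # Horner evaluation
        return acc
    return [(x, f(x)) for x in range(1, shares + 1)]

def combine(points):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

With `split(secret, 6, 4)`, any 4 of the 6 key holders can reassemble the key; 3 or fewer learn nothing, which is exactly the “no single superuser” property described above.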