File Security in the Age of Ransomware

File Security in the Age of Ransomware: What should Boards and CIOs ask about their Security Architecture

image from: https://www.clearswift.com/blog/2017/05/17/how-top-cyber-security-teams-neutralize-ransomware-attacks

In the face of spam, ransomware (like WannaCry), viruses, and malicious human actions, what does a Board, CIO, or Security Professional need to have at their fingertips to further protect their environment?

  • What are the minimum questions to ask to start generating a Security Risk Management Profile?
  • What overall strategies should they employ, and at what risk levels?
  • How does file security affect Disaster Recovery (someone deletes the network drive) and/or Business Continuity (a fire destroys the home office building, so we need new places to access our files), also known as DR/COOP?

The sections below illustrate the minimum one should know (and do) to mitigate risk and realize the full security architecture an organization can and should employ. Security is a large topic (heavily intertwined with DR/COOP), so we only discuss the software piece of security architecture, not, per se, network security, insider threats, or datacenter hardware security (like RAID or backup power).

Architecture Strategies from Minimal to Comprehensive. Ideally an organization should employ all of the below, but cost will play a big part in what is instituted.

  1. Backup (Now!) with Multi Access to the Data
  2. Ensure one of your backup strategies is Versioning (no, really)
  3. Use a 3-2-1 Backup Strategy
  4. Use an Adaptive Security Architecture Solution Strategy
  5. Visualize it and Review Quarterly

The questions at the end of each section are suggestive and:

  • meant to help establish an organizational (or individual) risk profile. It isn’t an exhaustive list, but it is enough to get started.
  • meant to get the board thinking (but not over-prescribing). Afterward, the Board or CIO should agree with the risk profile created and, if not, add more resources (people, money, or time) to raise the risk profile to an acceptable level.

Lastly, the Board or CIO should demand tested and verified actions derived from the questions. Why? Because untested, unverified backup strategies are basically non-existent backups.

I. Backup (Now!) with Multi-Access to the Data

Back. Up. Now.

I’m not being cheeky; I’m being realistic. First, an organization should back up. It’s that simple. Most tech professionals with a phone have a Google Account, a Dropbox account, and a Box account, use the AT&T, Sprint, Samsung, or Verizon clouds, and have set their devices to auto backup. Why? Because they know two things: 1) it’s cost effective (free in some cases), and 2) you will need to recover your data. It’s not an if, it’s a when.

However, for an organization, backup is expensive. Because it is cost prohibitive to use two or three clouds, we transfer the risk to the cloud provider by asking them how they back up and how best they can restore: basically, all the questions we should ask ourselves.

This risk transfer doesn’t absolve the asking organization of the due diligence required for security. Below are questions any Board or CIO should ask to start establishing their risk profile.

  • Question: Do we back up? (The only acceptable answer is yes.)
  • Question: When was the last backup? How far back can we go?
  • Question: When was the last time we did a restore? Was it successful?
  • Question: How long would it take to retrieve a file, a drive, an environment?
  • Question: Are any of the copies a) offline and/or b) read-only?
  • Question: Is there a way to get the files from another computer, terminal, or location? How secure is that location? Have we tested it, and when?
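To make these questions concrete, here is a minimal Python sketch of the kind of automated check that can flag a stale backup or an overdue restore test. The timestamps, thresholds, and function name here are illustrative assumptions; in practice the dates would come from your backup software's logs or API, and the thresholds from your own policy.

```python
import datetime

# Hypothetical records; real values would come from backup-tool logs or an API.
last_backup = datetime.datetime(2017, 5, 15, 2, 0)
last_restore_test = datetime.datetime(2017, 2, 1, 9, 0)

def backup_risk_flags(now, max_backup_age_hours=24, max_restore_test_days=90):
    """Flag a stale backup and/or an overdue (i.e. unverified) restore test."""
    flags = []
    if now - last_backup > datetime.timedelta(hours=max_backup_age_hours):
        flags.append("BACKUP_STALE")
    if now - last_restore_test > datetime.timedelta(days=max_restore_test_days):
        flags.append("RESTORE_UNTESTED")
    return flags
```

A check like this, run on a schedule, turns "When was the last backup?" from a quarterly board question into a continuously monitored metric.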

How does this help with virus and ransomware?

  • Having a backup is self-explanatory — a bad file or deleted file can be retrieved and work can continue in a timely manner.
  • Along with the correct credentials — if you don’t have access from one location, access can be gained from another; or if a computer won’t boot, one can go to another computer that will boot and get the files that way.

II. Version!

Versioning

As we know, auto backup/auto versioning is the holy grail for distracted consumers and employees. Humans make mistakes and rarely do anything consistently day to day. There are entire industries around fixing or mitigating human mistakes. How then to employ a good backup strategy in the face of human distraction? Include Versioning as part of the security architecture.

I know, in today’s day and age, who doesn’t version? Well, it has been my experience that some organizations believe auto replication of files (an exact copy in multiple places) is also a backup. It is a backup; it is not a backup strategy. Why? It’s great if a file replicates somewhere; it’s bad if a corrupt or encrypted file auto replicates and wipes out a good copy.
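The difference is easy to see in code. Here is a small Python sketch contrasting plain replication with timestamped versioning; the function names and the timestamp-suffix scheme are invented for illustration, not a real product's API.

```python
import shutil
import time
from pathlib import Path

def replicate(src: Path, dst: Path) -> None:
    # Plain replication blindly overwrites the destination:
    # a corrupted or encrypted src silently destroys the good copy.
    shutil.copy2(src, dst)

def version(src: Path, backup_dir: Path) -> Path:
    # Versioning writes each backup as a new timestamped file,
    # so earlier good copies survive a bad replication.
    # (Second-level timestamps; real tools use finer granularity.)
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dst = backup_dir / f"{src.name}.{stamp}"
    shutil.copy2(src, dst)
    return dst
```

If ransomware encrypts the source file, `replicate` dutifully pushes the encrypted copy over the good one; `version` leaves the earlier good copy untouched.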

Versioning should be part of any good backup strategy.

  • Question: Do we version our backups? (The only acceptable answer is yes.)
  • Question: How many versions, and for how long?
  • Question: Are they true versions (copies) or bit versions? (Important to know: if you lose the original file, can you reconstruct it from bits?)
  • Question: Do we automatically replicate to every backup device?
  • Question: Are all the replication methods 2-way sync?
  • Question: Do we take any versions offline quarterly or annually?
  • (Include questions from Section 1)

How does this help with virus and ransomware?

  • Versioning means you can go back to a file that wasn’t corrupted or encrypted by ransomware and retrieve it. True, you can do that with a backup too, but versions are usually saved more frequently than full backups.
  • One-way sync or replication means that if a file is deleted it doesn’t automatically delete from all devices. There are good and bad reasons for one-way replication, so a CIO should be aware of the pros and cons.

III. Employ a 3-2-1 Backup Strategy

Image from: http://www.lucidica.com/blog/how-to-guides/3-2-1-back-up/

Now we get into the really heavy (and oftentimes expensive) strategies. The rise of large clouds (and the ability to shift that risk to the cloud provider) has helped control the cost of this strategy. Cloud providers say they have backups in geographically dispersed locations, but do they truly? Does the board understand the “different geographic distributions” and the protections they offer? What if:

  • data is backed up within different nodes, but within the same datacenter
  • data is backed up within different node sections, and different data centers
  • data is backed up within the same region clusters of data centers
  • data is backed up in different countries along with edge nodes for quick replication

A good rule of thumb to use is: Data Node → Datacenter Distribution → Geographic Location → Whole or Part. Ex.: a SQL database is replicated in parts, in a different rack, across three datacenters, across two US regions but not internationally, and each part of the database is itself replicated in different nodes in each datacenter so it can be reconstructed if two of the datacenters are lost.

The complexity only rises from there: there are probably a dozen combinations in between the above, all of which are just as expensive. Add more dollars to add offline copies stored somewhere else (or at a different cloud provider). Therefore, serious architecture strategies need to have their ducks in a row:

  • 3 — Ensure three (3) copies of the data — best is also to ensure one copy keeps additional versions, one copy uses only 1-way sync, and one is a true read-only backup.
  • 2 — Store backups on at least two (2) storage mediums — e.g., cloud, tape, offline drive, two companies, even two different file types; anything, but make it two.
  • 1 — Ensure one (1) of the copies is offline and/or away from the origin site — an external drive, tape, another air-gapped computer somewhere under lock and key, or just another cloud provider.
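The 3-2-1 rule itself is simple enough to sanity-check automatically. Here is a minimal Python sketch; the inventory below is a hypothetical example, and in practice it would be pulled from your asset management database.

```python
# Hypothetical backup inventory; real data would come from an asset database.
copies = [
    {"name": "primary-cloud", "medium": "cloud", "offsite": True,  "offline": False},
    {"name": "onsite-nas",    "medium": "disk",  "offsite": False, "offline": False},
    {"name": "tape-vault",    "medium": "tape",  "offsite": True,  "offline": True},
]

def meets_3_2_1(copies):
    """Check the 3-2-1 rule: 3+ copies, 2+ storage mediums, 1+ offline/offsite copy."""
    enough_copies = len(copies) >= 3
    enough_media = len({c["medium"] for c in copies}) >= 2
    has_isolated_copy = any(c["offline"] or c["offsite"] for c in copies)
    return enough_copies and enough_media and has_isolated_copy
```

A check like this answers "Do we use a 3-2-1 strategy?" with data rather than with an assurance from memory.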

The board should keep things simple with the questions below.

  • Question: Do we use a 3-2-1 strategy?
  • Question: How often do we take backups off-site/offline?
  • Question: Where are they stored, and who has access?
  • Question: Can the data be reconstructed if a region shuts down?
  • Question: Are we complying with HIPAA, PII, and SOX rules for storage and destruction when stored?
  • Question: When was the last audit of the “offline” storage medium, and how often is the Asset Management Database updated to reflect that change?

How does this help with virus and ransomware?

  • This helps mitigate the “cloud disappears” problem (be it “insert cloud provider” going down, or parts of your own company going down so no one can work), which is becoming common.
  • Helps to mitigate a potential DR/COOP stop-work. If the CFO needs your financial statements for a board meeting the same day a global ransomware attack is happening, it’s better to say: “I’m going to get the ‘other’ backups to you in 10 minutes while the security team works this problem.”
  • Helps to keep business moving. A global or large organization doesn’t completely grind to a halt because a region cannot access its files.

IV. Employ an Adaptive “Anti”- (anti-virus, anti-ransom, etc) Solution Architecture strategy

Image from: https://blogs.manageengine.com/it-security/2016/04/07/enhancing-it-security-with-adaptive-security-architecture-part-1.html

At the most fundamental level, security is hard. But it is harder because we are unaware of the multitude of things our systems do as part of their day-to-day existence. Part of an “Anti” solution strategy is to use the systems themselves to automatically help organizations close the vulnerabilities we know exist as part of any security architecture.

Any “Anti” or Adaptive Solution strategy should use three main methods as part of a security architecture, and yes, there are items in the marketplace which offer all three in one package. Still, one must know what to seek.

  • Use an Edge Blocker — not just a firewall, but an enhanced or security appliance to block ports, and network communication for known vulnerabilities made specifically for this purpose. Use the security appliance to monitor INSIDE your network also. Some systems actually can review a database of known vulnerabilities and include these vulnerability fixes as part of their security scans and architecture.
  • Use a file signature appliance or security solution — The core files of a system or technology are often first to be corrupted because they are trusted by the system. This method versions or ensures a file has a certain signature, and if it doesn’t, overwrites it with a known good file. For example, my WordPress often tells me if I’ve modified a core file. Also, adjusting non-core files usually mitigates the problems associated with upgrading.
  • Behavior (usually called Heuristics) identifiers or blockers — These activate or watch for when a system starts doing something linked to virus, spam, or ransom behaviors, and block the offending program. These have been around forever, but they are smart and getting smarter (I know, bad puns). These also often link to a heuristics database, which all major “anti” solution providers now offer.
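The file signature method can be approximated in a few lines of Python: hash each core file and compare it against a known-good manifest. This is a simplified sketch of what commercial appliances do at scale; the manifest format and function names here are invented for illustration.

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in chunks so large files never need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def modified_core_files(manifest: dict) -> list:
    """Return paths whose current hash no longer matches the known-good manifest.
    A missing file counts as modified, since a trusted file has disappeared."""
    return [
        path for path, good_hash in manifest.items()
        if not Path(path).exists() or sha256(Path(path)) != good_hash
    ]
```

A real appliance would go one step further and automatically overwrite a mismatched file with a known good copy; this sketch only detects the mismatch.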

Questions:

  • Question: How does our Security Architecture protect core and non-core files? Is this process automatic? And if not, can we make it automatic?
  • Question: How often does the system seek to update itself? Every day? Every 6 hours? Every 30 minutes?
  • Question: Are we able to upgrade our core systems without breaking our own configurations or customizations? If we can’t, can we live without that system? (If the answer is no, it’s time to get new systems which can be upgraded.)
  • Question: Who gets the alerts, and what is the current alert status?

How does this help with virus and ransomware?

  • Humans are slow, and we need sleep. By the time we wake up, half the world could be encrypted or suffering a global virus attack. Computers don’t sleep, and if they can update themselves, the organization is better protected.
  • Keeping the systems apart from the data is good risk management. A security update may introduce a new vulnerability, and the ability to roll back those mistakes is an important part of security. Knowing your window of operation is useful in knowing how long you have to react.

V. Visualize and Review

Visibility is key to making informed decisions.

No security plan is useful if it is not regularly reviewed, and it is even less useful if it’s not seen at all. Use the questions above to generate a report which any person can understand, and distribute that report during quarterly meetings. Security threat and war-room data should be reviewed monthly.

Image From: https://www.pinterest.com/arbornetworks/9th-annual-worldwide-infrastructure-security-repor/?lp=true

The board and/or CIO should have all the questions and their answers displayed on a real-time dashboard or report which they can view on demand.

  • They should review the risk profile quarterly and ask how it can be maintained or improved. Failure to do so is not being serious about security architecture.
  • What shouldn’t be part of the Executive-level report? Firewall threats blocked and stopped, or other day-to-day metrics. It should list: last backup test (Red, Yellow, Green); projects to enhance or mitigate, and their status (Red, Yellow, Green); last incidents and lessons learned; a clear understanding of strategies; impact of risks; known time to recover.
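One minimal way to render such a report is a short Python sketch that sorts the Red items to the top so the board sees the worst news first. The items and statuses below are hypothetical placeholders; real data would come from your monitoring and reporting tools.

```python
# Hypothetical quarterly status items using the Red/Yellow/Green convention.
report = {
    "Last backup test": "Green",
    "Mitigation project status": "Yellow",
    "Last incidents and lessons learned": "Green",
    "Known time to recover": "Red",
}

def executive_summary(report: dict) -> str:
    """Render one line per item, worst status first, for a board-level view."""
    order = {"Red": 0, "Yellow": 1, "Green": 2}
    ranked = sorted(report.items(), key=lambda kv: order[kv[1]])
    return "\n".join(f"[{status}] {item}" for item, status in ranked)
```

Feeding the same data into a live dashboard gives the on-demand view described above; the sorting rule (Red before Yellow before Green) is the important design choice either way.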

The above is expensive, but a comprehensive security strategy is hard, expensive, and never-ending… but for Pete’s sake, back up your data.

Did you enjoy this post? Recommend it by following me on Medium @Albert Mowatt and clicking the heart icon below.