The 3-2-1 backup rule: Has cloud made it obsolete?


A fundamental principle of backup is 3-2-1 – often referred to as “the 3-2-1 rule”.

But what is the 3-2-1 rule? Is it still of value to all organisations, especially in an era defined by increasing use of the cloud for backup and disaster recovery?

This article looks at the 3-2-1 rule for backup, defines what it meant as originally intended and asks whether it has been superseded by recent developments.

The conclusion we’ll come to is that the principles it embodies are good ones, and if it isn’t directly applicable to current scenarios, it does provide some essential guidelines about how we should protect data in the 2020s.

Defining the 3-2-1 rule

The term 3-2-1 was coined by US photographer Peter Krogh while writing a book about digital asset management in the early noughties.

3: The rule said there should be three copies of data: the original or production copy, plus two backup copies, making three.

2: The two backup copies should be kept on different media. The theory here was that should one backup be corrupted or destroyed, another would still be available. The threats could be IT-based – which dictated that data be held on a discrete system or type of media – or physical.

1: The final “one” referred to the rule that one of the two backup copies should be taken off-site, so that anything that affected the on-site copies would not (hopefully) affect it.
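Taken literally, the rule amounts to a simple checklist. Here is a minimal Python sketch of that checklist – the `BackupCopy` structure and its field names are our own illustration, not anything from Krogh’s book:

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    """One stored instance of the data (hypothetical illustration)."""
    media: str       # e.g. "disk", "tape", "cloud-object"
    offsite: bool    # is this copy held away from the production site?

def satisfies_3_2_1(copies: list) -> bool:
    """Check a set of copies against the literal 3-2-1 rule:
    3 copies in total, on at least 2 different media,
    with at least 1 copy held off-site."""
    total = len(copies)                       # the "3": three copies
    media_types = {c.media for c in copies}   # the "2": two different media
    offsite = any(c.offsite for c in copies)  # the "1": one copy off-site
    return total >= 3 and len(media_types) >= 2 and offsite

# Example: production disk, local tape, and a tape taken off-site
plan = [
    BackupCopy("disk", offsite=False),
    BackupCopy("tape", offsite=False),
    BackupCopy("tape", offsite=True),
]
print(satisfies_3_2_1(plan))  # True
```

Three copies of the same disk kept in the same room would fail the check – which is exactly the situation the rule was written to prevent.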

Shortcomings of the 3-2-1 rule

This set of rules is fairly limited, if taken as originally intended.

The idea of three copies is fine. It seems to fit the bill of a minimum viable number to ensure recovery in case of disaster. But the requirement that the two backup copies be on different media is full of potential limitations and pitfalls today.

The idea was that the first of the two is for fairly rapid recovery, so would be accessible from the main production system.

The second, so says the rule, should be on different media. Back when photographer Krogh coined the idea, the intention was to ensure a gap – logical, if not physical – between copies so that data corruption or tangible damage affecting one would not affect the other.

That seems like a lot of trouble to go through for an organisation that might need rapid access to backups for recovery, test & dev, and analytics. Different file systems and protocols may also create more layers of complexity and expense in compliance terms where stored data needs to be treated similarly across all retained instances.

1 becomes 2

Mostly, though, the idea of different media as a necessity looks pretty redundant in the light of the development of the cloud, which potentially collapses point three – the “1” – into point two.

In other words, the ability to move data off-site to the cloud is available cheaply and with sufficient bandwidth in ways that were not really realistic when 3-2-1 was devised.

Tape still has its place, but overwhelmingly – due to slow access times – that is in archiving. It is potentially a good insurance against ransomware, too, with its in-built “air gap” to core systems. But yes, access is slow, so its use cases are limited.

So, in place of what was once tapes in the car boot/trunk, we now have the cloud. It is quite clearly off-site, so fulfils point three.

And although a cloud tier connected to an organisation’s datacentre is not necessarily a different type of media or storage mode (it is often object storage), it can fulfil the purpose of the original point two – to host a copy that cannot be corrupted or damaged should the first be so affected.

But there is a big “can” in there, to coin a phrase. Cloud storage and cloud backups that sync with on-premises systems can be affected by ransomware and other nasties.

Storing backups in the cloud is a good idea, given the physical distance it places between them and on-site copies. But to ensure a logical gap, backup must be done right, with the correct security and access rules, immutability of data, and point-in-time restore.
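Point-in-time restore amounts to picking the newest copy that predates the compromise. A minimal Python sketch, assuming a simple catalogue of backup timestamps and a known (or estimated) time of infection – the dates here are invented for illustration:

```python
from datetime import datetime

def latest_clean_backup(backups, compromised_at):
    """Return the newest backup taken strictly before the point of
    compromise, or None if every retained copy post-dates it."""
    clean = [b for b in backups if b < compromised_at]
    return max(clean) if clean else None

# Hypothetical catalogue of daily backups
backups = [
    datetime(2023, 6, 1),
    datetime(2023, 6, 2),
    datetime(2023, 6, 3),
    datetime(2023, 6, 4),
]

# Ransomware believed to have landed at the start of 3 June:
restore_point = latest_clean_backup(backups, datetime(2023, 6, 3))
print(restore_point)  # 2023-06-02 00:00:00
```

Note the failure mode the article warns about: if retention is too short and every surviving copy post-dates the compromise, there is simply no clean point to restore to.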

All of which makes the original “2” look redundant, and possibly something only feasible for individuals and organisations operating at small scale and with undemanding recovery time objective (RTO) requirements.

Pulling out the principles of 3-2-1

The advent of intelligent and rapid ways of making secondary copies of an organisation’s data to other systems on-site, off-site and in the cloud means that much of what was literally intended by 3-2-1 backup is redundant.

Instead, we can perhaps draw out the principles within 3-2-1 and make use of them in the era of cloud, ransomware, and so on.

Firstly, multiple copies are essential.

Obviously, there is the production copy. This may be copied via various means – snapshots, replication, continuous data protection and/or various suppliers’ disaster recovery failover products – to a discrete system that can be activated in case of serious outage at the first. This could also be in the cloud.

But, in addition to any rapidly restorable failover copy, there should also be true backups.

Snapshots and the like can provide quick access to files and past system states, but they are more costly to store – and therefore don’t usually go back as far – and if they have been compromised, they will be useless. Backups provide copies that are retained for longer and are taken at less frequent intervals, say once a day, so there will potentially be clean copies from further back in time available.
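That trade-off – frequent but short-lived snapshots versus infrequent but long-retained backups – can be put in rough numbers. The retention settings below are hypothetical, purely to illustrate the point:

```python
def reach_in_days(interval_hours, copies_kept):
    """How far back in time the oldest retained copy reaches."""
    return interval_hours * copies_kept / 24

# Hypothetical policy: hourly snapshots, but only 48 of them kept
snapshot_reach = reach_in_days(interval_hours=1, copies_kept=48)

# Hypothetical policy: daily backups, with 30 of them kept
backup_reach = reach_in_days(interval_hours=24, copies_kept=30)

print(snapshot_reach)  # 2.0  -> snapshots reach back only two days
print(backup_reach)    # 30.0 -> backups reach back a full month
```

So if ransomware sat dormant for a week before striking, every snapshot would already be compromised, while the daily backups would still hold clean copies.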

Secondly, off-site copies are also essential. The principles of disaster recovery dictate that secondary copies that you may need to rely on should be as isolated as possible from things that could catastrophically affect the primary site. Secondary sites in the same organisation and the cloud fulfil this need.

But the old requirement in 3-2-1 for data to be on different media is not really practical. As we’ve seen, a second site or the cloud can do what this rule was intended for, but only where security and access are up to the job.


So, what are we left with from 3-2-1?

The principles seem to be that:

  • There is a primary copy.
  • There should be a secondary copy, which can be snapshots or a failover system, but there should also be a true backup.
  • A secondary copy should be off-site (or in another cloud location?). This can be the backup, or the failover or snapshots.

Primary-secondary-off? Or 1-2-&-off?

Not necessarily very snappy – but the principles are there.

