
Tape Backup and Restore

Hi,

 

I need to draw on your experience. Our two-step backup system looks like this:

 

Setup:

Backup server connected to data storage, backup storage, and a tape loader (LTO-5, dual magazine, 80 TB). All connections are 10 GbE.

One "old" 8-bay storage unit with a 4x1 Gb bonded network connection. This unit has a RAID 0 of two SSDs with 2 TB of space and a RAID 5 with 20 TB of space.

 

What's going on:

  • Every night the backup server does a B2D (backup to disk) from data storage to backup storage (six incrementals and one full backup per week).
  • Once a week the backup server does a B2T (backup to tape) from backup storage to the tape loader.

 

Where the problem is:

Because we have VBK files of around 2.5 TB, restoring one full backup from the tape loader to network storage does not work well. Try copying a single 2.5 TB file over a 4x1 Gb bond and you will see that even with an SSD RAID 0 on the receiving side, the performance is terrible. The NAS I am trying to restore to is not upgradable (a bad investment, I know; I didn't buy it).


What I plan:

Set up a RAID 0 with 2, 3, or 4 Seagate IronWolf 6 TB disks in the backup server to get rid of the network bottleneck. So what I am looking for is whether somebody has experience with these disks in RAID 0. What I need is a stable, high write speed. My tape loader can restore at 450 MB/s; that, or something close to it, is the target for the RAID's performance.

 

I found a test of the 8 TB version: https://www.hardwareluxx.de/index.php/artikel/hardware/storage/40112-hdd-test-08-2016.html?start=4

The sequential write speed is best over the first 30% of the drive, so in my opinion the 6 TB model should deliver full performance in the same region, let's say 220 MB/s over the first 2 TB. In a RAID 0 that should be 440 MB/s over the first 4 TB, split with 2 TB on each drive. In theory that is what I want.

 

So in theory the 10 GbE NIC on the tape loader side should be capable of 1.2 GB/s, the RAID 0 with two drives of 440 MB/s, and my tape loader of 450 MB/s. Individually the numbers look great. The restore should take 2500 GB / 0.44 GB/s, roughly 1.6 hours, limited by drive and tape.
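Those back-of-the-envelope numbers can be sanity-checked in a few lines of Python (a sketch only; the 220 MB/s per-drive write speed and the 450 MB/s loader rate are this post's assumptions, not measured values):

```python
# Restore-time estimate for the planned setup: RAID 0 of two IronWolf 6 TB
# drives, assuming ~220 MB/s sequential writes per drive on the outer tracks.
per_drive_mb_s = 220            # assumed from the linked hardwareluxx review
drives = 2
raid0_mb_s = per_drive_mb_s * drives   # RAID 0 stripes writes across members

tape_mb_s = 450                 # claimed tape loader restore rate
nic_mb_s = 1200                 # 10 GbE is roughly 1.2 GB/s

# The slowest link in the chain sets the restore speed.
bottleneck_mb_s = min(raid0_mb_s, tape_mb_s, nic_mb_s)

vbk_gb = 2500                   # one full backup (.vbk) of about 2.5 TB
restore_hours = vbk_gb * 1000 / bottleneck_mb_s / 3600
print(f"bottleneck {bottleneck_mb_s} MB/s, restore ~= {restore_hours:.1f} h")
```

With these inputs the bottleneck is the two-drive RAID 0 at 440 MB/s, giving a restore time of roughly 1.6 hours.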

 

So what do you think? Will this work?

 

 

 

45 minutes ago, leadeater said:

What backup software/tools are you using?

Veeam Backup & Replication

On 3/13/2017 at 0:06 PM, Tsx said:

I need to draw on your experience. Our two-step backup system looks like this: […]

 

 

 

The issue here is that tapes are still extremely slow no matter how fast the connection is. We finally retired our LTO-6 because we are moving to a colo DC now, where nobody would be around to swap tapes, so we rely entirely on disks and replication between sites with CommVault. But the tape system was extremely slow even on 8 Gb Fibre Channel. If you want speed, ditch the tape.

Just now, BigCountryGeek said:

The issue here is that tapes are still extremely slow no matter how fast the connection is. […]

I don't have a problem with tape switching. I use a tape loader with 24 LTO-5 tapes; they switch automatically, and Veeam can assign all the tapes to different virtual archives. It works extremely well.

2 minutes ago, Tsx said:

I don't have a problem with tape switching. […]

We have over 500 tapes; 24 tapes would never be enough to keep years of data, sorry. Replication and disk storage with dedupe and compression is where it's at today, not tapes. I know what Veeam can do, and it doesn't come close to the features of CommVault.

Just now, BigCountryGeek said:

We have over 500 tapes; 24 tapes would never be enough to keep years of data. […]

I don't want to question your knowledge, but I know what a lot of data means: 24 tapes every two weeks, 52 weeks a year, 5 years of history. You can add that up yourself; I think it will be enough. But back to my problem: we have to back up about 40 TB a week, so unless I want to dig a deep hole and pay for dark fiber to the nearest data center, I have no chance of backing up into a data center. It is possible to buy that connection, but for now I am trying to fix the tape problem with hammer and shovel :D.

 

And as you can see above, I back up two ways: B2T and B2D.

 

So technically the tape loader should manage between 500 GB and 2 TB per hour. We are seeing around 250 GB/hr, so I am trying to find the bottleneck. The data is pre-compressed, so changing data size and volume is not the issue.

 

Here are the specs: http://www.overlandstorage.com/products/tape-libraries-and-autoloaders/neos-t24.aspx#Specifications
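Putting the spec-sheet rate and the observed rate into the same unit makes the gap easier to judge. A minimal helper (decimal units, 1 GB = 1000 MB; the 250 GB/hr and 500 GB/hr figures are the ones quoted above):

```python
def gb_per_hr_to_mb_s(gb_per_hr: float) -> float:
    """Convert a throughput given in GB/hour to MB/second (decimal units)."""
    return gb_per_hr * 1000 / 3600

# Observed rate vs. the lower end of the spec-sheet range.
print(round(gb_per_hr_to_mb_s(250), 1))   # 69.4
print(round(gb_per_hr_to_mb_s(500), 1))   # 138.9
```

So the quoted 500 GB/hr lower bound is essentially LTO-5's native 140 MB/s, while 2 TB/hr works out to about 556 MB/s, which would need heavy hardware compression that pre-compressed data cannot provide.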

 

12 minutes ago, Tsx said:

I don't want to question your knowledge, but I know what a lot of data means. […]

 

Well, we back up 60 TB a week, and we have to keep our data for 7 years. :) We have two data centers as well; we're about to move our second one into a colo in Austin, actually.

 

Currently we use a Quantum i500 series. We moved it to our corporate office for retention until we get past the 7-year mark. I know our system can write around 400 MB a second. I honestly never paid much attention to its throughput; I know it was never close to disk, but it was never bad either. It could get through over 20 tapes in one weekend with two drives, LTO-6 ones.

 

https://iq.quantum.com/exLink.asp?5609380OJ69R28I27491408&DS00340A&view=1

 

Plus the cost savings from using disk are substantial. With dedupe and compression being so effective, monthly backups barely use any additional disk, since most of the data hasn't changed; only new data adds space.

7 hours ago, Tsx said:

Veeam Backup & Replicaton 

Since you're using Veeam, I would suggest moving to the latest version and upgrading the backup server to Windows Server 2016. Then use ReFS for the actual disk backup store: Veeam can do some really nice magic with ReFS to create synthetic fulls without any disk I/O at all, using only metadata pointers.

 

One of the biggest problems with traditional Veeam deduplication is re-hydrating the data when moving it to tape. I have seen very powerful servers with large disk arrays deliver lower throughput than the destination tape library, which is a problem for keeping the tape drive streaming.

 

https://www.veeam.com/blog/advanced-refs-integration-coming-veeam-availability-suite.html

https://www.vladan.fr/veeam-9-5-and-microsoft-refs-win-win-for-a-virtualization-admins/

7 hours ago, BigCountryGeek said:

Well, we back up 60 TB a week, and we have to keep our data for 7 years. […]

We back up around 60-90 TB a day with CommVault. Long-term disk retention becomes an issue once you reach a certain scale, which is something tapes don't suffer from.

 

We keep a year's worth of data on disk, replicated between two data centers, sitting on 500 TB 4-node NetApp 3220 filers at each DC; you can imagine the data size before dedupe :). Media agent sizing gets demanding, the SSD-hosted DDB's unique and secondary block counts grow very large, and you have to be really careful about QI times. There comes a point where keeping more synthetic fulls, even though it uses little if any extra disk space, is blocked by the DDB itself; you can't do it without adding another SP copy with its own DDB.

 

We have around 1,400 LTO-5 tapes and two Quantum i80 libraries with two drives per library. Running at full speed, those can actually write out a fairly large amount of data in a short time (500 GB/hr per drive).
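Rough numbers for that fleet, as a sketch (the 1.5 TB figure is LTO-5's native capacity per tape; the 48-hour window is an assumption for illustration):

```python
tapes = 1400
lto5_native_tb = 1.5                 # native capacity per LTO-5 tape
total_capacity_pb = tapes * lto5_native_tb / 1000
print(f"fleet capacity: {total_capacity_pb:.1f} PB native")   # 2.1 PB

drives = 4                           # 2 libraries x 2 drives each
gb_per_hr_per_drive = 500            # rate quoted above
window_hours = 48                    # assumed weekend window
written_tb = drives * gb_per_hr_per_drive * window_hours / 1000
print(f"written in {window_hours} h: {written_tb:.0f} TB")    # 96 TB
```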

 

Currently I still believe that a mix of disk and tape is the best strategy for backing up large amounts of data, for long-term retention, or both.

15 hours ago, leadeater said:

Since you're using Veeam, I would suggest moving to the latest version and upgrading the backup server to Windows Server 2016. […]

Thanks a lot for this hint. I will take some time to test it in our system and give feedback here.

 

 

15 hours ago, leadeater said:

We back up around 60-90 TB a day with CommVault. […]

Your last sentence especially matches my thinking: choosing different kinds of backup technology is what brings real safety. The best example: in times when ransomware can encrypt network drives as fast as you can eat an apple, tapes can't be attacked that way. Of course, if your backup job writes the encrypted data you have a problem, but that can be prevented.

Working only with hard drives, even if you mirror your data to another data center, is not without risk.

 

And finally, tapes are small relative to their capacity and have good durability compared to HDDs or SSDs, especially as WORM media.

 

 

18 hours ago, leadeater said:

We back up around 60-90 TB a day with CommVault. […]

How on earth do you have 60-90 TB a day of incremental changes? Do you do fulls every day? I work for one of the largest banks in the USA and have never seen that.

 

As long as you have people manning your data centers, it's fine to keep using tape. But as our upper management put it, we are not in the data center business, and they are moving everything out to colos. So we had to switch to disk for all backups, including long-term ones. If you ask CommVault, most companies are switching to disk only, not tape, for long-term retention. It is more expensive, but at least it doesn't require the manpower and physical space that tapes do.

2 hours ago, Tsx said:

Your last sentence especially matches my thinking: choosing different kinds of backup technology is what brings real safety. […]

 

 

Ransomware can't infect a dedicated backup disk store. The disk storage is not attached to anything that would allow ransomware to reach it unless a user logs into the media agent server and infects it; that is the only way, we don't allow users to touch those servers, and they are extremely locked down.

On 3/14/2017 at 6:06 AM, Tsx said:

My tape loader can restore at 450 MB/s

Where are you getting this figure from?

 

https://en.wikipedia.org/wiki/Linear_Tape-Open#Generations

 

The standard implies 140 MB/s max. Looking at my Quantum i40, I am only seeing it alternate between 50 MB/s and 90 MB/s, with the odd spike to 133 MB/s.
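For reference, the spec-sheet transfer rates of the relevant LTO generations can be tabulated (the compressed figures assume the spec's nominal ratio, 2:1 for LTO-4/5 and 2.5:1 for LTO-6, which real-world data rarely reaches):

```python
# Native and maximum compressed transfer rates in MB/s, per the LTO roadmap.
lto_rates = {
    "LTO-4": {"native": 120, "compressed": 240},
    "LTO-5": {"native": 140, "compressed": 280},
    "LTO-6": {"native": 160, "compressed": 400},
}

claimed_mb_s = 450  # the restore rate claimed earlier in the thread
feasible = claimed_mb_s <= lto_rates["LTO-5"]["compressed"]
print(feasible)     # False: 450 MB/s exceeds even LTO-5's compressed maximum
```

In other words, 450 MB/s is not reachable on a single LTO-5 drive even with ideally compressible data, which matches the 50-133 MB/s observed above.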

 

 

On 3/14/2017 at 6:06 AM, Tsx said:

One "old" 8-bay storage unit with a 4x1 Gb bonded network connection. This unit has a RAID 0 of two SSDs with 2 TB of space and a RAID 5 with 20 TB of space.

Ditch the bonding; just put a 10 Gb card in it.

 

I would also ditch the RAID 0 idea. Take the SSDs out, put a couple more hard drives in, and migrate it from RAID 5 to RAID 6.

 

You are trying to solve a problem that is unsolvable, because you are aiming for a speed roughly 5x higher than the tape drive will actually deliver.

 

If you are restoring from tape rather than disk, either everything has turned to sh#t (i.e. your server room caught fire) or you are doing it as an exercise to test your backups.

 

What is the actual real problem?

5 hours ago, BigCountryGeek said:

How on earth do you have 60-90 TB a day of incremental changes? […]

The only fulls we do every day are SQL backups. I did, however, include synthetic fulls in that figure, as we have a few running every day. We stagger and split them so they don't all run on the same day, or they would totally kill the media agents and the DDBs. To me it still counts, since running them hits the DDB and the index cache, so you have to factor that in.

 

We are licensed for 241 TB of capacity and 160 TB of Snap & Replication. A decent percentage of that is taken up by the SQL data, which as mentioned gets fulls every night plus tlogs every 30 minutes for the important stuff.

 

Yeah, a lot of shops are moving to disk only, and if it were purely a question of which I would rather work with, that's easy: disk, because tape is bloody annoying in comparison. We are looking at moving our tape backup to AWS Glacier or Azure Cold Storage though, cost permitting.

 

Edit:

Oh, and we have 40 Gb DC distribution and 10 Gb ToR, and the media agents have four 10 Gb connections in two teams, one for the production network and one for the backup network to the disk libraries. The disk libraries have eight 10 Gb connections; each node has a 10 Gb team, so 40 Gb of usable bandwidth.

17 hours ago, leadeater said:

The only fulls we do every day are SQL backups. […]

Very strong setup. Yeah, we only do synthetics on Friday from all the incrementals during the week, and run 15-minute SQL tlog backups with one full of those per day. Licensing is about 120 TB for us, combined for both archive and data. We obviously don't have the data volume you do, but the connections we use are enough for us for now.

 

40 Gb at the UCS for all the VMs to our Nexus 7K, and our media agents at each site have one 10 Gb fiber link to the Nexus 7K. From there, there are four 8 Gb fiber links to our FC switches, which connect to our Dell PowerVault SAN storage. In 2020 we plan to get something more robust speed-wise, but for now this works well, and I'm unsure how much we will use it in the future, with management discussing moving a lot of our file storage into the cloud (SharePoint), as they have told us we won't be doing on-site backups for that any longer.


If you are trying to back up virtual HDDs, one thing I'm going to say:

DON'T DO IT!

Because of how they work, the pagefile, which the operating system treats as slow RAM, acts like RAM: it is constantly being accessed and used, which means backing it up is not possible.


2 hours ago, samiscool51 said:

If you are trying to back up virtual HDDs, one thing I'm going to say: DON'T DO IT! […]

Um, what? Veeam was born to back up virtual machines, and for almost its entire life as a product that is all it could do. When a VM gets backed up, a snapshot is taken so you have a consistent, point-in-time state of the VM; after the backup finishes, the snapshot is removed.

 

You are on point about the pagefile, in a way: some storage vendors recommend a dedicated virtual disk for the page file, but that isn't about backups and is more to do with replicating the storage to another location. There is little point replicating pagefile data: it has a very high change rate, yet a Windows server will start without a page file and simply create a new one, which makes replicating that data pointless.

 

The downside of configuring your virtual machines this way is the management overhead, plus the impact of having more virtual disks per VM, particularly for backups, since you need to exclude that virtual disk from the backup, or at least you should, for the same reasons you wouldn't bother replicating it.


My experience comes from my internship with the network coordinator in college. We moved off tapes, mostly because we tried everything we could to speed them up and they were always going to hit the same maximum. Also, the way that organization ran its WAN was more like a stupidly large LAN, which created nightmares for data transfer. Anyway, I think you may eventually need to look at an overhaul of your backup and disaster recovery system if you're having issues. I don't see you going much faster than you already are with this setup, and the problem, as I see it, won't be solved by what amounts to throwing bandwidth at it.

7 hours ago, leadeater said:

Um, what? Veeam was born to back up virtual machines. […]

Glad you replied; I was like, what? lol!

And actually, VMware does the work of taking the snapshots for Veeam. :) There are no issues taking snapshots of VMs these days.

