NetApp SnapVault is a heterogeneous disk-to-disk backup solution for NetApp filers and open-systems hosts (Windows, Linux, Solaris, HP-UX and AIX). SnapVault uses NetApp Snapshot technology to take point-in-time copies of the data and store them as online backups. In the event of data loss or corruption on a filer, the backup data can be restored from the SnapVault filer with minimal downtime. It has significant advantages over traditional tape backups, like:
- Reduced backup windows versus traditional tape-based backup
- Media cost savings
- No backup/recovery failures due to media errors
- Simple and fast recovery of corrupted or destroyed data
How SnapVault works
When SnapVault is set up, a complete copy of the data set is first pulled across the network to the SnapVault filer. This initial, or baseline, transfer may take some time to complete because it duplicates the entire source data set, much like a level-zero backup to tape. Each subsequent backup transfers only the data blocks that have changed since the previous backup. When the initial full backup is performed, the SnapVault filer stores the data in a qtree and creates a Snapshot image of the volume holding the backed-up data. SnapVault creates a new Snapshot copy with every transfer and retains a large number of copies according to a schedule configured by the backup administrator. Each copy consumes disk space proportional to the differences between it and the previous copy.
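The baseline-plus-incremental behaviour described above can be illustrated with a small sketch. This is a toy model, not NetApp code: it treats the data set as a dictionary of numbered blocks and shows why the first transfer sends everything while later transfers send only changed blocks.

```python
# Toy model (not NetApp's implementation) of baseline + incremental transfers.

def transfer(source_blocks, dest_blocks):
    """Copy only blocks that differ from what the destination already holds.
    Returns the number of blocks sent over the 'network'."""
    sent = 0
    for addr, data in source_blocks.items():
        if dest_blocks.get(addr) != data:
            dest_blocks[addr] = data
            sent += 1
    return sent

source = {0: "a", 1: "b", 2: "c", 3: "d"}   # primary qtree, four data blocks
dest = {}                                    # empty SnapVault qtree
snapshots = []                               # point-in-time copies on the secondary

# Baseline: every block crosses the network (like a level-zero tape backup).
assert transfer(source, dest) == 4
snapshots.append(dict(dest))

# Change one block on the primary; the next update sends just that block.
source[2] = "C"
assert transfer(source, dest) == 1
snapshots.append(dict(dest))

# Each retained copy still presents the full data set as of its transfer.
assert snapshots[0][2] == "c" and snapshots[1][2] == "C"
```

Each retained "snapshot" in the sketch is a full logical image, yet the incremental transfer cost was one block, which is the point of the design.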
SnapVault commands:
The first step in setting up SnapVault backups between filers is to install the SnapVault license and enable SnapVault on both the source and destination filers.
Source filer – filer1
filer1> license add XXXXX
filer1> options snapvault.enable on
filer1> options snapvault.access host=filer2
Destination filer – filer2
filer2> license add XXXXX
filer2> options snapvault.enable on
filer2> options snapvault.access host=filer1
Consider filer2:/vol/snapvault_volume as the SnapVault destination volume, where all backups are stored. The source data is filer1:/vol/datasource/qtree1. Since all backups on the destination filer (filer2) are managed by SnapVault, manually disable the scheduled snapshots on the destination volume; SnapVault will manage its snapshots. Disable the NetApp scheduled snapshots with the command below.
filer2> snap sched snapvault_volume 0 0 0
Creating the initial backup: Initiate the baseline data transfer (the first full backup) from source to destination before scheduling SnapVault backups. On the destination filer, execute the command below to initiate the baseline transfer. The time taken to complete depends on the size of the data in the source qtree and the network bandwidth. Run "snapvault status" on the source or destination filer to monitor the baseline transfer's progress.
filer2> snapvault start -S filer1:/vol/datasource/qtree1 filer2:/vol/snapvault_volume/qtree1
Creating backup schedules: Once the initial baseline transfer has completed, SnapVault schedules have to be created for incremental backups. The retention period of the backups depends on the schedules created. The snapshot name should be prefixed with "sv_". Judging from the examples below, the schedule specification takes the form count@hours@days: count is the number of snapshot copies to retain, followed by the hours (0-23) and, optionally, the days (e.g. sun-fri) at which copies are created.
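To make the spec format concrete, here is a hedged sketch of how such a string could be decomposed. This is an illustration only, not ONTAP's actual parser; it distinguishes the day field from the hour field simply by looking for day names.

```python
# Illustrative decomposition of a SnapVault schedule spec such as "2@21@sun".
# Not ONTAP's parser; just a sketch of the count/hours/days structure.
DAYS = {"mon", "tue", "wed", "thu", "fri", "sat", "sun"}

def parse_spec(spec):
    fields = spec.split("@")
    count = int(fields[0])          # number of snapshot copies to retain
    hours, days = None, None
    for f in fields[1:]:
        # a field made of day names (possibly a range like "sun-fri") is the day list
        if any(part in DAYS for part in f.split("-")):
            days = f
        else:
            hours = f
    return count, hours, days

assert parse_spec("2@0-22") == (2, "0-22", None)      # 2 copies, hours 0-22, any day
assert parse_spec("2@21@sun") == (2, "21", "sun")     # 2 copies, 21:00 on Sundays
assert parse_spec("14@23@sun-fri") == (14, "23", "sun-fri")
```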
On the source filer:
For example, let us create the schedules on the source as below: 2 hourly, 2 daily and 2 weekly SnapVault snapshots. These snapshot copies on the source enable administrators to recover directly from the source filer without accessing any copies on the destination, which makes restores faster. However, it is not necessary to retain a large number of copies on the primary; higher retention counts are configured on the secondary. The commands below create the hourly, daily and weekly SnapVault snapshots.
filer1> snapvault snap sched datasource sv_hourly 2@0-22
filer1> snapvault snap sched datasource sv_daily 2@23
filer1> snapvault snap sched datasource sv_weekly 2@21@sun
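The retention count in each schedule works through the usual Snapshot naming rotation: the newest copy is sv_&lt;sched&gt;.0, older copies shift to higher suffixes, and copies beyond the count are deleted. A small sketch (again a toy model, not ONTAP code) of that rotation:

```python
# Toy model of Snapshot name rotation under a retention count.
def take_snapshot(existing, prefix, count):
    """New copy becomes <prefix>.0; older copies shift up; keep only `count`."""
    rotated = [f"{prefix}.0"] + [
        f"{prefix}.{int(name.rsplit('.', 1)[1]) + 1}" for name in existing
    ]
    return rotated[:count]          # retention: drop copies beyond the count

snaps = []
for _ in range(3):                   # three scheduled runs...
    snaps = take_snapshot(snaps, "sv_hourly", 2)

# ...but with a retention count of 2 (as in the sv_hourly schedule above),
# only the two most recent copies survive.
assert snaps == ["sv_hourly.0", "sv_hourly.1"]
```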
On the SnapVault (destination) filer:
The SnapVault schedules on the destination should be created based on the retention period you need for the backups. Here, the sv_hourly schedule checks all source qtrees once per hour for a new Snapshot copy called sv_hourly.0. If it finds one, it updates the SnapVault qtrees with the new data from the primary and then takes a Snapshot copy of the destination volume, also called sv_hourly.0. If you don't use the -x option, the secondary does not contact the primary to transfer the Snapshot copy; it just creates a Snapshot copy of the destination volume.
filer2> snapvault snap sched -x snapvault_volume sv_hourly 6@0-22
filer2> snapvault snap sched -x snapvault_volume sv_daily 14@23@sun-fri
filer2> snapvault snap sched -x snapvault_volume sv_weekly 6@23@sun
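The effect of -x described above can be sketched as follows. This is a conceptual model, not ONTAP behaviour verbatim: with -x the secondary first pulls the new data from the primary and then snapshots its own volume; without -x it only snapshots whatever it already holds.

```python
# Conceptual sketch of a scheduled run on the secondary, with and without -x.
def scheduled_run(primary, secondary, snapshots, transfer_first):
    if transfer_first:               # "-x": update from the primary first
        secondary.update(primary)
    snapshots.append(dict(secondary))  # then snapshot the destination volume

primary = {"f": "v2"}                # primary has new data...
secondary = {"f": "v1"}              # ...secondary still holds the old copy
snaps = []

scheduled_run(primary, secondary, snaps, transfer_first=False)  # without -x
scheduled_run(primary, secondary, snaps, transfer_first=True)   # with -x

assert snaps[0]["f"] == "v1"   # no -x: snapshot of the stale local data
assert snaps[1]["f"] == "v2"   # -x: updated from the primary, then snapped
```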
To check the SnapVault status, run "snapvault status" on either the source or the destination filer. To see the backups, run "snap list" on the destination volume; it lists all the backup copies, their creation times and so on.
Restoring data: Restoring data is simple: mount the SnapVault destination volume over NFS or CIFS and copy the required data out of the appropriate backup snapshot. For recovering a whole qtree back to the primary, the "snapvault restore" command can also be used.
You can also try NetApp Protection Manager to manage the SnapVault backups, whether from OSSV (Open Systems SnapVault) or from NetApp primary storage. Protection Manager is built on NetApp Operations Manager (aka NetApp DFM). It is a client-based UI with which you connect to Operations Manager and protect your storage.