Hello everyone,
Today I want to show you how to set up a NetApp SVM-DR-SM.
Before we start
What is an SVM-DR-SM?
SVM-DR-SM stands for “Storage Virtual Machine Disaster Recovery SnapMirror” (I think we’d better stick to the abbreviation.)
The scenario is as follows:
I run two data centers approx. 1.5 kilometres apart. Data Center 1 has two ESX hosts and a NetApp AFF A220; Data Center 2 has another ESX host and a NetApp FAS 2720.
Since I do not need “High Availability”, but “only” a way to bring my virtual machines back online in Data Center 2 as quickly as possible, I decided to set up an SVM-DR.
Preliminary work
In advance, I’ve already put the AFF A220 and the FAS 2720 into a cluster peering relationship. This means the two physical clusters can reach each other over IP.
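For reference, the same peering can be done on the ONTAP command line. This is only a sketch; the peer addresses and passphrase here are made-up placeholders, and you would run the matching command on both clusters:

```
# On the source cluster (AFF A220) - peer address is a placeholder
cluster1::> cluster peer create -address-family ipv4 -peer-addrs 192.0.2.20

# Verify the peering is healthy
cluster1::> cluster peer show
```

If `cluster peer show` reports the peer as “Available”, the clusters can see each other and SnapMirror traffic can flow.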
Step 1: Create an SVM (Storage Virtual Machine)
First, I create an SVM. To do this, on the source system (AFF A220) in OnCommand System Manager, I click Storage (1), then SVMs (2), then Create (3).
Step 2: Configure the SVM (Part 1)
I give the SVM a name (1), determine which protocols it should speak (2), assign it an aggregate on which to store its data (3), and click Submit & Continue (4).
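The same SVM creation is possible on the CLI. A minimal sketch, assuming the SVM is called NFS-TEST-SVM and the aggregate is named aggr1 (both names are my assumptions, not from the screenshots):

```
# Create the SVM with a root volume on the chosen aggregate
cluster1::> vserver create -vserver NFS-TEST-SVM -rootvolume root_vol -aggregate aggr1 -rootvolume-security-style unix

# Enable the NFS protocol on the new SVM
cluster1::> vserver nfs create -vserver NFS-TEST-SVM -v3 enabled
```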
Step 3: Configure the SVM (Part 2)
Now I determine how the SVM’s IP address for data communication should be provided. Since I have not defined any subnets, I choose “Assign IP Address: Without a Subnet” (1).
In the following pop-up window, I enter the IP address and subnet mask that my SVM should use for NFS communication.
Now I still need to bind this IP address to a physical adapter in the cluster. To do this, I click Browse (1), select a node in the cluster (2), select an adapter (3), and click OK (4).
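On the CLI, this corresponds to creating a data LIF. The IP address, node name, and port below are placeholders I made up for illustration:

```
# Create an NFS data LIF on a specific node and port
cluster1::> network interface create -vserver NFS-TEST-SVM -lif nfs_lif1 -role data -data-protocol nfs -home-node cluster1-01 -home-port e0c -address 192.0.2.50 -netmask 255.255.255.0
```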
Next, I specify that I would like an NFS export called “MYTEST” (1) and that the NFS volume should be 1 TB in size (2), then click Submit & Continue (3).
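The equivalent CLI step creates the volume and mounts it into the SVM’s namespace. Again a sketch, assuming the SVM and aggregate names from above:

```
# Create a 1 TB volume and export it at /MYTEST using the default export policy
cluster1::> volume create -vserver NFS-TEST-SVM -volume MYTEST -aggregate aggr1 -size 1TB -junction-path /MYTEST -policy default
```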
Now I’m asked for a password for my new SVM.
And now we already have a new SVM (Storage Virtual Machine) running in our AFF A220 Cluster.
Step 4: Set up the SVM-DR Relationship
To set up the SVM-DR relationship, I now go to the destination cluster, i.e. the FAS 2720. Here I click Protection (1), then SVM DR Relationships (2), then Create (3).
In the next window, I select the type of relationship (1), in my case Mirror & Vault: both a mirror of the data and several older states on the target system. A “simple mirror” always keeps source and target at the same state. That is especially unfortunate in the event of a virus infestation: in the worst case, both the live system and the disaster recovery site end up infected. That’s why I choose both Mirror and Vault.
Then I enter the source cluster (2), choose the SVM I would like to have in an SVM-DR relationship (3), select the target cluster (4), specify what the target SVM should be called (5), and tick “Yes, including network interfaces” (6). Point (6) is especially important to me, because I want the SVM on the disaster recovery system (FAS 2720) to come online with exactly the same settings and IP addresses as the original SVM. After that, I click Save (7).
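The steps above can be sketched on the destination cluster’s CLI as well. The SVM and cluster names are my assumptions; the important parts are the `-identity-preserve true` option, which corresponds to “Yes, including network interfaces”, and the `MirrorAndVault` policy, which corresponds to the Mirror & Vault relationship type:

```
# Create the destination SVM as a DP (data protection) destination
cluster2::> vserver create -vserver NFS-TEST-SVM_dest -subtype dp-destination

# Peer the two SVMs for SnapMirror
cluster2::> vserver peer create -vserver NFS-TEST-SVM_dest -peer-vserver NFS-TEST-SVM -peer-cluster cluster1 -applications snapmirror

# Create the SVM-DR relationship, preserving the network identity
cluster2::> snapmirror create -source-path NFS-TEST-SVM: -destination-path NFS-TEST-SVM_dest: -identity-preserve true -policy MirrorAndVault
```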
And we already have an SVM-DR relationship.
In the background, a SnapMirror relationship for all the volumes that belong to the source SVM is now established, and a baseline of the data is transferred.
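On the CLI, this baseline transfer corresponds to initializing the relationship, and its progress can be watched afterwards. A sketch, using the assumed SVM names from above:

```
# Start the baseline transfer
cluster2::> snapmirror initialize -destination-path NFS-TEST-SVM_dest:

# Watch state and progress; "Snapmirrored" / "Idle" means the baseline is done
cluster2::> snapmirror show -destination-path NFS-TEST-SVM_dest: -fields state,status
```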
On the target system (FAS 2720), I now see another SVM called NFS-TEST-SVM_dest. This SVM is stopped! Why?
It would of course be pretty silly if two SVMs with the same IP address were running around on the network. Therefore, the logical interfaces (LIFs) of the SVM on the target system, i.e. its network adapters, also show the status disabled.
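You can confirm this state on the destination cluster’s CLI. A sketch with the assumed destination SVM name:

```
# The destination SVM should be stopped
cluster2::> vserver show -vserver NFS-TEST-SVM_dest -fields operational-state

# Its LIFs should be administratively down
cluster2::> network interface show -vserver NFS-TEST-SVM_dest -fields status-admin
```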
The SVM on the target system, along with its LIFs, is brought online only in a disaster case or on request.
What switching from the live system to the secondary system with NetApp’s SVM-DR technology looks like in practice, I’ll show you in the next blog post: “NetApp SVM-DR switching in case of disaster or maintenance”.
Greetings
The Schmitz
DISCLAIMER: This post reflects my personal observations and is not officially endorsed or authorized by NetApp or anyone else. Misinterpretations and misunderstandings are possible.