Today I would like to show you how to set up a FabricPool in NetApp's ONTAP 9.4.
Before we start, a couple of introductory words about FabricPool. A FabricPool is an aggregate (logical disk group) that is, so to speak, cloud-hybrid. It always consists of a local SSD aggregate in your data center (on your FAS/AFF) and a cloud part. The cloud part can be a NetApp StorageGRID, an Amazon S3 bucket, or a Microsoft Azure Blob store.
The use cases for a FabricPool are very broad. For example, if I don't want to fill my "valuable" SSDs with old/cold blocks, the FabricPool will automatically move those blocks to the cloud.
This gives you more room for hot data in your SSD tier. The advantage is that I don't have to provide any hardware for the cold data, and I usually don't know how big the cold data set is anyway. With classic tiering, space always ends up a little too tight somewhere. By tiering into the cloud, you've taken care of the space problem.
Another use case: maybe I don't really want the snapshots I create on my NetApp system to live on the production system. I can then configure the FabricPool to move only snapshot data to the cloud and leave the active data untouched. Considering that for most of you the active working set averages around 20%, you can count the hardware savings on one hand.
But let’s get started.
First we log in to our NetApp via OnCommand System Manager. Then we navigate to the external capacity tiers section under Storage/Aggregates & Disks. There we find the three providers mentioned above: StorageGRID, Amazon, and Microsoft. In my example, I choose an Amazon S3 bucket as the external tier. I give the external tier a name, enter the S3 server, fill in the access key and the secret key, and specify the name of my S3 container on Amazon, in this case "Mybucket". After that I choose which network adapter my NetApp cluster should use for communication and click Save. And boom… our first object store on my NetApp is created.

Now there is a small button "Attach Aggregates". We just click on it, select my AGGR1, and click Save. That's it. That's how quickly you create a FabricPool and link it to your SSD aggregate. Your NetApp now has, so to speak, a permanent leg in the cloud.

To make it a little more vivid, I create a new volume and specify that this volume should live on my AGGR1. You can see here that this is a FabricPool aggregate. I call my volume FabricPoolVolume and put a CIFS share called FabricPool on it. Now I copy some data into the volume, and later we will look at what happens to it. Reads and writes always go to the local SSDs first; the data is tiered out later.

Now, you may not want everything in the aggregate to be tiered into the cloud somehow. So I can set the tiering policy per volume: auto, snapshot-only, or none. The option none is very interesting: with none, no further uploads are made, and cloud data is written back to local flash the first time it is read from the cloud. In the properties of the aggregate you will also see how much data has migrated to the cloud, and thus how big your cold data set is.
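For those who prefer the command line over System Manager, the same steps can be sketched in the ONTAP CLI. This is a sketch, not part of the original walkthrough: the object-store name and SVM name are assumptions, the keys are placeholders, and you should check the exact syntax against your ONTAP version.

```shell
# Create the external capacity tier (object store configuration).
# "MyS3Tier" is an assumed name; access key and secret are placeholders.
storage aggregate object-store config create -object-store-name MyS3Tier \
    -provider-type AWS_S3 -server s3.amazonaws.com \
    -container-name Mybucket -access-key <accesskey> -secret-password <secretkey>

# Attach the object store to the local SSD aggregate - AGGR1 becomes a FabricPool.
storage aggregate object-store attach -aggregate AGGR1 -object-store-name MyS3Tier

# Set the tiering policy per volume: auto, snapshot-only, or none.
# "svm1" is an assumed SVM name.
volume modify -vserver svm1 -volume FabricPoolVolume -tiering-policy snapshot-only
```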
In my example you don't see much on the NetApp side, because my "cold" data set is so small that ONTAP apparently doesn't consider it worth displaying… But if I look directly at the S3 bucket (the cloud target), I can see that the NetApp has already tiered out 44 MB.
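You can also query the tiered amount from the cluster side instead of looking at the bucket. This command exists in the ONTAP CLI, though the exact output columns may vary by version:

```shell
# Show how much data each FabricPool aggregate has stored in its object store
storage aggregate object-store show-space
```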
I hope I could give you a little impression of NetApp's FabricPool, and I look forward to your comments.
Disclaimer: This post represents my personal observations and is not officially authorized by NetApp or anyone else. Misinterpretations and misunderstandings reserved.