Finally it’s time: I have the first NetAppHCI system for testing and playing. The most important thing first… yes, the NetAppHCI logo lights up as soon as the chassis is powered ;)

For my test environment I have four small compute nodes and four small storage nodes. That means the storage nodes are equipped with 6x 480GB SSDs and the compute nodes with 16 CPU cores and 384 GB RAM. It should be noted that even in the “small” variant of the compute node all RAM slots are fully occupied, which allows all memory channels to be fully used. Besides, I suspect NetApp can buy 24x 16GB DIMMs cheaper than 12x 32GB modules. Wait, what? 24x 16GB DIMMs in the small variant? That’s 384 GB of RAM, but in my first write-up I had noted that the “small” version only has 256 GB! Well, NetApp apparently decided to add a little extra ;) So here is the current overview of the compute node configuration:

A look at the stickers also lets you guess which hardware manufacturer is behind the server components 😉

First I connected a monitor and keyboard to each node and assigned a fixed IP address to the onboard management, so that I can easily reach any node via the browser. From the onboard management (IPMI) I copied all the MAC addresses of the adapters and created two scopes on a DHCP server: one for the 1GbE adapters and one for the 10GbE adapters. With the MAC addresses I then created reservations in the respective scope. You are not forced to make reservations in DHCP, but it makes mapping the adapters to their IP addresses immensely easier. You can also give the 10GbE adapters the appropriate MTU of 9000 right away: simply set DHCP option 26 to 9000 in the scope options of the 10GbE scope (there is a small sketch of this at the end of this walkthrough).

After everything is wired (which looks like this in my lab)… … you simply point your browser at one of the IP addresses you assigned to the 1GbE adapters and land on the HCI welcome screen:

We click on “Get Started” and we are already in the NDE (NetApp Deployment Engine). Here we confirm that all requirements have been met and click Continue. Then we accept the EULA… … and say in this case that we want to deploy a new vCenter. I’m using an IP address because I don’t have a DNS server running in the lab environment. Then we assign a user name and password for our vCenter and for our storage cluster.

The next step shows which nodes the NetApp Deployment Engine has already found. Attention: only the nodes that sit in the same or a reachable IP range as the node from which you started the deployment engine will appear here. There is no magic that simply finds nodes by voodoo.

Now you choose which nodes you want in your new environment. As is typical for SolidFire, you need at least 4 storage nodes; with the ESX nodes you can start with two. Then you are asked for a whole bunch of IP addresses. In my case I did this in advanced mode to have more control over the environment. In basic mode you simply enter a range and the deployment engine automatically distributes the IPs to the nodes.

Then there is another review… … and then you just wait until the deployment has gone through. In my case it took just under 30 minutes until everything was ready. Only now does the deployment engine roll out the ESX operating system onto the compute nodes, onto internal SSDs that plug directly into the node’s board much like a RAM module. After the deployment engine reaches 100%, you can close it and log in to your new vCenter.
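Coming back to the DHCP reservations mentioned above: if you happen to use an ISC dhcpd server for the two scopes, the 10GbE scope with option 26 (interface MTU) and the per-node reservations could look roughly like the output of this little sketch. This is just an illustration; all names, MAC addresses and IP ranges are placeholders, not my lab values.

```python
# Minimal sketch: generate ISC dhcpd reservations for the 10GbE scope,
# including DHCP option 26 (interface MTU) set to 9000.
# All names, MAC addresses and IPs are placeholders, not my lab values.
ten_gbe_nodes = {
    "hci-storage-01": ("aa:bb:cc:00:00:01", "192.168.20.11"),
    "hci-storage-02": ("aa:bb:cc:00:00:02", "192.168.20.12"),
    "hci-compute-01": ("aa:bb:cc:00:00:21", "192.168.20.21"),
}

print("subnet 192.168.20.0 netmask 255.255.255.0 {")
print("  option interface-mtu 9000;              # DHCP option 26")
print("  range 192.168.20.100 192.168.20.200;    # pool for anything without a reservation")
print("}")
for name, (mac, ip) in ten_gbe_nodes.items():
    print(f"host {name} {{ hardware ethernet {mac}; fixed-address {ip}; }}")
```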
The deployment engine has automatically created a cluster for you, joined the ESX hosts to it, deployed a management VM that provides the vCenter plugins… … and created datastores and mounted them on all your ESX nodes. That was the installation :)
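If you want to double-check from the command line what the deployment engine has built, a few lines of pyVmomi (the VMware vSphere Python SDK) are enough to list the cluster, the joined hosts and the mounted datastores. This is just a quick verification sketch with placeholder address and credentials, assuming the flat inventory a fresh NDE deployment produces:

```python
# Minimal sketch: list clusters, hosts and datastores in the new vCenter via pyVmomi.
# The vCenter address and credentials are placeholders for your own environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab only: accept the self-signed certificate
si = SmartConnect(host="192.168.10.50", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

for dc in content.rootFolder.childEntity:        # datacenters
    for cluster in dc.hostFolder.childEntity:    # cluster created by the NDE
        print("Cluster:", cluster.name)
        for host in cluster.host:                # ESX nodes joined by the NDE
            print("  Host:", host.name)
    for ds in dc.datastore:                      # datastores created and mounted by the NDE
        print("Datastore:", ds.name)

Disconnect(si)
```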
Stumbling blocks:
- Pay meticulous attention that all 10GbE adapters actually have an MTU of 9000 set. Don’t forget to configure your switch for MTU 9000 as well. Otherwise the deployment fails and you have to reset the storage nodes to factory settings and start all the fun from scratch (there is a small check script after this list).
- Don’t forget that the 1GbE adapters also need to reach the 10GbE adapters on the same network; otherwise the deployment engine fails to create the vCenter Server and you have to bring your hosts into vCenter and deploy the management VM manually.
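To make sure jumbo frames really work end to end before you start the deployment, you can send non-fragmentable pings of full frame size from a Linux machine in the 10GbE network (8972 bytes of payload plus 28 bytes of IP/ICMP header give exactly 9000). The target IPs below are placeholders; on an ESXi host the equivalent test would be `vmkping -d -s 8972 <target>`.

```python
# Minimal sketch: verify MTU 9000 end to end from a Linux host with iputils ping.
# 8972 bytes ICMP payload + 8 bytes ICMP header + 20 bytes IP header = 9000 bytes.
# The target IPs are placeholders for the 10GbE addresses of your nodes.
import subprocess

targets = ["192.168.20.11", "192.168.20.12", "192.168.20.21"]

for ip in targets:
    # -M do sets the Don't Fragment bit, so the ping fails if any hop has an MTU below 9000
    result = subprocess.run(["ping", "-M", "do", "-s", "8972", "-c", "3", ip],
                            capture_output=True, text=True)
    status = "OK" if result.returncode == 0 else "FAILED"
    print(f"{ip}: jumbo frame check {status}")
```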
What’s next?
Of course I started to play a bit with the system. I was particularly interested in the topic of vVols. Unlike other vVol implementations, the VASA provider runs on all storage nodes, so you don’t have to be afraid of losing all your vVols if a VASA VM goes down. This is solved very smartly by NetApp. With a few clicks I created a vVol container… … created a VM on it… … and as expected, a separate virtual volume is created for each VMDK file. So that was my first NetAppHCI installation. It is rare in the IT industry, but here the technology delivers what the marketing promises. I look forward to feedback and your questions. I’ll do some more testing and keep you informed.
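PS: If you want to see those per-VMDK virtual volumes from the storage side as well, you can ask the Element API of the storage cluster directly. A minimal sketch below; the management VIP, credentials and API version are placeholders, and the exact call (here ListVirtualVolumes) should be checked against the Element API reference for your release.

```python
# Minimal sketch: list the virtual volumes on the SolidFire/Element cluster,
# one per VMDK (plus config/swap vVols). MVIP, credentials and the API version
# are placeholders; check the Element API reference for your release.
import requests

MVIP = "192.168.20.10"   # storage cluster management virtual IP (placeholder)
payload = {"method": "ListVirtualVolumes", "params": {}, "id": 1}

r = requests.post(f"https://{MVIP}/json-rpc/9.0", json=payload,
                  auth=("admin", "secret"), verify=False)  # lab only: no cert check
for vvol in r.json()["result"]["virtualVolumes"]:
    print(vvol.get("virtualVolumeID"), vvol.get("virtualVolumeType"), vvol.get("status"))
```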
Disclaimer: This post represents my personal observations and is not officially authorized by NetApp or others. Misinterpretations or misunderstandings reserved.
Excellent article André!
thx
Good read!
thx
Excellent one 😉 Just one question about vVols and the provisioned space… As you can see in your screenshot, you provisioned a 500TB datastore, which seems to be the entire size of the SF including the assumed volume efficiency. How can we reduce this size, since the vVol wizard doesn’t ask for a size? It can be frustrating to see the total size of a datastore shrinking as we put data on it.
Hi David, the 500TB is the LOGICAL size of the vVol container, representing the entire size of the SolidFire with all possible calculated efficiency savings. At the moment you cannot reduce the vVol container size; it is “by design”. The container itself is at first an empty shell; it does not know how big your vVol VMDKs will be. As you deploy more and more VMs on it you will see a reduction of the available capacity, which of course depends on the real efficiency savings. In my screenshot you can see a freshly installed system with only two or three VMs, so you can’t see any impact on the container size yet. Also, the VMware plugin will alert you when you try to overprovision or run out of space.
Thanks for this!
Do you have the procedure for a storage node factory reset?
My deployment failed and I want to start again.
Hi Lyle, you need the RTFI (Return to Factory Installation) ISO file from the NetApp Software Download page. Boot from this ISO and follow the onscreen instructions.
https://mysupport.netapp.com/products/hci/1.4/downloads.html (NetApp-Account-Login required)
Hi DerSchmitz,
is it okay to practice HCI without a network switch? Trying to check one chassis with 2 storage nodes and 2 compute nodes. I don’t have a network switch yet for connectivity.
Thanks for this article.
Hi Bruce,
a network switch is mandatory. You cannot directly connect the nodes to each other. Furthermore, 4 storage nodes is “today” the minimum configuration. If you are a NetApp partner, you can play a little bit at https://LabOndemand.netapp.com