Finally it’s time: I have the first NetApp HCI system for testing and playing. The most important thing first… yes, the NetApp HCI logo lights up as soon as the chassis is powered ;)

For my test environment I have four small compute nodes and four small storage nodes. The storage nodes are equipped with 6x 480 GB SSDs, the compute nodes with 16 CPU cores and 384 GB RAM each. It is worth noting that even in the “small” variant of the compute node, all RAM slots are fully populated, so all memory channels can be fully exploited. Besides, I suspect NetApp can buy 24x 16 GB DIMMs more cheaply than 12x 32 GB modules. Wait, what? 24x 16 GB DIMMs in the small variant? That’s 384 GB of RAM, but back in my first article I had written that the “small” version only has 256 GB! Well, NetApp apparently added a little extra ;) So here is the current overview of the compute node configuration:

A look at the stickers also lets you guess which hardware manufacturer is behind the server components 😉

First I connected a monitor and keyboard to each node and assigned a fixed IP address to the onboard management, so that I can easily reach any node via the browser. From the onboard management (IPMI) I copied all the MAC addresses of the adapters and created two scopes on a DHCP server: one for the 1GbE adapters and one for the 10GbE adapters. With the MAC addresses I then created reservations in the respective DHCP scope. You are not forced to make reservations in DHCP, but it makes mapping the adapters to their IP addresses immensely easier. You can also give the 10GbE adapters the appropriate MTU of 9000 right away: simply set DHCP option 26 (interface MTU) to 9000 in the scope options for the 10GbE adapters.
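If your DHCP server happens to be ISC dhcpd rather than Windows, the same setup looks roughly like the sketch below. All subnets, MAC addresses, and host names are made-up placeholders for illustration; `interface-mtu` is dhcpd’s name for DHCP option 26:

```conf
# /etc/dhcp/dhcpd.conf -- sketch only; subnets and MACs are placeholders.

# Scope for the 1GbE management adapters
subnet 10.10.10.0 netmask 255.255.255.0 {
  range 10.10.10.100 10.10.10.200;
}

# Scope for the 10GbE adapters -- hand out the jumbo MTU via DHCP option 26
subnet 10.10.20.0 netmask 255.255.255.0 {
  range 10.10.20.100 10.10.20.200;
  option interface-mtu 9000;             # DHCP option 26
}

# Reservation: pin one 10GbE adapter to a fixed address by its MAC
host storage-node1-10g {
  hardware ethernet aa:bb:cc:dd:ee:01;   # placeholder MAC
  fixed-address 10.10.20.11;
}
```

On Windows Server you achieve the same thing with two scopes, per-MAC reservations, and option 26 set in the scope options, exactly as described above.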
After everything is wired up (which looks like this in my lab)… … you simply point your browser at one of the IP addresses you assigned to the 1GbE adapters and land on the HCI welcome screen:

We click on “Get Started” and we are already in the NDE (NetApp Deployment Engine). Here we confirm that all requirements have been met and click Continue. Then we confirm the EULA… … and say that in this case we want to deploy a new vCenter. I am using an IP address because I don’t have a DNS server running in the lab environment. Then we assign a user name and password for our vCenter and for our storage cluster.

The next step shows which nodes the NetApp Deployment Engine has already found. Attention: only the nodes that sit in the same (or an accessible) IP range as the node from which you started the Deployment Engine appear here. There is no magic that simply finds nodes by voodoo. Now you choose which nodes you want in your new environment. As is typical for SolidFire, you need at least four storage nodes; with the ESXi nodes you can start with as few as two.

Then you are asked for a pack of IP addresses. In my case I did this in advanced mode to have more control over the environment. In basic mode, you simply enter a range and the Deployment Engine automatically distributes the IPs to the nodes. Then there is another review… … and then you just wait until the deployment has gone through. In my case it took just under 30 minutes until everything was ready. Only now does the Deployment Engine roll out the ESXi operating system onto the compute nodes; it goes onto internal SSDs that are plugged onto the board of the node like RAM. After the Deployment Engine reaches 100%, you can close it and log in to your new vCenter. The Deployment Engine has automatically created a cluster for you, joined the ESXi hosts to it, created a management VM through which the vCenter plugins are provided… … and datastores were automatically created and mounted on all your ESXi nodes.
That was the installation :) Two pitfalls to watch out for:
- Pay meticulous attention that all the 10GbE adapters have actually been set to an MTU of 9000, and don’t forget to set your switches to MTU 9000 as well. Otherwise the deployment fails and you have to reset the storage nodes to factory settings and start all the fun from scratch.
- Don’t forget that the 1GbE adapters also need to see the 10GbE adapters on the same network; otherwise the Deployment Engine fails to create the vCenter Server and you have to bring your hosts into vCenter and deploy the management VM manually.
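Because a wrong MTU means a factory reset, it pays to verify jumbo frames end to end before you even start the Deployment Engine. A quick sketch (the 10.10.20.11 target address is a placeholder for one of your 10GbE interfaces): a ping with the don’t-fragment bit set and a payload that fills a 9000-byte frame exactly only succeeds if every hop supports jumbo frames.

```shell
MTU=9000
PAYLOAD=$((MTU - 28))   # 28 = 20-byte IP header + 8-byte ICMP header
echo "test payload: $PAYLOAD bytes"

# From a Linux box on the 10GbE network (-M do = don't fragment):
#   ping -M do -s "$PAYLOAD" -c 3 10.10.20.11

# From an ESXi shell (-d = don't fragment):
#   vmkping -d -s "$PAYLOAD" 10.10.20.11
```

If the ping fails with the full 8972-byte payload but works with, say, `-s 1472`, some device in the path is still on MTU 1500.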
Of course I started to play with the system a bit. I was particularly interested in the topic of vVols. Unlike other vVol implementations, the VASA provider runs on all storage nodes, so you don’t have to be afraid of losing all your vVols if your VASA VM is gone. NetApp has solved this very smartly. With a few clicks I created a vVol container… … created a VM on it… … and, as expected, a separate virtual volume is created for each VMDK file.

So that was my first NetApp HCI installation. It is rare in the IT industry, but here the technology delivers what the marketing promises. I look forward to feedback and your questions. I’ll do some more testing and keep you informed.
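If you want to double-check from the ESXi side that the VASA providers and the vVol container are really visible to the hosts, there are esxcli commands for that (run in an ESXi shell; the exact output columns vary by release):

```shell
# List the registered VASA providers this host can reach --
# with NetApp HCI you should see providers backed by the storage cluster:
esxcli storage vvol vasaprovider list

# List the vVol storage containers mounted on this host:
esxcli storage vvol storagecontainer list
```

Seeing the provider listed as online on every host is a quick sanity check that the per-node VASA setup described above is actually in effect.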
Disclaimer: This post represents my personal observations and is not officially authorized by NetApp or anyone else. Misinterpretations and misunderstandings reserved.