Azure Stack on NVMe

I'm reposting information that has since been taken down, as Microsoft doesn't currently support this method – meaning use at your own risk.

A. Enable NVMe and other bus type support
———————–

In order to support NVMe, the first thing you need to do is modify the pre-check script to allow the bus type "NVMe".

1. Mount MicrosoftAzureStackPOC.vhdx in the downloaded package.

2. Open X:\AzureStackInstaller\PoCDeployment\Invoke-AzureStackDeploymentPrecheck.ps1 with PowerShell ISE.

3. In line 62, add the NVMe check – the -or $_.BusType -eq 'NVMe' portion of the line below (the original post highlighted it, but the highlighting didn't survive the repost).

$physicalDisks = Get-PhysicalDisk | Where-Object { $_.CanPool -eq $true -and ($_.BusType -eq 'RAID' -or $_.BusType -eq 'SAS' -or $_.BusType -eq 'SATA' -or $_.BusType -eq 'NVMe') }
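
Before kicking off the deployment, it's worth sanity-checking that your NVMe drives actually report a bus type of NVMe and are poolable. This quick query isn't part of the installer – just a check I'd suggest:

# List every physical disk with its bus type, media type and poolability
Get-PhysicalDisk | Sort-Object BusType | Format-Table FriendlyName, BusType, MediaType, CanPool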

By default, Azure Stack Technical Preview 1 only supports HDD or SSD+HDD configurations. It doesn't support All-Flash (all SSD)* or NVMe+SSD. The reason is that the deployment enables Storage Spaces Direct ("Enable-ClusterS2D") without specifying any parameters, so Storage Spaces Direct uses the NVMe drives and SSDs as cache devices instead of persistent storage. To support the following storage configurations, we need to modify the deployment scripts and append different parameters. For more information, please refer to Claus Joergensen's blog:
http://blogs.technet.com/b/clausjor/archive/2015/11/19/storage-spaces-direct-under-the-hood-with-the-software-storage-bus.aspx

  • B: NVMe+HDD
  • C: NVMe+SSD
  • D: All NVMe

*Note: there is one exception here. If you're using a non-pass-through bus type (e.g., RAID, iSCSI or File Backed Virtual), Storage Spaces Direct cannot recognize the media type and will mark all the disks as "Unspecified" instead of HDD or SSD. In that case, Storage Spaces Direct will not use those drives as cache devices even if you didn't disable S2DCacheMode.

B. Enable tiered storage with different bus types
————————

In the NVMe+HDD configuration we don't need to specify any parameters. However, NVMe drives report a bus type of NVMe while HDDs normally report SAS or SATA, and by default the deployment script only deploys Azure Stack on drives with the same bus type. So besides enabling NVMe support in the pre-check, you also need to modify the script "X:\AzureStackInstaller\PoCFabricInstaller\CreateStoragePool.ps1" and remove the bus-type filter – the -and $_.BusType -eq $DiskType clause – from the lines below.

Line 23:

$pdisks = $pdisks | ? { $_.CanPool -eq $true -and $_.BusType -eq $DiskType }
LogDisks “Disks have been picked up” $pdisks

for ($i=0; $i -lt 3 -and $pdisks -eq $null; $i++) {
  sleep 5
  $pdisks = $clussubsystem | Get-PhysicalDisk | ? { $_.CanPool -eq $true -and $_.BusType -eq $DiskType }
}
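
After removing that clause from both lines, the two filters reduce to:

$pdisks = $pdisks | ? { $_.CanPool -eq $true }

$pdisks = $clussubsystem | Get-PhysicalDisk | ? { $_.CanPool -eq $true }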

Now you may deploy Azure Stack TP1 with the NVMe+HDD configuration.

C. Deploy on Tiered Storage with NVMe and SSD drives
————————

To support the NVMe+SSD configuration, besides following the steps in sections A and B, you also need to modify the script X:\AzureStackInstaller\PoCFabricInstaller\CreateFailoverCluster.ps1 and add the -S2DCacheDevice NVMe parameter shown below.

Line 123:

Enable-ClusterS2D -S2DCacheDevice NVMe
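
Once the pool is up, you can confirm the NVMe drives were claimed as cache rather than capacity – cache devices report a Usage of Journal. A quick check, assuming TP1's Storage Spaces Direct behaves like later builds in this respect:

# Cache devices show Usage = Journal; capacity devices show Auto-Select
Get-StoragePool -IsPrimordial $false | Get-PhysicalDisk | Format-Table FriendlyName, BusType, MediaType, Usage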

D. Deploy on All-NVMe Storage
————————

To support the all-NVMe configuration, follow the steps in section A first, then modify the script X:\AzureStackInstaller\PoCFabricInstaller\CreateFailoverCluster.ps1 and add the -S2DCacheMode Disable parameter shown below.

Line 123:

Enable-ClusterS2D -S2DCacheMode Disable

Now you may dismount the DataImage VHD (MicrosoftAzureStackPOC.vhdx) and kick off the deployment.

Testing Azure Stack

I have been testing (or trying to test) Azure Stack for a number of weeks. To do that testing I've been using a Dell R930 with four Intel Xeon E7-8890 v3 CPUs, 512 GB of memory, two 300 GB 10K SAS boot drives and eight 400 GB NVMe disks. That's a lot of horsepower and fast disk, so you would think it would easily run Azure Stack.

The first problem was the NVMe disks. TP1 of Azure Stack was restricted to SSD and HDD disk types and did not support NVMe. Fortunately, I found a blog post with instructions on how to modify the prerequisite check PowerShell script to get past that issue. From there I ran into an issue with SR-IOV. Azure Stack requires the BIOS to have SR-IOV enabled for the network interface cards (NICs). It also requires a single NIC, and the R930 had four. Here's the PowerShell to check whether SR-IOV is enabled:

[Screenshot: SR-IOV status output for the four NICs]
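
Based on the MissingAcs condition mentioned below, the query was almost certainly the built-in Get-NetAdapterSriov cmdlet; here's a minimal version of that check:

# Show per-NIC SR-IOV capability; SriovSupport reports Supported,
# MissingAcs, MissingPfDriver and similar conditions
Get-NetAdapterSriov | Format-Table Name, InterfaceDescription, SriovSupport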

Notice that Ethernet 4 is the port I have enabled and it supports SR-IOV, while Ethernet 2 and 3 show the condition MissingAcs. That could mean several things, including a missing driver. I went with port 4 since it seemed to be functional. With these fixes in, I still kept running into an issue at step 17 of the install process, which is where Failover Clustering is configured. I submitted the problem to the Azure Stack team and they had me run a log-gathering tool and send them the logs.

Turns out my use of the 192.168.1.0 subnet with a mask of 255.255.0.0 caused a conflict with some of the Azure Stack subnets – a /16 mask claims the entire 192.168.0.0 range, including the subnets Azure Stack creates for itself. Changing my DHCP server to provide a 255.255.255.0 mask fixed the problem and the install ran to completion.

[Screenshot: Azure Stack deployment completed]

Now I just have to find the time to do some testing!

Blogging from Word

I have had a long-standing wish to be able to blog directly from Microsoft Word. Today I ran across a Twitter link with instructions on how to do just that. If you're using the latest version of Microsoft Word you'll have to search for the Blog post template, but other than that the instructions were spot on.

Here’s a clip of what it looks like:

[Screenshot: a post draft in Word's blog template]

Hyper-V Virtual Machine Connection has stopped working

I ran into this little dialog running Hyper-V on a Windows 8.1 desktop machine. Specifically, this is a Windows 8.1 VM created as a Gen2 machine. Launching a remote desktop session by double-clicking on the screen preview in Hyper-V Manager would work just fine. The problem showed up not long after login and basically made the VM useless.

[Screenshot: "Hyper-V Virtual Machine Connection has stopped working" error dialog]

I was able to find a thread on the Windows Server TechNet forum which held the answer – or at least a workaround. You should see the dialog box below when you launch the RDC session.

[Screenshot: Remote Desktop Connection dialog]

Click the button beside Show Options and the dialog expands to a tabbed interface with Display and Local Resources tabs. Clicking on the Local Resources tab shows the following:

[Screenshot: Local Resources tab of the RDC dialog]

The fix is to uncheck the Printers box under Local devices and resources.
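
If you connect from a saved .rdp file instead of the Hyper-V preview, the same fix corresponds to this setting in the file (0 disables printer redirection):

redirectprinters:i:0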

Hyper-V MVP!

Thanks to the efforts of my friend Doug Spindler I was nominated for Microsoft MVP in Hyper-V and yesterday the award arrived by e-mail.

[Image: MVP award e-mail]

Dear James Ferrill,
Congratulations! We are pleased to present you with the 2015 Microsoft® MVP Award! This award is given to exceptional technical community leaders who actively share their high quality, real world expertise with others. We appreciate your outstanding contributions in Hyper-V technical communities during the past year.

My full legal name is James Paul Ferrill, Jr. – hence the Dear James.

With that award I need to get back to blogging on things Hyper-V so look for new material starting soon with a post on Hyper-V targeted DHCP.

LSI Driver Update

I booted up my SuperMicro CiB system to start doing some performance testing and was greeted with this:

[Screenshot: Device Manager with a warning icon on the LSI Adapter]

A right-click on the LSI Adapter shows this:

[Screenshot: LSI Adapter driver error details]

A few Bing searches later and I found an updated driver here. A few clicks later and the yellow triangle with exclamation point is gone and I can see the disks again.

Yay!

System Center Demo VHDs at Home

Here's a quick tip – discovered the hard way – on running the Microsoft System Center 2012 R2 demo VHDs in a home lab. I have a cable modem and wireless router arrangement connecting my home network to the internet. The wireless router has IP address 192.168.1.1 and handles DHCP for my network. It also handles DNS duties, passing requests to OpenDNS, which I use instead of the ISP's DNS.

I have a Windows Server 2012 virtual machine at IP address 192.168.1.60 functioning as a domain controller for the contoso.com domain, so when I want to join a computer to that domain I have to manually set its DNS address to 192.168.1.60. Any computer getting an IP address from the DHCP service on the router will get a DNS address of 192.168.1.1 and won't be able to join the domain.
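
On the machines I do join, the DNS server can be set by hand or from PowerShell (Windows 8/Server 2012 or later); the adapter name below is an assumption – use whatever yours is called:

# Point this machine's DNS at the domain controller
Set-DnsClientServerAddress -InterfaceAlias 'Ethernet' -ServerAddresses 192.168.1.60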

This weekend I attempted to get the System Center App Controller VHD up and running, and after about four or five tries I was about to give up – until I read through the documentation one more time and noticed this statement:

Also under Domain Information, specify the user name and password of an account that has domain administrator rights and permissions. This account will be used to join the virtual machine to the specified domain.

Followed by:

Windows configures the virtual machine and restarts. Then Windows logs on using the credentials that you specified previously, to install and configure App Controller and other necessary software.

I would get a login screen but the credentials I entered on this page wouldn’t work:

[Screenshot: App Controller setup page requesting domain credentials]

Turns out the domain join step wasn't working because DNS wasn't pointing to the AD machine. To fix that I changed my router to point to 192.168.1.60 for DNS, and then on the AD machine added forwarders for the OpenDNS IP addresses. To do that, launch DNS Manager from Server Manager, right-click on the name of your server and choose Properties, then click on the Forwarders tab and add the new entries using the Edit button.
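
If you'd rather script the forwarders than click through the dialog, the DnsServer module on Server 2012 can do the same thing from the DC (OpenDNS addresses shown):

# Add OpenDNS as DNS forwarders on the AD/DNS server
Add-DnsServerForwarder -IPAddress 208.67.222.222, 208.67.220.220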

[Screenshot: DNS server properties, Forwarders tab]

With that done the System Center 2012 App Controller VHD installed without a hitch. Once up and running I could connect using a browser from my main workstation. Here’s a screen shot from the home page:

[Screenshot: App Controller home page]

I’m coauthoring a book for Microsoft Press with my son Tim and needed this working to write about it. It’s the Exam Ref counterpart to this book http://www.amazon.com/Training-Guide-Designing-Implementing-Infrastructure/dp/0735674884 which hasn’t been published yet and will get updated to 2012 R2 before it goes to print.

Finding the BMC

The DataON CiB-9220 is built around Quanta hardware. How do I know, you might ask? Well, I was hunting for the management IP address using the Colasoft MAC Scanner. Here's a screenshot of running the program against my home 192.168.1.0 network:

[Screenshot: Colasoft MAC Scanner results for the 192.168.1.0 network]
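
If you don't have the scanner handy, a rough PowerShell equivalent – a sketch assuming the BMCs sit on the same 192.168.1.0/24 – is to ping-sweep the subnet and then read the ARP cache:

# Populate the ARP cache (slow but simple), then list MACs with the 08-9E-01 prefix
1..254 | ForEach-Object { Test-Connection -ComputerName "192.168.1.$_" -Count 1 -Quiet | Out-Null }
Get-NetNeighbor -AddressFamily IPv4 | Where-Object { $_.LinkLayerAddress -like '08-9E-01*' } | Select-Object IPAddress, LinkLayerAddress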

I knew there were two IP addresses from the same manufacturer that should respond but nothing else from that vendor – at least in theory. I could see .105 and .143 with the same first three octets, 08:9E:01 – the vendor (OUI) portion of the MAC address. Point a web browser at one of those addresses and here's what you get:

[Screenshot: the BMC's web management interface]

I’ll post more on the BMC itself later.

DataON CiB-9220

The DataON CiB-9220 is in the house! I’ve been wanting to get my hands on this baby for a while. DataON announced just a few weeks back that the CiB-9220 is now on the certified products list for Windows Server 2012 R2. I’ll be posting tidbits over the next month or so as I do my testing for an InfoWorld.com review.

[Photo: DataON CiB-9220 – 2U, 12-bay chassis, front view]

From the front you see twelve 3.5-inch drive bays. Our review unit came with eight Seagate ST4000NM0023 4 TB SAS drives plus four STEC S842E400M2 400 GB SAS SSDs. Inside are two fully independent compute nodes, each with dual-socket Xeon E5-2620 v2 CPUs and up to 256 GB of memory per node.

I’ve only run a few tests on this box so far but from what I’ve seen it’s a screamer. More to come l8r.

SCVMM 2012 R2 and Gen2 VMs

If you’ve been wanting to learn more about Microsoft’s System Center 2012 R2 Virtual Machine Manager you should download the evaluation VHD. You’ll find it here on the Microsoft download center. The download includes 20 files required to create the VHD and a Word document on how to get things running.

Be sure to follow the instructions closely to get everything working properly. I can confirm that you must join the virtual machine to a domain and log in with domain credentials for everything to work. I tried to get the VHD running on a standalone server for a demo a few weeks back and it failed during the VMM configuration process.

Once you do get it installed you’ll see that SCVMM 2012 R2 does support the creation of both Gen1 and Gen2 hardware profiles:

[Screenshot: New Hardware Profile dialog showing Generation 1 and Generation 2 options]
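
The same capability is exposed in VMM's PowerShell module. A minimal sketch, assuming the VMM 2012 R2 console (and its virtualmachinemanager module) is installed and connected to a VMM server; the profile name is just an example:

# Create a Generation 2 hardware profile
New-SCHardwareProfile -Name "Gen2Profile" -Generation 2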

The prerelease version of SCVMM 2012 R2 did not have this capability.