Using Shared Access Signatures in Azure Templates

Did you know you can create and retrieve a Shared Access Signature (SAS) for an Azure Storage account from within an Azure Resource Manager (ARM) template?  Yeah . . . me neither!  It’s been something I’ve wanted to do on several occasions.  I usually resort to creating the storage account and SAS via some other means (Azure Portal, PowerShell, CLI, etc.), and then pass the account name and SAS token to the ARM template as parameters. Doable, but the process felt clunky.

I recently learned of a relatively new template resource function, listAccountSas.  I’m not sure when this function was added, but a quick look at the doc history seems to indicate August 2018. The official documentation does explain the basic functionality of listAccountSas, so please be sure to review that documentation.  However, there are a few topics that deserve additional detail, which is what I will cover in this post.

Input Parameters and Return Values

First, listAccountSas accepts an optional “functionValues” parameter.  For an Azure Storage account SAS, this object parameter defines the scope of the SAS: the services, resource types, permissions, and expiration it covers.  For example:

{
  "signedServices": "bt",
  "signedPermission": "acluw",
  "signedExpiry": "2018-11-30T00:00:00Z",
  "signedResourceTypes": "co"
}
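
Since the SAS properties are just an object, a convenient pattern is to pass them into the template as a parameter.  Here is a minimal sketch, assuming a parameter named accountSasProperties (the same name used in the expression later in this post):

"parameters": {
  "accountSasProperties": {
    "type": "object",
    "defaultValue": {
      "signedServices": "bt",
      "signedPermission": "acluw",
      "signedExpiry": "2018-11-30T00:00:00Z",
      "signedResourceTypes": "co"
    }
  }
}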

The documentation page suggests that “functionValues” is optional.  However, I’m not sure how the function would work without it.  I can only assume it is optional to support some future use case.  (Note: Service Bus also uses a Shared Access Signature, but it isn’t really an “account” SAS.)

Next, listAccountSas returns an object.  What exactly does that mean?  The documentation page leaves it as an exercise for the reader to figure out.  Well, I’ll show you right now:

{
  "accountSasToken": "sv=2015-04-05&ss=bt&srt=co&sp=acluw&se=2018-11-30T00%3A00%3A00.0000000Z&sig=xxxxxxxxxxxxx"
}

Most places where you want to use a SAS in an ARM template require the SAS token as a string.  Therefore, to use the provided SAS token, reference the accountSasToken property of the resulting object.  For example:

listAccountSas(parameters('storageName'), '2018-02-01', parameters('accountSasProperties')).accountSasToken
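
If you simply want to see the token a deployment produces, one option is to surface it in the template’s outputs section.  A minimal sketch, reusing the same parameters as the expression above (keep in mind the token is a secret and will be visible in the deployment output):

"outputs": {
  "sasToken": {
    "type": "string",
    "value": "[listAccountSas(parameters('storageName'), '2018-02-01', parameters('accountSasProperties')).accountSasToken]"
  }
}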

In my opinion, listAccountSas seems like a pseudo-wrapper around the List Account SAS REST API for Azure Storage.  Have a look at the REST API to learn its expected request and response, and you’ll be able to apply that information when using listAccountSas as well.
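
To illustrate the resemblance, the REST operation is a POST against the storage account’s resource ID, with a request body shaped very much like the functionValues object shown earlier.  Roughly (treat this as a sketch and check the REST reference for the authoritative format):

POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/ListAccountSas?api-version=2018-02-01

{
  "signedServices": "bt",
  "signedPermission": "acluw",
  "signedExpiry": "2018-11-30T00:00:00Z",
  "signedResourceTypes": "co"
}

The response is the same small object shown above, containing just the accountSasToken property.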

Using listAccountSas

Again, the official documentation explains the basics of using listAccountSas.  There is even an example template.  What I want to see is how I would really use listAccountSas as part of an ARM template, not just to get the output value of the function itself.

For example, one case where I needed to create and use an Azure Storage account SAS was when setting up the Linux Diagnostic extension on a Virtual Machine Scale Set (VMSS) as part of an Azure Service Fabric cluster.  To configure the diagnostic extension, you need to provide a storage account name and SAS token.  As previously mentioned, the common way to accomplish this was to create the storage account and SAS first, and then provide those values as input parameters to the ARM template.

No more, I say!

Using the listAccountSas function, it is possible to create and use the SAS from within the template.  If needed, you can create the necessary storage account from within the same template as well.  In the case of the diagnostic extension, the configuration would be as follows:

"protectedSettings": {
      "storageAccountName": "[variables('applicationDiagnosticsStorageAccountName')]",
      "storageAccountEndPoint": "https://core.windows.net/",
      "storageAccountSasToken": "[listAccountSas(variables('applicationDiagnosticsStorageAccountName'), '2018-02-01', 
parameters('applicationDiagnosticsStorageAccountSasProperties')).accountSasToken]",
      "sinksConfig": {}
}
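
If the storage account is created in the same template, the resource definition itself is nothing special.  A sketch along these lines would do (the SKU, kind, and API version here are assumptions); just remember to add the storage account to the VMSS resource’s dependsOn list so the account exists before the SAS is requested:

{
  "type": "Microsoft.Storage/storageAccounts",
  "name": "[variables('applicationDiagnosticsStorageAccountName')]",
  "apiVersion": "2018-02-01",
  "location": "[resourceGroup().location]",
  "sku": {
    "name": "Standard_LRS"
  },
  "kind": "StorageV2"
}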

Summary

I’m very pleased to see listAccountSas added as a resource function for ARM templates.  This makes fully automated deployments for Azure resources much easier to develop and maintain.

Finally, I owe a special “thank you” to Robin for reviewing this post and helping to make it better.


How to Set Up the NVIDIA Driver on an NV-Series Azure VM

I recently had the opportunity to assist on a project where a partner was using N-Series Azure VMs.  My part of this effort was developing a script to automate the setup of the VMs.  The setup and configuration were done with an ARM template, which kept things consistent with several other ARM templates used for other parts of the project.

Setting up Azure VMs using ARM templates is common, and there are many articles, blog posts, and sample templates available to help get started.  That part isn’t especially interesting.  The interesting part, at least for me, was the N-Series aspect.  N-Series VMs require a separate step to install the NVIDIA driver in order to take advantage of the GPU capabilities of the VM.  There are instructions on how to install the driver, but those instructions assume you’re willing to remote into each VM you create and run the installation program by hand.  That’s tolerable a few times.  Any more than that, and it’s time for automation.

The v370.12 driver (the version currently linked from the Azure documentation page) uses a self-extracting file that first extracts the setup components to a directory and then executes the setup program.  By scouring a few other blogs on silent NVIDIA driver installs, I was able to piece together the switches needed to make the installation program run silently.

> 370.12_grid_win8_win7_server2012R2_server2008R2_64bit_international.exe -s -noreboot -clean

This tells the installation program to install silently, to not perform a reboot after the installation is complete, and to perform a clean install (restores all NVIDIA settings to the default values).

Now I need to work that into a PowerShell script executed via a custom script extension.  By doing so, I can let ARM do its thing, provisioning the VM and related resources (NIC, virtual network, IP address, etc.), and then invoke a PowerShell script to install the NVIDIA driver.

The custom script extension will execute a few different steps:

  1. Download the NVIDIA driver setup file from Azure Blob storage. I put the setup file in blob storage to make sure this specific version is the one that gets used.
  2. Download a PowerShell script which executes the NVIDIA driver setup program with the parameters needed for a silent install.
  3. Wait for the installation program to finish.
  4. Force a reboot of the VM.

It should be noted that the driver installation and GPU detection can take a couple of minutes.

As you can see in the following snippets, the custom script extension and related PowerShell script are fairly trivial.

ARM Template Custom Script Extension

{
  "type": "extensions",
  "name": "CustomScriptExtension",
  "apiVersion": "2015-06-15",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[variables('vmName')]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.8",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [
        "[concat(variables('assetStorageUrl'), variables('scriptFileName'))]",
        "[concat(variables('assetStorageUrl'), variables('nvidiaDriverSetupName'))]"
      ]
    },
    "protectedSettings": {
      "commandToExecute": "[concat('powershell -ExecutionPolicy Unrestricted -File ', variables('scriptFileName'), ' ', variables('scriptParameters'))]",
      "storageAccountName": "[parameters('assetStorageAccountName')]",
      "storageAccountKey": "[listKeys(concat('Microsoft.Storage/storageAccounts/', parameters('assetStorageAccountName')), '2015-06-15').key1]"
    }
  }
}
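
One note: the scriptParameters variable isn’t shown above.  Since the PowerShell script below takes the driver setup file name as its only parameter, I would expect the variable to be composed along these lines (an assumption on my part, not the exact value from the template):

"scriptParameters": "[concat('-nvidiaDriverSetupPath ', variables('nvidiaDriverSetupName'))]"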

PowerShell script executed by the Custom Script Extension

<# Custom Script for Windows to install a file from Azure Storage #>
param(
    [string] $nvidiaDriverSetupPath
)

# ----- Silent install of NVidia driver -----
& ".\$nvidiaDriverSetupPath" -s -noreboot -clean

# ----- Sleep to allow the setup program to finish. -----
Start-Sleep -Seconds 120

# ----- NVidia driver installation requires a reboot. -----
Restart-Computer -Force

In this scenario, I also need to get the assets used by the custom script extension – the NVIDIA driver setup file and the PowerShell script that runs it – uploaded to Azure Blob storage.  That can easily be accomplished with the same PowerShell script used to deploy the ARM template.  That script performs the following tasks (a rough sketch follows the list):

  1. Create a new resource group
  2. Create a new storage account and container
  3. Upload the NVIDIA driver setup file and related PowerShell script to the newly created storage account
  4. Execute the ARM template
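
As a sketch, that deployment script could look something like the following, using the AzureRM and Azure.Storage cmdlets current at the time (the resource names, file names, and location here are placeholders, not the values from my project):

# ----- Create the resource group and a storage account for the deployment assets. -----
New-AzureRmResourceGroup -Name "nvidia-demo-rg" -Location "East US"
$storageAccount = New-AzureRmStorageAccount -ResourceGroupName "nvidia-demo-rg" -Name "nvidiademoassets" -Location "East US" -SkuName Standard_LRS

# ----- Upload the NVIDIA driver setup file and the custom script to a blob container. -----
$context = $storageAccount.Context
New-AzureStorageContainer -Name "assets" -Context $context
Set-AzureStorageBlobContent -File ".\370.12_grid_win8_win7_server2012R2_server2008R2_64bit_international.exe" -Container "assets" -Context $context
Set-AzureStorageBlobContent -File ".\install-nvidia-driver.ps1" -Container "assets" -Context $context

# ----- Deploy the ARM template. -----
New-AzureRmResourceGroupDeployment -ResourceGroupName "nvidia-demo-rg" -TemplateFile ".\azuredeploy.json" -TemplateParameterFile ".\azuredeploy.parameters.json"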

You can find the full ARM template, custom script, and deployment script on my GitHub project which accompanies this post.

To verify it all worked, I can RDP into the VM and check the driver installation.


What about unsigned drivers?

An earlier version of the NVIDIA driver, v369.95, was not digitally signed.  It was also provided as a ZIP file instead of an EXE (as v370.12 is).  To use this version of the driver, a few changes to the setup script are necessary.  First, the file contents need to be extracted.  That’s doable with a bit of PowerShell in the script executed via the custom script extension.  Getting around the lack of a digitally signed driver is a bit more . . . interesting.  If you were to install the driver manually, you would receive a prompt from Windows asking you to confirm that installing the driver is REALLY what you want.

[Image: Windows security prompt asking whether to install the NVIDIA driver software]

Completing the manual installation will result in a certificate installed to the VM’s Trusted Publisher certificate store.  The certificate can then be exported and saved to Azure Blob storage.

[Image: Certificate Manager showing the NVIDIA certificate in the VM's Trusted Publisher store]

I can use that certificate as part of the automated install process.  Using the certutil.exe program, it is possible to install the certificate into the Trusted Publisher store on a new VM.  This step can be included in the PowerShell script executed via the custom script extension.
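
Putting those two changes together, the additions to the custom script might look something like this (a sketch; the ZIP and certificate file names are placeholders, and Expand-Archive assumes PowerShell 5 or later is available on the VM):

# ----- Install the exported NVIDIA certificate into the Trusted Publisher store. -----
certutil.exe -addstore "TrustedPublisher" ".\nvidia-publisher.cer"

# ----- Extract the v369.95 driver setup files from the downloaded ZIP. -----
Expand-Archive -Path ".\nvidia-369.95-driver.zip" -DestinationPath ".\nvidia-driver" -Force

# The silent install then runs against the setup program inside the extracted directory,
# using the same -s -noreboot -clean switches as before.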

An example of this approach can be found at https://github.com/mcollier/setup-nvidia-drivers-azure-vm/tree/driver-369.95.

Alternative Approach

An alternative approach is to create a custom VM image with the necessary NVIDIA driver already installed.  The advantage of this approach is that you skip the custom script step, and you can bake in any additional software or configuration as needed.  However, any new VM deployed from such an image will still need a reboot after GPU detection on first startup.  The disadvantage is that you’re then accepting responsibility for keeping the image patched on a regular basis.  If you use an image provided by Microsoft, those images are patched regularly (often at least once per month).

Resources

Here are some resources which helped me in coming up with the solution presented above.