Friday 27 July 2018

Hosting a C# Webhook Listener in Azure

In previous blog entries we’ve been introduced to webhooks in Alma, and we’ve learned how to build and host a webhook listener in the public cloud using the AWS API Gateway and Lambda service. In this article, we will build a webhook listener in C# and host it on the Microsoft Azure cloud service.
As in previous examples, our listener will need to perform the following:
  • Answer a challenge from Alma
  • Validate the signature of incoming requests
  • Process events according to the event type
In this example, we will be processing webhooks that are fired by Alma when a job ends. If the job is of type export, we download the resulting file from the FTP server, extract the contents, and create a new file on Dropbox for each exported BIB. This workflow simulates a typical business scenario of additional processing that must happen outside of Alma for records meeting certain criteria.

Getting Started

A webhook listener is simply a REST endpoint which accepts requests at a particular URL. To build a REST endpoint in .NET, we can use the ASP.NET Web API project type. Using Visual Studio 2015 (any edition), click File -> New Project and select ASP.NET Web Application.
Azure Webhooks - New Project
Click OK, and then select Web API in the next window.
Azure Webhooks - Web API
Now add a WebhooksController.cs file to the Controllers folder. We’re ready to begin writing our listener code.

Answer Challenge

When registering a new webhook integration profile, Alma needs to verify that there is a valid listener at that URL which expects to receive requests. To do so, Alma sends a GET request with a challenge parameter. It expects our listener to respond with the provided challenge. So we create a new method in our controller and specify HttpGet as the method attribute. The method creates a simple dynamic object and returns the object to the calling application:
[HttpGet]
public IHttpActionResult Challenge(string challenge)
{
   dynamic response = new ExpandoObject();
   response.challenge = challenge;
   return Ok(response);
}

Process requests

Alma sends webhook events as POST requests to our listener. So we create a new method in our controller and specify HttpPost as the method attribute. The method accepts a single parameter: a parsed JSON object (JObject). We tell ASP.NET to populate the parameter from the request body with the FromBody attribute.
[HttpPost]
public async Task<IHttpActionResult> ProcessWebhook([FromBody]JObject body)
Now we’re ready to validate the signature and process the event.

Validate Signature

We extract the signature value from the X-Exl-Signature header. Then we compute an HMAC-SHA256 hash of the body using the shared secret configured in Alma and compare it to the value received in the header. If the values don’t match, we return an unauthorized error code, since the message may have been tampered with.
string signature = 
   Request.Headers.GetValues("X-Exl-Signature").First();
if (!ValidateSignature(
      body.ToString(Newtonsoft.Json.Formatting.None),
      signature,
      ConfigurationManager.AppSettings["WebhookSecret"])
   )
{
   return Unauthorized();
}
The ValidateSignature method creates an HMAC-SHA256 hash object initialized with the shared secret, uses it to hash the message body, and then converts the hash to base64.
private bool ValidateSignature(string body, 
   string signature, string secret)
{
   var hash = new System.Security.Cryptography.HMACSHA256(
      Encoding.ASCII.GetBytes(secret));
   var computedSignature = Convert.ToBase64String(
      hash.ComputeHash(Encoding.ASCII.GetBytes(body)));
   return computedSignature == signature;
}

Processing the Event

We now retrieve the webhook action from the body. Based on the action type, we call a method which processes that type of event. The method is available on a WebhookHandler model.
string action = body["action"].ToString();
switch (action.ToLower())
{
   case "job_end":
      WebhookHandler handler = new WebhookHandler();
      await handler.JobEnd(body["job_instance"]);
      return Ok();
   default:
      return BadRequest();
}

Performing the application logic

In our simulation of real business logic, we will be processing bibliographic export jobs. We will download the exported file from the FTP server, extract the bibliographic records, transform each record using XSLT, and upload a new file per record to our Dropbox account.
We use Newtonsoft’s JSON LINQ syntax to extract the filename from the job instance. The job instance includes a number of counter values. We’re looking for the one which has a type of “c.jobs.bibExport.link”. The LINQ syntax to find that counter is as follows:
jobInstance.SelectToken("$.counter[?(@.type.value=='c.jobs.bibExport.link')]");
Assuming the job instance includes an export file, we download the file from the configured FTP site. We then parse the XML file and extract each record. For each record, we perform an XSLT transformation, and then use the Dropbox API to upload the resulting files to our Dropbox account. 
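To make this concrete, here is a condensed sketch of what the WebhookHandler.JobEnd method might look like. This is a sketch, not the full implementation: the XSLT step and error handling are elided, the configuration key names FtpUser, FtpPassword, and DropboxToken are illustrative, and the Dropbox target path is arbitrary. See the GitHub repository linked below for the complete code.
using System.Configuration;
using System.IO;
using System.Linq;
using System.Net;
using System.Text;
using System.Threading.Tasks;
using System.Xml.Linq;
using Newtonsoft.Json.Linq;
using Dropbox.Api;
using Dropbox.Api.Files;

public class WebhookHandler
{
   public async Task JobEnd(JToken jobInstance)
   {
      // Find the counter holding the exported file name; null for non-export jobs.
      var counter = jobInstance.SelectToken(
         "$.counter[?(@.type.value=='c.jobs.bibExport.link')]");
      if (counter == null) return;
      string filename = counter["value"].ToString();

      // Download the exported file from the configured FTP site.
      var ftp = new WebClient
      {
         Credentials = new NetworkCredential(
            ConfigurationManager.AppSettings["FtpUser"],
            ConfigurationManager.AppSettings["FtpPassword"])
      };
      byte[] data = ftp.DownloadData("ftp://"
         + ConfigurationManager.AppSettings["FtpHost"] + "/"
         + ConfigurationManager.AppSettings["FtpDir"] + filename);

      // Extract each exported record and upload one file per record to Dropbox.
      var doc = XDocument.Load(new MemoryStream(data));
      using (var dbx = new DropboxClient(
         ConfigurationManager.AppSettings["DropboxToken"]))
      {
         int i = 0;
         foreach (var record in doc.Descendants()
            .Where(e => e.Name.LocalName == "record"))
         {
            // In the real workflow each record is transformed with XSLT first.
            var ms = new MemoryStream(Encoding.UTF8.GetBytes(record.ToString()));
            await dbx.Files.UploadAsync("/export/record" + (++i) + ".xml",
               WriteMode.Overwrite.Instance, body: ms);
         }
      }
   }
}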

Deploying to Azure

We can test our listener locally by posting messages to the webservice from any REST client (such as Postman or the Advanced REST Client). Once the service is working correctly, we want to publish it to Azure. We log in to the Azure portal and select New, then choose Web App. After providing a unique name and accepting the defaults for the other settings, Azure deploys our new, empty web application. 
Azure Webhooks - Overview

Application Settings

Some of our application settings are stored in the web.config file. These include the host and directory for the FTP site from which we download the exported file. However, there are other settings which should not be stored in source control, including the FTP username and password and the Dropbox token. We store those in a separate file that we’ve called web.secrets.config. We add a reference to that file from our web.config:
<appSettings file="Web.secrets.config">
   <add key="FtpHost" value="ftp.exlibris-usa.com" />
   <add key="FtpDir" value="public/TR_INTEGRATION_INST/export/" />
</appSettings>
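The secrets file itself is just another appSettings fragment. Here is a minimal sketch; the key names other than WebhookSecret (which we used when validating the signature) are illustrative:
<appSettings>
   <add key="WebhookSecret" value="..." />
   <add key="FtpUser" value="..." />
   <add key="FtpPassword" value="..." />
   <add key="DropboxToken" value="..." />
</appSettings>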
Since that file doesn’t get deployed to our web app, we need to set those values in another way. Azure provides the ability to set application settings in the portal. The settings configured there override any values in the various configuration files.
Azure Webhooks - Settings


Publish from Visual Studio

You can use Git to publish an application to Azure, or you can use the tooling provided within Visual Studio. Under the Project menu, select the Publish to Azure option. Log in with a valid Azure account, and select the web app we created in the previous section. The application is deployed and is available at the custom URL. We test the application by performing a GET request with a challenge parameter and validating that our service has echoed back the challenge.

Putting it together

Now that our application is deployed, we are ready to configure Alma. Create a new Webhook integration profile and provide the URL of our service hosted on Azure. Specify a secret for the signature, being sure to use the same secret configured in our application settings. Specify JSON as the format, and save the integration profile.
Whenever a job finishes, Alma will call out to our webhook on Azure. We can test this by running a job within Alma and watching the real time log stream in the Azure portal. We see the challenge being logged when we register the webhook integration profile, and we see the job event coming in when the job we ran completes.
Azure Webhooks - Log

Leveraging the public cloud is a cost-effective way to deploy services which extend Alma’s core functionality. The cloud allows us to focus on the desired functionality without the need for locally-hosted infrastructure. Given the choices in the market today, there are good options for any development stack.
All of the code for this example is available in this GitHub repository.

How to use Azure Storage without SDK

Important Note (updated June 2018):
You can now access Azure blobs with an Azure AD token instead of the shared key described below. (You can simply run backend integrations using Managed Service Identity (MSI) or an Azure AD service principal.)

See this announcement on the team blog.
==========
If you want to get, add, and update objects in Azure Storage (Blob, Table, Queue, Files), you can of course manipulate these objects using the Azure SDKs (Node.js, .NET, PHP, Python, Java, etc.).
The easiest way to access Azure Storage programmatically is to use these Azure SDK libraries, but what if we encounter a case where we cannot depend on them? For example, a programming language for which there is no Azure SDK library, a distribution issue due to library size or other reasons, etc…

Calling REST APIs

In such a case, you can call the REST API directly. The REST API provides all fundamental operations against Azure Storage, and the Azure SDKs themselves call these REST APIs under the hood.
MSDN : Azure Storage Services REST API Reference
https://msdn.microsoft.com/library/azure/dd179355
For example, if you want to get (download) the blob (which is https://tsmatsuzsttest0001.blob.core.windows.net/container01/tmp.txt), you just send the following HTTP request.
GET https://tsmatsuzsttest0001.blob.core.windows.net/container01/tmp.txt
User-Agent: Test Client
x-ms-version: 2015-07-08
x-ms-date: Tue, 05 Jul 2016 06:48:26 GMT
Authorization: SharedKey tsmatsuzsttest0001:{shared key}
Host: tsmatsuzsttest0001.blob.core.windows.net
It’s very simple!
But one pain point is: how do we get the shared key? I explain that in the following sections.

How to create shared key (signature) using access key

The shared key is a signature derived (computed) from a symmetric key called the “Storage Access Key”, and you can get this key from the Azure Portal.
With Microsoft technologies, this kind of computed signature is almost always the base64-encoded string of an HMAC using the SHA256 algorithm (see Azure AD, Power BI Embedded, Azure DocumentDB, Azure Batch services, etc.).
The Azure Storage shared key (signature) follows the same pattern!
The following programming examples show how to compute this signature (shared key).
We assume that the access key is “93K17Co74T…” as follows.
PHP
<?php
$accesskey = "93K17Co74T2lDHk2rA+wmb/avIAS6u6lPnZrk2hyT+9+aov82qNhrcXSNGZCzm9mjd4d75/oxxOr6r1JVpgTLA==";
$inputvalue = . . . (show you later);

// create base64 encoded signature
$hash = hash_hmac('sha256',
  $inputvalue,
  base64_decode($accesskey),
  true);
$sig = base64_encode($hash);

// show result
echo $sig;
?>
Node.js (JavaScript)
var http = require('http');
var crypto = require("crypto");

http.createServer(function (req, res) {
  var accesskey = "93K17Co74T2lDHk2rA+wmb/avIAS6u6lPnZrk2hyT+9+aov82qNhrcXSNGZCzm9mjd4d75/oxxOr6r1JVpgTLA==";
  var inputvalue = . . . (show you later);

  // create base64 encoded signature
  var key = new Buffer(accesskey, "base64");
  var hmac = crypto.createHmac("sha256", key);
  hmac.update(inputvalue);
  var sig = hmac.digest("base64");

  // show result
  res.writeHead(200,
    { 'Content-Type': 'text/plain; charset=utf-8' });
  res.write(sig);
  res.end();
}).listen(8000);
C# (.NET)
using System;
using System.Text;
using System.Security.Cryptography;

static void Main(string[] args)
{
  var accesskey = "93K17Co74T2lDHk2rA+wmb/avIAS6u6lPnZrk2hyT+9+aov82qNhrcXSNGZCzm9mjd4d75/oxxOr6r1JVpgTLA==";
  var inputvalue = . . . (show you later);

  // create base64 encoded signature
  var hmac = new HMACSHA256();
  hmac.Key = Convert.FromBase64String(accesskey);
  byte[] sigbyte = hmac.ComputeHash(Encoding.UTF8.GetBytes(inputvalue));
  var sig = Convert.ToBase64String(sigbyte);

  // show result
  Console.WriteLine(sig);
  Console.ReadLine(); // Wait
}
As you can see, an input value (the value to be signed) is needed for signing. This input value differs for each service (Azure Storage Services, Azure DocumentDB, Azure Batch Services, etc.), and here I explain the case of Azure Storage Services.

How to create input value (challenge value) for Azure Storage

Next I show you how to build the input value for Azure Storage Services.
The input value (challenge value) is the byte sequence of a UTF-8 string constructed from the HTTP request envelope of the REST call.
The format is as follows. (Please be sure to use “\n” as the line feed character, not “\r\n”.)
Notice: I will explain the details of {Canonicalized Header String} and {Canonicalized Resource String} below.
{HTTP VERB}\n
{Header value of Content-Encoding}\n
{Header value of Content-Language}\n
{Header value of Content-Length}\n
{Header value of Content-MD5}\n
{Header value of Content-Type}\n
{Header value of Date}\n
{Header value of If-Modified-Since}\n
{Header value of If-Match}\n
{Header value of If-None-Match}\n
{Header value of If-Unmodified-Since}\n
{Header value of Range}\n
{Canonicalized Header String (repeated)}\n
{Canonicalized Resource String of URI path}\n
{Canonicalized Resource String of query parameters (repeated)}
The canonicalized headers are the HTTP headers which start with “x-ms-” (x-ms-date, x-ms-version, etc.).
The canonicalized resource for the URI path has the form /{storage account name}/{container}/{blob}, separated by slashes (/), when you access the URI “https://{storage account}.blob.core.windows.net/{container}/{blob}”.
The canonicalized resource for each query parameter has the form {parameter name}:{parameter value}, separated by a colon (:).
For instance, assume the following HTTP request for a REST call.
Notice: The following request would fail, because it includes both If-Modified-Since and If-Match. (These headers are not supported together in this REST API request.) Sorry, but this is just a sample for your understanding.
PUT https://test01storage.blob.core.windows.net/container01/tmp.txt?timeout=20&paramtest=value1
User-Agent: Test Client
x-ms-version: 2015-07-08
Content-Type: text/plain; charset=UTF-8
Content-Language: ja
Content-Encoding: gzip
Content-MD5: aQI49bNvDYLLD0DrOMtETw==
x-ms-blob-type: BlockBlob
x-ms-client-request-id: 80f5bd4a-56ed-4ffa-9d04-afd73fda5c9c
x-ms-date: Tue, 05 Jul 2016 01:46:24 GMT
If-Match: etg23vfj
If-Modified-Since: Mon, 27 Jul 2016 01:46:24 GMT
Host: tsmatsuzsttest0001.blob.core.windows.net
Content-Length: 3000

. . . Body (byte) . . .
Then the input value (challenge value) is the bytes of the following string. (Please be sure to use “\n” as the line feed character.)
PUT
gzip
ja
3000
aQI49bNvDYLLD0DrOMtETw==
text/plain; charset=UTF-8

Mon, 27 Jul 2016 01:46:24 GMT
etg23vfj



x-ms-blob-type:BlockBlob
x-ms-client-request-id:80f5bd4a-56ed-4ffa-9d04-afd73fda5c9c
x-ms-date:Tue, 05 Jul 2016 01:46:24 GMT
x-ms-version:2015-07-08
/test01storage/container01/tmp.txt
paramtest:value1
timeout:20
As you can see, this mechanism prevents malicious users from changing the HTTP request without the access key, because the signature would have to be recomputed whenever the HTTP request envelope (bytes) changes.
Notice: You must also take care with the “Date” header. Web proxies often change the “Date” header in the HTTP request, and as a result the REST API call would fail because the signature becomes invalid. It’s better to use the “x-ms-date” header instead of the “Date” header when you use the REST API.
There are several other notes (please refer to “MSDN : Authentication for the Azure Storage Services” for details; a small code sketch of the header rules follows this list):
  • Each query parameter name and value must be URL-decoded.
  • Sort the canonicalized headers lexicographically by header name.
  • The characters of a canonicalized header name (x-ms-…) must all be lowercase. (The canonicalized header value, by contrast, may include uppercase characters.)
  • Replace any breaking white space in a canonicalized header value with a single space.
  • Trim (left and right) each canonicalized header value.
  • If the API version is prior to 2015-02-21 and Content-Length is blank, the Content-Length should be zero (0).
  • If the same query parameter is repeated, sort all its values lexicographically and include them as a comma-separated list.
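To make the header rules concrete, here is a rough C# sketch (my own helper, not part of any SDK; it assumes the input dictionary holds the request headers) that builds the canonicalized header string according to the notes above:
C# (.NET)
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Text.RegularExpressions;

static string CanonicalizeHeaders(IDictionary<string, string> headers)
{
  // Keep only x-ms-* headers, lowercase the names, and sort lexicographically.
  var sb = new StringBuilder();
  foreach (var h in headers
    .Where(h => h.Key.StartsWith("x-ms-", StringComparison.OrdinalIgnoreCase))
    .OrderBy(h => h.Key.ToLowerInvariant(), StringComparer.Ordinal))
  {
    // Fold any run of white space in the value to a single space, then trim.
    string value = Regex.Replace(h.Value, @"\s+", " ").Trim();
    sb.Append(h.Key.ToLowerInvariant()).Append(':').Append(value).Append('\n');
  }
  return sb.ToString(); // e.g. "x-ms-date:...\nx-ms-version:2015-07-08\n"
}
Calling this with the x-ms-* headers of the sample request above yields exactly the canonicalized header block of the challenge string.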

Programming Examples (PHP, JavaScript, C#)

Here I show programming examples that compute the shared key.
Assume that we want to issue the following HTTP request (REST call).
GET https://tsmatsuzsttest0001.blob.core.windows.net/container01/tmp.txt
User-Agent: Test Client
x-ms-version: 2015-07-08
x-ms-client-request-id: 9251fa41-0ca4-4558-84ac-44ab027b8f1e
x-ms-date: Tue, 05 Jul 2016 06:48:26 GMT
Host: tsmatsuzsttest0001.blob.core.windows.net
In this case, you can compute the shared key (base64-encoded signature) using the following programming examples. (Please replace the access key with your own.)
PHP
<?php
$accesskey = "93K17Co74T2lDHk2rA+wmb/avIAS6u6lPnZrk2hyT+9+aov82qNhrcXSNGZCzm9mjd4d75/oxxOr6r1JVpgTLA==";

// construct input value
$inputvalue = "GET\n" . /*VERB*/
  "\n" . /*Content-Encoding*/
  "\n" . /*Content-Language*/
  "\n" . /*Content-Length*/
  "\n" . /*Content-MD5*/
  "\n" . /*Content-Type*/
  "\n" . /*Date*/
  "\n" . /*If-Modified-Since*/
  "\n" . /*If-Match*/
  "\n" . /*If-None-Match*/
  "\n" . /*If-Unmodified-Since*/
  "\n" . /*Range*/
  "x-ms-client-request-id:9251fa41-0ca4-4558-84ac-44ab027b8f1e\n" .
  "x-ms-date:Tue, 05 Jul 2016 06:48:26 GMT\n" .
  "x-ms-version:2015-07-08\n" .
  "/tsmatsuzsttest0001/container01/tmp.txt";

// create base64 encoded signature
$hash = hash_hmac('sha256',
  $inputvalue,
  base64_decode($accesskey),
  true);
$sig = base64_encode($hash);

// show result
echo $sig;
?>
Node.js (JavaScript)
var http = require('http');
var crypto = require("crypto");

http.createServer(function (req, res) {
  var accesskey = "93K17Co74T2lDHk2rA+wmb/avIAS6u6lPnZrk2hyT+9+aov82qNhrcXSNGZCzm9mjd4d75/oxxOr6r1JVpgTLA==";

  // construct input value
  var inputvalue = "GET\n" + /*VERB*/
    "\n" + /*Content-Encoding*/
    "\n" + /*Content-Language*/
    "\n" + /*Content-Length*/
    "\n" + /*Content-MD5*/
    "\n" + /*Content-Type*/
    "\n" + /*Date*/
    "\n" + /*If-Modified-Since*/
    "\n" + /*If-Match*/
    "\n" + /*If-None-Match*/
    "\n" + /*If-Unmodified-Since*/
    "\n" + /*Range*/
    "x-ms-client-request-id:9251fa41-0ca4-4558-84ac-44ab027b8f1e\n" +
    "x-ms-date:Tue, 05 Jul 2016 06:48:26 GMT\n" +
    "x-ms-version:2015-07-08\n" +
    "/tsmatsuzsttest0001/container01/tmp.txt";

  // create base64 encoded signature
  var key = new Buffer(accesskey, "base64");
  var hmac = crypto.createHmac("sha256", key);
  hmac.update(inputvalue);
  var sig = hmac.digest("base64");

  // show result
  res.writeHead(200,
    { 'Content-Type': 'text/plain; charset=utf-8' });
  res.write(sig);
  res.end();
}).listen(8000);
C# (.NET)
using System;
using System.Text;
using System.Security.Cryptography;

static void Main(string[] args)
{
  var accesskey = "93K17Co74T2lDHk2rA+wmb/avIAS6u6lPnZrk2hyT+9+aov82qNhrcXSNGZCzm9mjd4d75/oxxOr6r1JVpgTLA==";

  // construct input value
  var inputvalue = "GET\n" + /*VERB*/
    "\n" + /*Content-Encoding*/
    "\n" + /*Content-Language*/
    "\n" + /*Content-Length*/
    "\n" + /*Content-MD5*/
    "\n" + /*Content-Type*/
    "\n" + /*Date*/
    "\n" + /*If-Modified-Since*/
    "\n" + /*If-Match*/
    "\n" + /*If-None-Match*/
    "\n" + /*If-Unmodified-Since*/
    "\n" + /*Range*/
    "x-ms-client-request-id:9251fa41-0ca4-4558-84ac-44ab027b8f1e\n" +
    "x-ms-date:Tue, 05 Jul 2016 06:48:26 GMT\n" +
    "x-ms-version:2015-07-08\n" +
    "/tsmatsuzsttest0001/container01/tmp.txt";

  // create base64 encoded signature
  var hmac = new HMACSHA256();
  hmac.Key = Convert.FromBase64String(accesskey);
  byte[] sigbyte = hmac.ComputeHash(Encoding.UTF8.GetBytes(inputvalue));
  var sig = Convert.ToBase64String(sigbyte);

  // show result
  Console.WriteLine(sig);
  Console.ReadLine(); // Wait
}
This example returns “sGX7uEBy8i9ldZtx8nLDeD3vX3AI/LB/3msK0oL7oMI=”.
As a result, you must set the authorization header as follows.
GET https://tsmatsuzsttest0001.blob.core.windows.net/container01/tmp.txt
User-Agent: Test Client
x-ms-version: 2015-07-08
x-ms-client-request-id: 9251fa41-0ca4-4558-84ac-44ab027b8f1e
x-ms-date: Tue, 05 Jul 2016 06:48:26 GMT
Authorization: SharedKey tsmatsuzsttest0001:sGX7uEBy8i9ldZtx8nLDeD3vX3AI/LB/3msK0oL7oMI=
Host: tsmatsuzsttest0001.blob.core.windows.net
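Putting this all together in C#: the following is my own minimal sketch that computes the signature for the current time and sends the GET request with HttpClient. (The account name, key, and blob path are the same sample values used above.)
C# (.NET)
using System;
using System.Net.Http;
using System.Security.Cryptography;
using System.Text;

static void Main(string[] args)
{
  var account = "tsmatsuzsttest0001";
  var accesskey = "93K17Co74T2lDHk2rA+wmb/avIAS6u6lPnZrk2hyT+9+aov82qNhrcXSNGZCzm9mjd4d75/oxxOr6r1JVpgTLA==";
  var path = "/container01/tmp.txt";
  var msDate = DateTime.UtcNow.ToString("R"); // RFC 1123, e.g. "Tue, 05 Jul 2016 06:48:26 GMT"

  // construct input value (11 empty standard header fields for a plain GET)
  var inputvalue = "GET\n\n\n\n\n\n\n\n\n\n\n\n" +
    "x-ms-date:" + msDate + "\n" +
    "x-ms-version:2015-07-08\n" +
    "/" + account + path;

  // create base64 encoded signature
  var hmac = new HMACSHA256(Convert.FromBase64String(accesskey));
  var sig = Convert.ToBase64String(
    hmac.ComputeHash(Encoding.UTF8.GetBytes(inputvalue)));

  // send the request with the Authorization header
  using (var client = new HttpClient())
  {
    var req = new HttpRequestMessage(HttpMethod.Get,
      "https://" + account + ".blob.core.windows.net" + path);
    req.Headers.Add("x-ms-date", msDate);
    req.Headers.Add("x-ms-version", "2015-07-08");
    req.Headers.TryAddWithoutValidation("Authorization",
      "SharedKey " + account + ":" + sig);
    var res = client.SendAsync(req).Result;
    Console.WriteLine(res.StatusCode);
    Console.WriteLine(res.Content.ReadAsStringAsync().Result);
  }
}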

Using Shared Key Lite

You can also use a lighter-weight input value when you use the signature type called “Shared Key Lite”. This type of shared key uses the following input value.
Blob, Queue
{HTTP VERB}\n
{Header value of Content-MD5}\n
{Header value of Content-Type}\n
{Header value of Date}\n
{Canonicalized Header String (repeated)}\n
{Canonicalized Resource String of URI path}\n
{Canonicalized Resource String of query parameters}
Table
{HTTP VERB}\n
{Header value of Content-MD5}\n
{Header value of Content-Type}\n
{Header value of Date}\n
{Canonicalized Resource String of URI path}\n
{Canonicalized Resource String of query parameters}
The programming approach is the same, and you must set the Authorization header as follows (use “SharedKeyLite” instead of “SharedKey”).
Authorization: SharedKeyLite {account name}:{signature}
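For instance, the Shared Key Lite input value for the same GET request used earlier would be the following. (This is my own sketch following the Blob/Queue format above; the three blank lines are the empty Content-MD5, Content-Type, and Date values.)
GET



x-ms-client-request-id:9251fa41-0ca4-4558-84ac-44ab027b8f1e
x-ms-date:Tue, 05 Jul 2016 06:48:26 GMT
x-ms-version:2015-07-08
/tsmatsuzsttest0001/container01/tmp.txt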

Calling REST API using Shared Access Signature (SAS) URI

You can also use a URI-based signature called a Shared Access Signature (SAS), as follows.
This uses only the URI query string and doesn’t need specific HTTP header values (Authorization header, etc.) for secure communication. You can get this URI from the Azure Portal (by pushing the “Generate SAS” button) and just put it in your code.
This is very portable!
This URI is used for “sharing” a resource with other users, but it has an expiration (it is not permanent).
For example, assume that we get the following URI. (Operations using Shared Access Signature URIs should only be performed over an HTTPS connection.)
https://tsmatsuzsttest0001.blob.core.windows.net/?sv=2015-04-05&ss=bfqt&srt=sco&sp=rwdlacup&se=2016-07-08T04:41:20Z&st=2016-06-29T04:41:20Z&spr=https&sig={signature}
Each query parameter (sv, ss, srt, etc.) means the following. (Please see “MSDN : Constructing an Account SAS” for details.)
  • sv: (signed) API version
  • ss: (signed) service (b=blob, f=files, q=queue, t=table)
  • srt: (signed) resource types (s=service, c=container, o=blob object)
  • sp: (signed) permissions (r=read, w=write, d=delete, l=list, a=add, etc.)
  • se: (signed) expiry time
  • st: (signed) start time
  • spr: (signed) protocol
  • sip: (signed) allowed IP addresses
This signature expires when the expiry time arrives, and you can also compute this Shared Access Signature (SAS) in your own code, in the same manner as the shared key explained previously.
The input value (challenge) is the following.
{account name}\n
{signed permissions}\n
{signed service}\n
{signed resource type}\n
{signed start time}\n
{signed expire time}\n
{signed allowed ip addresses}\n
{signed protocol}\n
{signed version}\n
Notice: There are service-level SAS and account-level SAS, and the signed input value differs between them. In this example, we are talking about the account-level SAS. (For the service-level SAS, please see “MSDN : Constructing a Service SAS”.)
For instance, you can compute the signature using the following programming example (PHP).
<?php
$accesskey = "93K17Co74T2lDHk2rA+wmb/avIAS6u6lPnZrk2hyT+9+aov82qNhrcXSNGZCzm9mjd4d75/oxxOr6r1JVpgTLA==";

// construct input value
$inputvalue = "tsmatsuzsttest0001\n" . /* account name */
  "rwdlacup\n" . /* signed permissions */
  "bfqt\n" . /* signed service */
  "sco\n" . /* signed resource type */
  "2016-06-29T04:41:20Z\n" . /* signed start time */
  "2016-07-08T04:41:20Z\n" . /* signed expire time */
  "\n" . /* signed ip */
  "https\n" . /* signed protocol */
  "2015-04-05\n"; /* signed version */

// create base64 encoded signature
$hash = hash_hmac('sha256',
  $inputvalue,
  base64_decode($accesskey),
  true);
$sig = base64_encode($hash);

// show result
echo $sig;
?>
It returns “+XuDjuLE1Sv/FrJTLz8YjsaDukWNTKX7e8G8Ew+5aps=”. As a result, the complete SAS URI is the following. (The signature must be URL-encoded.)
https://tsmatsuzsttest0001.blob.core.windows.net/?sv=2015-04-05&ss=bfqt&srt=sco&sp=rwdlacup&se=2016-07-08T04:41:20Z&st=2016-06-29T04:41:20Z&spr=https&sig=%2BXuDjuLE1Sv%2FFrJTLz8YjsaDukWNTKX7e8G8Ew%2B5aps%3D
As you can see, this type of signature (account-level SAS) does not include resource data, so you can also get a blob using the same signature, as follows. (This example fetches “tmp.txt” in the container named “container01”.)
https://tsmatsuzsttest0001.blob.core.windows.net/container01/tmp.txt?sv=2015-04-05&ss=bfqt&srt=sco&sp=rwdlacup&se=2016-07-08T04:41:20Z&st=2016-06-29T04:41:20Z&spr=https&sig=%2BXuDjuLE1Sv%2FFrJTLz8YjsaDukWNTKX7e8G8Ew%2B5aps%3D
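For completeness, here is the same account-level SAS signature computed in C#, as a direct port of the PHP example above (same sample values):
C# (.NET)
using System;
using System.Security.Cryptography;
using System.Text;

static void Main(string[] args)
{
  var accesskey = "93K17Co74T2lDHk2rA+wmb/avIAS6u6lPnZrk2hyT+9+aov82qNhrcXSNGZCzm9mjd4d75/oxxOr6r1JVpgTLA==";

  // construct input value (same fields and order as the PHP example)
  var inputvalue = "tsmatsuzsttest0001\n" + /* account name */
    "rwdlacup\n" + /* signed permissions */
    "bfqt\n" + /* signed service */
    "sco\n" + /* signed resource type */
    "2016-06-29T04:41:20Z\n" + /* signed start time */
    "2016-07-08T04:41:20Z\n" + /* signed expire time */
    "\n" + /* signed ip */
    "https\n" + /* signed protocol */
    "2015-04-05\n"; /* signed version */

  // create base64 encoded signature
  var hmac = new HMACSHA256(Convert.FromBase64String(accesskey));
  var sig = Convert.ToBase64String(
    hmac.ComputeHash(Encoding.UTF8.GetBytes(inputvalue)));

  // url-encode before using it as the sig query parameter
  Console.WriteLine(Uri.EscapeDataString(sig)); // matches the encoded value above
}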

Develop for Azure Files with .NET

This tutorial demonstrates the basics of using .NET to develop applications that use Azure Files to store file data. This tutorial creates a simple console application to perform basic actions with .NET and Azure Files:
  • Get the contents of a file
  • Set the quota (maximum size) for the file share.
  • Create a shared access signature (SAS key) for a file that uses a shared access policy defined on the share.
  • Copy a file to another file in the same storage account.
  • Copy a file to a blob in the same storage account.
  • Use Azure Storage Metrics for troubleshooting
To learn more about Azure Files, see Introduction to Azure Files.
Tip
Check out the Azure Storage code samples repository
For easy-to-use end-to-end Azure Storage code samples that you can download and run, please check out our list of Azure Storage Samples.

Understanding the .NET APIs

Azure Files provides two broad approaches to client applications: Server Message Block (SMB) and REST. Within .NET, these approaches are abstracted by the System.IO and WindowsAzure.Storage APIs.
System.IO
Use when your application:
  • Needs to read/write files via SMB
  • Is running on a device that has access over port 445 to your Azure Files account
  • Doesn't need to manage any of the administrative settings of the file share
Notes: Coding file I/O with Azure Files over SMB is generally the same as coding I/O with any network file share or local storage device. See this tutorial for an introduction to a number of features in .NET, including file I/O.

WindowsAzure.Storage
Use when your application:
  • Can't access Azure Files via SMB on port 445 due to firewall or ISP constraints
  • Requires administrative functionality, such as the ability to set a file share's quota or create a shared access signature
Notes: This article demonstrates the usage of WindowsAzure.Storage for file I/O using REST (instead of SMB) and management of the file share.

Create the console application and obtain the assembly

In Visual Studio, create a new Windows console application. The following steps show you how to create a console application in Visual Studio 2017; the steps are similar in other versions of Visual Studio.
  1. Select File > New > Project
  2. Select Installed > Templates > Visual C# > Windows Classic Desktop
  3. Select Console App (.NET Framework)
  4. Enter a name for your application in the Name: field
  5. Select OK
All code examples in this tutorial can be added to the Main() method of your console application's Program.cs file.
You can use the Azure Storage Client Library in any type of .NET application, including an Azure cloud service or web app, and desktop and mobile applications. In this guide, we use a console application for simplicity.

Use NuGet to install the required packages

There are two packages you need to reference in your project to complete this tutorial:
  • Microsoft Azure Storage Client Library for .NET (WindowsAzure.Storage)
  • Microsoft Azure Configuration Manager for .NET (WindowsAzure.ConfigurationManager)
You can use NuGet to obtain both packages. Follow these steps:
  1. Right-click your project in Solution Explorer and choose Manage NuGet Packages.
  2. Search online for "WindowsAzure.Storage" and click Install to install the Storage Client Library and its dependencies.
  3. Search online for "WindowsAzure.ConfigurationManager" and click Install to install the Azure Configuration Manager.
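Alternatively, you can install both packages from the Package Manager Console:
Install-Package WindowsAzure.Storage
Install-Package WindowsAzure.ConfigurationManager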

Save your storage account credentials to the app.config file

Next, save your credentials in your project's app.config file. Edit the app.config file so that it appears similar to the following example, replacing myaccount with your storage account name, and mykey with your storage account key.
XML
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <startup>
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
    </startup>
    <appSettings>
        <add key="StorageConnectionString" value="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=StorageAccountKeyEndingIn==" />
    </appSettings>
</configuration>
Note
The latest version of the Azure storage emulator does not support Azure Files. Your connection string must target an Azure Storage Account in the cloud to work with Azure Files.

Add using directives

Open the Program.cs file from Solution Explorer, and add the following using directives to the top of the file.
C#
using Microsoft.Azure; // Namespace for Azure Configuration Manager
using Microsoft.WindowsAzure.Storage; // Namespace for Storage Client Library
using Microsoft.WindowsAzure.Storage.Blob; // Namespace for Azure Blobs
using Microsoft.WindowsAzure.Storage.File; // Namespace for Azure Files
The Microsoft Azure Configuration Manager Library for .NET provides a class for parsing a connection string from a configuration file. The CloudConfigurationManager class parses configuration settings regardless of whether the client application is running on the desktop, on a mobile device, in an Azure virtual machine, or in an Azure cloud service.
To reference the CloudConfigurationManager package, add the following using directive:
C#
using Microsoft.Azure; //Namespace for CloudConfigurationManager
Here's an example that shows how to retrieve a connection string from a configuration file:
C#
// Parse the connection string and return a reference to the storage account.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    CloudConfigurationManager.GetSetting("StorageConnectionString"));
Using the Azure Configuration Manager is optional. You can also use an API like the .NET Framework's ConfigurationManager class.
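For example, reading the same app.config key with the .NET Framework's ConfigurationManager might look like this (a sketch; it requires a reference to the System.Configuration assembly):
C#
using System.Configuration; // .NET Framework configuration API

// Parse the connection string from app.config without the Azure Configuration Manager.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    ConfigurationManager.AppSettings["StorageConnectionString"]);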

Access the file share programmatically

Next, add the following code to the Main() method (after the code shown above) to retrieve the connection string. This code gets a reference to the file we created earlier and outputs its contents to the console window.
C#
// Create a CloudFileClient object for credentialed access to Azure Files.
CloudFileClient fileClient = storageAccount.CreateCloudFileClient();

// Get a reference to the file share we created previously.
CloudFileShare share = fileClient.GetShareReference("logs");

// Ensure that the share exists.
if (share.Exists())
{
    // Get a reference to the root directory for the share.
    CloudFileDirectory rootDir = share.GetRootDirectoryReference();

    // Get a reference to the directory we created previously.
    CloudFileDirectory sampleDir = rootDir.GetDirectoryReference("CustomLogs");

    // Ensure that the directory exists.
    if (sampleDir.Exists())
    {
        // Get a reference to the file we created previously.
        CloudFile file = sampleDir.GetFileReference("Log1.txt");

        // Ensure that the file exists.
        if (file.Exists())
        {
            // Write the contents of the file to the console window.
            Console.WriteLine(file.DownloadTextAsync().Result);
        }
    }
}
Run the console application to see the output.

Set the maximum size for a file share

Beginning with version 5.x of the Azure Storage Client Library, you can set the quota (or maximum size) for a file share, in gigabytes. You can also check to see how much data is currently stored on the share.
By setting the quota for a share, you can limit the total size of the files stored on the share. If the total size of files on the share exceeds the quota set on the share, then clients will be unable to increase the size of existing files or create new files, unless those files are empty.
The example below shows how to check the current usage for a share and how to set the quota for the share.
C#
// Parse the connection string for the storage account.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    Microsoft.Azure.CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create a CloudFileClient object for credentialed access to Azure Files.
CloudFileClient fileClient = storageAccount.CreateCloudFileClient();

// Get a reference to the file share we created previously.
CloudFileShare share = fileClient.GetShareReference("logs");

// Ensure that the share exists.
if (share.Exists())
{
    // Check current usage stats for the share.
    // Note that the ShareStats object is part of the protocol layer for the File service.
    Microsoft.WindowsAzure.Storage.File.Protocol.ShareStats stats = share.GetStats();
    Console.WriteLine("Current share usage: {0} GB", stats.Usage.ToString());

    // Specify the maximum size of the share, in GB.
    // This line sets the quota to be 10 GB greater than the current usage of the share.
    share.Properties.Quota = 10 + stats.Usage;
    share.SetProperties();

    // Now check the quota for the share. Call FetchAttributes() to populate the share's properties.
    share.FetchAttributes();
    Console.WriteLine("Current share quota: {0} GB", share.Properties.Quota);
}

Generate a shared access signature for a file or file share

Beginning with version 5.x of the Azure Storage Client Library, you can generate a shared access signature (SAS) for a file share or for an individual file. You can also create a shared access policy on a file share to manage shared access signatures. Creating a shared access policy is recommended, as it provides a means of revoking the SAS if it should be compromised.
The following example creates a shared access policy on a share, and then uses that policy to provide the constraints for a SAS on a file in the share.
C#
// Parse the connection string for the storage account.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    Microsoft.Azure.CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create a CloudFileClient object for credentialed access to Azure Files.
CloudFileClient fileClient = storageAccount.CreateCloudFileClient();

// Get a reference to the file share we created previously.
CloudFileShare share = fileClient.GetShareReference("logs");

// Ensure that the share exists.
if (share.Exists())
{
    string policyName = "sampleSharePolicy" + DateTime.UtcNow.Ticks;

    // Create a new shared access policy and define its constraints.
    SharedAccessFilePolicy sharedPolicy = new SharedAccessFilePolicy()
        {
            SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24),
            Permissions = SharedAccessFilePermissions.Read | SharedAccessFilePermissions.Write
        };

    // Get existing permissions for the share.
    FileSharePermissions permissions = share.GetPermissions();

    // Add the shared access policy to the share's policies. Note that each policy must have a unique name.
    permissions.SharedAccessPolicies.Add(policyName, sharedPolicy);
    share.SetPermissions(permissions);

    // Generate a SAS for a file in the share and associate this access policy with it.
    CloudFileDirectory rootDir = share.GetRootDirectoryReference();
    CloudFileDirectory sampleDir = rootDir.GetDirectoryReference("CustomLogs");
    CloudFile file = sampleDir.GetFileReference("Log1.txt");
    string sasToken = file.GetSharedAccessSignature(null, policyName);
    Uri fileSasUri = new Uri(file.StorageUri.PrimaryUri.ToString() + sasToken);

    // Create a new CloudFile object from the SAS, and write some text to the file.
    CloudFile fileSas = new CloudFile(fileSasUri);
    fileSas.UploadText("This write operation is authorized via SAS.");
    Console.WriteLine(fileSas.DownloadText());
}
For more information about creating and using shared access signatures, see Using Shared Access Signatures (SAS) and Create and use a SAS with Azure Blobs.

Copy files

Beginning with version 5.x of the Azure Storage Client Library, you can copy a file to another file, a file to a blob, or a blob to a file. In the next sections, we demonstrate how to perform these copy operations programmatically.
You can also use AzCopy to copy one file to another or to copy a blob to a file or vice versa. See Transfer data with the AzCopy Command-Line Utility.
Note
If you are copying a blob to a file, or a file to a blob, you must use a shared access signature (SAS) to authorize access to the source object, even if you are copying within the same storage account.
Copy a file to another file
The following example copies a file to another file in the same share. Because this copy operation copies between files in the same storage account, you can use Shared Key authentication to perform the copy.
C#
// Parse the connection string for the storage account.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    Microsoft.Azure.CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create a CloudFileClient object for credentialed access to Azure Files.
CloudFileClient fileClient = storageAccount.CreateCloudFileClient();

// Get a reference to the file share we created previously.
CloudFileShare share = fileClient.GetShareReference("logs");

// Ensure that the share exists.
if (share.Exists())
{
    // Get a reference to the root directory for the share.
    CloudFileDirectory rootDir = share.GetRootDirectoryReference();

    // Get a reference to the directory we created previously.
    CloudFileDirectory sampleDir = rootDir.GetDirectoryReference("CustomLogs");

    // Ensure that the directory exists.
    if (sampleDir.Exists())
    {
        // Get a reference to the file we created previously.
        CloudFile sourceFile = sampleDir.GetFileReference("Log1.txt");

        // Ensure that the source file exists.
        if (sourceFile.Exists())
        {
            // Get a reference to the destination file.
            CloudFile destFile = sampleDir.GetFileReference("Log1Copy.txt");

            // Start the copy operation.
            destFile.StartCopy(sourceFile);

            // Write the contents of the destination file to the console window.
            Console.WriteLine(destFile.DownloadText());
        }
    }
}
Copy a file to a blob
The following example creates a file and copies it to a blob within the same storage account. The example creates a SAS for the source file, which the service uses to authorize access to the source file during the copy operation.
C#
// Parse the connection string for the storage account.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    Microsoft.Azure.CloudConfigurationManager.GetSetting("StorageConnectionString"));

// Create a CloudFileClient object for credentialed access to Azure Files.
CloudFileClient fileClient = storageAccount.CreateCloudFileClient();

// Create a new file share, if it does not already exist.
CloudFileShare share = fileClient.GetShareReference("sample-share");
share.CreateIfNotExists();

// Create a new file in the root directory.
CloudFile sourceFile = share.GetRootDirectoryReference().GetFileReference("sample-file.txt");
sourceFile.UploadText("A sample file in the root directory.");

// Get a reference to the blob to which the file will be copied.
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("sample-container");
container.CreateIfNotExists();
CloudBlockBlob destBlob = container.GetBlockBlobReference("sample-blob.txt");

// Create a SAS for the file that's valid for 24 hours.
// Note that when you are copying a file to a blob, or a blob to a file, you must use a SAS
// to authorize access to the source object, even if you are copying within the same
// storage account.
string fileSas = sourceFile.GetSharedAccessSignature(new SharedAccessFilePolicy()
{
    // Only read permissions are required for the source file.
    Permissions = SharedAccessFilePermissions.Read,
    SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24)
});

// Construct the URI to the source file, including the SAS token.
Uri fileSasUri = new Uri(sourceFile.StorageUri.PrimaryUri.ToString() + fileSas);

// Copy the file to the blob.
destBlob.StartCopy(fileSasUri);

// Write the contents of the file to the console window.
Console.WriteLine("Source file contents: {0}", sourceFile.DownloadText());
Console.WriteLine("Destination blob contents: {0}", destBlob.DownloadText());
You can copy a blob to a file in the same way. If the source object is a blob, then create a SAS to authorize access to that blob during the copy operation.

Share snapshots (preview)

Beginning with version 8.5 of the Azure Storage Client Library, you can create a share snapshot (preview). You can also list, browse, and delete share snapshots. Share snapshots are read-only, so no write operations are allowed on them.
Create share snapshots
The following example creates a file share snapshot.
C#
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
CloudFileClient fClient = storageAccount.CreateCloudFileClient();
string baseShareName = "myazurefileshare";
CloudFileShare myShare = fClient.GetShareReference(baseShareName);
var snapshotShare = myShare.Snapshot();
List share snapshots
The following example lists the share snapshots on a share.
C#
var shares = fClient.ListShares(baseShareName, ShareListingDetails.All);
Browse files and directories within share snapshots
The following example browses the files and directories within a share snapshot.
C#
CloudFileShare mySnapshot = fClient.GetShareReference(baseShareName, snapshotTime); 
var rootDirectory = mySnapshot.GetRootDirectoryReference(); 
var items = rootDirectory.ListFilesAndDirectories();
List shares and share snapshots and restore file shares or files from share snapshots
Taking a snapshot of a file share enables you to recover individual files or the entire file share in the future.
You can restore a file from a file share snapshot by querying the share snapshots of a file share. You can then retrieve a file that belongs to a particular share snapshot and use that version to either directly read and compare or to restore.
C#
CloudFileShare liveShare = fClient.GetShareReference(baseShareName);
var rootDirOfliveShare = liveShare.GetRootDirectoryReference();

       var dirInliveShare = rootDirOfliveShare.GetDirectoryReference(dirName);
var fileInliveShare = dirInliveShare.GetFileReference(fileName);


CloudFileShare snapshot = fClient.GetShareReference(baseShareName, snapshotTime);
var rootDirOfSnapshot = snapshot.GetRootDirectoryReference();

       var dirInSnapshot = rootDirOfSnapshot.GetDirectoryReference(dirName);
var fileInSnapshot = dir1InSnapshot.GetFileReference(fileName);

string sasContainerToken = string.Empty;
       SharedAccessFilePolicy sasConstraints = new SharedAccessFilePolicy();
       sasConstraints.SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24);
       sasConstraints.Permissions = SharedAccessFilePermissions.Read;
       //Generate the shared access signature on the container, setting the constraints directly on the signature.
sasContainerToken = fileInSnapshot.GetSharedAccessSignature(sasConstraints);

string sourceUri = (fileInSnapshot.Uri.ToString() + sasContainerToken + "&" + fileInSnapshot.SnapshotTime.ToString()); ;
fileInliveShare.StartCopyAsync(new Uri(sourceUri));
Delete share snapshots
The following example deletes a file share snapshot.
C#
CloudFileShare mySnapshot = fClient.GetShareReference(baseShareName, snapshotTime);
mySnapshot.Delete(null, null, null);

Troubleshooting Azure Files using metrics

Azure Storage Analytics now supports metrics for Azure Files. With metrics data, you can trace requests and diagnose issues.
You can enable metrics for Azure Files from the Azure Portal. You can also enable metrics programmatically by calling the Set File Service Properties operation via the REST API, or one of its analogs in the Storage Client Library.
The following code example shows how to use the Storage Client Library for .NET to enable metrics for Azure Files.
First, add the following using directives to your Program.cs file, in addition to those you added above:
C#
using Microsoft.WindowsAzure.Storage.File.Protocol;
using Microsoft.WindowsAzure.Storage.Shared.Protocol;
Note that while Azure Blobs, Azure Tables, and Azure Queues use the shared ServiceProperties type in the Microsoft.WindowsAzure.Storage.Shared.Protocol namespace, Azure Files uses its own type, the FileServiceProperties type in the Microsoft.WindowsAzure.Storage.File.Protocol namespace. Both namespaces must be referenced from your code, however, for the following code to compile.
C#
// Parse your storage connection string from your application's configuration file.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
        Microsoft.Azure.CloudConfigurationManager.GetSetting("StorageConnectionString"));
// Create the File service client.
CloudFileClient fileClient = storageAccount.CreateCloudFileClient();

// Set metrics properties for File service.
// Note that the File service currently uses its own service properties type,
// available in the Microsoft.WindowsAzure.Storage.File.Protocol namespace.
fileClient.SetServiceProperties(new FileServiceProperties()
{
    // Set hour metrics
    HourMetrics = new MetricsProperties()
    {
        MetricsLevel = MetricsLevel.ServiceAndApi,
        RetentionDays = 14,
        Version = "1.0"
    },
    // Set minute metrics
    MinuteMetrics = new MetricsProperties()
    {
        MetricsLevel = MetricsLevel.ServiceAndApi,
        RetentionDays = 7,
        Version = "1.0"
    }
});

// Read the metrics properties we just set.
FileServiceProperties serviceProperties = fileClient.GetServiceProperties();
Console.WriteLine("Hour metrics:");
Console.WriteLine(serviceProperties.HourMetrics.MetricsLevel);
Console.WriteLine(serviceProperties.HourMetrics.RetentionDays);
Console.WriteLine(serviceProperties.HourMetrics.Version);
Console.WriteLine();
Console.WriteLine("Minute metrics:");
Console.WriteLine(serviceProperties.MinuteMetrics.MetricsLevel);
Console.WriteLine(serviceProperties.MinuteMetrics.RetentionDays);
Console.WriteLine(serviceProperties.MinuteMetrics.Version);
Also, you can refer to the Azure Files Troubleshooting Article for end-to-end troubleshooting guidance.