Things you may consider when choosing a Microsoft Azure Region

With over 50 regions worldwide, Azure has more global regions than any other cloud provider. When you have to choose a region for your Azure resources, consider these four factors:

  1. Not all services are available in all regions
    You have to ensure that all the resources you want to deploy are available within the desired region (see the sketch after this list). Here is a nice overview of Products available by region.
  2. Prices vary by region
    Not all Azure regions have the same pricing for the same resources. While most of the time you are looking for the Azure region closest to your company / customers, you may want to choose another region because of its lower pricing. A good resource to determine the price of a resource in a specific region is the Azure Pricing Calculator.
  3. Latency from your customer to the Azure datacenter
    For performance reasons you want to choose the Azure region with the least latency to your company / customers. To measure the latency you can perform an Azure Latency Test on AzureSpeed.com.
  4. Location of your customer data (data residency)
    You may have specific compliance or data-residency requirements that will force you to use a specific region. Get data residency details.
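
For the first point you don't have to rely on the portal alone. Here is a minimal sketch using the AzureRM PowerShell module to check availability (Microsoft.Compute / virtualMachines is just an example):

# List all regions available to your subscription:
Get-AzureRmLocation | Sort-Object Location | Select-Object Location, DisplayName

# List the regions where a specific resource type is offered, e.g.
# virtual machines from the Microsoft.Compute resource provider:
(Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Compute).ResourceTypes |
    Where-Object { $_.ResourceTypeName -eq 'virtualMachines' } |
    Select-Object -ExpandProperty Locations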

Additional information: Learn more about Azure Regions 

 

Serving an HTML Page from an Azure PowerShell Function

The new (experimental) PowerShell language support in Azure Functions is really handy, especially if you want to use the Azure PowerShell cmdlets to retrieve Azure resources and display them in an HTML page. Hosting the cmdlets in an Azure Function eliminates the need for a locally installed Azure PowerShell module.

Here is a simple PowerShell Function that returns a Hello world HTML page:

# POST method: $req
$requestBody = Get-Content $req -Raw | ConvertFrom-Json
$name = $requestBody.name

# GET method: each querystring parameter is its own variable
if ($req_query_name)
{
    $name = $req_query_name
}

$html = @'
<html>
<body>
<header>This is title</header>
<p>Hello world</p>
</body>
</html>
'@

Out-File -Encoding Ascii -FilePath $res -InputObject $html

Invoking the script in a browser doesn’t give us the desired result. The content gets interpreted as XML instead of HTML:

[Screenshot: the browser renders the output as XML]

The reason for that is that the Content-Type is set to application/xml:

[Screenshot: the response headers show Content-Type: application/xml]

The output of an Azure PowerShell function is a file (called $res by default) – so how can we change the content type to text/html?

It turns out (thanks to Mikhail) that we can construct a Response object using a JSON string (here the Node.js definition) where we can set the content and the content type:

$resp = [string]::Format(
    '{{ "status": 200, "body": "{0}", "headers": {{ "content-type": "text/html" }} }}',
    $html)
Out-File -Encoding Ascii -FilePath $res -InputObject $resp

If we invoke the function in the browser again we get the desired result:
[Screenshot: the browser renders the Hello world HTML page]

And the content-type is set to text/html:
[Screenshot: the response headers show Content-Type: text/html]

Note that if your HTML contains characters that are special in JSON, like double quotes or backslashes, you will have to escape them. Example:

$html -replace '"', '\"'
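
If both characters can occur, the order of the replacements matters. A small sketch:

# Escape backslashes first, then double quotes; the other way around would
# also escape the backslashes that were just added for the quotes.
$escapedHtml = $html -replace '\\', '\\' -replace '"', '\"'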

Three reasons why you should associate multiple subscriptions with the same Azure Active Directory

In Azure, multiple subscriptions can trust the same Azure Active Directory but each subscription trusts only one directory.

If you create a new Azure subscription, a new Azure Active Directory is automatically created and associated with your subscription. To give a user access to a resource you can use Role-Based Access Control (RBAC), given that the user is part of the associated Azure Active Directory (see the sketch after this list). You can also add existing users from another Azure Active Directory as guests, but I would still recommend linking your subscriptions with the same directory for the following three reasons:

  1. If you use a different directory for each subscription, you won't be able to move resources between your subscriptions:

    The source and destination subscriptions must exist within the same Azure Active Directory tenant.

  2. You can easily jump to your resources on the “All resources” blade by using the “Filter by Name” search field and don't have to remember which resource belongs to which subscription:
    [Screenshot: the “Filter by Name” search field on the “All resources” blade]
  3. If your user is a guest in many directories, your tenant list will grow and switching directories will become a mess:
    [Screenshot: a long tenant list in the directory switcher]
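
As mentioned above, access is granted via RBAC to users of the associated directory. A minimal sketch using the AzureRM module (the user, role and resource group names are placeholders):

# Assumption: user@contoso.com exists in the directory that is associated
# with the subscription. Grant the user Reader access to a resource group:
New-AzureRmRoleAssignment `
    -SignInName 'user@contoso.com' `
    -RoleDefinitionName 'Reader' `
    -ResourceGroupName 'MyResourceGroup'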

 

Read here how to associate or add an Azure subscription to Azure Active Directory

Configure Azure Cloud Shell to use a profile hosted on GitHub

You may have noticed that you can run the Azure Cloud Shell without the portal as a separate component on https://shell.azure.com/

The shell is really handy since it can be used from everywhere. Today I want to show you how you can load a remote profile that is hosted on GitHub in the Azure Cloud Shell.

A PowerShell profile is used to add aliases, functions or variables to a session every time you start the shell.

The Azure Cloud Shell uses a file share in your storage account to persist files. This is also true for your profile. You can determine your profile path by entering:

$profile

You will see a path similar to this:

[Screenshot: the profile path on the Cloud Shell file share]

This doesn’t mean that the profile exists; it’s just the path from which Azure Cloud Shell tries to load your profile when you start it. You can determine whether the file actually exists using the Test-Path cmdlet:

Test-Path $profile

The cmdlet should return False if you haven’t created a profile yet:

[Screenshot: Test-Path $profile returns False]

You could create a profile using the New-Item cmdlet and go to your file share to edit it:
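
# -Force also creates any missing parent directories along the path:
New-Item -Path $profile -ItemType File -Force

But you may like to have a history where you can compare the changes you made, and you may want to use the same profile for different accounts. So how can we connect a profile that is stored in a GitHub repository?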

Let’s start with adding the actual profile to our GitHub repository. My profile.ps1 contains a single function to print Hello World:

function Show-HelloWorld
{
    Write-Host "hello, world!"
}

Next we have to load the profile. For that purpose I have created another file called Set-Profile.ps1:

$profilePath = 'https://raw.githubusercontent.com/mjisaak/azure/master/profile.ps1'

$downloadString = '{0}?{1}' -f $profilePath, (New-Guid)
Invoke-Expression ((New-Object System.Net.WebClient).DownloadString($downloadString))

The $profilePath contains the URL to the previously created profile.ps1. I append a query string containing a random GUID to the path to prevent the web client from caching the file. This is particularly useful when we update the profile.ps1 in the GitHub repository and want to load these changes without restarting the shell, by dot sourcing the profile.
In line 4, I download the profile.ps1 as a string and execute it using the Invoke-Expression cmdlet to load it into the runspace.

The last step we need to do is to set the content of the Set-Profile.ps1 to the actual PowerShell profile. We can do this by executing the following snippet in the Azure Cloud Shell:

(New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/mjisaak/azure/master/Set-Profile.ps1') |
  Set-Content $profile -Force

The snippet is using the web client again, but instead of executing the code, it pipes the string to the Set-Content cmdlet to overwrite the profile. I can verify that by retrieving the content of $profile. This should output the content of my Set-Profile.ps1:

[Screenshot: Get-Content $profile shows the content of Set-Profile.ps1]

Finally to load the profile we can either restart the PowerShell session or dot source the profile as mentioned earlier:

. $profile

And now we can use all the aliases, variables and functions we have defined in the profile that is stored on GitHub:

[Screenshot: invoking Show-HelloWorld in the Azure Cloud Shell]
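
For example, with the profile.ps1 from above, the function is now available in the session:

Show-HelloWorld
# hello, world!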

Rename Azure Storage Blob using PowerShell

At the time of writing this there is no API to rename an Azure Storage blob in one operation. You have to copy the blob and delete the original one after the copy process completes.

You can vote for the feature here: Rename blobs without needing to copy them

Until then you can use my convenience Rename-AzureStorageBlob cmdlet:


function Rename-AzureStorageBlob
{
    [CmdletBinding()]
    Param
    (
        [Parameter(Mandatory=$true, ValueFromPipeline=$true, Position=0)]
        [Microsoft.WindowsAzure.Commands.Common.Storage.ResourceModel.AzureStorageBlob]$Blob,

        [Parameter(Mandatory=$true, Position=1)]
        [string]$NewName
    )

    Process {
        # Start a server-side copy to the new name within the same container.
        $blobCopyAction = Start-AzureStorageBlobCopy `
            -ICloudBlob $Blob.ICloudBlob `
            -DestBlob $NewName `
            -Context $Blob.Context `
            -DestContainer $Blob.ICloudBlob.Container.Name

        # Poll until the copy has completed.
        $status = $blobCopyAction | Get-AzureStorageBlobCopyState
        while ($status.Status -ne 'Success')
        {
            Start-Sleep -Milliseconds 50
            $status = $blobCopyAction | Get-AzureStorageBlobCopyState
        }

        # Finally, remove the source blob to complete the "rename".
        $Blob | Remove-AzureStorageBlob -Force
    }
}

It accepts the blob as pipeline input, so you can pipe the result of Get-AzureStorageBlob to it and just provide a new name:

$connectionString = 'DefaultEndpointsProtocol=https;AccountName....'
$storageContext = New-AzureStorageContext -ConnectionString $connectionString

Get-AzureStorageBlob -Container 'MyContainer' -Context $storageContext -Blob 'myBlob.txt' |
    Rename-AzureStorageBlob -NewName 'MyNewBlob.txt'

You can also download the script from my GitHub repository.

Using Azure Key Vault in ASP.NET Core 2.0 with the options pattern

The best way to store secrets in your app is not to store secrets in your app

Almost every web application needs some kind of secret, like a SQL Database connection string or the primary key of a Storage Account, in order to communicate with external services.

Certainly we don’t store these secrets within our source code since this would expose them to every developer that has access to the code. In Azure we could store the secrets within the Application Settings in the Azure Portal:

[Screenshot: the Application settings blade in the Azure Portal]

But if a secret is used in multiple applications and we need to change it (e.g. regenerate a storage account key), we would have to do that in multiple places. A better place to store secrets in Azure is the Key Vault.

Instead of storing each secret within our app we store them in the Key Vault and configure our app to access the secrets in the vault. Now we have a single place where we can manage our secrets.

Let’s take a look at how we can access those secrets in an ASP.NET Core 2.0 web application without introducing a dependency on Key Vault in the class that uses it. To create a vault, store secrets in it and create a service principal for the access policy, see Get started with Azure Key Vault.
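
For reference, here is a minimal sketch of the vault setup using the AzureRM PowerShell module (the vault, resource group and secret value are placeholders). Since the sample below binds the configuration root, the secret name matches the TestSecret property:

# Create the vault and store the secret the application expects:
New-AzureRmKeyVault -VaultName 'myvault' -ResourceGroupName 'MyResourceGroup' -Location 'West Europe'

$secretValue = ConvertTo-SecureString 'my secret value' -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName 'myvault' -Name 'TestSecret' -SecretValue $secretValue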

Our secret is stored in a class called ValueSettings:

public sealed class ValueSettings
{
	public string TestSecret { get; set; }
}

There is a ValuesController with one HttpGet method that returns our secret:

[Route("api/[controller]")]
public class ValuesController : Controller
{
    private readonly ValueSettings _valueSettings;

    public ValuesController(IOptions<ValueSettings> valueSettings)
    {
        _valueSettings = valueSettings.Value;
    }

    [HttpGet]
    public IActionResult Get()
    {
        return Ok(_valueSettings.TestSecret);
    }
}

As you can see in line 6, the controller uses the options pattern to inject the actual settings. The controller doesn’t know where the secret is coming from and doesn’t have any dependency on Azure Key Vault.

Now let’s take a look at how we need to configure our application for that. First we need to store the vault settings in our appsettings.json:

{
  "KeyVault": {
    "Vault": "https://myvault.vault.azure.net/",
    "ClientId": "myclientid",
    "ClientSecret": "myclientsecret"
  }
}

We also have a class that represents these settings:

public class KeyVaultSettings
{
    public string Vault { get; set; }
    public string ClientId { get; set; }
    public string ClientSecret { get; set; }
}

Now to configure the Key Vault we use the AddAzureKeyVault extension method in Program.cs:

public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((context, config) =>
            {
                config.SetBasePath(Directory.GetCurrentDirectory())
                    .AddJsonFile("appsettings.json", optional: false)
                    .AddEnvironmentVariables();

                var builtConfig = config.Build();
                var settings = builtConfig.GetSection("KeyVault").Get<KeyVaultSettings>();

                config.AddAzureKeyVault(
                    settings.Vault, settings.ClientId, settings.ClientSecret);

            })
            .UseStartup<Startup>()
            .Build();
}

And finally, this is what our Startup looks like:

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;   
    }

    public IConfiguration Configuration { get; }

    // This method gets called by the runtime. Use this method to add services to the container.
    public void ConfigureServices(IServiceCollection services)
    {
        services.Configure<ValueSettings>(Configuration);

        services.AddMvc();
    }

    // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.UseMvc();
    }
}

You can download the complete example from my GitHub repository.

Web.config for hosting an Angular application on Azure Web App

If you host an Angular application on Microsoft Azure you probably want to define a MIME map for .json and .woff / .woff2 files to get rid of the console errors. In order to enable client-side routing, we also have to add a rewrite rule.

This is what my web.config looks like:

<configuration>
    <system.webServer>
        <staticContent>
            <mimeMap fileExtension=".json" mimeType="application/json" />
            <remove fileExtension=".woff" />
            <mimeMap fileExtension=".woff" mimeType="application/font-woff" />
            <mimeMap fileExtension=".woff2" mimeType="font/woff2" />
        </staticContent>
        <rewrite>
            <rules>
                <rule name="Angular" stopProcessing="true">
                    <match url=".*" />
                    <conditions logicalGrouping="MatchAll">
                        <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                        <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                    </conditions>
                    <action type="Rewrite" url="/" />
                </rule>
            </rules>
        </rewrite>
    </system.webServer>
</configuration>

To ensure the config gets deployed, put it in the src directory:

[Screenshot: web.config inside the src directory]

And add it to the list of assets within the .angular-cli.json:

[Screenshot: the assets section of the .angular-cli.json]
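
A sketch of the relevant part, assuming the default Angular CLI 1.x layout (your apps block may contain more settings):

{
  "apps": [
    {
      "root": "src",
      "assets": [
        "assets",
        "favicon.ico",
        "web.config"
      ]
    }
  ]
}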