How to migrate Azure PowerShell from AzureRM to the new Az Module

3 days ago, Microsoft released version 1.0.0 of the new Az module. Az is a cross-platform PowerShell module for managing resources in Azure that is compatible with both Windows PowerShell and PowerShell Core.

Why migrate to Az?

Az is written from the ground up in .NET Standard, which allows us to use the module in PowerShell Core on Windows, macOS, or Linux. It is also the “new” module: all future functionality will be added to Az, whereas AzureRM will only receive bug fixes.

How to migrate?

Scripts that use the previous AzureRM module won’t automatically work with Az. You can enable a compatibility mode for the AzureRM cmdlet names using the Enable-AzureRmAlias cmdlet, which allows you to migrate existing scripts gradually. Be sure to only enable this mode if you have uninstalled all versions of AzureRM! Once you have migrated all your scripts, you can disable the compatibility mode using the Disable-AzureRmAlias cmdlet.

You can also have both modules installed at the same time. In this case, don’t enable the compatibility mode! Instead, explicitly import either the Az or the AzureRM module inside your scripts.
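Taken together, such a soft migration session might look like this minimal sketch (assuming all AzureRM versions have already been uninstalled):

```powershell
# Enable the AzureRM cmdlet aliases for the current user
Enable-AzureRmAlias -Scope CurrentUser

# Existing scripts can keep using the old names for now,
# e.g. Get-AzureRmResourceGroup resolves to Get-AzResourceGroup

# After all scripts have been migrated to the Az names,
# remove the aliases again
Disable-AzureRmAlias -Scope CurrentUser
```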

However, it is recommended to uninstall the old AzureRM module before using the Az module:

Uninstall the AzureRM module

You can check whether you have any AzureRM modules installed using the following command:

Get-Module -Name AzureRM -ListAvailable


To uninstall the module you can run the Uninstall-AzureRM cmdlet in an elevated PowerShell prompt.

You may get an error that the term Uninstall-AzureRM is not recognized as the name of a cmdlet:

Uninstall-AzureRM : The term ‘Uninstall-AzureRM’ is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.

In this case, try to find and uninstall “Azure PowerShell” through the Windows app list (Start -> Settings -> Apps, or appwiz.cpl). If that also doesn’t work, check the official article: Uninstall the AzureRM module
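If Uninstall-AzureRM is not available on your machine at all (it ships with the Az module), you can also remove every AzureRM module via PowerShellGet; a sketch:

```powershell
# Remove all installed versions of all AzureRM.* modules
# (run in an elevated prompt; this can take a while)
Get-Module -Name AzureRM* -ListAvailable |
    Select-Object -ExpandProperty Name -Unique |
    ForEach-Object { Uninstall-Module -Name $_ -AllVersions -Force }
```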

Install the Az Azure PowerShell module

To install the module run the following cmdlet in an elevated session:

Install-Module -Name Az -AllowClobber

You may see the following prompt if this is the first time you use the PSGallery:


Enter ‘Y’ (Yes) or ‘A’ (Yes to All) to continue.

You can verify the installation using:

Get-InstalledModule -Name Az -AllVersions

Use the new Az module

To use the new Az module, you first have to sign in using the Connect-AzAccount cmdlet. The interactive login now uses the device login flow, so you have to copy and paste the displayed code into https://aka.ms/devicelogin.
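For a script that should also pin a specific subscription, the sign-in might look like this sketch (the subscription id is a placeholder):

```powershell
# Sign in interactively using the device login flow
Connect-AzAccount

# Select the subscription to work with (placeholder id)
Set-AzContext -Subscription '00000000-0000-0000-0000-000000000000'
```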

Once you have signed in to an Azure account, you can use the new cmdlets to access and manage your Azure resources. Use Get-Command -Module Az* to retrieve all available Az cmdlets.


Configure Azure App Service IP Restrictions using PowerShell

IP Restrictions is a feature I recently started using a lot. It allows me to define a list of IP addresses that are allowed or denied access to my App Service. Both IPv4 and IPv6 addresses can be used.

At the moment there is no Azure CLI or PowerShell cmdlet available to set IP Restrictions programmatically, but the values can be set manually with a PUT operation on the app configuration in Resource Manager (REST request) or by using the Set-AzureRmResource cmdlet.

Until an official cmdlet for setting IP Restriction rules becomes available, you can use my Add-AzureIpRestrictionRule function:

function Add-AzureIpRestrictionRule
{
    [CmdletBinding()]
    Param
    (
        # Name of the resource group that contains the App Service.
        [Parameter(Mandatory=$true, Position=0)]
        $ResourceGroupName, 

        # Name of your Web or API App.
        [Parameter(Mandatory=$true, Position=1)]
        $AppServiceName, 

        # rule to add.
        [Parameter(Mandatory=$true, Position=2)]
        [PSCustomObject]$rule
    )

    # Determine the available API versions of the Microsoft.Web/sites resource type
    $ApiVersions = Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Web |
        Select-Object -ExpandProperty ResourceTypes |
        Where-Object ResourceTypeName -eq 'sites' |
        Select-Object -ExpandProperty ApiVersions

    # The versions are ordered newest first, so the first entry is the latest
    $LatestApiVersion = $ApiVersions[0]

    $WebAppConfig = Get-AzureRmResource -ResourceType 'Microsoft.Web/sites/config' -ResourceName $AppServiceName -ResourceGroupName $ResourceGroupName -ApiVersion $LatestApiVersion

    # Append the new rule, then de-duplicate by rule name,
    # keeping only the most recently added rule per name
    $WebAppConfig.Properties.ipSecurityRestrictions = $WebAppConfig.Properties.ipSecurityRestrictions + @($rule) |
        Group-Object name |
        ForEach-Object { $_.Group | Select-Object -Last 1 }

    Set-AzureRmResource -ResourceId $WebAppConfig.ResourceId -Properties $WebAppConfig.Properties -ApiVersion $LatestApiVersion -Force
}

Add your current IP

Usually, I want to add my current IP address to the list of allowed IPs whenever I work outside my company. I use a script where I only have to specify the Subscription Id, the App Service name, and the Resource Group:

$SubscriptionId = '' 
$AppServiceName = ''
$ResourceGroupName = ''

I use the following piece of code to save my Azure login context so I don’t have to enter my credentials each time I use the script:

$ctxPath = Join-Path $env:APPDATA 'azure.ctx'

if (-not (Test-Path $ctxPath))
{
    Login-AzureRmAccount
    Save-AzureRmContext -Path $ctxPath -Force
}
 
Import-AzureRmContext -Path $ctxPath | Out-Null
Set-AzureRmContext -SubscriptionId $SubscriptionId | Out-Null

To determine my current IP address I use api.ipify.org:

$clientIp = Invoke-WebRequest 'https://api.ipify.org' | Select-Object -ExpandProperty Content

Finally, I add the rule using the Add-AzureIpRestrictionRule function above. For the rule name I concatenate my computer name with my username (e.g. WD023\mbr):

$rule = [PSCustomObject]@{
    ipAddress = "$($clientIp)/32"
    action = "Allow"  
    priority = 123 
    name = '{0}_{1}' -f $env:computername, $env:USERNAME 
    description = "Automatically added ip restriction"
}

Add-AzureIpRestrictionRule -ResourceGroupName $ResourceGroupName -AppServiceName $AppServiceName -rule $rule

This is how the result looks:

The whole script can be found in my GitHub repository.

Azure DevOps – what’s new?

Today Microsoft announced the relaunch of Visual Studio Team Services (VSTS). VSTS is now renamed to Azure DevOps.

The reason for the name change is that Microsoft wants to decouple the suite from the Visual Studio brand and the perception that it is .NET-only, whereas Azure is associated with cross-platform – cloud for all. But of course, Azure DevOps will work for whatever type of cloud you are using.

What’s new?

  • The service URLs moved from

    https://YOURORG.visualstudio.com to https://dev.azure.com/YOURORG

    There are long-term redirects for existing customers in place.

  • The offered services are renamed:
    • Work 👉 Azure Boards
    • Code 👉 Azure Repos
    • Build and release 👉 Azure Pipelines
    • Packages 👉 Azure Artifacts
    • Test 👉 Azure Test Plans

  • Azure DevOps GitHub Integration:
    Azure Pipelines is now available in the GitHub marketplace.
  • Azure Pipelines for Open Source projects:
    Azure Pipelines includes 10 free concurrent pipelines with unlimited build minutes and users (for Mac, Windows, and Linux).

Azure App Services: Determine supported dotnet core version

If you try to use the latest .NET Core version 2.1.3 within your Azure Web or API App, you will receive HTTP error 502.5.

After you have enabled logging, you will find an error similar to this:

It was not possible to find any compatible framework version
The specified framework 'Microsoft.AspNetCore.App', version '2.1.3' was not found.
  - Check application dependencies and target a framework version installed at:
      D:\Program Files (x86)\dotnet\
  - Installing .NET Core prerequisites might help resolve this problem:
      http://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409
  - The .NET Core framework and SDK can be installed from:
      https://aka.ms/dotnet-download
  - The following versions are installed:
      2.1.0-rc1-final at [D:\Program Files (x86)\dotnet\shared\Microsoft.AspNetCore.App]
      2.1.2 at [D:\Program Files (x86)\dotnet\shared\Microsoft.AspNetCore.App]

So one way to determine the installed .NET Core versions is to look at the error log. But you can also execute the following command in the Debug Console of the Kudu engine (Advanced Tools in the Azure Portal):

dotnet --list-sdks

This displays the installed .NET Core SDKs. To list the installed runtimes instead, use:

dotnet --list-runtimes

Automatically pick the latest dotnet core version

You can simply avoid running into this error by omitting the Version attribute on the Microsoft.AspNetCore.App PackageReference within your project file (*.csproj):

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <Folder Include="wwwroot\" />
  </ItemGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.App"/>
  </ItemGroup>
</Project>

Now, as soon as Azure App Service gets a new 2.1.x .NET Core runtime, your app will automatically start using it.

File upload through ASP.NET Core middleware

In my previous article we discussed the different options to implement file upload for cloud applications.

In this article I want to show you an example of how to implement a file upload through a middleware:

Example: File Upload to Azure Blob Storage using Angular and ASP.NET Core

We will scaffold our application using the Angular template that is part of the .NET Core CLI and create a component using the Angular CLI.

Prerequisites

* You can either create the Storage Account manually within the Azure Portal or use the following ARM Template:

Scaffold the project

To scaffold the project we use the dotnet new command:

dotnet new angular --name file-upload

Implement the middleware

To store the uploaded files in Azure Blob Storage, we need to specify the connection string in appsettings.json:

"ConnectionStrings": {
    "StorageAccount": "UseDevelopmentStorage=true"
    },

If you don’t have the Azure Storage Emulator installed you have to replace the value with an actual Azure Storage Account connection string.

Install WindowsAzure.Storage NuGet Package

You can install the NuGet package using the following command:

dotnet add package WindowsAzure.Storage

File upload implementation

We will implement the file upload in a new controller called AssetController. The controller only exposes a single method called UploadAssetAsync, which takes an IFormFile with the name asset. (Note: IFormFile is suitable for uploading small files. If you have to deal with large files, consider implementing streaming or uploading the files directly from the client to a data store. Further information here.)

The UploadAssetAsync method uploads the passed file to the previously specified Azure Blob Storage account and returns the URI of the new blob:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;
using Microsoft.WindowsAzure.Storage;

namespace file_upload.Controllers
{
    [Route("api/[controller]")]
    public class AssetController : Controller
    {
        private readonly IConfiguration _configuration;
        public AssetController(IConfiguration config)
        {
            _configuration = config;
        }

        [HttpPost]
        public async Task<IActionResult> UploadAssetAsync([FromForm]IFormFile asset)
        {
            CloudStorageAccount storageAccount = null;
            if (CloudStorageAccount.TryParse(_configuration.GetConnectionString("StorageAccount"), out storageAccount))
            {
                var client = storageAccount.CreateCloudBlobClient();
                var container = client.GetContainerReference("fileupload");
                await container.CreateIfNotExistsAsync();

                // GetBlockBlobReference does not perform a server round trip
                // and also works for blobs that do not exist yet
                var blob = container.GetBlockBlobReference(asset.FileName);
                await blob.UploadFromStreamAsync(asset.OpenReadStream());

                return Ok(blob.Uri);
            }

            return StatusCode(StatusCodes.Status500InternalServerError);
        }
    }
}

Disable HttpsRedirection in Development

Before we can run the application, we should disable the HTTPS redirection for the development environment (note: you can also install a localhost certificate instead). Otherwise we will get a warning that the site is not secure. This can be done by replacing the following line in the Configure method in Startup.cs

app.UseHttpsRedirection();

with:

if (!env.IsDevelopment())
{
   app.UseHttpsRedirection();
}

Implement the frontend

We will implement the file upload in a new component. To create the new component we use the Angular CLI:

ng g component fileUpload

Note: Ensure you invoke the CLI commands within the ClientApp directory.

Configure Routing

Now we add a route for our new component within app.module.ts:

{ path: 'file-upload', component: FileUploadComponent },

Then we add a link to the component inside the nav-menu.component.html:

<li>
    <a [routerLink]='["/file-upload"]'>
        <span class='glyphicon glyphicon-cloud-upload'></span> File Upload
    </a>
</li>

Install PrimeNG

We use the PrimeNG NPM package to implement the file upload:

npm install primeng --save 

Add the file upload

The last thing we have to do is add the file upload to the file-upload.component.html. The name attribute value must match the IFormFile parameter name in the middleware (in our case “asset“). The url is /api/Asset, which is the address of the UploadAssetAsync web method:

<p-fileUpload #fubauto mode="basic" name="asset" url="/api/Asset" maxFileSize="1000000" auto="true"
        chooseLabel="Browse"></p-fileUpload>

That is it. We don’t need to implement any further upload mechanism. To start the application we can use the dotnet CLI:

dotnet run

The source code can be found in my GitHub repository.

File upload in Cloud Applications: The Options

Almost every web application requires some form of file upload. You may want to allow a user to upload a profile picture or to import any kind of data.

Multiple ways to implement the file upload

Depending on the size of the files and the frequency of the uploads, you have two options to implement the upload:

Directly upload the file to a data store

The fastest and most resource-friendly way is to upload the file directly from the client to a data store. This typically requires the client to have the security credentials for the data store.

But giving security credentials to potentially untrusted clients isn’t a realistic approach for most web applications. Instead, you want to use a token that grants clients restricted access to a specific resource for a limited validity period. This pattern is known as the Valet Key pattern.
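With Azure Blob Storage, such a token can be a Shared Access Signature (SAS). A minimal sketch using the AzureRM-era storage cmdlets (the account name, key variable, and container name are placeholder assumptions):

```powershell
# Build a storage context from the account name and key (placeholders)
$ctx = New-AzureStorageContext -StorageAccountName 'mystorageaccount' `
    -StorageAccountKey $storageAccountKey

# Issue a write-only SAS token for one container, valid for 30 minutes
$sasToken = New-AzureStorageContainerSASToken -Name 'fileupload' `
    -Permission w -ExpiryTime (Get-Date).AddMinutes(30) -Context $ctx

# The client appends $sasToken to the blob URI and uploads directly
```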

Upload the file through a middleware

The second option is to upload the file to your middleware (API) which will handle the movement of the data to the data store.


This approach prevents us from exposing any information about the underlying data store to the client. We could even change our data store (e.g. from Azure Blob Storage to Azure File Storage) without updating the client.

The downside is that it consumes valuable resources in our middleware, such as compute, memory, and bandwidth.

Angular 6 application hosted on Azure Storage Static website

A few days ago Microsoft announced a new public preview feature for Azure Storage called Static website. It enables you to host a static web app on Azure Storage, which is multiple times cheaper than the traditionally required Web App. Reason enough to give it a try.

Create a Storage account

To use the Static website feature we need a general purpose V2 Storage Account. You can either manually create the resource within the Azure Portal or by using the following ARM Template:

Enable Static website feature

Unfortunately, we can’t enable the Static website feature using an ARM Template (thanks to Gaurav Mantri) because it is not part of the Storage Resource Provider API. So we have to enable it manually after we’ve provisioned the Storage Account.

We can do that within our previously created Storage Account: select Static website (preview) to configure a container for static website hosting.

Since our example application will have an index.html, we can leave the index document name field as it is.

Note: If you have to automate this process, it is still possible to enable this feature using the Azure Storage Services REST API.
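For scripting, recent versions of the Az.Storage PowerShell module also expose a cmdlet for this; a sketch (resource group and account names are placeholders):

```powershell
# Requires a recent Az.Storage module
$account = Get-AzStorageAccount -ResourceGroupName 'my-rg' -Name 'spastore'
Enable-AzStorageStaticWebsite -Context $account.Context -IndexDocument 'index.html'
```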

After we save our changes, we get the primary endpoint URL of our site.

Create an Angular Single Page Application

We use the Angular CLI to scaffold an Angular sample application that already has routing enabled. The name of our app will be spastore:

ng new spastore --routing

We also create a new component named subcomponent to demonstrate the routing capability:

ng g component subcomponent

And configure the routing module accordingly:

import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';
import { SubcomponentComponent } from './subcomponent/subcomponent.component';

const routes: Routes = [
  { path: 'subcomponent', component: SubcomponentComponent },
];

@NgModule({
  imports: [RouterModule.forRoot(routes, { useHash: true })],
  exports: [RouterModule]
})
export class AppRoutingModule { }

Note that we use the HashLocationStrategy as opposed to the default PathLocationStrategy, because the latter will result in 404 errors. This is because we can’t define a rewrite URL for our static website, so Azure will try to resolve the specified resource – e.g.:
https://spastore.z6.web.core.windows.net/subcomponent

The HashLocationStrategy represents its state in the hash fragment of the browser’s URL. So this will be the new route for our subcomponent:
https://spastore.z6.web.core.windows.net/#/subcomponent

Finally, we add the router outlet, which displays our component, to the app.component.html and add a router link:

<router-outlet></router-outlet>
<a [routerLink]="['/subcomponent']">Subcomponent</a>

Now it’s time to compile the application and get a production build. Once again we use the Angular CLI:

ng build --prod

The build artifacts are stored in the dist folder and are ready to publish. We can do that by clicking on the $web link within the Static website (preview) blade. Then we have to click on the Upload button and select all files within the artifact (dist) folder.
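Alternatively, the files can be uploaded from the command line instead of through the portal; a sketch using the Az.Storage cmdlets (resource group and account names are placeholders):

```powershell
# Upload the production build artifacts to the $web container
# ('$web' is single-quoted so PowerShell does not expand it as a variable)
$account = Get-AzStorageAccount -ResourceGroupName 'my-rg' -Name 'spastore'
Get-ChildItem -Path './dist/spastore' -File | ForEach-Object {
    Set-AzStorageBlobContent -File $_.FullName -Container '$web' `
        -Blob $_.Name -Context $account.Context -Force
}
```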

If we browse to the URL, we can see our Angular app up and running, and the routing is also working as expected.

You can test the site here: https://spastore.z6.web.core.windows.net

The source code including the ARM Template and the Angular sample application can be found in my GitHub repository.

Update:

You can actually use the PathLocationStrategy by setting the error page to index.html (thanks to @nthonyChu). The first attempt to serve a URL like https://spastore.z6.web.core.windows.net/subcomponent will fail since it is not a valid resource. The path then gets passed to the error page, which is our index.html (the page returns content with a 404 status code). However, this is a hack. I still recommend using the HashLocationStrategy until a rewrite mechanism is in place.