
Oh AOS why have you forbidden me


Sometimes when a service tries to authenticate to an AOS in Dynamics 365 for Finance and Operations, in both the cloud and the on-premise versions, the calling application receives the error message "forbidden" back from the AOS. This message is deliberately vague, because we don't want a calling application to be able to probe the AOS and learn how to get in, but that vagueness can make it difficult to figure out what is actually wrong. In this post we'll discuss what's happening in the background and how to approach troubleshooting.

Anything which is calling web services could receive this "Forbidden" error - for example an integrated 3rd party application, or Financial Reporting (formerly Management Reporter).

First, let's talk about how authentication to Finance and Operations works. There are two major stages:

1. Authentication to AAD (in the cloud) or ADFS (on-premise) - this happens directly between the caller and AAD/ADFS; the AOS isn't involved.
2. Session creation on the AOS - here the caller presents the token from AAD/ADFS to the AOS, and the AOS attempts to create a session.

The "forbidden" error occurs during the second stage, when the AOS is attempting to create a new session. The AOS code that does this raises the error in a few specific cases:

- Empty user SID
- Empty session key
- No such user
- User account disabled
- Cannot load user groups for user

For all of these reasons the AOS is looking at the internal setup of the user in the USERINFO table; it's not looking at AAD/ADFS. In a SQL Server based environment (so Tier 1 or on-premise) you can run SQL Profiler to capture the query the AOS runs against the USERINFO table and see what it's looking for.
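As a quick check you can also query USERINFO directly. This is a minimal sketch assuming the standard AX/D365FO schema - the column names (ID, SID, NETWORKDOMAIN, NETWORKALIAS, ENABLE) and the example account are assumptions you should verify against your own database:

-- Check that the calling account exists and is enabled in USERINFO.
-- 'FRServiceUser' is only an example; substitute the account that is being refused.
SELECT ID, SID, NETWORKDOMAIN, NETWORKALIAS, ENABLE
FROM dbo.USERINFO
WHERE ID = 'FRServiceUser';

An empty result, an empty SID, or ENABLE = 0 lines up with the "forbidden" cases listed above.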

Examples:

- Financial Reporting (Management reporter) might report "Forbidden" if the FRServiceUser is missing or incorrect in USERINFO. This user is created automatically, but could have been modified by an Administrator when trying to import users into the database.
- A 3rd party integration might receive "Forbidden" if its record in "System administration > Setup > Azure Active Directory applications" is missing


Disable any reliance on internet in Finance and Operations on-premise


There are some features within Dynamics 365 for Finance and Operations on-premise which rely on an internet connection.

This means that, by default, the on-premise version does have a dependency on some cloud services, but you can turn that off so there is no dependency.

As an example, last week there was an AAD outage which affected on-premise customers' ability to log into the application. What was happening was: you'd log in as normal and see the home page for a moment, then the browser would redirect to the AAD login page, which was down, so the user would be stuck.

In the background this relates to the Skype presence feature: after the user logs in, the system contacts the Skype service online, and that is what triggers the redirect to AAD, leaving the user stuck when AAD is unavailable.

There is a hotfix available which allows a System Administrator to turn off all cloud/internet related functions in the on-prem version; details are available here:
Disable internet connectivity

How to select the document management storage location


In Dynamics 365 for Finance and Operations the document management feature allows you to attach documents (files and notes) to records within the application. There are several different options for storage of those documents – in this document we will explain the advantages and disadvantages of each option.

Document storage locations

There are 3 possible options for document storage:

• Azure storage: in the cloud version of Finance and Operations this stores documents in Azure blob storage; in the on-premise version this stores documents in the file share given in the environment deployment options in LCS*
• Database: stores documents in the database
• SharePoint: stores documents in SharePoint Online; this is currently only supported for the cloud version. Support for on-premises SharePoint is planned for the future

Each document storage option can be configured per document type – meaning that it’s possible to configure a type of document “scanned invoices” and choose storage “Database”, and configure another type of document “technical drawings” and choose storage “Azure storage”.

Classes

When configuring document types there are 3 different classes of document available; each class of document only allows certain storage locations:
- Attach file: this allows selection of “Azure storage” or “SharePoint” locations
- Attach URL: this allows only “Database” location
- Simple note: this allows only “Database” location

Document storage location options

Azure storage

This type of storage can be configured for the “attach file” class of document only.

As mentioned earlier in this document, in the cloud version of Finance and Operations this will store documents in Azure blob storage; in the on-premise version this will store documents in the file share given in the environment deployment options in LCS.

In the cloud version an Azure storage account is automatically created when an environment is deployed. No direct access to the storage account is given; access is only via the application. This is a highly available geo-replicated account, so there are no additional considerations required to ensure business continuity for this component.

In the on-premise version an SMB 3.0 file share is specified at environment deployment time. High availability and disaster recovery options must be considered to ensure availability of this file share; the application accesses it using its UNC path, so ensure this UNC path is available at all times.

Files stored in this way are not readable by directly accessing the file share location – they are intended only to be accessed through Finance and Operations. Specifically, stored files are renamed to a GUID-style name and their file extension is removed. Within Finance and Operations a database table provides the link between the application and the file stored on the file system.
No direct access to this folder should be allowed for users; access for the Finance and Operations server process is controlled through the certificate specified during environment deployment.
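As an illustration of that database link, here is a hedged T-SQL sketch for mapping a GUID-named file on the share back to its original name. The table and column names (DOCUVALUE, FILEID, ORIGINALFILENAME, FILETYPE) are assumptions based on the standard document management schema, so verify them in your own database before relying on this:

-- Look up the original file name and type for a GUID-named file on the share.
-- Table/column names are assumptions; check your schema first.
SELECT FILEID, ORIGINALFILENAME, FILETYPE
FROM dbo.DOCUVALUE
WHERE FILEID = '<GUID file name from the share>';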

Database

Database storage will be used automatically for document types using classes “Attach URL” or “Simple note”. The “Attach file” class of documents will not be stored in the database.
Documents stored in the database will be highly available by virtue of the SQL high availability options which are expected to be in place already as a requirement of Finance and Operations.

SharePoint

This type of storage can be configured for the “Attach file” class of document only.

For the cloud version of Finance and Operations, SharePoint Online is supported, but SharePoint on-premise is not. For the on-premise version, SharePoint Online is currently not supported either.

SharePoint Online is a highly available and resilient service; we recommend reviewing our documentation for more information.

Cloud versus On-premise

In the cloud version of Finance and Operations, for file storage, either SharePoint Online or Azure blob storage can be used.
In the on-premise version, for file storage, only the “Azure storage” option can be used, which stores files in a network file share as defined in the environment deployment options.
*The screenshot below shows the setting for file share storage location used by on-premise environments when selecting “Azure storage”.

On-premise deployment storage options

Troubleshooting on-premise environment deployment D365FFO


This document contains tips for troubleshooting on-premise Dynamics 365 for Finance and Operations environment deployment failures, based on my own experiences when troubleshooting this process for the first time.

Types of failures

The first type of failure I am looking at here is a simple redeploy of the environment. Originally I was trying to deploy a custom package, but it failed and I didn’t know why, so I deleted the environment and redeployed with vanilla – no custom bits, just the base – and it still failed. In LCS, after it runs for approximately 50 minutes I see the state change to Failed. There is no further log information in LCS itself; that information is on the respective machines in the on-premise environment.

Orchestrators

The orchestrator machines trigger the deployment steps. In the base topology there are 3 orchestrators, and they are clustered/load balanced. Often the first one will pick up work, but don’t rely on that: any of them can pick up tasks, and more than one can be involved in a given deployment run – for example, server 1 picks up some of the tasks and server 2 picks up others. Always check the event logs on all of them to avoid missing anything useful.

To make it easier to check them you can add a custom view of the event logs on each orchestrator machine, to give you all the necessary logs in one place, like this:
Create custom event log view

Select events

I found in my case that server 2 was showing an error, as below. It’s basically saying it couldn’t find the AOS metadata service, and I noticed the URL was wrong – I’d entered something incorrectly in the deployment settings in LCS:
Example error

AOS Machines

There are also useful logs on the AOS machines – the orchestrators call deployment scripts, but AX-specific functions are still run by the AOSes; for example, database synchronization is run by an AOS. Again, the AOSes are clustered, so we need to check all of them as tasks could be executed by any of them. Similar to the orchestrators, I create a custom event log view to show all Dynamics related events in one place. This time I select the Dynamics category, and I have unchecked “Verbose” to reduce noise.

AOS event log

Here’s an example of a failure I had from a Retail deployment script which was trying to adjust a change tracking setting. For an issue such as this, once I know the error I can work around the problem by manually disabling change tracking on the problem table from SQL Server Management Studio and then starting the deployment again from LCS.

AOS example error
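If you hit something similar, a minimal sketch of that workaround in SQL Server Management Studio might look like the following. The database and table names here are placeholders, not the names from my error – run it against your AX database and use the table named in your own event log message:

-- Placeholder names: run against your AX database, and substitute the table
-- reported in the change tracking error from the event log.
USE AXDB;
ALTER TABLE dbo.MYPROBLEMTABLE DISABLE CHANGE_TRACKING;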

ADFS Machines

The ADFS servers will show authentication errors. A typical cause of this kind of failure is a “bad” setting entered in the deployment settings in LCS – for example, I entered the DNS address for the AX instance incorrectly, and then saw an ADFS error after deployment when trying to log into AX:

ADFS error example

If you see an error as above, you can understand more about it by reviewing the Application group setup in “ADFS Management” on the ADFS machine; open it from Server Manager:

ADFS

Under application groups you’ll see one for D365; double-click it to see the details.

ADFS setup

If you’re familiar with the cloud version of D365, you’ll probably know that AAD requires application URLs to be configured against it to allow you to log in. In the cloud, the deployment process from LCS does this automatically, and you can see it if you review your AAD setup via the Azure portal. In the on-prem version, the ADFS Management tool shows you the same kind of setup; here too the deployment process creates these entries automatically for you. Click on one of the native applications listed and then the Edit button to see what’s been set up:

ADFS application group setup

The authentication error I mentioned previously:
MSIS9224: Received invalid OAuth authorization request. The received 'redirect_uri' parameter is not a valid registered redirect URI for the client identifier: 'f06b0738-aa7a-4a50-a406-5c1e486c49be'. Received redirect_uri: 'https://dax7sqlaodc1.saonprem.com/namespaces/AXSF/'.

We can now see from the configuration above that the received redirect URI isn’t configured for client 'f06b0738-aa7a-4a50-a406-5c1e486c49be'. If we believed the URL was correct, we could add it here and ADFS would then allow the request to go through successfully. In my case the URL was the mistake, so I didn’t change the ADFS settings; I corrected the URL in LCS and started the deployment again.

Package deployment failures

When reconfiguring an environment, and including a custom package, if the deployment fails, check the orchestrator machine event logs, as described above – use a custom event log view to check all the logs on a machine at once.

I have had a situation where I was getting failures related to package dependencies even though my package does not contain the failing component. I will explain. The error is:

Package [dynamicsax-demodatasuite.7.0.4679.35176.nupkg has missing dependencies: [dynamicsax-applicationfoundationformadaptor;dynamicsax-applicationplatformformadaptor;dynamicsax-applicationsuiteformadaptor]]

My package does not contain demodatasuite, so the error is a mystery. It turns out that because my package had the same filename as a previously deployed package, the system did not download my package and instead attempted to deploy the old package with the same name. Packages can be found in the file share location, as below:
\\DAX7SQLAOFILE1\SQLFileShare\assets

The first part, \\DAX7SQLAOFILE1\SQLFileShare, is my file share (so it will differ in different environments – it’s a setting given when the environment was created); the assets folder is constant.

In here I see that my current package “a.zip” (renamed to a short name to work around a deployment failure caused by the path being too long) is from several weeks ago and is much larger than the package I expect. To get past this I renamed my package to b.zip and attempted deployment again. Note that after PU12 for on-premise this issue no longer occurs.

Package deployment process

During the package deployment process, the combined package folders will be created in this folder:

\\DAX7SQLAOFILE1\SQLFileShare\wp\Prod\StandaloneSetup-109956\tmpPackages

Error when environment left in Configuration mode

When running a redeployment, the error below can occur if the environment has been left in configuration mode (for changing config keys). Turn off configuration mode, restart the AOSes, and then re-run the deployment.

MachineName SQLAOSF1ORCH2
EnvironmentId c91bafd5-ac0b-43dd-bd5f-1dce190d9d49
SetupModuleName FinancialReporting
Component Microsoft.Dynamics.Performance.Deployment.Commands.AX.AddAXDatabaseChangeTracking
Message An unexpected error occurred while querying the Metadata service. Check that all credentials are correct. See the deployment log for details.
Detail Microsoft.Dynamics.Performance.Deployment.Common.DeploymentException: An unexpected error occurred while querying the Metadata service. Check that all credentials are correct. See the deployment log for details. ---> System.ServiceModel.FaultException: Internal Server Error Server stack trace: at System.ServiceModel.Channels.ServiceChannel.HandleReply(ProxyOperationRuntime operation, ProxyRpc& rpc) at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout) at

Error when FRServiceUser is missing

This error can also happen when the FRServiceUser is missing in USERINFO – the AOS metadata service is trying to create an AX session as this user.
This user is normally created by the DB synch process. If the user is incorrect in USERINFO, then deleting that user and re-running DB synch should recreate it – you can set USERINFO.ISMICROSOFTACCOUNT to 0 in SSMS and then re-run DB synch to create the user. In PU12+, DB synch can be triggered by clearing the SF.SYNCLOG table and then killing AXService.exe – when it automatically starts back up it will run a DB synch. Then you should see the FRServiceUser created back in USERINFO.

MachineName SQLAOSF1ORCH2
EnvironmentId c91bafd5-ac0b-43dd-bd5f-1dce190d9d49
SetupModuleName FinancialReporting
Component Microsoft.Dynamics.Performance.Deployment.Commands.AX.AddAXDatabaseChangeTracking
Message An unexpected error occurred while querying the Metadata service. Check that all credentials are correct. See the deployment log for details.
Detail Microsoft.Dynamics.Performance.Deployment.Common.DeploymentException: An unexpected error occurred while querying the Metadata service. Check that all credentials are correct. See the deployment log for details. ---> System.ServiceModel.FaultException: Internal Server Error Server stack trace: at System.ServiceModel.Channels.ServiceChannel.HandleReply(ProxyOperationRuntime operation, ProxyRpc& rpc) at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout) at
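A hedged T-SQL sketch of the FRServiceUser reset described above. The table names (USERINFO, SF.SYNCLOG) come from the text; verify them against your environment and take a database backup before changing anything:

-- Run against the AX database.
-- Option 1 (as described above): flag the existing user record.
UPDATE dbo.USERINFO SET ISMICROSOFTACCOUNT = 0 WHERE ID = 'FRServiceUser';

-- Option 2 (alternative described above): remove the broken user so DB synch recreates it.
-- DELETE FROM dbo.USERINFO WHERE ID = 'FRServiceUser';

-- PU12+: clear the synch log, then kill AXService.exe; when the service restarts it
-- runs a DB synch, which should recreate FRServiceUser in USERINFO.
DELETE FROM SF.SYNCLOG;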

OMS Alerts are invisible from OMS portal and Ibiza for Log Analytics – 04/17 – Investigating

Initial Update: Tuesday, 17 April 2018 16:09 UTC

We are aware of issues within Log Analytics and are actively investigating. Some customers may experience issues in accessing OMS alerts from OMS portal and Ibiza. New alerts creation is not impacted at the moment.
  • Work Around: None
  • Next Update: Before 04/17 17:30 UTC

We are working hard to resolve this issue and apologize for any inconvenience.
-Vishal Suram


Blazor 0.2.0 release now available


Just a few weeks ago we announced the first preview release of an experimental web UI framework called Blazor. Blazor enables full-stack web development using C# and WebAssembly. So far thousands of web developers have taken on the challenge to try out Blazor and have done some pretty remarkable things.

The feedback and support from the community has been tremendous. Thank you for your support!

Today we are happy to announce the release of Blazor 0.2.0. Blazor 0.2.0 includes a whole bunch of improvements and new goodies to play with.

New features in this release include:

  • Build your own reusable component libraries
  • Improved syntax for event handling and data binding
  • Build on save in Visual Studio
  • Conditional attributes
  • HttpClient improvements

A full list of the changes in this release can be found in the Blazor 0.2.0 release notes.

Many of these improvements were contributed by our friends in the community, for which, again, we thank you!

You can find getting started instructions, docs, and tutorials for this release on our new documentation site at http://blazor.net.

Get Blazor 0.2.0

To get set up with Blazor 0.2.0:

  1. Install the .NET Core 2.1 Preview 2 SDK.
    • If you've installed the .NET Core 2.1 Preview 2 SDK previously, make sure the version is 2.1.300-preview2-008533 by running dotnet --version. If not, then you need to install it again to get the updated build.
  2. Install the latest preview of Visual Studio 2017 (15.7) with the ASP.NET and web development workload.
    • You can install Visual Studio previews side-by-side with an existing Visual Studio installation without impacting your existing development environment.
  3. Install the ASP.NET Core Blazor Language Services extension from the Visual Studio Marketplace.

To install the Blazor templates on the command-line:

dotnet new -i Microsoft.AspNetCore.Blazor.Templates

Upgrade a Blazor project

To upgrade an existing Blazor project from 0.1.0 to 0.2.0:

  • Install all of the required bits listed above
  • Update your Blazor package and .NET CLI tool references to 0.2.0
  • Update the package reference for Microsoft.AspNetCore.Razor.Design to 2.1.0-preview2-final.
  • Update the SDK version in global.json to 2.1.300-preview2-008533
  • For Blazor client app projects, update the Project element in the project file to <Project Sdk="Microsoft.NET.Sdk.Web">
  • Update to the new bind and event handling syntax

Your upgraded Blazor project file should look like this:

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <RunCommand>dotnet</RunCommand>
    <RunArguments>blazor serve</RunArguments>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.Razor.Design" Version="2.1.0-preview2-final" PrivateAssets="all" />
    <PackageReference Include="Microsoft.AspNetCore.Blazor.Browser" Version="0.2.0" />
    <PackageReference Include="Microsoft.AspNetCore.Blazor.Build" Version="0.2.0" />
    <DotNetCliToolReference Include="Microsoft.AspNetCore.Blazor.Cli" Version="0.2.0" />
  </ItemGroup>

</Project>

Build reusable component libraries

Blazor components are reusable pieces of web UI that can maintain state and handle events. In this release we've made it easy to build reusable component libraries that you can package and share.

To create a new Blazor component library:

  1. Install the Blazor templates on the command-line if you haven't already

     dotnet new -i Microsoft.AspNetCore.Blazor.Templates
    
  2. Create a new Blazor library project

     dotnet new blazorlib -o BlazorLib1
    
  3. Create a new Blazor app so we can try out our component.

     dotnet new blazor -o BlazorApp1
    
  4. Add a reference from the Blazor app to the Blazor library.

     dotnet add BlazorApp1 reference BlazorLib1
    
  5. Edit the home page of the Blazor app (/Pages/Index.cshtml) to use the component from the component library.

     @addTagHelper *, BlazorLib1
     @using BlazorLib1
     @page "/"
    
     <h1>Hello, world!</h1>
    
     Welcome to your new app.
    
     <SurveyPrompt Title="How is Blazor working for you?" />
    
     <Component1 />
    
  6. Build and run the app to see the updated home page

     cd BlazorApp1
     dotnet run
    

    Blazor component library

JavaScript interop

Blazor apps can call browser APIs or JavaScript libraries through JavaScript interop. Library authors can create .NET wrappers for browser APIs or JavaScript libraries and share them as reusable class libraries.

To call a JavaScript function from Blazor the function must first be registered by calling Blazor.registerFunction. In the Blazor library we just created, exampleJsInterop.js registers a function to display a prompt.

Blazor.registerFunction('BlazorLib1.ExampleJsInterop.Prompt', function (message) {
    return prompt(message, 'Type anything here');
});

To call a registered function from C# use the RegisteredFunction.Invoke method as shown in ExampleJsInterop.cs

public class ExampleJsInterop
{
    public static string Prompt(string message)
    {
        return RegisteredFunction.Invoke<string>(
            "BlazorLib1.ExampleJsInterop.Prompt",
            message);
    }
}

In the Blazor app we can now update the Counter component in /Pages/Counter.cshtml to display a prompt whenever the button is clicked.

@using BlazorLib1
@page "/counter"

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button onclick="@IncrementCount">Click me</button>

@functions {
    int currentCount = 0;

    void IncrementCount()
    {
        currentCount++;
        ExampleJsInterop.Prompt("+1!");
    }
}

Build and run the app and click the counter button to see the prompt.

Counter prompt

We can now package our Blazor library as a NuGet package and share it with the world!

cd ../BlazorLib1
dotnet pack

Improved event handling

To handle events, Blazor components can register C# delegates that should be called when UI events occur. In the previous release of Blazor, components could register delegates using a specialized syntax (ex <button @onclick(Foo)> or <button onclick=@{ Foo(); }>) that only worked for specific cases and for specific types. In Blazor 0.2.0 we've replaced the old syntax with a new syntax that is much more powerful and flexible.

To register an event handler add an attribute of the form on[event] where [event] is the name of the event you wish to handle. The value of the attribute should be the delegate you wish to register preceded by an @ sign. For example:

<button onclick="@OnClick" />
@functions {
    void OnClick(UIMouseEventArgs e)
    {
        Console.WriteLine("hello, world");
    }
}

or using a lambda:

<button onclick="@(e => Console.WriteLine("hello, world"))" />

If you don't need access to the UIEventArgs in the delegate you can just leave it out.

<button onclick="@OnClick" />
@functions {
    void OnClick()
    {
        Console.WriteLine("hello, world");
    }
}

With the new syntax you can register a handler for any event, including custom ones. The new syntax also enables better support for tool tips and completions for specific event types.

The new syntax also allows for normal HTML style event handling attributes. If the value of the attribute is a string without a leading @ character then the attribute is treated as normal HTML.

For some events we define event specific event argument types (ex UIMouseEventArgs as shown above). We only have a limited set of these right now, but we expect to have the majority of events covered in the future.

Improved data binding

Data binding allows you to populate the DOM using some component state and then also update the component state based on DOM events. In this release we are replacing the previous @bind(...) syntax with something more first class and that works better with tooling.

Bind tooling

To set up a data binding you use the bind attribute.

<input bind="@CurrentValue" />
@functions {
    public string CurrentValue { get; set; }
}

The C# expression provided to bind should be something that can be assigned (i.e. an LValue).

Using the bind attribute is essentially equivalent to the following:

<input value="@CurrentValue" onchange="@((UIValueEventArgs __e) => CurrentValue = __e.Value)" />
@functions {
    public string CurrentValue { get; set; }
}

When the component is rendered the value of the input element will come from the CurrentValue property. When the user types in the text box the onchange is fired and the CurrentValue property is set to the changed value. In reality the code generation is a little more complex because bind deals with a few cases of type conversions. But, in principle, bind will associate the current value of an expression with a value attribute, and will handle changes using the registered handler.

Data binding is frequently used with input elements of various types. For example, binding to a checkbox looks like this:

<input type="checkbox" bind="@IsSelected" />
@functions {
    public bool IsSelected { get; set; }
}

Blazor has a set of mappings between the structure of input tags and the attributes that need to be set on the generated DOM elements. Right now this set is pretty minimal, but we plan to provide a complete set of mappings in the future.

There is also limited support for type conversions (string, int, DateTime) and error handling is limited right now. This is another area that we plan to improve in the future.

Binding format strings

You can use the format-... attribute to provide a format string to specify how .NET values should be bound to attribute values.

<input bind="@StartDate" format-value="MM/dd/yyyy" />
@functions {
    public DateTime StartDate { get; set; }
}

Currently you can define a format string for any type you want ... as long as it's a DateTime ;). Adding better support for formatting and conversions is another area we plan to address in the future.

Binding to components

You can use bind-... to bind to component parameters that follow a specific pattern:

@* in Counter.cshtml *@
<div>...html omitted for brevity...</div>
@functions {
    public int Value { get; set; } = 1;
    public Action<int> ValueChanged { get; set; }
}

@* in another file *@
<Counter bind-Value="@CurrentValue" />
@functions {
    public int CurrentValue { get; set; }
}

The Value parameter is bindable because it has a companion ValueChanged event that matches the type of the Value parameter.

Build on save

The typical development workflow for many web developers is to edit the code, save it, and then refresh the browser. This workflow is made possible by the interpreted nature of JavaScript, HTML, and CSS. Blazor is a bit different because it is based on compiling C# and Razor code to .NET assemblies.

To enable the standard web development workflow with Blazor, Visual Studio will now watch for file changes in your Blazor project and rebuild and restart your app as things are changed. You can then refresh the browser to see the changes without having to manually rebuild.

Conditional attributes

Blazor will now handle conditionally rendering attributes based on the .NET value they are bound to. If the value you're binding to is false or null, then Blazor won't render the attribute. If the value is true, then the attribute is rendered minimized.

For example:

<input type="checkbox" checked="@IsCompleted" />
@functions {
    public bool IsCompleted { get; set; }
}

@* if IsCompleted is true, render as: *@
<input type="checkbox" checked />

@* if IsCompleted is false, render as: *@
<input type="checkbox" />

HttpClient improvements

Thanks to a number of contributions from the community, there are several improvements to using HttpClient in Blazor apps:

  • Support deserialization of structs from JSON
  • Support specifying arbitrary fetch API arguments using the HttpRequestMessage property bag.
  • Including cookies by default for same-origin requests

Summary

We hope you enjoy this updated preview of Blazor. Your feedback is especially important to us during this experimental phase for Blazor. If you run into issues or have questions while trying out Blazor please file issues on GitHub. You can also chat with us and the Blazor community on Gitter if you get stuck or to share how Blazor is working for you. After you've tried out Blazor for a while please also let us know what you think by taking our in-product survey. Just click the survey link shown on the app home page when running one of the Blazor project templates:

Blazor survey

Have fun!

New Windows 10 Driver Failure Report, now live in Hardware Dev Center


We are happy to announce the availability of the new Windows 10 Driver failure report in Hardware Dev Center! The new report will enable IHVs, OEMs, ISVs and IoT partners to easily view failure data for all submissions made through Hardware Dev Center. No more having to scroll through hundreds of submission ids to view failure analytics 🙂

Other key enhancements include:

  • Added late-arriving failure data for 10 days. This will not only help IoT partners but also improve the overall accuracy of the failure data reported for all partners.
  • Added Filter Driver Failure Counts in the Hardware Dev Center to give numerous anti-virus partners access to their filter driver failure data.
  • Added the ability to filter and sort by cab types to ensure Kernel cabs are more easily discoverable.
  • Added Submission Failure Counts in the Hardware Dev Center Drivers Dashboard so partners can quickly identify and take action on submissions that need attention.

What's Next?

As part of our mission to solve some long-running partner pain points, in the upcoming months we will enable you to receive custom and scheduled reports seamlessly through an asynchronous reporting API. In addition, we will report new metrics such as unique device count and dimensions such as OS SKU, OS Release Version, CPU Name, etc. You will also see a new Driver Flighting report in the Hardware Dev Center that gives you complete visibility of driver performance during flighting.

We are super excited as we embark on this journey and hope you are as well. Stay tuned for more updates, soon!

TFS 2018 Update 2 RC2


We have released Team Foundation Server 2018 Update 2 RC2. You can see details about Update 2, including some key new features, in our RC1 blog post. RC2 is our last planned release before TFS 2018 Update 2 RTW.

Here are some key links:
TFS 2018.2 RC2 Release Notes
TFS 2018.2 RC2 Web Installer
TFS 2018.2 RC2 ISO
TFS 2018.2 RC2 Express Web Installer
TFS 2018.2 RC2 Express ISO

Like RC1, RC2 is a go-live release that is fully supported for installation in your production environment. It is available in all languages. Please report any problems on Developer Community or call customer support if you need immediate help.

The big change in RC2 is that we are enabling XAML builds for your legacy builds. Since XAML builds are not supported in TFS 2018 RTW or Update 1, some customers were blocked on upgrading.  In Update 2, XAML builds are re-enabled but deprecated, meaning there will be no further investment in this area. This should unblock those customers that have legacy XAML builds. For more information, see the release notes and our blog post.

We're looking forward to your feedback.

 


Deploying Your Dockerized Angular Application To Azure Using VSTS – Part 2


In part 1 I demonstrated building a VSTS build pipeline which built an Angular docker image and deployed it to an Azure Web App for Containers. Whereas it helped me achieve consistency across development and production environments, it required the build server to have all the necessary tools to build the application (Typescript, npm dependencies, Node.JS, etc.). This can be a daunting task and quite frankly unnecessary. In this post I will demonstrate how to eliminate the requirement to install the Angular build tools on the build server and instead utilize a Docker image which encapsulates all the necessary tools.

Building the Docker Image

I won't go into the details of creating a DockerFile as it has been discussed extensively in the community. Instead, I am basing my DockerFile on this excellent post which goes into the details of explaining each step of the DockerFile shown below. Here I have a multi-staged docker build where I am installing the npm packages and building the Angular application in the first image and then copying the final assets from the first image to the final image produced in the second stage.

It is important to keep in mind that the order of the different docker instructions shown above matters as it will allow us to utilize the caching feature with docker. We start by copying package.json into the container before the rest of the source code, because we want to install everything the first time, but not every time we change our source code. The next time we change our code, Docker will use the cached layers with everything installed (because the package.json hasn't changed) and will only compile our source code. I am using a private build agent on VSTS to ensure that the cached docker layers are not discarded each time the image is built which would lead to increased build times.

Building a CI/CD Pipeline on VSTS

Compared to the CI/CD pipeline that I built in part 1, you will notice that I am not including the npm install and ng build tasks, as these steps are now carried out inside the docker image build.

The rest of the steps pertaining to creating an Azure Container Registry as well as building the release pipeline which will deploy to a Web App for Container services will be exactly the same as demonstrated in part 1.

There you have it: the next time you check your code into VSTS you won't have to worry about installing and maintaining the different build tools on your VSTS build agent.

A new challenge: Joining the “AI for Earth” team at Microsoft


I am very excited to announce that I took a new role at Microsoft as Principal Engineer on the AI for Earth team.  I was definitely not looking to move, as my old team is completely fabulous.  But the opportunity to use my passion for machine learning to drive meaningful change was too exciting to pass up. 

AIforEarth-Rectangle

So what is “AI for Earth”?  Microsoft has publicly committed $50 million USD over 5 years for artificial intelligence projects that support clean water, agriculture, climate, and biodiversity.  But our true vision extends beyond a grant program: our data science team is working with partners and nonprofits to build a set of APIs to transform these environmental issues.  Imagine machine learning models exposed through services which could differentiate between various species of animal for conservation purposes, predict agricultural yields, estimate the probability of floods, super resolve climate predictions, or classify aerial and satellite imagery into actionable maps of natural resources that could empower land use planners to optimize the use of our planet’s scarce resources. 

Our AI for Earth work aligns to three key pillars:

  • Access – our grant program provides cloud resources and the ability to do machine learning at scale
  • Education – we are committed to helping our grantees be successful with office hours and training resources
  • Innovation – we are developing a showcase of lighthouse projects, publishing research, and collaborating with others to expand and grow initial projects.  At Build, we will talk about the private preview of a new AI for Earth API, and discuss how we are enabling others to produce their own APIs dedicated to environmental sustainability.

I’ve already had the opportunity to start working with some of our grantees, and I’m so impressed with the ways they are using machine learning to address climate change, improve agricultural yields, lessen water scarcity, and protect wildlife and Earth’s biodiversity.  I am excited to share their work with you in the coming months.  In addition, we work very closely with several groups in Microsoft Research, including Project Premonition and FarmBeats (you may have heard me speaking and blogging on FarmBeats in the past). 

Similar to my previous role, I will still be doing a mixture of public speaking in addition to my engineering work.  Here are some upcoming conferences (the hyperlinked ones are open to the public) where I will be speaking about AI for Earth in the next few months:

Conference – Date – Location
O’Reilly AI Conference – 4/30-5/2 – New York, NY
Data Science Conference – 5/3-5/4 – Chicago, IL
Build – 5/7-5/9 – Seattle, WA
AI for Earth Summit – 5/16-5/18 – Redmond, WA
WHIPS – 5/20-5/22 – Suncadia, WA
Vivatech – 5/24-5/26 – Paris, France
MLADS – 6/11-6/13 – Redmond, WA
O’Reilly AI Conference – 9/5-9/7 – San Francisco, CA


For more information on the AI for Earth program, please visit http://aka.ms/aiforearth.  If you are doing machine learning work in agriculture, water, climate change, or biodiversity, feel free to apply for a grant as well. 

Performance Degradation in South Central US – 04/17 – Investigating


Update: Tuesday, April 17th 2018 20:33 UTC

The Engineering team continues to investigate the slow VSTS web experience for users in South Central US. Recent changes are being evaluated and a rollback is being considered. We currently have no estimated time for resolution.

  • Next Update: Before Tuesday, April 17th 2018 22:45 UTC

Sincerely,
Daniel


Initial Update: Tuesday, April 17th 2018 19:42 UTC

We're investigating Performance Degradation in South Central US.

  • Next Update: Before Tuesday, April 17th 2018 20:15 UTC

Sincerely,
Vitaliy

Real-time Code Quality with SonarLint in Visual Studio


In the second part of her SonarQube series, Premier Developer Consultant Sana Noorani builds on top of SonarQube technology and explains how SonarLint can be added in Visual Studio to track real time code quality.


What is SonarLint?

SonarLint is an extension you can add to an IDE such as Visual Studio to provide developers with real-time feedback on the quality of their code. It can detect issues in seconds, which can improve productivity. SonarSource describes SonarLint as working like a spell checker for text, since it detects issues in your code as you go.

In a previous post, I showed how you can integrate SonarQube into your VSTS build/release pipeline. However, one major drawback is that you must wait to get feedback on your code, which you only receive when code is pushed and a build is triggered. This can be limiting. SonarLint provides a better option for checking code quality, since it integrates directly into Visual Studio and code can be checked as a developer hacks away.

In this post, we will be discussing how you can enable SonarLint in Visual Studio to get real-time feedback on the quality of your code. This project assumes that you already have a running instance of SonarQube on a server.

Adding SonarLint to Visual Studio

In Visual Studio, SonarLint is an extension that can be installed by going to the following:

Tools -> Extensions and Updates -> Online

Then in the search box, search for “SonarLint”. Once you see SonarLint, press “Download”.

image

You must now sign out of Visual Studio to let the changes save properly. VSIX Installer will prompt you to allow for it to modify Visual Studio. After this is completed, you can now use SonarLint for your project.

Using SonarLint in your project

To connect an existing project with SonarQube, click on the following:

Analyze -> Manage SonarQube Connections

Then you will need to press “Connect” to connect to your SonarQube Server. Add in the SonarQube server, username, and password information.

image

Once you connect, you will see SonarLint connect to the SonarQube server. Then you will see a screen that will ask you to select a SonarQube project to bind your solution to. When you do this, your changes in SonarQube will now be synced to Visual Studio.

Analyzing your code

You can now analyze your code. You can do this with the following:

Right click solution -> Analysis -> Run Code Analysis

If there are any errors or issues with your code, you will see them in the “Error List” box built into Visual Studio. Moreover, you will be able to see the UI line items directly in Visual Studio. Normally, you would need to log into the SonarQube web application directly to get this information.

Another benefit of SonarLint in Visual Studio is that you will see a helper notify you with errors or warnings as you write code. This allows you to fix code right away in real time. This can drastically improve efficiency for developers and it will help reduce time needed for code checks.

Which tools will SonarQube work best with?

Overall, SonarLint will catch issues in code on an IDE such as Visual Studio. However, it will not catch issues when your code is integrated with other pieces of the project. Having SonarQube in the VSTS build step is very important to ensure that code smells and issues are being detected when code integration occurs. In summary, it is important to have both the VSTS extension and SonarLint to allow for an efficient bug-free DevOps pipeline.

CDN outage in Azure impacting multiple VSTS features across Western Europe- Mitigated


Final Update: Tuesday, April 17th 2018 21:29 UTC

We have confirmed that the CDN issue has been mitigated. We have verified with customers that functionality is returned for the scenarios which were failing.

Sincerely,
Tom


Initial Update: Tuesday, April 17th 2018 21:14 UTC

A CDN Outage in Western Europe is impacting multiple features across VSTS

  • Next Update: Before Tuesday, April 17th 2018 21:45 UTC

Sincerely,
Tom

Troubleshoot Browser scenarios using Problem Step Recorder (PSR.EXE)


Hi there! In this blog post, we are sharing PSR, a built-in Windows tool you can use to record scenarios that are not always easy to explain over the phone.

The Problem Steps Recorder (PSR.exe) first shipped in Windows 7 (and is in all later versions). This tool collects the actions performed by a user while using Windows. The captured steps include screenshots, which are extremely helpful. The default location of PSR is %windir%\System32\psr.exe, and it can be run from the CMD or Run window.

The main benefit of using this tool is the amount of time saved while troubleshooting a Windows scenario you have been asked to assist with. As you can see, PSR is a very cool, easy to use utility that is widely used by Microsoft support while troubleshooting with end users.

Here is how you can start PSR

  • From Start / Run or CMD window type: psr.exe

  • PSR will open

  • To start, click on the Start Record
  • You can add comments as you step thru the reproduction of the scenario
  • Stop when reproduction is done.
  • Save the recording (Alt+ V)
  • Give it a friendly name. This will save it with the .zip file extension

NEXT: Extract the file and open the mht file. It may look something like this: Recording_20180417_1702.mht and it should open using IE.

The .mht file, when opened in IE, will allow you to:

  • Review the recorded steps
  • Review the recorded steps as a slide show
  • Review the additional details

This blog has been provided by the Browser Support Team!

What’s new in VSTS Sprint 132 Update


The Sprint 132 Update of Visual Studio Team Services (VSTS) has rolled out to all accounts and includes several features to help you scale your build and release pipeline.

If you have multiple, dependent teams in your organization working on large products, check out the new build completion trigger. It allows you to chain two related builds together so that changes to an upstream component, such as a library, can trigger a rebuild of a downstream dependency.

build completion trigger

We also generally released a robust, multi-machine deployment feature we call Deployment Groups. Whether your machines are on-premises or in the cloud, you can use Deployment Groups to orchestrate deployments across them with rolling updates, while ensuring high availability. There are also new release definition templates and an Azure Resource Group task to make setting this up even easier. Plus, if you create an Azure DevOps Project with the virtual machine option, it will create a Deployment Group for you!

A smaller but handy feature you’ll discover in your next pull request is the ability to add commit messages from each commit into the description with one click. I recently received an email from someone who had previously been doing this manually, since they already describe their commits thoughtfully; this saves them time every day.

pull request description

Check out the full release notes for more.


VSTS AAD linked accounts experiencing 403/500 errors when they are deleted and recreated – 04/17 – Advisory


We are investigating authentication problems in the Visual Studio Team Services accounts that are linked to an Azure Active Directory tenant, after a user is deleted from that tenant and created again. The user might receive a 500 Internal Server Error page or a 403 Forbidden error page, with the following message:


"According to Azure Active Directory, your Identity is currently Deleted within the following Azure Active Directory. If you feel you have received this message in error, please contact your AAD administrator."


Sincerely,

Daniel

Reverse Proxy with Basic Authentication in Azure Web App


I have previously discussed using a Web App in App Service Environment (ASE) as a reverse proxy with user authentication. In that scenario I used Azure Active Directory (AAD) App Service authentication (a.k.a. Easy Auth). In some cases, the AAD authentication may not be what you would like to use, specifically if the client is unable to obtain a token from AAD.

In this blog post, I will describe configuring an Azure Web App to use Basic Authentication instead. This is really a 5 step process: 1) create a new web app, 2) add an applicationHost.xdt file to enable the ARR functionality in the web app, 3) deploy a web app which includes a module for basic authentication, 4) set the username and password, 5) create/modify rewrite rules in the web.config file for the web app. I have collected all the code, etc. needed in this GitHub repository:  https://github.com/hansenms/BasicAuthReverseProxy.

There are a few different Basic Auth modules out there:

  1. https://github.com/hexasoftuk/Hexasoft.BasicAuthentication
  2. https://github.com/devbridge/AzurePowerTools/tree/master/Devbridge.BasicAuthentication
  3. https://github.com/1and1/WebApi.BasicAuth
  4. Etc.

The first one is inspired by the second one. In this example, I am using the first one from Hexasoft purely because I happened to find it first and it is available on NuGet. You can probably search on NuGet and find other modules. This is just an example.

Step 1: Create a new Azure Web App

Use the Azure Portal to create a new Web App. Here I am using Azure Government, but Azure Commercial will work too. You can also use either a regular web app or a web app in an App Service Environment.

Step 2: Add applicationHost.xdt file

Open the Kudu console and add a file called applicationHost.xdt in the D:\home\site folder:

Edit the file and set the contents to:

<?xml version="1.0"?>  
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">  
    <system.webServer>  
        <proxy xdt:Transform="InsertIfMissing" enabled="true" preserveHostHeader="false" reverseRewriteHostInResponseHeaders="false" />  
    </system.webServer>  
</configuration>

Step 3: Deploy web app with basic auth module

There are several ways to do this, but an easy way is to just add the git repository https://github.com/hansenms/BasicAuthReverseProxy as an external repository in the deployment options:

 

Step 4: Set username and password

Next use app settings to set BasicAuthentication.Username and BasicAuthentication.Password:

Step 5: Update/Modify/Create rewrite rules

The default deployment proxies YOUR-SITE.azurewebsites.us/msdn/* to blogs.msdn.microsoft.com, and the same for technet/*. But you can modify and add rules in the web.config file.

 

After deployment and setting the username and password, you can try the proxy; you should be asked for credentials:

And then the request will be forwarded. Here is an example of proxying to one of my blog posts:

And that's it. You have now created a web app reverse proxy with basic authentication. Let me know if you have questions/comments/suggestions.

 

The power of technology changes lives


If you are a keen user of Microsoft technologies including Office 365 and Windows 10, then you will already be well aware of the amazing accessibility features they offer. However, unless you yourself require specific features for your own additional needs, you may not have seen the real impact of these tools in practice.


An inspiring story

Last week, Steven Woodgate, Marketing and Communications Manager at Microsoft, shared his own inspirational story 'I have dyslexia and dyspraxia, but most importantly I have creativity'. In it he shares his personal journey of discovery and how, throughout, he has drawn upon creative solutions, including technology, to overcome his own difficulties and barriers, leading to a successful career at Microsoft. A key point he makes is:

"Technology is not meant to make those with learning difficulties appear special; it’s there to help normalise experiences and create a level playing field."

Read the full inspiring story here.

If you are interested in finding out more about these amazing technologies which have supported Steven then take a look at some of the available resources below.


Check out the Microsoft Accessibility blog

 

Steven's is just one of the inspiring stories coming out of the use of Microsoft accessibility tools. Check out the 'Inclusion in Action' blog page to read more stories from people whose lives have been changed through the use of technology.

 


Microsoft accessibility dedicated page

There are no limits to what people can achieve when technology reflects the diversity of everyone who uses it. Transparency, accountability, and inclusion aren’t just built into Microsoft's culture. They’re reflected in products and services designed for people of all abilities. Microsoft is committed to accessibility and their dedicated page hosts an array of information to support you in finding out all about what's on offer! Click here to visit the page.

 


Microsoft Accessibility Sway 

This fantastic Sway put together by the accessibility team takes you step-by-step through all the accessibility features available to you.

 


Follow Microsoft Accessibility on Twitter


 

About the OCR function of the Computer Vision API


Hello, this is Nakayama from the Cognitive Services support team.

This post describes the behavior you will see if you specify "Ja" as the language for the Computer Vision API OCR function, and how to work around it.

Symptom:

When calling the Computer Vision API OCR function with "Ja" specified for the language parameter in order to read a Japanese document, a Response 400 error is returned.

Workaround:

If you want to specify Japanese for the language parameter, use "ja" instead of "Ja".

Note that the list of supported languages in the Computer Vision API OCR documentation shows "Ja (Japanese)", but this is an error; it should be "ja (Japanese)". We have requested a correction to the documentation, so please bear with us until it is updated.

Computer Vision API - v1.0

https://westus.dev.cognitive.microsoft.com/docs/services/56f91f2d778daf23d8ec6739/operations/56f91f2e778daf14a499e1fc

We hope the above is helpful.

Nakayama, Cognitive Services development support team

Make changes to Azure App Service setting using Postman


I wrote an article here “How to disable/enable HTTP/2, Azure App Service” that showed how to change an Azure App Service setting using Resource Explorer.  You can also do this from Postman, which I wrote about here “Using Postman to call Azure REST APIs”.

The most challenging aspect of doing this is getting the Bearer token, which is required so the request you are making from Postman is authenticated.  In the article I already mentioned, “Using Postman to call Azure REST APIs”, I show how to get this token using Fiddler; you can also get it from an F12 network trace log, as shown in Figure 1.

image

Figure 1, how to get bearer token from F12

In this article I will retrieve the Bearer token using the ARMClient, which is hosted on GitHub here and can be installed with choco.  Both are easy to install, so I will not go into that much here.

image

Figure 2, how to install the ARMClient

Then to get the Bearer token, I enter the following command and it is now in my clipboard, Figure 3.

armclient token <subscriptionId>

image

Figure 3, how to get a Bearer token, install the ARMClient

Yes, ok, I got “There is no login token.  Please login to acquire a token”, so I logged in, using "ARMCLIENT LOGIN" and then re-executed the token request…Figure 4.

armclient login

image

Figure 4, how to get a Bearer token, install the ARMClient

Now, to change, for example, the “http20Enabled” attribute, first try a GET with the required headers, Figure 5.  I used a URL similar to this:

https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Web/sites/<appName>/config/web?api-version=2016-08-01

I executed a GET request to make sure the request was working, Figure 5.

image

Figure 5, how update Azure App Service attributes using ARMClient Postman

Then, to update the Azure App Service attribute, do the following:

  • Use the same URL as you did with the GET method
  • Select the PUT verb
  • Copy the contents from the Body of the GET response
  • Paste the Body into the PUT request
  • Add the Authentication header
  • Update the attribute you want to modify
  • Press Send

Should look something similar to Figure 6.

image

Figure 6, how update Azure App Service attributes using ARMClient Postman

Once successful, run the GET request again to retrieve the attributes and confirm the modification was performed.
