
Online migration from AWS RDS MySQL to Azure Database for MySQL


We recently announced public preview support for online migrations of MySQL to Azure Database for MySQL by using the Azure Database Migration Service (DMS). Customers can migrate their MySQL workloads hosted on premises, on virtual machines, or on AWS RDS to Azure Database for MySQL while the source databases remain online. This minimizes application downtime and reduces the SLA impact for customers.

Conceptually, online migration with minimal downtime in DMS uses the following process:

  1. Migrate the initial load using bulk copy.
  2. While the initial load is being migrated, incoming changes are cached and then applied after the initial load completes.
  3. Changes in the source database continue to replicate to the target database until the user decides to cut over.
  4. During a planned maintenance window, stop new transactions coming into the source database. Application downtime starts with this step.
  5. Wait for DMS to replicate the last batch of data.
  6. Complete application cutover by updating the connection string to point to your instance of Azure Database for MySQL.
  7. Bring the application online.

Below are prerequisites for setting up DMS and step-by-step instructions for connecting to a MySQL database source on AWS RDS.

Prerequisites

  • Create an instance of Azure Database for MySQL. Refer to the article Use MySQL Workbench to connect and query data for details on how to connect and create a database using the Azure portal.
  • Create a VNET for the Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either ExpressRoute or VPN.
  • Ensure that your Azure Virtual Network (VNET) Network Security Group rules do not block the following communication ports: 443, 53, 9354, 445, and 12000. For more detail on Azure VNET NSG traffic filtering, see the article Filter network traffic with network security groups.
  • Configure your Windows Firewall for database engine access.
  • Open your Windows firewall to allow the Azure Database Migration Service to access the source MySQL Server, which by default listens on TCP port 3306.
  • When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow the Azure Database Migration Service to access the source database(s) for migration.
  • Create a server-level firewall rule for the Azure Database for MySQL to allow the Azure Database Migration Service access to the target databases. Provide the subnet range of the VNET used for the Azure Database Migration Service.
  • The source MySQL must be on a supported MySQL community edition. To determine the version of the MySQL instance, run SELECT @@version; in the MySQL utility or MySQL Workbench.
  • Azure Database for MySQL supports only InnoDB tables. To convert MyISAM tables to InnoDB, see the article Converting Tables from MyISAM to InnoDB.

Setting up AWS RDS MySQL for replication

  • In a custom parameter group for the source RDS instance, set binlog_format = row
  • Set binlog_checksum = NONE
  • Save the new parameter group and apply it to your RDS instance (a scripted sketch follows below)
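If you prefer to script these settings, the same change can be made with the AWS CLI's modify-db-parameter-group command. This is a minimal sketch (bash-style line continuations); the parameter group name my-mysql-params is a placeholder for your own custom group.

aws rds modify-db-parameter-group \
    --db-parameter-group-name my-mysql-params \
    --parameters "ParameterName=binlog_format,ParameterValue=ROW,ApplyMethod=immediate" \
                 "ParameterName=binlog_checksum,ParameterValue=NONE,ApplyMethod=immediate"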

Pre-migration steps and setting up DMS

  • Please refer to this tutorial to continue with schema migration, setting up DMS to perform data movement, and monitoring data movement.
  • For known issues and workarounds, please refer to this document.
Congratulations, you have just performed a MySQL migration from AWS RDS to Azure Database for MySQL successfully! For more information, refer to the resources below. If you have any questions, please email the DMS Feedback alias at dmsfeedback@microsoft.com.

Resources

Sign in to the Azure portal and set up an instance of DMS for free. We are constantly adding new database source/target pairs to DMS. Stay up-to-date on #AzureDMS news and follow us on Twitter @Data_Migrations. Join the Azure Database Migration Service (DMS) Preview Yammer group, and give us feedback via User Voice or email by contacting us at dmsfeedback@microsoft.com.


Announcing Public Availability of Azure SQL Database Online Migration in Azure Database Migration Service


In an era of exploding data volumes and ever-changing business requirements, we understand that a streamlined and robust migration service is key to accelerating customers’ transition to the cloud.  Since database migrations can lead to significant downtimes, how can customers deal with database migrations when they need to minimize end-user downtime but also want to ensure that the data remains up-to-date with the source database?

Today, we are very excited to announce public availability of online migration in Azure Database Migration Service (DMS). DMS is a one-stop shop for migrating data from different database engines to Azure with built-in resiliency and robustness. With online migration, businesses can migrate their databases to Azure while the databases continue to be operational. In other words, migrations can be completed with minimal downtime for critical applications, limiting the impact to service level availability and the inconvenience to end customers.

DMS also introduces the Business Critical pricing tier, which supports offline and online migrations for business critical workloads that require minimal downtime. The Business Critical pricing tier is currently available for free in public preview.

Online migration is available for customers to migrate on-premises/SQL Server on IaaS workloads to Azure SQL Database in the following regions:

  • West Europe
  • East US 2
  • Central US

We will be extending online migration to other regions in a staged approach, and in addition, migrating from on-premises SQL Server/RDS workloads to Azure SQL Database Managed Instance will be available in the near future.

Setting up an online migration in DMS is very simple. DMS first performs the full data load and establishes a continuous sync with the target databases until the customer initiates the cutover migration at a convenient time. A short video describing this process and providing a demonstration follows.

Note: You can also access this video directly here: Online migrations using Azure Database Migration Service.

Limitations and Known issues:

Known issues and limitations associated with online migrations are below.

  • Online migrations are not supported for temporal tables.
  • Tables with hierarchyid data types are not supported.
  • LOB data types: if the length of a Large Object (LOB) column is bigger than 32 KB, data might get truncated at the target.

To get the complete list of limitations and known issues, please visit our known issues page.

Please refer to the resources below for online migration. If you have any questions, please email the DMS Feedback alias at dmsfeedback@microsoft.com.

Thank you.

Raj Pochiraju
Principal Program Manager
Azure Data Engineering

Visual Studio Toolbox: Telerik & Kendo UI


In this episode, I am joined by Sam Basu and Ed Charbeneau, who take us on a whirlwind tour of the Telerik UI toolkit for .NET applications and the Kendo UI toolkit for modern Web apps. After an overview of these two products, they show controls for the Web [12:30], chat bots [22:45] and Xamarin [27:45].

Check out https://demos.telerik.com to see online demos of all of the Telerik and Kendo UI controls.

Slowdown Distribution History Cleanup for Troubleshooting


For troubleshooting, it would be nice to have more than the default 48 hours of historical information. To change the History Cleanup settings, use SQL Server Management Studio: right-click the "Replication" folder and select "Distributor Properties".

On the "General" page, to far right of your distribution database name, click the "…" more information button. My screen below shows the "Delete Batch Size" option available with SQL 2017.

As you can see, the default is 48 hours, which is too short if you'd like to keep a week of historical data for trend analysis. Adjust the settings as needed.

Changes here are automatically reflected in the SQL Agent job "Agent history clean up: distribution":

EXEC dbo.sp_MShistory_cleanup @history_retention = 120
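If you prefer to script the retention change rather than use the SSMS dialogs, the same setting can also be adjusted with sp_changedistributiondb. This is a minimal sketch that assumes the default distribution database name; run it on the Distributor.

-- Raise the agent history retention from the 48-hour default to 120 hours
EXEC sp_changedistributiondb
    @database = N'distribution',
    @property = N'history_retention',
    @value = N'120';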

Consider also changing the agents to the "Verbose History Profile", allowing detailed logging of replication agents to these history tables. In these same dialogs you can use "Change Existing Agent" to reset all agents to the Verbose profile instead of changing them one at a time.

For a Push replication environment, restarting SQL Server Agent will restart all replication agents under the new profile.

WARNING: Be sure restarting SQL Server Agent is what you want to do, as it also restarts any non-replication-related job that may be running. It also restarts all replication-related jobs, even those belonging to another published database. If you're not sure, don't; just stop and restart individual agents as time permits.

This posting replaces an earlier posting, documented here, that provided steps to directly modify the SQL Agent history job.

Chris Skorlinski
Microsoft SQL Server Escalation Services

Team Foundation Server (TFS) Reporting – Which reports do you use?


If you are using Team Foundation Server (TFS) and SSRS Reporting today, we want to hear from you!

We want to know which of the TFS Reports we offer today are most valuable to you.

Your feedback will help us guide the VSTS Analytics Service roadmap. Analytics is the future of reporting for both Visual Studio Team Services (VSTS) and Team Foundation Server (TFS).

This short survey should take you no more than 5 minutes to complete. Feel free to forward the survey link to your colleagues. All responses are kept confidential.

Link to the survey: https://aka.ms/DevOpsReportingSurvey
The survey will be open until September 12th.

If you have any questions about the survey, please email Anand Guruswamy at Anand.Guruswamy@microsoft.com.

Thank you!

Bot Framework v4 with Luis


The Cognitive Services Language Understanding Intelligence Service (Luis) should be no stranger to bot developers; it enables the all-important natural language processing which means users can converse with your bot in the way they would with another human: by using natural language.

With Bot Framework V3 (BFv3), there is a well-prescribed pattern for using Luis in your bot via the LuisDialog, which gives you a nicely packaged object to work with, abstracts the Luis API call, and handles the top-level intents.

Bot Framework V4 (BFv4) is a bit more complex.

I did some work on BFv4 a few weeks ago. I wrote about my main initial observations in my 'Bot Framework V4: What I learnt in 4 days in July 2018' article. However, the Luis implementation was something I spent a lot of time on and wanted to drill into it a bit more here.

As with anything I write around emerging technology, this stuff is just a collection of my observations at the time of writing (August 2018). Your mileage may vary.

All the sample code for this article comes from my Banko bot V4 sample which is a made-up bot based on common banking scenarios. If you want the full sample, please clone from GitHub. I'm happy to accept pull requests if you can think of improvements that remain focused on the job of demonstrating Bot v4 with Luis.

BFv4 Luis Options: LuisRecognizer or LuisRecognizerMiddleware

In BFv4 there are two patterns for using Luis with capabilities that are built into the SDK.

The key difference between these two options is that one is implemented as middleware and the other is not.

The non-middleware approach gives you a strongly typed .NET object to work with, whereas the middleware approach is a collection of dictionaries which are much harder to parse and traverse in your code.

When deciding which approach to use, consider that middleware executes on each and every message to your bot. You can see more about how middleware works in the Middleware docs.

In some cases, it might make sense for every message to your bot to need natural language processing, but in most cases, Luis is only required for top level intent detection and entity resolution. Once you have the user's intent and initial entities, the bot can then launch into a dialog tree, which typically would not require Luis.

Passing every message through Luis when you don't need to will not only add unnecessary network latency to your bot, but will also become fairly expensive as your bot scales. Luis is a very cheap service for what it does, but there are still costs associated and a typical bot conversation could easily generate 10-20 API calls for just one user.

LuisRecognizer Implementation

In my Banko scenario, I decided to use LuisRecognizer for top level intent and entity detection only.

The steps for getting this set up are relatively easy and mostly documented in Extract intents and entities using LUISGen, but there are some details missing from the documentation at the moment, so hopefully this will fill in the gaps.

1: Create a Luis model

Hopefully you already know how to build your Luis model, but if not, just head to http://luis.ai, set up your intents and entities, then train and publish the model.

There are some key things to watch out for:

  1. The Key and Endpoint you use both need to be in the same data center, otherwise you'll get 401 responses
  2. If you have provisioned Luis in Europe, you'll need http://eu.luis.ai to manage the application (I think there is a portal for Asia too but not sure what it is)

2: Create a class with LuisGen

This is where we create a .net class based on our Luis model.

You can use the LUISGen tool to generate classes that make it easier to extract entities from LUIS in your bot's code.

LuisGen is an NPM tool which you install and operate as follows:

npm install -g luisgen

luisgen BankoLuisModel.json -cs Banko.BankoLuisModel -o

This will give you a C# class which you can use to receive your Luis responses.
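For orientation, the generated class ends up with roughly the following shape. This is a simplified sketch reconstructed from the members used later in this article, not the exact LuisGen output (which also carries text, score and metadata properties):

using System.Collections.Generic;
using System.Linq;

public partial class BankoLuisModel
{
    // One enum member per intent defined in the Luis model
    public enum Intent { Balance, Transfer, None };

    // Intent -> confidence score returned by Luis
    public Dictionary<Intent, double> Intents { get; set; }

    // Strongly typed entities; Money comes from Microsoft.Bot.Builder.Ai.LUIS
    public class _Entities
    {
        public string[] AccountLabel { get; set; }
        public string[] Payee { get; set; }
        public Microsoft.Bot.Builder.Ai.LUIS.Money[] money { get; set; }
    }

    public _Entities Entities { get; set; }

    // Convenience accessor used throughout this article
    public (Intent intent, double score) TopIntent()
    {
        var top = this.Intents.OrderByDescending(kv => kv.Value).First();
        return (top.Key, top.Value);
    }
}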

3: Use a root DialogSet to call Luis and work out intent

In your main bot file, you need to set up a LuisRecognizer object which you can use to get top-level intents and entities. You can then either handle them directly or create a DialogContainer to handle each one.

In your main bot class (the one that inherits from IBot), you can do something like this, which gives you the main LuisRecognizer object to work with:

var luisRecognizerOptions = new LuisRecognizerOptions { Verbose = true };

var luisModel = new LuisModel(
    configuration[Keys.LuisModel],
    configuration[Keys.LuisSubscriptionKey],
    new Uri(configuration[Keys.LuisUriBase]),
    LuisApiVersion.V2);

var LuisRecognizer = new LuisRecognizer(luisModel, luisRecognizerOptions, null);

Later on in the main bot code you can do something like this to capture the utterance from the user, call Luis and work out the intent.

var utterance = dc.Context.Activity.Text?.Trim().ToLowerInvariant();

var luisResult = await LuisRecognizer.Recognize<BankoLuisModel>(utterance, new CancellationToken());

switch (luisResult.TopIntent().intent)
{
    case BankoLuisModel.Intent.Balance:
        //do something to handle the balance intent
        break;
    case BankoLuisModel.Intent.Transfer:
        //do something to handle the transfer intent
        break;
    case BankoLuisModel.Intent.None:
    default:
        await dc.Context.SendActivity($"I don't know what you want to do.");
        await next();
        break;
}

4: Implement a DialogContainer for each intent

When you've determined the right intent from Luis, you can handle it however you want to. However, I think that DialogContainer is probably best practice for most scenarios.

A DialogContainer is similar to a Dialog in BFv3 and is a way of handling a specific branch of the conversation with a user. The way you progress through a dialog is new compared to BFv3 and uses a series of WaterfallStep delegates, which are distinct interactions between the bot and the user.

You can invoke a DialogContainer from your top level intent handler. As an example, the switch statement for handling intents may look like this:

switch (luisResult.TopIntent().intent)
{
    case BankoLuisModel.Intent.Balance:
        await dc.Begin(nameof(BalanceDialogContainer));
        break;
    case BankoLuisModel.Intent.Transfer:
        await dc.Begin(nameof(TransferDialogContainer));
        break;
    case BankoLuisModel.Intent.None:
    default:
        await dc.Context.SendActivity($"I don't know what you want to do.");
        await next();
        break;
}

This is a very simple example of a dialog container which simply gives the user a hard-coded balance and exits.

You can see more complete examples of the BalanceDialogContainer.cs and TransferDialogContainer.cs from my Banko example to learn how to structure a DialogContainer.

public class BalanceDialogContainer : DialogContainer
{
    public static BalanceDialogContainer Instance { get; } = new BalanceDialogContainer();
    private BalanceDialogContainer() : base(nameof(BalanceDialogContainer))
    {
        this.Dialogs.Add(nameof(BalanceDialogContainer), new WaterfallStep[]
        {
            async (dc, args, next) =>
            {
                // GetBalance is where you'd get the actual balance from your back-end system, but this is a demo
                var balance = GetBalance();
                await dc.Context.SendActivity($"You have {balance}. What is next?");
            },
            async (dc, args, next) =>
            {
                await dc.End();
            }
        });
    }
}

5: Pass Entity as arguments

As well as intent detection, Luis is also commonly used to extract entities from the user's original utterance.

As an example, if a Banko user says "Transfer £20 from the joint account to martin kearn on saturday", Luis could classify this as follows:

  • Intent: "Transfer"
  • Entities:
    • AccountLabel: "joint account"
    • Money: "£20"
    • Date: "Saturday" (more on date entities later)
    • Payee: "martin kearn"

The LuisRecognizer makes it very simple to extract the entities and pass them as an argument to your DialogContainer so you can work with them. In the scenario for a money transfer, the code looks like this:

case BankoLuisModel.Intent.Transfer:
    var dialogArgs = new Dictionary<string, object>();
    dialogArgs.Add(Keys.LuisArgs, luisResult.Entities);
    await dc.Begin(nameof(TransferDialogContainer), dialogArgs);
    break;

Entity Validation

If you do pass entities from your main IBot to your DialogContainer, you'll want to validate them, convert any entities that have values to the correct type and store them in state so that the rest of your application can use the values.

You may typically want to discard entities that do not have values.

Bot state requires that information is stored as Dictionary<string,object>, so I find it best to implement a static class which accepts your _Entities object from the LuisRecognizer, validates and converts each entity, and returns a Dictionary<string,object> full of entities to be stored in bot state.

In my Banko example, the LuisValidator.cs contains the full details but this snippet should give you the idea.

This validates that the AccountLabel entity has a value and if it does, it adds the value to a Dictionary<string,object> which is returned.

public static Dictionary<string, object> LuisValidator(BankoLuisModel._Entities entities)
{
    var result = new Dictionary<string, object>();

    // Check AccountLabel
    if (entities?.AccountLabel?.Any() is true)
    {
        var accountLabel = entities.AccountLabel.FirstOrDefault(n => !string.IsNullOrWhiteSpace(n));
        if (accountLabel != null)
        {
            result[Keys.AccountLabel] = accountLabel;
        }
    }

    return result;
}

Within the DialogContainer you can call the LuisValidator and store the results in Bot State. You would typically do this as your first WaterfallStep.

async (dc, args, next) =>
{
    // Initialize state.
    if(args!=null && args.ContainsKey(Keys.LuisArgs))
    {
        // Add any LUIS entities to the active dialog state. Remove any values that don't validate, and convert the remainder to a dictionary.
        var entities = (BankoLuisModel._Entities)args[Keys.LuisArgs];
        dc.ActiveDialog.State = Validators.LuisValidator(entities);
    }
    else
    {
        // Begin without any information collected.
        dc.ActiveDialog.State = new Dictionary<string,object>();
    }

    await next();
}

Resolving date entities

Typically your entities may be simple strings, but they could also be more complex types such as DateTime, Money, etc.

Luis uses a thing called a 'Resolution' to provide additional data with these kinds of complex entities so that you can resolve the actual values from the words the user said. For example "Saturday" may mean "18th August 2018".

Luis returns date entities to you as Json which looks a little like the following:

{
    "entity": "saturday",
    "type": "builtin.datetimeV2.date",
    "startIndex": 32,
    "endIndex": 39,
    "resolution": {
        "values": [
            {
                "timex": "XXXX-WXX-6",
                "type": "date",
                "value": "2018-08-18"
            },
            {
                "timex": "XXXX-WXX-6",
                "type": "date",
                "value": "2018-08-25"
            }
        ]
    }
}

Using this data alone, it is hard to boil this down to a DateTime object you can work with. Fortunately, there are some helpers built into the BotBuilder SDK to help you.

The first thing you need to get is the Timex, which is a code that can be resolved to a DateTime (I have no idea how this works under the hood).

Luis actually returns several candidate dates in order of likelihood so you may want to implement some logic to determine the correct date (See the BotBuilder Community DataTypeDisambiguation Dialog for help here), but in this example I've just taken the first one.

async (dc, args, next) =>
{
    // Capture Date to state
    if (!dc.ActiveDialog.State.ContainsKey(Keys.Date))
    {
        var answers = args["Resolution"] as List<DateTimeResult.DateTimeResolution>;
        var firstAnswer = answers[0];
        var timex = firstAnswer.Timex;
        // Date-only timex values contain no "T" separator, so guard before substringing
        var justDate = timex.Contains("T") ? timex.Substring(0, timex.IndexOf("T")) : timex;
        var date = Convert.ToDateTime(justDate);
        dc.ActiveDialog.State[Keys.Date] = date.ToLongDateString();
    }

    await next();
},

Once you've implemented the above, you'll have a valid DateTime object stored in your bot state which you can use to action the user's request.

For the Banko implementation, I used a helper function to do the Timex conversion just to make things a little neater, see TransferDialogContainer.cs and TimexToDateConverter.cs.

Resolving currency entities

Luis has a built-in entity type for currency which can accurately capture money however the user phrases it; for example, all of these would resolve to a currency entity:

  • "£20"
  • "20.00"
  • "twenty pounds"

This is the Json that comes back from Luis for currency:

"entity": "£20.50",
"type": "builtin.currency",
"startIndex": 19,
"endIndex": 24,
"resolution": {
    "unit": "Pound",
    "value": "20.5"
}

If you have built your Luis c# model using the LUISGen tool, you will have a very useful Microsoft.Bot.Builder.Ai.LUIS.Money[] object to work with.

To get the actual amount, you can do a simple validation, much like we did with AccountLabel earlier on.

This is an example of how we can extend the LuisValidator.cs from earlier to validate currency entities and convert them to a Decimal, which is much easier to work with for currency.

public static Dictionary<string, object> LuisValidator(BankoLuisModel._Entities entities)
{
    var result = new Dictionary<string, object>();

    // Check Money
    if (entities?.money?.Any() is true)
    {
        var number = entities.money.FirstOrDefault().Number;
        if (number != 0.0)
        {
            // LUIS recognizes numbers as doubles. Convert to decimal.
            result[Keys.Money] = Convert.ToDecimal(number);
        }
    }

    return result;
}

This is all great if the user provides the currency in their initial utterance, but if you have to capture it via prompts later, you may have a problem; more on this in the 'Capturing currency from the user with NumberPrompt' section later.

Entity Completion via WaterfallStep

If the utterance that gets sent to Luis contains all the required entities, you are good to go with the details above around entity validation. However, no two users are the same and not everyone is going to give you everything you need in one go.

Let's examine the concept of a balance transfer. To do a balance transfer, we need four pieces of information:

  • AccountLabel: The short name of the account the money is to be transferred from
  • Money: The amount and currency of the transfer
  • Date: The date the transfer should take place
  • Payee: The person or company receiving the money

All of the following are potential utterances which Luis will resolve to the Transfer intent, containing one or more of the required entities:

  • "I want to make a transfer"; the Transfer intent without any entities.
  • "Transfer from the joint account"; the Transfer intent with the AccountLabel entity.
  • "Transfer £20 from the joint account"; the Transfer intent with the AccountLabel and Money entities.
  • "Transfer £20 from the joint account on Saturday"; the Transfer intent with the AccountLabel, Money and Date entities.
  • "Transfer £20 from the joint account to martin kearn on Saturday"; the Transfer intent with the AccountLabel, Money, Date and Payee entities.

If you have used the entity validation approach detailed above, your bot state will contain a Dictionary<string,object> containing all the entities that were provided by Luis. However, if you find that not all of your entities were provided, you will need to prompt the user to provide them.

You can use a WaterfallStep to prompt the user for a value, capture it and store it in bot state as if it were provided by Luis originally. I find it simplest to implement a different WaterfallStep for each message going to or from the user.

The full details of how we can validate, prompt and capture all 4 entities can be found in TransferDialogContainer.cs but here is a quick sample for the AccountLabel entity.

async (dc, args, next) =>
{
    // Verify or ask for AccountLabel
    if (dc.ActiveDialog.State.ContainsKey(Keys.AccountLabel))
    {
        await next();
    }
    else
    {
        var promptOptions = new PromptOptions(){RetryPromptString = "Which account do you want to transfer from? For example Joint, Current, Savings etc"};
        await dc.Prompt(Keys.AccountLabel,"Which account?", promptOptions);
    }
},
async (dc, args, next) =>
{
    // Capture AccountLabel to state
    if (!dc.ActiveDialog.State.ContainsKey(Keys.AccountLabel))
    {
        var answer = (string)args["Value"];
        dc.ActiveDialog.State[Keys.AccountLabel] = answer;
    }

    await next();
},

You'll note that we are using built-in prompts to capture data from the user. In order for these to work, you'll need to add them, with their validators, to the Dialogs collection for your DialogContainer. To do this, you can do something like this at the bottom of the main DialogContainer constructor:

// Add the prompts and child dialogs
this.Dialogs.Add(Keys.AccountLabel, new Microsoft.Bot.Builder.Dialogs.TextPrompt());

this.Dialogs.Add(Keys.Money, new Microsoft.Bot.Builder.Dialogs.NumberPrompt<int>(Culture.English, Validators.MoneyValidator));

this.Dialogs.Add(Keys.Date, new Microsoft.Bot.Builder.Dialogs.DateTimePrompt(Culture.English, Validators.DateTimeValidator));

this.Dialogs.Add(Keys.Payee, new Microsoft.Bot.Builder.Dialogs.TextPrompt());

this.Dialogs.Add(Keys.Confirm, new Microsoft.Bot.Builder.Dialogs.ConfirmPrompt(Culture.English));

Notice how we're using validators to help the prompt validate the answer given? These can be found in the Helpers folder.

Capturing currency from the user with NumberPrompt

The bot framework provides Prompt classes which help you gather specific data types from the user. These are great for entity completion as detailed above; however, I encountered an issue with currency which I've not yet been able to resolve.

The best matching Prompt for currency is the NumberPrompt, which captures a number from the user. However, this number is returned as an int, not the double or float required to work with currency.

I've not resolved this issue in my Banko sample, but I suspect that the way you'd tackle this is by creating your own prompt as detailed in Prompt users for input using your own prompts. I'm open to pull requests on Banko if anyone wants to write that!? 🙂

In Summary

To summarise, there are several options for using Luis with BFv4, and the right approach will depend on your application.

For Banko I elected to use the LuisRecognizer because I only wanted to use Luis for top level intent detection and initial entity extraction.

Once you have a Luis response you can use a DialogContainer to interact with your user through a series of WaterfallStep and Dialog objects.

There are some definite gotchas along the way, but I've tried to capture what I learnt in this article; your mileage may vary.

OSD Video Tutorial: Part 23 – Nested Task Sequences


This session is part twenty-three of an ongoing series focusing on Operating System Deployment in Configuration Manager. This session is posted a little out of order from when it was originally recorded. Don't worry, the remaining sessions starting with fifteen will continue in our Advanced OSD section.

In this tutorial we explain the nested task sequence capabilities, which were first added in the Configuration Manager current branch 1710 release. The session details how the feature works and what to expect, and includes demonstrations.

Use reflection to get assembly type and method sizes for comparison


I wanted to know what changed between two versions of a managed assembly. Software changes over time, and seeing what changed can be important in understanding behavior.
So I wrote a little program to show the assembly contents, sorted by size, showing the size of various components such as classes and methods.

Because it doesn't look at things like static members, manifests, etc., it's not complete; finishing it is left as an exercise for the reader.

Note how it calculates each node's cost by summing its children, then uses the size as a sort description.



using System;
using System.Reflection;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;
using System.Runtime.InteropServices;
using System.Diagnostics;
 
// Start Visual Studio, File->New->Project->C#, WPF Application. call it "MgdCodeSize"
// Replace MainWindow.xaml.cs with this content
 
namespace MgdCodeSize
{
	/// <summary>
	/// Interaction logic for MainWindow.xaml
	/// </summary>
	public partial class MainWindow : Window
	{
		public MainWindow()
		{
			InitializeComponent();
			this.Loaded += MainWindow_Loaded;
			this.WindowState = WindowState.Maximized;
		}
 
		public class MyTreeViewItem : TreeViewItem
		{
			public int size { get; set; }
			string _name;
			public MyTreeViewItem(string name)
			{
				this._name = name;
			}
			public void CalcHeader(bool expand = true)
			{
				foreach (MyTreeViewItem child in this.Items)
				{
					this.size += child.size;
				}
				var sp = new StackPanel()
				{
					Orientation = Orientation.Horizontal
				};
				sp.Children.Add(new TextBox()
				{
					Text = size.ToString()
				});
				sp.Children.Add(new TextBox()
				{
					Text = _name
				});
				this.Header = sp;
				this.Items.SortDescriptions.Add(new System.ComponentModel.SortDescription("size", System.ComponentModel.ListSortDirection.Descending));
				if (expand)
				{
					this.IsExpanded = true;
				}
			}
			public override string ToString()
			{
				return $"{this.size} {this._name}";
			}
		}
		private void MainWindow_Loaded(object sender, RoutedEventArgs e)
		{
			try
			{
				var filename = @"C:Program Files (x86)Microsoft Visual StudioPreviewEnterpriseCommon7IDEEntityFramework.dll";
				var asm = Assembly.LoadFrom(filename);
				var tv = new TreeView();
				this.Content = tv;
				var tvFileNode = new MyTreeViewItem(filename);
				tv.Items.Add(tvFileNode);
				foreach (var module in asm.Modules)
				{
					var tvModuleNode = new MyTreeViewItem(module.Name);
					tvFileNode.Items.Add(tvModuleNode);
 
					foreach (var type in module.GetTypes())
					{
						var tvTypeNode = new MyTreeViewItem(type.Name);
						tvModuleNode.Items.Add(tvTypeNode);
						var methods = type.GetMethods(BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.Instance);
						foreach (var method in methods)
						{
							var methodBody = method.GetMethodBody();
							var bytes = methodBody?.GetILAsByteArray();
							var tvMethodNode = new MyTreeViewItem(method.Name);
							if (bytes != null)
							{
								var numbytes = bytes.Length;
								tvMethodNode.size = numbytes;
								tvTypeNode.Items.Add(tvMethodNode);
							}
							tvMethodNode.CalcHeader(expand: false);
						}
						tvTypeNode.CalcHeader(expand: false);
					}
					tvModuleNode.CalcHeader();
				}
				tvFileNode.CalcHeader();
 
			}
			catch (Exception ex)
			{
				this.Content = ex.ToString();
			}
		}
	}
}



OSD Video Tutorial: Part 13 – to be Known or to be Unknown – that is the question


This session is part thirteen of an ongoing series focusing on Operating System Deployment in Configuration Manager. We discuss the concept of Known and Unknown computer imaging and provide demonstrations and a detailed discussion of the advantages and disadvantages of each approach.

OSD Video Tutorial: Part 14 – Pre-staged Media


This session is part fourteen of an ongoing series focusing on Operating System Deployment in Configuration Manager. In it, we discuss the pre-staged media option for image deployment. The discussion includes a description of pre-staged media, the scenarios solved by pre-staged media and a demonstration of configuring and using pre-staged media.

System Center Updates Publisher Video Tutorial


An update to System Center Updates Publisher (SCUP) was released in March 2018, and we thought now would be a good opportunity to add a video tutorial for SCUP.

This video is part of a series focusing on software updates in Configuration Manager current branch. This session focuses specifically on System Center Updates Publisher (SCUP). The session covers understanding and configuring SCUP, working with SCUP, and integrating and using SCUP in a Configuration Manager environment.

OSD Video Tutorial: Part 15 – Advanced Concepts


Still with us? We’ve journeyed through the introductory and deeper dive OSD sessions and now we are pulling it together with the tutorials in the Advanced OSD section. Today’s tutorial is part fifteen of a series discussing the Operating System Deployment feature of Configuration Manager. In it, we cover a variety of advanced topics and build on various sessions already presented. Topics include configuring and using prestart commands and engineering a task sequence for speed.

OSD Video Tutorial: Part 16 – BIOS or UEFI


This is part sixteen of a series discussing the Operating System Deployment feature of Configuration Manager.  This session is a discussion of Unified Extensible Firmware Interface (UEFI) systems and how Configuration Manager OSD can deliver images to systems running in UEFI mode.  The discussion also includes the challenges of using Pre-staged media with UEFI systems.

Use the official Boost.Hana with MSVC 2017 Update 8 compiler


We would like to share a progress update to our previous announcement regarding enabling Boost.Hana with the MSVC compiler. As a quick background, Louis Dionne, the Boost.Hana author, and we jointly agreed to provide a version of Boost.Hana in vcpkg to promote usage of the library among more C++ users from the Visual C++ community. We identified a set of blocking bugs and workarounds, called them out in our previous blog, and stated that as we fix the remaining bugs, we would gradually update the version of Boost.Hana in vcpkg, ultimately removing our fork and replacing it with the master repo. We can conduct this development publicly in vcpkg without hindering new users who take a dependency on the library.

Today, we're happy to announce that the vcpkg version of Boost.Hana now just points to the official master repo, instead of our fork!!!

With the VS2017 Update 8 MSVC compiler, the Boost.Hana official repo with this pull request or later will build cleanly. We recommend you take the dependency via vcpkg.

For full transparency, below is where we stand with respect to active bugs and used source workarounds as of August 2018:

Source workarounds in place

There are 3 remaining workarounds in Boost.Hana official repo for active bugs in VS2017 Update 8 compiler:

// Multiple copy/move ctors
#define BOOST_HANA_WORKAROUND_MSVC_MULTIPLECTOR_106654

// Forward declaration of class template member function returning decltype(auto)
#define BOOST_HANA_WORKAROUND_MSVC_DECLTYPEAUTO_RETURNTYPE_662735

// Parser incorrectly parses a comparison operation as a template id
// This issue only impacts /permissive- or /std:c++17
#define BOOST_HANA_WORKAROUND_MSVC_RDPARSER_TEMPLATEID_616568

We removed 23 source workarounds that are no longer necessary for the VS2017 Update 8 release. See the full details for more information.

// Fixed by commit f4e60b2ecc169b0a5ec51d713125801adae24bc2, 20180323
// Note, the workaround requires /Zc:externConstexpr
#define BOOST_HANA_WORKAROUND_MSVC_NONTYPE_TEMPLATE_PARAMETER_INTERNAL

// Fixed by commit c9999d916f1d73bc852de709607b2ca60e76a4c9, 20180513
#define BOOST_HANA_WORKAROUND_MSVC_CONSTEXPR_NULLPTR
#define BOOST_HANA_WORKAROUND_MSVC_CONSTEXPR_ARRAY_399280

// error C2131: expression did not evaluate to a constant
// test/_include/auto/for_each.hpp
#define BOOST_HANA_WORKAROUND_MSVC_FOR_EACH_DISABLETEST

// test/functional/placeholder.cpp
#define BOOST_HANA_WORKAROUND_MSVC_CONSTEXPR_ADDRESS_DISABLETEST
#define BOOST_HANA_WORKAROUND_MSVC_CONSTEXPR_ARRAY_DISABLETEST

// Fixed by commit 5ef87ec5d20b45552784a40fe455c04c257c7b08, 20180516
// Generic lambda preparsing and static capture
#define BOOST_HANA_WORKAROUND_MSVC_GENERIC_LAMBDA_NAME_HIDING_616190

// Fixed by commit 9c4869e61b5ad301f1fe265193241d2c74729a1c, 20180518
// ICE when try to give warning on the format string for printf
// example/misc/printf.cpp
#define BOOST_HANA_WORKAROUND_MSVC_PRINTF_WARNING_506518

// Fixed by commit 095130d02c8805517bbaf93d92415041eecbca00, 20180521
// decltype behavior difference when comparing character array and std::string
// test/orderable.cpp
#define BOOST_HANA_WORKAROUND_MSVC_DECLTYPE_ARRAY_616099

// Fixed by commit a488f9dccbfb4ceade4104c0d8d00e25d6ac7d88, 20180521
// Member with array type
// test/issues/github_365.cpp
#define BOOST_HANA_WORKAROUND_MSVC_GITHUB365_DISABLETEST

// Fixed by commit 7a572ef6535746f1cee5adaa2a41edafca6cf1bc, 20180522
// Member with the same name as the enclosing class
// test/issues/github_113.cpp
#define BOOST_HANA_WORKAROUND_MSVC_PARSEQNAME_616018_DISABLETEST

// Fixed by commit 3c9a06971bf4c7811db1a21017ec509a56d60e59, 20180524
#define BOOST_HANA_WORKAROUND_MSVC_VARIABLE_TEMPLATE_EXPLICIT_SPECIALIZATION_616151

// error C3520: 'Args': parameter pack must be expanded in this context
// example/tutorial/integral-branching.cpp
#define BOOST_HANA_WORKAROUND_MSVC_LAMBDA_CAPTURE_PARAMETERPACK_616098_DISABLETEST

// Fixed by commit 5b1338ce09f7827e5b9248bcba2f519043044fef, 20180529
// Narrowing warning on constant float
// example/core/convert/embedding.cpp
#define BOOST_HANA_WORKAROUND_MSVC_NARROWING_CONVERSION_FLOAT_616032

// Fixed by commit be8778ab26957ae7c6a36376a9ae2d049d64a095, 20180611
// Pack expansion of decltype
// example/hash.cpp
#define BOOST_HANA_WORKAROUND_MSVC_PACKEXPANSION_DECLTYPE_616094

// Fixed by commit 5fd2bf807a0320167c72d9960b32d823a634c04d, 20180613
// Parser error when using '{}' in template arguments
#define BOOST_HANA_WORKAROUND_MSVC_PARSE_BRACE_616118

// Fixed by commit ce4f90349574b4acc955cf1eb04d7dc6a03a568e, 20180614
// Generic lambda and sizeof...
// test/type/is_valid.cpp
#define BOOST_HANA_WORKAROUND_MSVC_GENERIC_LAMBDA_RETURN_TYPE_269943

// Return type of generic lambda is emitted as a type token directly after pre-parsing
#define BOOST_HANA_WORKAROUND_MSVC_GENERIC_LAMBDA_RETURN_TYPE_610227

// Fixed by commit 120bb866980c8a1abcdd41653fa084d6c8bcd327, 20180615
// Nested generic lambda
// test/index_if.cpp
#define BOOST_HANA_WORKAROUND_MSVC_NESTED_GENERIC_LAMBDA_615453

// Fixed by commit 884bd374a459330721cf1d2cc96d231de3bc68f9, 20180615
// Explicit instantiation involving decltype
// example/tutorial/introspection.cpp
#define BOOST_HANA_WORKAROUND_MSVC_DECLTYPE_EXPLICIT_SPECIALIZATION_508556

// Fixed by commit ff9ef6d9fe43c54f7f4680a2701ad73de18f9afb, 20180620
// constexpr function isn't evaluated correctly in SFINAE context
#define BOOST_HANA_WORKAROUND_MSVC_SFINAE_CONSTEXPR_616157

// Fixed by commit 19c35b8c8a9bd7dda4bb44cac1d9d446ed1b20ac, 20180625
// Pack expansion of decltype
// test/detail/variadic/at.cpp
// test/detail/variadic/drop_into.cpp
#define BOOST_HANA_WORKAROUND_MSVC_PACKEXPANSION_DECLTYPE_616024

Bugs remaining in the compiler

  • There are 3 active bugs with the VS2017 Update 8 release. This is down from 25 active bugs in the Update 7 release.
  • We plan to fix these remaining bugs by the VS2017 Update 9 release later this year.

What's next…

  • Throughout the remaining updates of Visual Studio 2017, we will continue to exhaust the remaining MSVC bugs that block the upstream version of the Boost.Hana library.
  • We will continue to provide status updates on our progress. Next update will be when we release VS2017 Update 9.
  • We will ensure that users who take a dependency on this library in vcpkg will not be affected by our work.
  • Where are we with enabling Range-v3 with MSVC?
    • Similarly, we are tracking all Range-v3 blocking bugs in the compiler and fixing them. Our plan is to fix them all by the VS2017 Update 9 release.

In closing

We'd love for you to download Visual Studio 2017 version 15.8 and try out all the new C++ features and improvements. As always, we welcome your feedback. We can be reached via the comments below or via email (visualcpp@microsoft.com). If you encounter other problems with MSVC in Visual Studio 2017, please let us know through Help > Report A Problem in the product, or via Developer Community. Let us know your suggestions through UserVoice. You can also find us on Twitter (@VisualC) and Facebook (msftvisualcpp).

Thank you,

Xiang Fan, Ulzii Luvsanbat

.NET Framework August 2018 Preview of Quality Rollup


Today, we are releasing the August 2018 Preview of Quality Rollup.

Quality and Reliability

This release contains the following quality and reliability improvements.

ASP.NET

  • Resolves an issue where an ASP.NET web application running continuously under high load on a high-end web server (40+ CPU cores) may suffer high thread contention, which can cause high CPU usage. [624745]

CLR

  • Fixes an issue that results in a System.InvalidProgramException for some very large XSLT transforms. This may also fix this kind of issue for some other very large methods. [604943]
  • Addresses an issue where the CultureAwareComparer type was not able to correctly serialize and deserialize across different versions of the .NET Framework, as described in Advisory serializing/deserializing a CultureAwareComparer with .NET Framework 4.6+. [637591]

SQL

  • Resolves an issue where SqlClient login may use an infinite timeout due to truncating a small millisecond timeout to zero when converting to seconds. [631196]

WCF

  • A race condition existed in AsyncResult that closed a WaitHandle before Set() was called. When this happened, the process crashed with an ObjectDisposedException. [590542]
  • Enables customers using .NET 2.0, 3.0, 3.5, or 3.5.1 to run their programs under TLS 1.1 or TLS 1.2. [639940]

WPF

  • In multi-threaded WPF applications that process large packages simultaneously, there is potential for a deadlock when one of these files is closing and another starts to consume larger amounts of memory. [602405]
  • Under certain conditions, WPF applications (like SCCM) using WindowChromeWorker experience high CPU usage or hangs. [621651]

Note: Additional information on these improvements is not available. The VSTS bug number provided with each improvement is a unique ID that you can give Microsoft Customer Support, include in StackOverflow comments or use in web searches.

Getting the Update

The Preview of Quality Rollup is available via Windows Update, Windows Server Update Services, Microsoft Update Catalog, and Docker.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog. For Windows 10, .NET Framework updates are part of the Windows 10 Monthly Rollup.

The following table is for Windows 10 and Windows Server 2016+.

Product Version: Preview of Quality Rollup KB

Windows 10 1803 (April 2018 Update): Catalog 4346783
  .NET Framework 3.5: 4346783
  .NET Framework 4.7.2: 4346783

Windows 10 1709 (Fall Creators Update): Catalog 4343893
  .NET Framework 3.5: 4343893
  .NET Framework 4.7.1, 4.7.2: 4343893

Windows 10 1703 (Creators Update): Catalog 4343889
  .NET Framework 3.5: 4343889
  .NET Framework 4.7, 4.7.1, 4.7.2: 4343889

Windows 10 1607 (Anniversary Update): Catalog 4343884
  .NET Framework 3.5: 4343884
  .NET Framework 4.6.2, 4.7, 4.7.1, 4.7.2: 4343884

The following table is for earlier Windows and Windows Server versions.

Product Version: Preview of Quality Rollup KB

Windows 8.1, Windows RT 8.1, Windows Server 2012 R2: Catalog 4346082
  .NET Framework 3.5: 4342310
  .NET Framework 4.5.2: 4342317
  .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: 4342315

Windows Server 2012: Catalog 4346081
  .NET Framework 3.5: 4342307
  .NET Framework 4.5.2: 4342318
  .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: 4342314

Windows 7, Windows Server 2008 R2: Catalog 4346080
  .NET Framework 3.5.1: 4342309
  .NET Framework 4.5.2: 4342319
  .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: 4342316

Windows Server 2008: Catalog 4346083
  .NET Framework 2.0, 3.0: 4342308
  .NET Framework 4.5.2: 4342319
  .NET Framework 4.6: 4342316

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:


Thursday Best Practices: Observing Solution and Providing Feedback – Finding Your Way In The Forums


Hey Guys!

We are back with the Thursday Forum Best Practices, where we discuss a feature or practice for how you can best utilize the MSDN or TechNet Forums.

Today's blog post topic is inspired by our weekly blog posting theme Tuesday Featured Post, which was published on the topic Tuesday Featured Post: Finding Solutions To Your Programming Problems.

In the above blog post, the Original Poster (OP) asked whether an alternate, faster approach to his/her solution was available, and you can see both the OP and the answerers actively contributing to find a solution. Some of the answers were marked as helpful posts multiple times (one of them has 21 votes!), which is a big achievement for the answerers, because the solutions they provided may have helped many other community members solve the same issue. In other words, your contribution does not stick to a single post; it helps many other people in the community, and the solution finds its way into the forum 🙂 . This achievement does not come automatically: it requires a great quality of "Observation" and, based on that observation, providing "Feedback". So, looking at this a little more theoretically, we see that:

Observation is a fundamental way of understanding something: in this case, the issue the OP posted and the code snippet. The answerers on this thread observed the OP's solution carefully and provided feedback, with a code snippet, describing a better alternate approach; they also pointed out what was wrong in the OP's code and what issues the OP's approach would cause. The feedback the answerers provided helped not just the OP but other community members as well.

A few observations and much reasoning lead to error; many observations and a little reasoning to truth.

Observing solutions and providing feedback is a habit and a practice which can be learned, sharpened, and improved. I hope that this post helps, and see you in the next one.

 

Thank You,
-Ninja Sabah

How to Analyze a SCOM Property Bag


From time to time I find myself writing PowerShell scripts for custom workflows: discoveries, rules, monitors, tasks, etc. Oftentimes the project at hand requires a property bag full of juicy data to be used in the workflow. There are a handful of other blogs that describe what a property bag is, but here I'll show you how to make your property bag experience just a bit easier.

When you want to harvest data with a PowerShell script and then return that data to the workflow (to be used in any number of ways), name/value pairs get returned in what is called a property bag. A property bag is simply an XML structure that looks like this:

<DataItem type="System.PropertyBagData" time="2018-08-30T20:33:12.2154943-06:00" sourceHealthServiceId="c0c09ff6-3834-fc37-4bb9-1633c62eda56">
   <Property Name="Fingers" VariantType="3">10</Property>
   <Property Name="Toes" VariantType="3">10</Property>
   <Property Name="Eyes" VariantType="3">2</Property>
</DataItem>

The XML dataitem includes the following information:

  • The type, indicating this is a property bag.
  • A timestamp indicating when it was submitted to the management server.
  • The name/value pairs contained in the property bag.

The VariantType describes the type of the value that is returned. The typical variant types are:

0 = Empty
1 = Null
2 = Short
3 = Integer
4 = Single
5 = Double
6 = Currency
7 = Date
8 = String
9 = Object
10 = Error
11 = Boolean
12 = Variant
13 = DataObject
14 = Decimal
15 = Byte
16 = Char
17 = Long

The property bag is a pretty simple animal but sometimes it can produce unexpected results. When that happens it helps to be able to see what the property bag contains when testing your script. Here’s a sample script which creates a property bag and displays the dataitem to the screen:

#Script: Test.ps1

# Here we create the com object
$api = new-object -comObject 'MOM.ScriptAPI'
# From the com object we spawn the bag. Now we can stuff things into the bag.
$bag = $api.CreatePropertyBag()

# Here I've added an assortment of things of different types to the bag. 
# They each have a name and a value.
$bag.AddValue('ONE',[string]"My String" )
$bag.AddValue('TWO',[Decimal]9876543210987654321)
$bag.AddValue('THREE',[double]987654321 )
$bag.AddValue('FOUR',[int]4444 )
$bag.AddValue('FIVE',[float]55.66 )
$bag.AddValue('SIX',[byte]56 )
$bag.AddValue('SEVEN',[datetime]"03/04/2012" )
$bag.AddValue('EIGHT',[long]987654321 )
$bag.AddValue('NINE', $null )

# Here is how I would ordinarily return the bag so 
# that the SCOM workflow can use the contents of the bag.
$bag

# Here is how I can display the contents of the bag to the screen. This is 
# useful only when testing the script manually.
$api.Return($bag)

 

Run the script from an ordinary PowerShell console (not ISE):

[Screenshot: AnalyzeaSCOMPropertyBag1]

It’s pretty easy to get the data to display on the screen but it’s a messy blob of confusing text. This example is small. Imagine if you had dozens or hundreds of name/value pairs in your bag. It would be a nightmare to comb through all of it. You could copy/paste the text from the console into a text editor and try to reconstruct all of the broken lines but that’s an awful waste of time.

Solution: Output that data to a text file and format it easily with Notepad++. (NPP can be found easily with a quick search of the innerwebs. Yes, innerwebs.)

Output the property bag to an .xml file. This is easy with the redirection operator:
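For example, assuming the script above was saved as Test.ps1 (the output file name here is just an illustration):

.\Test.ps1 > PropertyBag.xml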

[Screenshot: AnalyzeaSCOMPropertyBag2]

 

Open the file with Notepad++. The file is still ugly. Let’s fix that.


[Screenshot: AnalyzeaSCOMPropertyBag3]

 

Format the data with the XML Tools plugin:
Note: You may have to manually add the "Plugin Manager". This is pretty easy; it involves extracting a few DLLs into the Notepad++ Plugins directory. Once Plugin Manager is available, you can add tons of plugins with a few clicks, including XML Tools. There's a blurb about it here.

[Screenshot: AnalyzeaSCOMPropertyBag4]

 

Easy!

[Screenshot: AnalyzeaSCOMPropertyBag5]

Note: Line 1 is a result of the script statement:
$bag

Lines 2 – 12 are a result of the script statement:
$api.Return($bag)

You'll also notice that in line 10 the variant type is "20". It was cast as a "long", which PowerShell equates to "System.Int64", and I would expect it to end up as VariantType="17". A likely candidate: 20 is the value of VT_I8, the 8-byte signed integer, in the OLE VARENUM enumeration, which would be consistent with a long, even though it isn't in the list above. This is an example of getting something unexpected in the property bag.

Azure IoT Toolkit supports C#, Go, Java, Node.js, PHP, Python and Ruby to develop Azure IoT application in VS Code


With the latest release of Azure IoT Toolkit, lots of popular languages are supported to quickly create an Azure IoT application in VS Code: C#, Go, Java, Node.js, PHP, Python and Ruby! (Note: For C#, Java, Node.js and Python, it is based on the Azure IoT Hub SDK, while for Go, PHP and Ruby, it is based on the Azure IoT Hub REST API.) What's your favorite programming language? Which language would you like to use to develop an Azure IoT application?

For the scripting languages (Go, Node.js, PHP, Python and Ruby), the steps are pretty easy; you can follow this blog post to quickly get started.

For the compiled languages (C# and Java), there is one more step compared with the scripting languages, but it is still easy. Let's see how easy it is to create a Java application for Azure IoT Hub in VS Code.

Prerequisites

Generate Code

  1. Right-click your device and select Generate Code to monitor the device-to-cloud message
  2. In the language list, select Java
  3. In the code template list, select Send device-to-cloud message
  4. In the pop-up file dialog, select the folder for your Java application
  5. A new VS Code window will open

Run Code

  1. Open the Integrated Terminal in VS Code
  2. Run mvn clean package to install the required libraries and build the simulated device application
  3. Run java -jar target/simulated-device-1.0.0-with-deps.jar to run the simulated device application
  4. You will see the Java application is running. It is sending the simulated device data to IoT Hub every second.
  5. If you want to monitor the device-to-cloud message in VS Code, you could refer to our Wiki page.

 

If your preferred language is not supported yet, no worries! You can just submit your request on GitHub, or use the REST API to build your application. Let us know what languages you want! Feel free to leave your feedback or suggestions in our GitHub issues!

Useful Resources:

Library Manager Release in 15.8


Microsoft Library Manager (LibMan) is now available in the general release of Visual Studio 2017 as of v15.8. LibMan first previewed earlier this year, and now, after a much-anticipated wait, LibMan is available in the stable release of Visual Studio 2017 bundled as a default component in the ASP.NET and web development workload.

In the announcement about the preview, we showed off the LibMan manifest (libman.json), providers for filesystem and CDNJS, and the menu options for Restore, Clean and Enable Restore-on-Build. Included as part of the release in v15.8 we’ve also added:
- a new dialog for adding library files
- a new library provider (UnPkg)
- the LibMan CLI (cross-platform DotNet global tool)

What is LibMan?

LibMan is a tool that helps to find common client-side library files and add them to your web project. If you need to pull JavaScript or CSS files into your project from libraries like jQuery or bootstrap, you can use LibMan to search various global providers to find and download the files you need.

[Screenshot: Library Manager in Visual Studio]

To learn more about LibMan, refer to the official Microsoft Docs: Client-side library acquisition in ASP.NET Core with LibMan.
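For reference, the libman.json manifest that drives all of this is a small JSON file along these lines (the library version and destination path here are illustrative):

{
  "version": "1.0",
  "defaultProvider": "cdnjs",
  "libraries": [
    {
      "library": "jquery@3.3.1",
      "destination": "wwwroot/lib/jquery"
    }
  ]
}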

What's new?



New dialog for adding library files

We've added tooling inside Visual Studio to add library files to a web project. Inside a web project, you can right-click any folder (or the project root) and select Add-->Client-Side Library…
This will launch the Add Client-Side Library dialog, which provides a convenient interface for browsing the libraries and files available in various providers, as well as setting the target location for files in your project.

LibMan Add Files Dialog

New Provider: UnPkg

Along with CDNJS and FileSystem, we've built an UnPkg provider. Based on the UnPkg.com website, which sits on top of the npm repo, the UnPkg provider opens access to many more libraries than just those referenced by the CDNJS catalogue.

LibMan CLI available on NuGet

Timed with the release of Visual Studio 2017 v15.8, the LibMan command line interface (CLI) has been developed as a global tool for the DotNet CLI and is now available on NuGet. Look for Microsoft.Web.LibraryManager.Cli.

You can install the LibMan CLI with the following command:

> dotnet tool install -g Microsoft.Web.LibraryManager.Cli

The CLI is cross-platform, so you’ll be able to use it anywhere that .NET Core is supported (Windows, Mac, Linux). You can perform a variety of LibMan operations including install, update, and restore, plus local cache management.
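
As a quick illustration, a session might look something like this (the library names and paths are illustrative; providers include cdnjs, unpkg and filesystem):

> libman init --default-provider cdnjs
> libman install jquery@3.3.1 --destination wwwroot/lib/jquery
> libman install lodash --provider unpkg --destination wwwroot/lib/lodash
> libman restore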

LibMan CLI example

To learn more about the LibMan CLI, see the blog post LibMan CLI Release, or refer to the official Microsoft Docs: Use the LibMan command-line interface (CLI) with ASP.NET Core.

Related Links

Happy coding!

Justin Clareburt, Senior Program Manager, Visual Studio

Justin Clareburt (justcla) Profile Pic Justin Clareburt is the Web Tools PM on the Visual Studio team. He has over 20 years of Software Engineering experience and brings to the team his expert knowledge of IDEs and a passion for creating the ultimate development experience.

Follow Justin on Twitter @justcla78

Removing the TerminateThread from code that waits for a job object to empty



Some time ago I showed how to wait until all processes in a job have exited. Consider the following code, which wants to create a job, put a single process in it, and then return a handle that will become signaled when that process and all its child processes have exited.



This exercise is inspired by actual production code,
so we're solving a real problem here.



template<typename T>
struct scope_guard
{
scope_guard(T&& t) : t_{std::move(t)} {}
~scope_guard() { if (!cancelled_) t_(); }

// Move operators are auto-deleted when we delete copy operators.
scope_guard(const scope_guard& other) = delete;
scope_guard& operator=(const scope_guard& other) = delete;

void cancel() { cancelled_ = true; }

private:
bool cancelled_ = false;
T t_;
};

template<typename T>
scope_guard<T> make_scope_guard(T&& t)
{ return scope_guard<T>{std::move(t)}; }



This scope_guard class
is similar to every other scope_guard
class you've seen:
It babysits a functor and calls it at destruction.
We do add a wrinkle that the guard can be cancelled,
which means that the functor is not called after all.
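
As a quick aside, here is a minimal usage sketch (my own example, not from the original code; it assumes C++17, whose guaranteed copy elision lets make_scope_guard return the non-copyable guard by value, and TransferOwnershipElsewhere is a hypothetical function):

bool TransferOwnershipElsewhere(HANDLE h); // hypothetical, for illustration

void Example()
{
  HANDLE event = CreateEvent(nullptr, TRUE, FALSE, nullptr);
  if (!event) return;

  // Close the handle on every exit path...
  auto ensureCloseEvent = make_scope_guard([&]{ CloseHandle(event); });

  // ...unless somebody else takes over ownership of the handle.
  if (TransferOwnershipElsewhere(event)) {
    ensureCloseEvent.cancel();
  }
}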



struct handle_deleter
{
void operator()(HANDLE h) { CloseHandle(h); }
};

using unique_handle = std::unique_ptr<void, handle_deleter>;



The unique_handle class is a specialization of std::unique_ptr for Windows handles that can be closed by CloseHandle. Note that it will attempt to close INVALID_HANDLE_VALUE, so don't use it for file handles.
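
To make that caveat concrete (my own illustration): CreateEvent reports failure as a null handle, which works fine with unique_handle, but CreateFile reports failure as INVALID_HANDLE_VALUE, which unique_handle would happily store and later try to close.

// Fine: CreateEvent returns nullptr on failure, so the test below works.
unique_handle event{ CreateEvent(nullptr, TRUE, FALSE, nullptr) };
if (!event) { /* creation failed */ }

// Not fine: CreateFile returns INVALID_HANDLE_VALUE on failure, so
// "if (!file)" would not detect the failure, and the deleter would
// try to CloseHandle(INVALID_HANDLE_VALUE).
// unique_handle file{ CreateFile(L"missing.txt", GENERIC_READ, 0,
//                                nullptr, OPEN_EXISTING, 0, nullptr) };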



struct WaitForJobToEmptyInfo
{
unique_handle job;
unique_handle ioPort;
};

DWORD CALLBACK WaitForJobToEmpty(void* parameter)
{
std::unique_ptr<WaitForJobToEmptyInfo> info(
reinterpret_cast<WaitForJobToEmptyInfo*>(parameter));

DWORD completionCode;
ULONG_PTR completionKey;
LPOVERLAPPED overlapped;

while (GetQueuedCompletionStatus(info->ioPort.get(), &completionCode,
&completionKey, &overlapped, INFINITE) &&
!(completionKey == (ULONG_PTR)info->job.get() &&
completionCode == JOB_OBJECT_MSG_ACTIVE_PROCESS_ZERO)) {
/* keep waiting */
}

return 0;
}



The WaitForJobToEmpty function starts by taking ownership of the WaitForJobToEmptyInfo structure it is passed as its thread parameter, wrapping it inside a std::unique_ptr. Next, it monitors the I/O completion port until the job reports that there are no more processes in it. Once that happens, the thread exits, which sets the thread handle to the signaled state.



HANDLE CreateProcessAndReturnWaitableHandle(PWSTR commandLine)
{
auto info = std::make_unique<WaitForJobToEmptyInfo>();

info->job.reset(CreateJobObject(nullptr, nullptr));
if (!info->job) {
return nullptr;
}

info->ioPort.reset(
CreateIoCompletionPort(INVALID_HANDLE_VALUE,
nullptr, 0, 1));
if (!info->ioPort) {
return nullptr;
}

JOBOBJECT_ASSOCIATE_COMPLETION_PORT port;
port.CompletionKey = info->job.get();
port.CompletionPort = info->ioPort.get();
if (!SetInformationJobObject(info->job.get(),
JobObjectAssociateCompletionPortInformation,
&port, sizeof(port))) {
return nullptr;
}

DWORD threadId;
unique_handle thread(CreateThread(nullptr, 0, WaitForJobToEmpty,
info.get(), CREATE_SUSPENDED,
&threadId));
if (!thread) {
return nullptr;
}

// Code in italics is wrong
auto ensureTerminateWorkerThread = make_scope_guard([&]{
TerminateThread(thread.get(), 0);
});

PROCESS_INFORMATION processInformation;
STARTUPINFO startupInfo = { sizeof(startupInfo) };
if (!CreateProcess(nullptr, commandLine, nullptr, nullptr,
FALSE, CREATE_SUSPENDED, nullptr, nullptr,
&startupInfo, &processInformation)) {
return nullptr;
}

auto ensureCloseHandles = make_scope_guard([&]{
CloseHandle(processInformation.hThread);
CloseHandle(processInformation.hProcess);
});

auto ensureTerminateProcess = make_scope_guard([&]{
TerminateProcess(processInformation.hProcess, 0);
});

if (!AssignProcessToJobObject(info->job.get(),
processInformation.hProcess)) {
return nullptr;
}

info.release();
ensureTerminateProcess.cancel();
ensureTerminateWorkerThread.cancel();

ResumeThread(processInformation.hThread);
ResumeThread(thread.get());

return thread.release();
}



Let's walk through this function.



First, we create the WaitForJobToEmptyInfo object that contains the information we are passing to the worker thread.



We initialize the job and the I/O completion port,
and associate the job with the completion port.
If anything goes wrong, we bail out.



Next, we create the worker thread that will wait for
the signal from the I/O completion port that the job
is empty.



Here is the sticking point:
We aren't finished setting up everything yet,
and if it turns out we can't create the process
or can't put the process in the job, then that
thread will be waiting around for a notification
that will never happen.
But we want to pre-create all the resources we need
before creating the process, so that we don't
find ourselves later with a process that has already
been created, but not enough resources to monitor that
process.



Okay, so the idea is that we create the thread suspended
so that it is "waiting" and hasn't actually started doing
anything yet.
That way, if it turns out we need to abandon the operation,
we can terminate the thread.
(Uh-oh, he talked about terminating threads.)



Okay, now that we have all our resources reserved,
we can create the process.
If that fails, then we bail out,
and the ensureTerminateWorkerThread guard
will terminate our worker thread as part of the cleanup.



If the process was created successfully, then we
create a scope_guard object
to remember to close the handles in the
PROCESS_INFORMATION
structure.
And we also remember to terminate the process in case
something goes wrong.



Next, we put the process in the job.
If this fails, we bail out,
and our various scope_guard
objects will make sure that everything gets cleaned
up properly.



Once the process is in the job, we have succeeded,
so resume the process and the worker thread,
and return the worker thread to the caller so it
can be waited on.
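
For completeness, a caller might use it like this (my sketch, not from the original post; the command line is illustrative, and note that CreateProcess requires a writable command-line buffer, hence the local array):

void RunAndWaitForProcessTree()
{
  wchar_t commandLine[] = L"cmd.exe /c build.cmd";

  unique_handle waitable{
    CreateProcessAndReturnWaitableHandle(commandLine) };
  if (waitable) {
    // Becomes signaled when the process and all its children have exited.
    WaitForSingleObject(waitable.get(), INFINITE);
  }
}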



The problem with this plan, of course, is that pesky call to TerminateThread, which is a function so awful it should never be called, because there is basically no safe way of calling it.



So how do we get rid of the TerminateThread?



One solution is to tweak the algorithm so the
thread is the last thing we create.
That way, we never have to back out of the thread
creation.



HANDLE CreateProcessAndReturnWaitableHandle(PWSTR commandLine)
{
auto info = std::make_unique<WaitForJobToEmptyInfo>();

info->job.reset(CreateJobObject(nullptr, nullptr));
if (!info->job) {
return nullptr;
}

info->ioPort.reset(
CreateIoCompletionPort(INVALID_HANDLE_VALUE,
nullptr, 0, 1));
if (!info->ioPort) {
return nullptr;
}

JOBOBJECT_ASSOCIATE_COMPLETION_PORT port;
port.CompletionKey = info->job.get();
port.CompletionPort = info->ioPort.get();
if (!SetInformationJobObject(info->job.get(),
JobObjectAssociateCompletionPortInformation,
&port, sizeof(port))) {
return nullptr;
}

// DWORD threadId;
// unique_handle thread(CreateThread(nullptr, 0, WaitForJobToEmpty,
// info.get(), CREATE_SUSPENDED,
// &threadId));
// if (!thread) {
// return nullptr;
// }
//
// auto ensureTerminateWorkerThread = make_scope_guard([&]{
// TerminateThread(thread.get(), 0);
// });

PROCESS_INFORMATION processInformation;
STARTUPINFO startupInfo = { sizeof(startupInfo) };
if (!CreateProcess(nullptr, commandLine, nullptr, nullptr,
FALSE, CREATE_SUSPENDED, nullptr, nullptr,
&startupInfo, &processInformation)) {
return nullptr;
}

auto ensureCloseHandles = make_scope_guard([&]{
CloseHandle(processInformation.hThread);
CloseHandle(processInformation.hProcess);
});

auto ensureTerminateProcess = make_scope_guard([&]{
TerminateProcess(processInformation.hProcess, 0);
});

if (!AssignProcessToJobObject(info->job.get(),
processInformation.hProcess)) {
return nullptr;
}

// Code moved here
DWORD threadId;
unique_handle thread(CreateThread(nullptr, 0, WaitForJobToEmpty,
info.get(), 0, // not suspended
&threadId));
if (!thread) {
return nullptr;
}

info.release();
ensureTerminateProcess.cancel();
// ensureTerminateWorkerThread.cancel();

ResumeThread(processInformation.hThread);
// ResumeThread(thread.get());

return thread.release();
}


We don't need to create the thread suspended any more;
it can hit the ground running.


Okay, so that's a solution if you can find a way to tweak your algorithm so that the thread is the last thing to be created. That way, you never have to try to roll back a thread creation.
But that may not be possible.
For example, maybe your algorithm involves creating multiple threads.
Some thread gets to be last, but the others are now at risk
of needing to be rolled back in case the last thread cannot
be created.



Technique number two:
Trick the thread into doing nothing if it turns out
we don't want it to do anything.



In our case, what we can do is post a fake completion
status to the I/O completion port to tell it,
"Um, yeah, the job has no processes in it.
Your job is done.
Go home."



HANDLE CreateProcessAndReturnWaitableHandle(PWSTR commandLine)
{
auto info = std::make_unique<WaitForJobToEmptyInfo>();

info->job.reset(CreateJobObject(nullptr, nullptr));
if (!info->job) {
return nullptr;
}

info->ioPort.reset(
CreateIoCompletionPort(INVALID_HANDLE_VALUE,
nullptr, 0, 1));
if (!info->ioPort) {
return nullptr;
}

JOBOBJECT_ASSOCIATE_COMPLETION_PORT port;
port.CompletionKey = info->job.get();
port.CompletionPort = info->ioPort.get();
if (!SetInformationJobObject(info->job.get(),
JobObjectAssociateCompletionPortInformation,
&port, sizeof(port))) {
return nullptr;
}

DWORD threadId;
unique_handle thread(CreateThread(nullptr, 0, WaitForJobToEmpty,
info.get(), 0, // not suspended
&threadId));
if (!thread) {
return nullptr;
}

// thread owns the info now
auto ensureReleaseInfo = make_scope_guard([&]{
info.release();
});

auto ensureTerminateWorkerThread = make_scope_guard([&]{
// Tell the thread that there are no processes
// so it will break out of its loop.
PostQueuedCompletionStatus(info->ioPort.get(),
JOB_OBJECT_MSG_ACTIVE_PROCESS_ZERO,
(ULONG_PTR)info->job.get(),
nullptr);

});

PROCESS_INFORMATION processInformation;
STARTUPINFO startupInfo = { sizeof(startupInfo) };
if (!CreateProcess(nullptr, commandLine, nullptr, nullptr,
FALSE, CREATE_SUSPENDED, nullptr, nullptr,
&startupInfo, &processInformation)) {
return nullptr;
}

auto ensureCloseHandles = make_scope_guard([&]{
CloseHandle(processInformation.hThread);
CloseHandle(processInformation.hProcess);
});

auto ensureTerminateProcess = make_scope_guard([&]{
TerminateProcess(processInformation.hProcess, 0);
});

if (!AssignProcessToJobObject(info->job.get(),
processInformation.hProcess)) {
return nullptr;
}

// info.release();
ensureTerminateProcess.cancel();
ensureTerminateWorkerThread.cancel();

ResumeThread(processInformation.hThread);
// ResumeThread(thread.get());

return thread.release();
}



Technique number three:
If all else fails, then just have a special flag to tell the thread,
"I don't want you to do anything. Just get out as quickly as you can."



struct WaitForJobToEmptyInfo
{
unique_handle job;
unique_handle ioPort;
bool active = false;
};

DWORD CALLBACK WaitForJobToEmpty(void* parameter)
{
std::unique_ptr<WaitForJobToEmptyInfo> info(
reinterpret_cast<WaitForJobToEmptyInfo*>(parameter));

// If we are not active, then do nothing.
if (!info->active) return 0;

DWORD completionCode;
ULONG_PTR completionKey;
LPOVERLAPPED overlapped;

while (GetQueuedCompletionStatus(info->ioPort.get(), &completionCode,
&completionKey, &overlapped, INFINITE) &&
!(completionKey == (ULONG_PTR)info->job.get() &&
completionCode == JOB_OBJECT_MSG_ACTIVE_PROCESS_ZERO)) {
/* keep waiting */
}

return 0;
}

HANDLE CreateProcessAndReturnWaitableHandle(PWSTR commandLine)
{
auto info = std::make_unique<WaitForJobToEmptyInfo>();

info->job.reset(CreateJobObject(nullptr, nullptr));
if (!info->job) {
return nullptr;
}

info->ioPort.reset(
CreateIoCompletionPort(INVALID_HANDLE_VALUE,
nullptr, 0, 1));
if (!info->ioPort) {
return nullptr;
}

JOBOBJECT_ASSOCIATE_COMPLETION_PORT port;
port.CompletionKey = info->job.get();
port.CompletionPort = info->ioPort.get();
if (!SetInformationJobObject(info->job.get(),
JobObjectAssociateCompletionPortInformation,
&port, sizeof(port))) {
return nullptr;
}

DWORD threadId;
unique_handle thread(CreateThread(nullptr, 0, WaitForJobToEmpty,
info.get(), CREATE_SUSPENDED,
&threadId));
if (!thread) {
return nullptr;
}

// auto ensureTerminateWorkerThread = make_scope_guard([&]{
// TerminateThread(thread.get(), 0);
// });

auto ensureResumeWorkerThread = make_scope_guard([&]{
// The worker thread takes ownership of the info structure as soon
// as it resumes, so relinquish our copy first to avoid a double free.
info.release();
ResumeThread(thread.get());
});

PROCESS_INFORMATION processInformation;
STARTUPINFO startupInfo = { sizeof(startupInfo) };
if (!CreateProcess(nullptr, commandLine, nullptr, nullptr,
FALSE, CREATE_SUSPENDED, nullptr, nullptr,
&startupInfo, &processInformation)) {
return nullptr;
}

auto ensureCloseHandles = make_scope_guard([&]{
CloseHandle(processInformation.hThread);
CloseHandle(processInformation.hProcess);
});

auto ensureTerminateProcess = make_scope_guard([&]{
TerminateProcess(processInformation.hProcess);
});

if (!AssignProcessToJobObject(info->job.get(),
processInformation.hProcess)) {
return nullptr;
}

info->active = true; // tell the thread that it has work to do
info.release();
ensureTerminateProcess.cancel();
// ensureTerminateWorkerThread.cancel();
ensureResumeWorkerThread.cancel();

ResumeThread(processInformation.hThread);
ResumeThread(thread.get());

return thread.release();
}



We could have signaled the thread that it should not do anything by closing the handles in the WaitForJobToEmptyInfo structure, but I want to demonstrate the most general possible solution.



There is some subtlety in resuming the worker thread: the ResumeThread must happen before the thread.release(), because thread.release() causes the thread unique_handle to relinquish ownership of the kernel thread handle. I probably could have fixed this with some more scoping, but I tried to change the existing code as little as possible.
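
In other words, the success path has to keep this order (a condensed view of the code above):

ResumeThread(thread.get()); // resume while the unique_handle still owns the handle
return thread.release();    // only now relinquish ownership to the caller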



So there you go: three ways of getting rid of the TerminateThread from this specific algorithm. The general-purpose trick works if the reason you were terminating a thread was to prevent it from starting: instead of terminating the thread, resume it, but make sure it does nothing.
