
The MIPS R4000, part 1: Introduction



Continuing in the
"Raymond introduces you to a
CPU architecture that Windows once
supported but no longer does" sort-of series,
here we go with the MIPS R4000.



The MIPS R4000 implements the MIPS III architecture.
It is a 64-bit processor, but Windows NT used it in 32-bit mode.
I'll be focusing on the aspects of the processor relevant
to debugging user-mode programs on Windows NT.
This means that I may skip over various technical details
on the assumption that the compiler knows what the rules are
and won't (intentionally) generate code that violates them.



Throughout, I will say "MIPS" instead of "MIPS III architecture".
Some of the issues do not apply to later versions of the architecture
family, but I am focusing on MIPS III since that's what Windows NT used.



The MIPS is a RISC-style load-store processor:
The only operations you can perform with memory are load and store.
There is no "add value to memory" instruction, for example.
Each instruction is 32 bits wide, and the program counter must
be on an exact multiple of 4.



The processor can operate in either little-endian or big-endian mode;
Windows NT uses little-endian mode,
and even though some instructions change behavior depending on whether the
processor is in big-endian or little-endian mode,
I will discuss only the little-endian case.



The architectural terminology for a 32-bit value is a
word (w), and a 16-bit value is a halfword (h).
There's also
doubleword (d) for 64-bit values, but we won't see it
here because we are focusing on the 32-bit mode of the processor.



The MIPS has 32 general-purpose integer registers,
formally known as registers $0 through $31,
but which conventionally go by these names:


Register   Mnemonic  Meaning              Preserved?  Notes
$0         zero      reads as zero        Immutable   Writes are ignored
$1         at        assembler temporary  Volatile    Helper for synthesized instructions
$2         v0        value                No          On function exit, contains the return value
$3         v1        value                No          High 32 bits of return value (for 64-bit values)
$4-$7      a0-a3     argument             No          On function entry, contains function parameters
$8-$15     t0-t7     temporary            No
$16-$23    s0-s7     saved                Yes
$24-$25    t8-t9     temporary            No
$26-$27    k0-k1     kernel               No access   Reserved for kernel use
$28        gp        global pointer       Yes         Not used by 32-bit code
$29        sp        stack pointer        Yes
$30        s8        frame pointer        Yes         For functions with variable-sized stacks
$31        ra        return address       Maybe


The zero register reads as zero,
and writes to it are ignored.



The k0 and k1 registers are
reserved for kernel use,
and no well-written user-mode program will use them.¹



Win32 requires that
the sp and s8 registers
be used for their stated purpose throughout the entire function.
If a function does not have a variable-sized stack frame,
then it can use s8 for any purpose
(which is why the disassembler calls it s8
instead of fp, I guess).
And since 32-bit code doesn't ascribe special meaning to
gp, then it too can be used for any purpose,
provided its value is preserved across the call.
In practice the Microsoft compiler merely
avoids the gp register completely,
and it uses the s8 register only as a frame pointer.



The stack is always aligned on an 8-byte boundary,
and there is no red zone.



Some registers have stated purposes only at entry to a function or
exit from a function.
When not at the function boundary, those registers may be used for
any purpose.



Registers marked "Yes" in the "Preserved" column must be
preserved across the call;
those marked "No" need not be.



The ra register is marked "Maybe" because you
don't normally need to preserve it.
However, if you are a leaf function that does not modify
any preserved registers
(not even sp),
then you can skip the generation of unwind codes for the leaf
function, but you must keep the return address in ra
for the duration of your function so that the operating system
can unwind out of the function should an exception occur.
(Special rules for lightweight leaf functions
also exist for Itanium, Alpha AXP, and x64.)



The at register is volatile because the assembler can use it
for various invisible purposes,
primarily for synthesizing
missing instructions.
We'll see examples of this as we go.



There are also two special-purpose integer registers,
called HI and LO.
These are used by multiplication and division instructions,
and we'll cover them when we get to multiplication and division.



There are 32 single-precision (32-bit) floating point registers,
which can be paired up to form 16 double-precision (64-bit) floating point
registers.
When a pair is used to operate on a single-precision value,
the lower-numbered register holds the value, and the higher-numbered
register is not used.
(Indeed, the value in the higher-numbered register will be garbage.)
So I guess you really have just 16 single-precision floating point registers,
since the odd-numbered ones are basically useless.




Register(s)          Meaning              Preserved?  Notes
$f0/$f1              return value         No
$f2/$f3              second return value  No          For imaginary component of complex number
$f4/$f5-$f10/$f11    temporary            No
$f12/$f13-$f14/$f15  arguments            No
$f16/$f17-$f18/$f19  temporary            No
$f20/$f21-$f30/$f31  saved                Yes



Floating point support is optional.
If not supported, floating point instructions will trap into the kernel,
and the kernel is expected to emulate the instruction.



There is not a lot of floating point in typical systems programming,
so I won't cover it except when discussing the calling convention later.



There is no flags register.
Hopefully you don't find this weird any more,
seeing as we already encountered this with the Alpha AXP.



The 32-bit address space is split down the middle between
user-mode and kernel-mode.
The kernel-mode space is further split:
Half of the kernel-mode address space is dedicated to mapping
physical addresses
(the lowest
512MB²
gets mapped twice, once cached and once uncached),
leaving only 1GB for the operating system.
This partitioning is architectural;
you don't get a choice in the matter.



Okay, we'll begin next time by looking at 32-bit integer calculations.



¹
I know you're wondering what happens if poorly-written
user-mode code tries to use them.
The answer is that user-mode code can modify the register all it wants,
but the value read back may not be equal to the value last written.
As far as user mode is concerned,
it's basically a black hole register that reads as garbage.
This makes it even more useless than the
zero register, which is a black hole
register that at least reads as zero.
(Internally, the registers are used by kernel mode as
scratch variables during interrupt and exception handling.)



²
I guess they figured that if you had more than 512MB of RAM,
you'd have switched to a 64-bit operating system.


Monitoring Azure Analysis Services with Log Analytics and Power BI


How do you monitor Azure Analysis Services? How many users are connected, and who are they? These are great questions to ask about your AAS environment. Metric data is exposed via the Metrics blade for Azure Analysis Services in the portal, and it's a quick way to answer these questions. But what if you wanted to combine this information with other operational data within your organization, especially around QPU (query processing units, which is how AAS is priced)? While extended events work in Azure Analysis Services, parsing the resulting XML files into human-readable form is cumbersome, difficult, and time consuming. Just as many aspects of the cloud require some thought about how systems are designed, it's prudent to rethink the monitoring aspect as well. By using Microsoft Log Analytics, it's possible to build a complete monitoring solution for Azure Analysis Services with a simple Power BI dashboard that can be accessed along with the rest of the operational information system administrators need. Log Analytics provides near real-time data exploration for telemetry; there is a great reference here from Brian Harry. The remainder of this post details setting up the process. One of the elegant aspects of this solution is that once it is set up, apart from the refresh of the Power BI report, no maintenance is required between Azure Analysis Services and the monitoring step (as opposed to extended events). Without further delay, let's look at the steps required.

  1. Create a Log Analytics instance in your Azure subscription
  2. Configure Azure Analysis Services to send event and performance data to the log analytics instance
  3. Using Log Analytics as the data source, report on the data from Power BI

As the first step in the process, the first item we need to create is a Log Analytics instance, which is a part of Microsoft OMS (Operations Management Suite). For anyone unfamiliar, OMS is essentially Bing for your telemetry analytics data. For more information, see this "What is Log Analytics" page on the Microsoft docs. In the Azure portal, simply select "Create a resource", and then type Log Analytics:

After clicking Create, choose either to create a new OMS workspace or to link to an existing one within the tenant. Choose to either use an existing resource group or create a new one, and then specify a location. Finally, select the pricing tier and click Ok:

Once this is complete, you'll be presented with the OMS workspace node, which is essentially just a summary page. For now, just leave it as is. Next up we'll configure the magic. One of the major benefits of OMS is the ability to configure on-premises machines to forward information to Log Analytics, or to configure PaaS applications to transmit data, and the latter is what we'll do here. The doc is relatively straightforward, and the only issue that I encountered when setting it up was that the Microsoft.Insights resource provider namespace was not registered for my subscription. Below is the complete PowerShell script that I ran against my subscription, utilizing the script outlined in the Microsoft docs:

 

##followed the blog post outlined here: https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-collect-azurepass-posh

##save the azurermdiagnostic script locally that is saved on the powershell gallery.



#save-script -Name Enable-AzureRMDiagnostics -Path "C:\PowerShellScripts"



##create a connection to azure

#Login-AzureRmAccount



##install any modules that don't already exist



Install-Module AzureRm.Insights

Install-Module AzureRM.OperationalInsights

Install-Module AzureRM.Resources

Install-Module AzureRM.Profile



##check and see the current state of the provider namespace for microsoft insights

#Get-AzureRMResourceProvider -ProviderNamespace Microsoft.Insights



##register it if needed

#Register-AzureRMResourceProvider -ProviderNamespace Microsoft.insights



##run the saved script from earlier

C:\PowerShellScripts\Enable-AzureRMDiagnostics.ps1



#Step 1. Select the subscription where the PaaS offering resides

#Step 2. Select the resource type that you want to send to Log Analytics. In this case, find the resource type for Microsoft.AnalysisServices/servers

#Step 3. Provide the category of logs to process, or type ALL.

#allow the script to run.

While running the script, note when it asks for the category of logs to track. For Azure AS, there are 2 options: Engine and Service. The Engine category is what traditionally would be tracked under either profiler or extended events, and the Service category is used for monitoring and tracking traditional perfmon counters (aka metrics).

Note that this "shipping" to Log Analytics is not instant. From what I've seen, there can be a delay of anywhere from 5 to 20 minutes for the data to arrive. However, what we are now presented with is a hassle-free approach to Analysis Services monitoring! Next up is getting the data out of Log Analytics for reporting. Let's do that now.

Back in the Azure portal, in the Log Analytics resource that was created earlier, open up the Overview Blade and then select Analytics:

Once this window is open, the first screen presented to you is the Home Page, which talks about getting started with AQL. Over on the left-hand side of the screen, notice the active pane, which contains a root of the OMS workspace that you originally created and then a node for "Log Management". This is where your Azure AS data is being sent. Expanding that node, you see a list of all the tables that are a part of the Log Management object for Log Analytics. There are two primary tables we are interested in: AzureDiagnostics, which is where Azure AS sends the extended event data (aka Engine); and AzureMetrics, which contains the metric data (aka Service). In Log Analytics, the queries are built using a language native to the product. For example, to see everything in the AzureDiagnostics table, just drop the table into the query window:

I've gone ahead and created two queries to pull the data out of the two tables:


//AASEvents:

AzureDiagnostics

| extend LocalTime = TimeGenerated -4h

| order by LocalTime desc


//AASMetrics:

AzureMetrics

| extend LocalTime = TimeGenerated -4h

| where ResourceProvider == "MICROSOFT.ANALYSISSERVICES"

| order by LocalTime desc

Running these queries results in the data that AAS has sent showing on screen. The next step is to export the data to Power BI. Fortunately Log Analytics again makes this easy:

The resulting file that downloads is a custom query that I can simply copy/paste into Power BI! How cool is that! It also contains the header, credential information, and my query for me to pull in. After generating both of these for the two queries, switch over to Power BI Desktop and paste them in via the provided instructions. Now, as you follow the instructions and create your dashboard, don't forget about perhaps creating some additional insight by using the metric descriptions that are published on the Microsoft docs page. You could simply point Power BI to this page and have it extract them; however, if you decide to go that route, remember that Power BI does not currently (at the time of this writing) allow you to mix both online and "on-prem" data sources together for refresh in a single dataset. I chose the copy/paste method for my report. After creating the report, I wound up with something like the below.

In a real-world scenario, there should be some additional slicers to see by resource group and instance in addition to what I have here. The sheer amount of information at your disposal regarding Azure Analysis Services performance is overwhelming. For this use case the interest was simply in who is running queries and what kinds of queries they are running, but all of the information is available to perform in-depth performance tuning and analysis of the underlying engine, query tuning, etc. By scheduling periodic refreshes of the dataset within Power BI as well, we have an always up-to-date dashboard of usage in Azure Analysis Services. Consider some of the following scenarios that this dashboard could be used for:

  • Dynamically pause/restart Azure Analysis Services during times when there is no QPU activity
  • Dynamically scale up/down an Azure Analysis Services instance based on the time of day and how many QPUs are needed

All of these scenarios could be accomplished using the Azure Analysis Services PowerShell cmdlets.
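
For instance, a rough sketch of the pause/resume and scale scenarios using the AzureRM.AnalysisServices cmdlets might look like the following; the server name, resource group, and target SKUs are placeholders, and the trigger logic (an Azure Automation schedule, or a QPU threshold pulled from Log Analytics) is left out:

##hedged sketch: assumes the AzureRM.AnalysisServices module and an authenticated AzureRM session
##server, resource group, and SKU values below are placeholders

Import-Module AzureRM.AnalysisServices

$resourceGroup = "my-aas-rg"
$serverName    = "myaasserver"

##pause the server when the dashboard shows no QPU activity
Suspend-AzureRmAnalysisServicesServer -ResourceGroupName $resourceGroup -Name $serverName

##resume it when the business day starts
Resume-AzureRmAnalysisServicesServer -ResourceGroupName $resourceGroup -Name $serverName

##scale up ahead of a heavy processing window, then back down afterwards
Set-AzureRmAnalysisServicesServer -ResourceGroupName $resourceGroup -Name $serverName -Sku "S1"
Set-AzureRmAnalysisServicesServer -ResourceGroupName $resourceGroup -Name $serverName -Sku "S0"

Wrapped in an Azure Automation runbook on a schedule, either scenario becomes hands-off.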

What’s new in Universal Resource Scheduling for Dynamics 365 April 2018 Update


Applies to Universal Resource Scheduling solution (version 2.4.1.x), Field Service application (version 7.4.1.x), and Project Service Automation application (version 2.4.1.x) on Dynamics 365 version 9.0.x

Please note that this update is being shipped as a patch, so if you are upgrading, you will see a PS/FS/URS patch solution installed with the version numbers referenced above.

Continuing our theme for our update releases, the Universal Resource Scheduling April 2018 Update focuses on our customers. Our priorities are improving user experience, stability, discoverability, and delivering features that open up highly requested scenarios. For a list of bug fixes, read this post.

 

Scheduling Enhancements:

 

I recently had the opportunity to visit some of our customers onsite and observe first line URS users firsthand. I am pleased to incorporate some of that feedback into our April 2018 update, with extremely impactful features such as:

 

 

 

Other features we were able to work on:

 

 

 

Book by estimated arrival time instead of start of travel!

 

Overview

 

When a resource manager searches for availability, especially with the customer on the phone at the same time, it is important to efficiently and effectively be able to communicate the available appointment times from which the customer can select. In previous releases, when searching for availability for an onsite requirement, the gray recommended slot drawn on the board represented the available appointment time, inclusive of travel. In many cases, especially when talking to a customer, all the resource manager cares about is when the resource will arrive!

 

Therefore, we introduced a feature called "Book Based On," allowing you to book by estimated arrival.

 

Electing to book based on estimated arrival on the schedule board will display the recommended slot based on when the resource will arrive onsite, as opposed to when they will begin travel.

 

Resource managers more focused on arranging bookings without interacting with customers can continue to use the current option to book by "Start of Travel."

 

Details

 

In the screenshot below, you can see the experience without this new feature. The recommended appointment time for Efrain to arrive at the customer site is 12:28 PM, but there are also 28 minutes of travel time. So the resource manager sees the block of time showing as 12:00 to 2:58. In order to figure out when the resource will arrive onsite, they need to add the 28 minutes of travel to the beginning of the recommended slot in their heads ("12:00 + 28 minutes means we can show up at 12:28!"), and then they can tell the customer when to expect the resource to arrive onsite. This math is pretty simple, but imagine the slot recommended is 9:43 with 28 minutes of travel. It can get tough to compute in your head on the fly!

 

Book using Start of Travel option

 

 

Here, you can see the same exact scenario on a schedule board with "Book by Estimated Arrival" enabled. You will notice that the gray recommended slot shows up on the board from 12:28 to 2:58, which represents the ETA and the end time of the booking. It still lets you know what the travel time will be within the slot, 28 minutes, but visually, you can see just the block of time lining up with the ETA and the end time. Of course, when you create the booking, travel will still be booked as well.

 

 

Book using Book by Estimated Arrival option

 

When right-clicking within the recommended slot to select a more specific time, the "Book Here" text also adjusts to inform you of the ETA and end time, keeping the experience simple.

 

 

Book Here adjusts for ETA and end time

 

 

Setup

 

To enable this feature, open the schedule board settings by double-clicking your schedule board's name, and select which option you would like to book by.

 

Select Book Based On

 

New Schedule Boards will inherit this value from the default schedule board.

 

To change the default schedule board:

 

  • Open up the Schedule Board by double-clicking any schedule board name
  • Click the “Open Default Settings” button on the top right of the tab settings window
  • Select the default “Book Based On” value you prefer

 

Change Default Schedule Board

 

 

Display more at once on the schedule board by adjusting the scale

 

Overview

 

We have heard feedback from resource managers that they often plan work for resources across many days, yet they can't see these days all at once on the schedule board.

 

Even when planning across weeks, they find it important to be able to see the whole picture in one eyeshot without endlessly scrolling.

 

Therefore, you can now adjust the scale of the schedule board to decrease the width of the columns to view a wider date range at once.

 

Details

 

Whether you are on the hourly, daily, weekly, or monthly board, you will see a "scale" control on the bottom of the schedule board.

 

In this image of the hourly board, you can see about a day and a half:

 

Scale control on hourly board

 

 

Using the same display settings, after changing the scale to the smallest supported option, you can now see 8 days in one shot on the hourly board!

 

Change the scale to see 8 days on the hourly board

 

For those customers who have been with us since the Naos Field Service days, this feature brings back much of the functionality of the old "daily" view, allowing you to see many days at once.

 

The impact of scaling down your board is that it will be more challenging to read the text of small bookings, so always remember that you can hover over a booking to view more details, or single-click it to view the details in the right panel.

 

Hover over or click to see booking details

 

We did not decrease the minimum pixel width for the daily, weekly, or monthly boards, but we still added the scale control to the bottom of the boards to allow users to quickly adjust the scale if they please. We do intend to look into allowing you to shrink the width for multiday boards once we allow for viewing more days/weeks/months at a time.

 

Scale control to allow users to quickly adjust the scale

 

 

Note that the column width settings can also be accessed from the settings dropdown on the schedule board; the intention of the slider control is to promote discoverability and reduce clicks:

 

Access column width settings from Settings drop down on schedule board

 

Display up to 14 days on hourly schedule board

 

In an effort to allow resource managers to see more data without changing the schedule board dates, we have increased the number of days you can view on the hourly schedule board from 7 to 14.

 

To increase the number of days, navigate to your settings, and adjust the “Number of days per page” slider:

 

Adjust Number of days per page slider to increase number of days

 

Display more resources on schedule board!

 

Overview

 

MORE REAL ESTATE PLEASE!!!!!

 

We are on a continued quest to offer more real estate to resource managers. For demos, the more whitespace the better, but when the rubber meets the road (pardon the cliché), we have heard your feedback and you want to see more.

 

You can now narrow the height of the resource rows to see more resources at once!

 

Details

 

Here you can see 13 resources with the previous minimum row height we allowed:

 

13 resources with the old minimum row height

 

Using the same board on the same monitor, here you can see 25 resources at a time with our new minimum row height:

 

25 resources with the new minimum row height

 

 

You can adjust this on the daily, weekly, and monthly views as well:

 

Adjust the height of the resource rows on daily, weekly, or monthly views

 

As I’ve already mentioned, it’s more challenging to read the text on smaller bookings when scaling down the board, so always remember that you can hover over a booking, or single-click it to view the details in the right panel.

 

To change the row height, just open the schedule board settings and change the row height:

 

Open schedule board settings to change the row height

 

Note that each board has its own settings for row width and height, meaning changing this on the daily board does not change it automatically on the hourly, weekly, or monthly boards. Also, within each board, the width and height settings are saved separately for when you use the Schedule Board regularly and when you use the Schedule Board to search for availability, offering even more flexibility!

 

We recognize that the resource image gets cut off when the row height becomes small; we plan to update the resource cell template to hide the resource cell when the row height cuts off the resource image. In the meantime, remember you can always modify the resource cell template on your own using extensibility!

 

For the hourly vertical view, you can also update both the width and height to fit more resources and more days. Here you can see that the resources are narrow and the height for each block of time is short, allowing a resource manager to see more resources and more time:

 

Update width and height to fit more resources

 

 

Change booking statuses from daily, weekly, and monthly schedule boards!

 

Overview

 

We have always offered the ability to right-click a booking on the hourly schedule board and change the status of a booking. However, on the multiday schedule boards, we aggregate bookings from the same requirement and the same status into one "block," so it is legible to users. We now allow you to change the statuses for multiple bookings at once from the multiday boards. In reality, you are updating the status for a bunch of visually aggregated bookings at once.

 

Details

 

Just right-click a booking on the board, and change status:

 

Change status

 

This will update all bookings that are aggregated into that block. If you only intend on changing the statuses for a subset of those bookings, zoom into that date range and make the change from there.

 

For example, if I only want to change the statuses from 3/25 onwards, change the board to 3/25 and then change the status:

 

To change status of a subset of bookings, zoom into the date range and make the change

 

This will only change the status for what you see on the board:

 

Status changed only for what you see on the board

 

You can always navigate to the daily board or hourly board for more granularity.

 

Note that when changing the status, it takes a moment for the board to refresh with the new status icon and color, based on how many bookings are being changed.

 

If you want to learn more about booking statuses, here is a previous blog post with details on how you can leverage booking statuses.

 

Pass in date AND time when using URL to open schedule board

 

Overview

 

I previously blogged about a feature that allows our customers to launch the Schedule Assistant and pass in the search parameters by using the URL. Previously, you were able to pass in the search date range, but not the specific time of the search on each day. For example, you could express that you need a resource between April 1st and April 3rd, but you could not express that you need a resource between April 1st at 8 AM and April 3rd at 5 PM. We have enhanced this capability allowing you to specify both a date and time for when the search begins and ends.

 

Details

 

Here you can see that the previous format for passing in the search start and end was yyyy-mm-dd:

 

Old format for passing in the search start and end_yyyy-mm-dd

 

Now, the date+time format is yyyy-mm-ddThh:mm+TZD

 

Example: 2018-04-01T19:20+01:00 would be April 1st, 2018, at 7:20 PM in the time zone of +01:00

 

     YYYY = four-digit year

     MM   = two-digit month (01=January, etc.)

     DD   = two-digit day of month (01 through 31)

     hh   = two digits of hour (00 through 23) (am/pm NOT allowed)

     mm   = two digits of minute (00 through 59)

     TZD  = time zone designator (Z or +hh:mm or -hh:mm) (Z = UTC time zone)

 

If you want to just pass in the date and a time zone: "yyyy-mm-ddT00:00+TZD"

Example: 2018-04-01T00:00+05:30

 

Don’t worry, the previous supported format of yyyy-mm-dd will still work.

 

Extensibility - Multi-select option set fields can be queried and displayed

 

Overview

 

When modifying a UFX query for an extensible scheduling scenario, you can now query multi-select option set fields. Additionally, you can display selected values in a multi-select option set on the resource cell template. Look out for a blog post on this one. To read more about extensibility possibilities, check out this previous blog post. Here are a few more detailed posts on the subject too: SB date ranges, and Sort by total available time.

 

Extensibility - Ignore or consider proposed bookings when searching for availability. (Change default)

 

Overview

 

In our February 2018 update, we introduced the ability to ignore or consider proposed bookings with regards to resource availability. (To learn more, visit this blog post and find the "Ignore or consider proposed bookings" feature.) In the last blog post, I mentioned that we would add the ability to change the default value instead of it always defaulting to "ignore proposed bookings." Well, here we are!

 

Details

 

If you would like the default behavior to simply ignore the existence of proposed bookings when it comes to availability, there is nothing you need to do, as this is already the default. However, perhaps you would prefer to return only resources that are completely available, meaning they do not have any proposed bookings; if you can't find the right resource, you can then ignore these proposed bookings in your availability search. To change the default value for whether you consider or ignore proposed bookings, just modify the "Schedule Assistant Retrieve Constraints Query," changing the ignore proposed value from true to false:

 

<IgnoreProposedBookings ufx-type="bool">true</IgnoreProposedBookings>

 

Modify Schedule Assistant Retrieve Constraints Query to change default value for proposed bookings

 

You can even get fancy and add conditions for the default value! Just fetch the attributes you need in your condition, add a UFX key to the attribute, and reference the key in your condition.

 

<IgnoreDuration ufx:if="$input/@ufx-id" ufx-type="bool">true</IgnoreDuration>

<IgnoreDuration ufx:if="not($input/@ufx-id)" ufx-type="bool">false</IgnoreDuration>

 

 

Remember, to review the details of how to modify the default "Schedule Assistant Retrieve Constraints Query," check out this previous blog post, which gives an overview of extensibility and where to make modifications.

 

As a reminder, ultimately these settings drive the "ignore proposed bookings” flag in the filter panel, when searching for availability:

 

Ignore Proposed Bookings check box in the filter panel

 

 

Leverage booking panel when drag and drop scheduling on multiday schedule boards

 

Overview

 

We are continuing our investments in ensuring a consistent user experience across the schedule board, regardless of whether you are using the hourly, daily, weekly, or monthly schedule board.

 

Our latest investment is moving our interactions to the booking panel and shifting away from pop-ups to create bookings. Now, when you drag a booking from the bottom booking requirements panel to the multiday schedule boards, instead of a blocking pop-up, we slide out our booking panel. This is not just an experience unification feature; by using this panel instead of a pop-up, you can still interact with the board, changing dates and resources, which seamlessly updates the booking panel with those respective values. When you make your selections and are ready to book, just click "book" in the panel! While we were at it, we also ensured that if you select the booking method "front load" or "evenly distribute hours," we properly set the duration field to the remaining hours of the requirement.

 

Booking panel when drag and drop scheduling on multiday schedule boards

 

 

Selecting a resource now loads resource value properly into driving directions window

 

Overview

 

This one may be a cross between a bug fix and a feature, but nonetheless, I am excited to share! When you select a resource cell on the schedule board and click actions > get driving directions, the selected resource and their address are now properly loaded into the driving directions window.

 

Selecting a resource loads resource value into driving directions window

 

Show resource card with a right click instead of a hover

 

Overview

 

We received feedback that when users were "mousing" around their schedule board, the resource flyout card was too easy to accidentally pop up.

 

Show resource card with a right click

 

Users were trying to expand the resource row or navigate to the left side of the board, and the resource card became a nuisance. So naturally, we listened! To view the resource card on the schedule board, instead of hovering, right-click the resource and select "View Resource Card."

 

To view resource card, right-click View Resource Card

 

To Exit, just click the "X":

 

Click X to exit

 

This also is an accessible alternative to our previous hover action.

 

Display day of the week on hourly vertical schedule board

 

Overview

 

In our last update, we exposed the day of the week on the horizontal hourly schedule board. When visiting a customer, I noticed that they kept opening up their PC calendar on their toolbar while talking to customers and arranging appointments. When they ended the call, I asked what that was all about? The answer? "I don't know what day of the week the 27th is, so I open the calendar to check". Sometimes, it is the easy things that really help our customers. So here it is, day of the week displayed on the vertical schedule board.

 

Day of the week displayed on the vertical schedule board

 

 

Keep your context when searching for availability and substituting a resource

 

Overview

 

Suppose you are searching for availability for a requirement, and the resource you want for the requirement does not have the hours you need. You analyze their bookings for that timeframe and feel it makes sense to move an existing booking to another resource to make room for this requirement you are trying to schedule. You want to make sure to move the existing booking only to a qualified resource with availability, without changing the booking time. Then, when you are done moving the existing booking, you would like to land in the same "schedule assistant" context you started out in to book the original resource!

 

Now, this flow is complete!

 

Details

 

Here is a visual walkthrough:

 

Resource manager selects a requirement and searches for availability. The manager sees that Abraham, the perfect resource for the job, is 20 hours short. However, the booking on April 3rd – 5th seems like a solid one to move to free up the 20 hours needed.

 

Select a requirement and search for availability

 

I can right-click that booking and click "Find Substitution."

 

Right click a booking and click Find Substitution

 

Now I am in the context of finding a resource with the proper skills and availability for this existing booking. In essence, we are using the schedule assistant to substitute the resource on an existing booking. You will notice that the demand panel now shows the booking I am trying to substitute, and the filter panel adds the required roles, characteristics, etc.

 

Substitute the resource on existing booking

 

Now I have a list of matching resources. I can select one and click "substitute" on the top toolbar.

 

Select a resource and click Substitute

 

 

Booking reassigned

 

After clicking ok, the bookings are reassigned, and you are brought back into the context of the schedule assistant search where you began.

 

Now, Abraham has plenty of availability and you can book away:

 

Schedule Assistant search

 

We hope you enjoy these enhancements, as they are driven from your feedback! The team continues to work in overdrive to deliver enhancements and improvements, so please keep blogging, posting, Yammering on the field service group and the project service group and submitting ideas. We are keeping an eye out for what is needed!

 

Happy Scheduling!

Dan Gittler

Principal Program Manager, Dynamics 365 Engineering

 

What’s New for Dynamics 365 Resource Scheduling Optimization v2.5 Release


Applies to: Dynamics 365 Organization 9.0+ with Field Service Solution

 

With the goal of continuously improving quality, performance, usability, and responding to some customer feedback, we recently released the Resource Scheduling Optimization v2.5 update. Below are the new features and capabilities introduced in this release.

 

Extensible Optimization Scope

Scope is the RSO mechanism to define the relevant inputs: resource requirements, resources, and existing resource bookings. It also includes the timeframes to be considered for optimization. Extensible scope leverages Dynamics 365 entity views, providing an easy and flexible way to define what is to be optimized (resource requirements, resources, and existing resource bookings).

Feature Details

Upon opening the Scheduling Optimization Scope form, a user can select existing system views or personal views (for which they have read permission) from the Resource, Requirement, and Booking view dropdowns.

 

Select existing system views or personal views

 

List of views based on privilege and security role settings

 

  1. Take ‘Resource View’ as an example: I have defined a ‘0_WA Resources’ view with the below filter conditions, which is equivalent to configuring WA territory as optimization scope in the previous version. Users can apply more filter conditions as needed to specify resources they need optimized. RSO will respect the Optimize Scheduling setting on individual resource records on top of the resource view filters.

 

Apply filter conditions to specify resources they need optimized

 

Filter conditions

 

  2. User must select at least one requirement or booking view for what needs to be optimized.

 

Select at least one booking view for what needs to be optimized

 

  3. If the user selects Booking View, they can set it to 'Now or After.' For example: I want to optimize bookings for the next 5 days, starting 2 hours from now (while excluding bookings within the next 2 hours and bookings in the past). The current, out-of-the-box Dynamics 365 entity view filter doesn't support this 'Now or After' condition; RSO enables this additional setting on top of whatever filter conditions are defined for that booking view.

 

Set Now or After

 

  4. Optimization Range Settings is the time range in which bookings can be created/updated/deleted.

 

Optimization Range Settings

 

  5. User can preview resources, requirements, and bookings for optimization scope through the Schedule Board:
    1. Resource filters on the Schedule Board are pre-populated from Resource View. The resource list matches the number of resources defined in Resource View. RSO will display a lock icon and tooltip to indicate if a resource is not enabled for optimization (even though it was added into Resource View).
    2. Requirements under Eligible for Optimization match the records from Requirement View.

 

Requirements match the records from Requirement View

 

  6. User can modify filters on the left panel and save into scope:
    1. If Resource View referred by optimization scope is a system view, modified filters through the Schedule Board will be saved as a new personal view.
    2. If Resource View referred by optimization scope is a personal view, modified filters through Schedule Board will be saved back into the same personal view.

 

Modified filters

 

  7. Run optimization schedule, and open optimization request:
    1. The user can see which resources are being optimized, and which resources are not optimized (and for what reason).
      Resources being optimized and reasons 
    2. Optimization Start/End Range for a specific run:
      Optimization Start/End Range

RSO Deployment App Enhancements

  • New and modern UI, with an intuitive user experience
  • Simplified deployment process, with fewer necessary clicks
  • Meets accessibility requirements
  • Enabled capability to delete RSO Azure deployment from customer side

 

Deploy Resource Scheduling Optimization instance

 

Manage Resource Scheduling Optimization instance

 

For more information:

 

Feifei Qiu

Program Manager

Dynamics 365, Field Project Service Team

 

Hosting an Angular Progressive Web Application (PWA) with Azure


Progressive Web Applications might just be the future of web development. In this post, Wael Kdouh shows how to deploy an Angular PWA to Azure, including some potential pitfalls to watch out for.


In his blog post, Wael introduces the popularity of Progressive Web Applications and how to deploy them to Azure.

He writes “In this post I will show you how you can deploy an Angular PWA application to Azure. I won't show you the detailed steps to build the CI/CD pipeline on VSTS, which in turn deploys to Azure, as I have already discussed it in a previous post which can be found here. Instead I will focus on some of the pitfalls that you may face while attempting to deploy an Angular PWA application to Azure, as it may be tricky for newcomers to PWA and mobile development in general. Finally, I will test the application on an Android device running Oreo with the latest version of Chrome. I did not test it on iOS since I did not have access to the latest technical preview of Safari, which introduced support for PWA, but I will address it in future posts.”

To learn how to deploy a PWA to Azure, click here for Wael’s full post.

Office Developer: Difference between Office Web add-ins and COM/VSTO add-ins


COM and VSTO add-ins are earlier Office integration solutions that run only in Office for Windows. The major difference is that COM add-ins run on the user's device, inside the Office client. The new Office Add-ins don't involve code that runs on the user's device or in the Office client. For an Office Add-in, the host application (for example, Excel) reads the add-in manifest and hooks up the add-in's custom ribbon buttons and menu commands in the UI. When needed, it loads the add-in's JavaScript and HTML code, which executes in the context of a browser in a sandbox.

Components of a Hello World add-in

In general, Office Add-ins provide advantages over add-ins built using VBA, COM, or VSTO. I can name a few of them:

  • Cross-platform support:
    Office Add-ins run in Office for Windows, Mac, iOS, and Office Online. So your solution can run in Office across multiple platforms, including Office for Windows, Office Online, Office for the Mac, and Office for the iPad.

  • Single sign-on (SSO):
    Office Add-ins integrate easily with users' Office 365 accounts.

  • Centralized deployment and distribution:
    Admins can deploy Office Add-ins centrally across an organization.

  • Easy access via AppSource:
    You can make your solution available to a broad audience by submitting it to AppSource.

  • Based on standard web technology:
    You can use any library you like to build Office Add-ins.

Hope this helps.

Dynamics 365 Business Central is now live!


A modern solution for modern businesses

With 160,000+ customers, more than 2.7 million users, and 3500 partners worldwide, Dynamics NAV has sold in 195 countries – that’s pretty much every country in the world. Today we are proud to announce that we are furthering this success story with Dynamics 365 Business Central by making it generally available as a cloud service starting today.

Dynamics 365 Business Central brings the full power of Dynamics NAV to the cloud. As such, Business Central has at its foundation a set of trusted, proven technologies in a single, end-to-end application. Business Central offers:

  • Business without silos. Unify your business, and boost efficiency with automated tasks and workflows—all integrated within familiar Office tools like Outlook, Word, and Excel.
  • Actionable insights. Achieve greater outcomes and gain a complete view of your business with connected data, business analytics, and guidance delivered by Microsoft’s leading intelligent technologies.
  • Solutions built to evolve. Start quickly, grow at your own pace and adapt in real time with a flexible platform that makes it easy to extend Business Central based on your changing business needs.

For an intro to Dynamics 365 Business Central, view the announcement made by Alysa Taylor here:

Announcing Dynamics 365 Business Central

If you would like to go deeper, enjoy this video:

Dynamics 365 Business Central Deep Dive

Finally, Business Central would not be possible without the amazing team that is behind this great product. I cannot express enough how proud I am of them and what they have accomplished. To hear from some of the team members about what their favorite features are, see this:

Our favourite capabilities in Dynamics 365 Business Central

Because Dynamics 365 Business Central and Dynamics NAV run the same application code base, this empowers us to smoothly transition all our Dynamics NAV partners and customers into Dynamics 365 Business Central. Dynamics 365 Business Central is a cloud first solution, designed for the age of digital transformation powered by the cloud. But it will not be a cloud only solution. In the fall it will be available for self-deployment on premise and in the intelligent edge for customers that prefer that option. For the existing partners and customers, this will be just another upgrade, like they would do for Dynamics NAV, with a name change. For customers interested in the cloud, starting today, Dynamics 365 Business Central offers unprecedented opportunities to drive transformation of the customers’ businesses, increasing their business performance through the power of Microsoft cloud technologies and services.

 

For more information on coming updates, please keep an eye on the roadmap site.

Dynamics 365 Business Central will be generally available on April 2, 2018 in 14 countries – United States, Canada, United Kingdom, Denmark, Netherlands, Germany, Spain, Italy, France, Austria, Switzerland, Belgium, Sweden, and Finland, and will be sold through our Cloud Solution Provider (CSP) partners. Australia and New Zealand will be generally available beginning July 1, 2018. Our network of global partners has the expertise to help you create and deploy a solution that meets your industry-specific needs. Do even more with Dynamics 365 Business Central using pre-built applications, available through the AppSource marketplace, to easily and cost effectively extend your solution.

You can try Dynamics 365 Business Central today by browsing to our homepage and starting the free trial. We look forward to hearing your questions, feedback, and thoughts on our community forum and ideas site. You can also follow news and discussions on Twitter, using the #MSDyn365BC hashtag.

 

Marko Perisic

General Manager

Microsoft Dynamics SMB

Experiencing errors in creation of Log Analytics workspaces in new subscriptions – 04/02 – Investigating

Initial Update: Monday, 02 April 2018 18:00 UTC

We are aware of issues within Azure Log Analytics and are actively investigating. Some customers who attempt to create new Log Analytics workspaces through the Azure portal, in subscriptions created after 13:00 UTC, will see an error and the workspace creation will fail.

  • Workaround: If customers have the ability to deploy through an ARM template, specifying the "pergb2018" pricing tier instead of one of the pricing tiers displayed in the portal will unblock the user (a hedged sketch follows this list).
  • Next Update: Before 04/02 21:00 UTC
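
As an illustration only, a minimal ARM-template deployment along those lines might look like the sketch below; the workspace name, resource group, location, and API version are assumptions, and only the "pergb2018" tier comes from the workaround above.

##hedged sketch of the ARM-template workaround; names, location, and API version are assumptions
$template = @'
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.OperationalInsights/workspaces",
      "name": "my-la-workspace",
      "apiVersion": "2017-03-15-preview",
      "location": "eastus",
      "properties": { "sku": { "name": "pergb2018" } }
    }
  ]
}
'@

##write the template to disk and deploy it into an existing resource group
Set-Content -Path ".\laworkspace.json" -Value $template
New-AzureRmResourceGroupDeployment -ResourceGroupName "my-rg" -TemplateFile ".\laworkspace.json"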

We are working hard to resolve this issue and apologize for any inconvenience.


-Sapna


Application Insights – Advisory 04/02

On April 2, 2018, we simplified the Application Insights pricing plans. For the Basic plan, we removed the cost to use Continuous Export and enabled the Application Insights connector to OMS Log Analytics. As a result, the Enterprise plan will no longer be available to many subscriptions which are not in an Enterprise Agreement, as it no longer offers any advantage to customers.

A result of this change is that, effective 04/02 13:00 UTC, Application Insights resources created from an ARM template with the "Application Insights Enterprise" plan receive a 400 return code. However, in order to allow customers additional time to respond to this breaking change, we will temporarily and silently convert Application Insights resources from "Application Insights Enterprise" to "Basic" until we have had time to communicate this breaking change to affected customers.

To remedy this, you need to change CurrentBillingFeatures to "Basic" instead of "Application Insights Enterprise". We apologize for any inconvenience.
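
If you would rather flip an existing resource from PowerShell than edit the template, one possible sketch is below. It assumes the pricing plan is exposed through a CurrentBillingFeatures sub-resource of the component; the resource path, API version, and property name here are assumptions, so verify them against your own resource before relying on this.

##hedged sketch: the CurrentBillingFeatures sub-resource path, API version, and property name are assumptions
$resourceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
              "/providers/microsoft.insights/components/<component-name>/CurrentBillingFeatures"

$billing = Get-AzureRmResource -ResourceId $resourceId -ApiVersion "2015-05-01"
$billing.Properties.CurrentBillingFeatures = @("Basic")
$billing | Set-AzureRmResource -ApiVersion "2015-05-01" -Force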


-Sapna

Experiencing errors while creating an Application Insights app using Visual Studio – 04/02 – Investigating

Initial Update: Monday, 02 April 2018 16:55 UTC

We are aware of the issues within Application Insights and are actively investigating. Customers creating a new project with Application Insights on by default in Visual Studio 2015 will see a failure message like the one below:

'Could not add Application Insights to project. Could not create Application Insights Resource <App_Name>: The downloaded template from 'https://go.microsoft.com/fwlink/?LinkID=511872' is null or empty. Provide a valid template at the template link. Please see https://aka.ms/arm-template for usage details. This can happen if communication with the Application Insights portal failed, or if there is some problem with your account.'


  • Workaround: Apps can be created using the Azure portal without any issues
  • Next Update: Before 04/02 21:00 UTC

We are working hard to resolve this issue and apologize for any inconvenience.


-Sapna

Don’t filter out your Farm and Object Cache accounts in SharePoint’s People Picker


I often have SharePoint cases where customers don't want their Object Cache accounts or Farm account to show up in the People Picker in SharePoint. While there are a variety of reasons people choose to do this, filtering these accounts out of People Picker is a bad idea.

Internally, SharePoint code uses the People Picker code path for other account resolution purposes. Accounts that are filtered from People Picker won't be found by SharePoint's internal code in these instances.

The most common place I've run into this issue is with SharePoint 2016's Fast Site Collection Creation code. By default, SharePoint 2016 uses Fast Site Collection Creation to speed up the My Site creation process (see here for more details). If the Farm Account cannot be found by the People Picker code, you will see a message like the following in the ULS logs:

Error in resolving user. User: 'contoso\farmaccount', ResolverInformation: 'SPActiveDirectoryPrincipalResolver, DomainName: 'contoso.com', DomainIsForest: 'False', DomainLoginName: '', CustomSearchQuery: '', CustomSearchFilter: '(&(objectCategory=Person)(objectClass=User)(!(userAccountControl:1.2.840.113556.1.4.803:=2))(|(employeeID=*)(employeeNumber=*))(|(extensionAttribute12=*)(mail=*)(proxyAddresses=*)))', Timeout: '00:00:30', IncludeDistributionList: 'True''

If the Object Cache accounts (Portal Super User and Portal Super Reader) are filtered from the People Picker, you may experience seemingly random permissions issues with a message similar to the following in the ULS:

User Key aysye Unexpected User key value from token is not a user key so throwing. UserKey: 'i:0#.w|contoso\superreader'

The solution is to allow the accounts to be resolved in the People Picker.

The following PowerShell commands may be used to check the current People Picker settings. You'll want to check the ActiveDirectoryCustomQuery and ActiveDirectoryCustomFilter properties:

$wa = Get-SPWebApplication http://<sitename>
$wa.PeoplePickerSettings

Object Cache accounts may be queried as follows:

$wa = Get-SPWebApplication http://<sitename>
$wa.Properties["portalsuperuseraccount"]
$wa.Properties["portalsuperreaderaccount"]
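
If a custom query or filter is what's hiding these accounts, clearing it is one option. A minimal sketch (assuming the filter was set on the web application's PeoplePickerSettings, and noting that this removes the custom filtering entirely rather than just exempting the service accounts) would be:

#hedged sketch: clears the custom People Picker query/filter so the farm and object cache accounts resolve again
$wa = Get-SPWebApplication http://<sitename>
$wa.PeoplePickerSettings.ActiveDirectoryCustomQuery = $null
$wa.PeoplePickerSettings.ActiveDirectoryCustomFilter = $null
$wa.Update()

The gentler alternative is to adjust the custom filter so it no longer excludes the farm and object cache accounts, rather than removing it altogether.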

Exam Prep for 70-533, Implementing Microsoft Azure Infrastructure Solutions


I recently took the 70-533, Implementing Microsoft Azure Infrastructure Solutions exam. Each Microsoft exam page has a Skills Measured section. This section clearly outlines the knowledge that will be measured on the exam, i.e., the exam questions are going to be about the following topics only.

 

As I went through the sections above, I noticed that each section corresponded to documentation on the Azure docs site. Last year when I took the 534 exam, I had noticed the same similarity. It looks like this is a consistent approach that the exam writers are taking across exams. It does make sense, since exam writers would test the most important concepts, i.e., the same ones that the Azure technical writers have already documented.

 

Study Guide/Links for 70-533

With the above in mind, I went through all the documentation on the exam site, and put together a study guide. I got great feedback last year about my post, and I am hoping that this is going to be helpful as well.

 

Design and Implement Azure App Service Apps (10-15%)

  • Deploy Web Apps
  • Configure Web Apps
  • Configure diagnostics, monitoring and analytics
  • Configure Web Apps for scale and resilience

Create and Manage Compute Resources (20-25%)

  • Deploy workloads on Azure Resource Manager (ARM) virtual machines (VMs)
  • Perform configuration management
  • Design and implement VM storage
  • Monitor ARM VMs
  • Manage ARM VM availability
  • Scale ARM VMs
  • Manage Containers with Azure Container Services (ACS)

Design and Implement a Storage Strategy (10-15%)

  • Implement Azure Storage blobs and Azure Files
  • Manage access
  • Implement storage encryption

Implement Virtual Networks (15-20%)

  • Configure virtual networks
  • Design and implement multi-site or hybrid network connectivity
  • Configure ARM VM networking
  • Design and implement a connection strategy
  • Implement Hybrid Connections to access data sources on-premises; leverage S2S VPN to connect to an on-premises infrastructure

Design and Deploy ARM Templates (10-15%)

  • Implement ARM templates
  • Control access
  • Design role-based access control (RBAC)

Manage Azure Security and Recovery Services (25-30%)

  • Manage data protection and security compliance
  • Implement recovery services

Manage Azure Operations (5-10%)

  • Enhance cloud management with automation
  • Collect and analyze data generated by resources in cloud and on-premises environments

Manage Azure Identities (5-10%)

  • Manage domains with Azure Active Directory Domain Services
  • Implement Azure AD B2C and Azure AD B2B

 

 

 

Note - Some links may be missing above since the material will be covered in different areas.

Do let me know if you find the guide above useful or if you find more relevant links by leaving a comment.

 

 

Introducing the Facebook Solution Framework (FSF) Starter Edition


I am very pleased to announce the second release in my series of products aimed to assist .NET Developers with building apps on the Facebook Platform. Introducing the Facebook Solution Framework (FSF) Starter Edition.

The Facebook Solution Framework (FSF) Starter Edition is a full-featured, mobile-friendly, ASP.NET web application template with core Facebook Platform integration. Rather than spending countless hours developing an application with Facebook integration from scratch, simply download FSF and get going within minutes! FSF includes Facebook Login, Graph API integration, its own SQL Server database project, and role-based security, and is mobile friendly for use on any device. In addition, FSF is Microsoft Azure ready. Once you've made your customizations, simply deploy your web application to the Azure App Service and you're live!

Download the Facebook Solution Framework (FSF) Starter Edition today at the following URL:
https://www.modernappz.com/products/fsf

Practical Guide on understanding Common Data Service and PowerApps


Over the last few months a lot has been said about PowerApps and the Common Data Service (CDS). As I started to review the changes I felt two things: 1) there were quite a few architecture changes compared to previous versions of the CDS/PowerApps strategy, and 2) understanding these is crucial when you plan on using the apps & services going forward. I hope this blog helps you understand the latest changes in CDS and how you can leverage them within your organization.

 

Overview: Common Data Service - Apps

As you look at the following architecture slide, you will notice that CDS has evolved into two different offerings: CDS for Apps and CDS for Analytics. In this blog we will concentrate on CDS for Apps, as this is the solution that powers most of the PowerApps ecosystem. CDS for Analytics will be used as a common data store for interaction-based data that can serve marketing and other areas.

In the previous version of CDS, creating a new environment provisioned a Common Data Model (CDM) database. This database contained commonly used entities for business applications. Although some basic relationships between entities were defined, no business rules were built in, and it was difficult to scale this database model to a large enterprise that required complex business rules and security contexts. That brings us to the birth of the new version of CDS.

In the new version of CDS, when you create a new environment, a new Dynamics 365 Customer Engagement instance is provisioned behind the scenes to support your CDS environment. It is also important to note that when you create a Dynamics 365 instance (v9.0) through the D365 Admin Center, a corresponding CDS environment is created.

Now let's take a moment on how this is being done.

 

Creating a CDS-Apps Environment and Database

  • Navigate to admin.powerapps.com and create a New Environment as below.
    • Trial environments: last for 30 days and include the option to install sample apps and data.
    • Production environments: created as a Production instance type, without sample apps and data.

 

  • Once an environment is created, an option to create a new database is presented.

 

  • Select the currency and language settings. Since I selected a trial instance, I have the option to install sample apps and data.

  • Review the new environment in the PowerApps admin center as shown below.

  • Also note that a corresponding Dynamics 365 instance is created with the following settings.

 

  • If you open the Dynamics 365 instance, you will see that a minimal sitemap is created for administering security and customizations, and that the sample apps (part of the trial instance) are installed.
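Because the new environment is backed by a full Dynamics 365 instance, you can also confirm this programmatically: the standard Dynamics 365 Web API is available against the instance URL shown above. The following is a minimal sketch (Python is used only for illustration); the org URL and the Azure AD bearer token are placeholders you would substitute with your own tenant's values.

    # A minimal sketch: query the accounts entity of the Dynamics 365 instance
    # that backs the CDS environment. ORG_URL and TOKEN are placeholders.
    import requests

    ORG_URL = "https://yourorg.crm.dynamics.com"   # placeholder instance URL
    TOKEN = "<azure-ad-bearer-token>"              # placeholder access token

    resp = requests.get(
        f"{ORG_URL}/api/data/v9.0/accounts",
        params={"$select": "name", "$top": "5"},
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "OData-MaxVersion": "4.0",
            "OData-Version": "4.0",
            "Accept": "application/json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    for account in resp.json()["value"]:
        print(account["name"])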

 

Now that we have created a CDS environment and database, let's take a closer look at PowerApps in the context of the new CDS.

Overview: PowerApps in the new CDS Environment

The previous version of CDS supported only one design mode: Canvas. Canvas Apps provided a WYSIWYG way of creating apps that could connect to a variety of Microsoft assets as well as other cloud and on-premises solutions.

In the new version of CDS, users have the option to design apps in two modes: Canvas Apps and Model-Driven Apps. Model-Driven Apps exist in the Dynamics 365 instance and adhere to the security and business rules that are set in the Dynamics 365 environment. Let's take a few minutes to create a Model-Driven App.

 

Creating a Model Driven App

  • In PowerApps, switch the design mode to Model-Driven as shown below.

  • Create a new app by filling in the details, including the name and app URL that you want to use. Keep in mind that you can start the app with a solution you already have in the D365 instance or start from scratch. In this example, I will use the Fund Raiser solution that was installed as part of the sample apps as a starting point.

 

  • Select the Fundraiser Solution and Sitemap

  • The App Designer appears with additional options. If everything looks good, click Validate and then Publish.

 

 

We have now created a PowerApp using the new CDS framework that leverages D365 as the CDM.

It is also important to note that only Canvas Apps show up in the PowerApps mobile application; you need to use the web browser to access Model-Driven Apps. Let's take a look at what the app looks like on an iPad.

 

I hope you enjoyed this blog as I discussed PowerApps and Common Data Service - Apps.

 

Service Fabric Customer Profile: SiriusIQ


SiriusIQ delivers seriously intelligent AI on Azure Service Fabric

To design the next generation of enterprise workflow automation, SiriusIQ built a new development stack and defined their core architecture using Azure Service Fabric, Azure Cognitive Service Language Understanding, and other Azure services.

This article is part of a series on customers who worked closely with Microsoft on Service Fabric over the last year. We look at why they chose Service Fabric, and we take a closer look at the design of their solution.

 

Bot-driven intelligence and workflow automation

SiriusIQ focuses on next-generation artificial intelligence (AI) and bot technologies with workflow automation. Their cloud-born solutions streamline business processes, conversations, and analytics using a dynamic workflow engine built on the Service Fabric platform.

SiriusIQ solutions share a dynamic, intelligent workflow engine at their core. Many of these workflows solve common business issues including automated data migrations, GDPR (General Data Protection Regulation) solutions, AI-built FAQs, automated meeting bookings, and cross-system enterprise workflows involving third-party interfaces such as Salesforce, ServiceNow, and DocuSign.

By combining the power of SiriusIQ’s workflow engine with AI and bot technologies, they developed an innovative cloud-based migration tool that enables their customers to move just about any data from point A to point B. SiriusIQ embraces AI as the UI—their natural language interface makes it easy for customers to interact with the many SiriusIQ services. This interface uses Language Understanding (LUIS) and other services from Azure’s growing library.

Service Fabric gives us a globally scalable solution that is secure, stays current, and is easily extensible by adding new microservices. It also eased or removed many classic development challenges—OS patching, version testing, security issues, scalability. We can develop and bring new functionality to our global customers in timelines that were not possible before embracing Azure services.

—Ken Leach, SiriusIQ Partner

Microservices empower the development team

SiriusIQ wanted to avoid the traditional way of developing and solving workflow issues. That model requires distinct solutions, forcing code changes and regression testing for variances in customers’ requirements. Using this classic, monolithic development model, developers are burdened with many challenges. Anyone who has written production code in the last 30 years is familiar with massive code bases, recompilation issues, and the need to redeploy the whole stack while managing the impact on user downtime. Whole app regression testing is challenging enough without also trying to keep up with operating system patches.

This pattern is common in the industry, but SiriusIQ knew they needed to do things differently. They needed a broader architecture than they could create on their own, one that scaled globally, stayed secure, and could be dynamically optimized for performance. They simply could not use the classic development model with its operating system limits and much slower software development life cycle. Azure gave them the platform they needed for delivering a globally balanced solution, and Service Fabric provided a way to get new features into production in just minutes or hours.

By switching to Service Fabric and microservices, the developers at SiriusIQ soon discovered how many of the old development issues they could leave behind. They quickly embraced the new development patterns that enabled them to scale on demand and deploy only what’s changed without downtime. The team didn’t have to worry about operating system patches. Even security was simpler, since many of the classic application security concerns are not relevant in a Service Fabric solution. Microservices also freed developers from the overhead associated with the traditional development model. They can focus on just the microservice they are building, which makes the team more efficient.

Using Service Fabric, the SiriusIQ team could also rethink the way they developed services. They made the most of stateless services for HTTP as well as stateful services for internal processing and actors for processing millions of small transactions—all of which can be scaled independently. This shift in technology led to far more advanced integrations than the team ever imagined and gave rise to Quinn™, the SiriusIQ intelligent assistant that uses Azure Cognitive Services AI with active learning. Quinn plays many roles at SiriusIQ. It manages the overall flow of data, consumes telemetry, and learns more efficient workflows. The more Quinn is used, the smarter it gets.

A new workflow architecture

One of the team’s goals was to create a new workflow engine to speed their customers’ data migrations. SiriusIQ went beyond the classic development model, which runs processes in parallel or uses PowerShell scripts. Instead, they used Service Fabric to build microservices that disconnect reads from writes, creating dynamic workflows that adapt to the most efficient path based on their custom AI. The Azure platform provides global scalability and performance.

Figure: SiriusIQ dynamic workflow architecture uses separate reader and writer microservices for fast, intelligent data migration.

To achieve the goal of ever smarter, faster services, SiriusIQ’s new workflow architecture uses a collection of AI services and telemetry—much of which is collected with Microsoft Application Insights—to power Quinn, the SiriusIQ AI bot. The team also optimized the new architecture with a continuous deployment model based on Visual Studio Team Services. This continuous integration and deployment environment for Azure allows SiriusIQ to deploy on demand using their AI bot, which avoids the need for downtime.

A dynamic service workflow

In the SiriusIQ architecture, dynamic workflows define goals. Those goals are achieved by a collection of microservices that run in Service Fabric and perform a certain task efficiently. Using deep learning and Application Insights telemetry, the AI calculates an efficient path from the starting point to the goal via the correct set of microservices. Intelligent workflow processes consider multiple permutations to ensure the most efficient path. Many SiriusIQ services work in similar fashion to support very complicated workflows such as tracking a contract through signing with HubSpot, ServiceNow, and DocuSign.com.

For data migrations, the AI manages the optimal path to reach the goal using SiriusIQ’s growing library of microservices. One migration can easily involve millions of Service Fabric requests, instantiating the required microservices that securely read data from sources such as email, files, or messaging systems. The reader microservice works independently of the writer microservice, which writes to a destination in a similar manner. Massive parallelism ensures extremely fast workflows. The architecture can even read once and write to multiple destination systems if, for instance, a customer wants a copy to go to both OneDrive for Business and Box.com.
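To make the read-once, write-to-many idea concrete, here is a deliberately simplified, single-process sketch. It is not SiriusIQ's code: in their architecture the reader and writers are independent Service Fabric microservices that scale separately, whereas this illustration just uses threads and in-memory queues, with hypothetical destination names.

    # A simplified illustration of "read once, fan out to many writers".
    import queue
    import threading

    def reader(source_items, destinations):
        """Read each item once and fan it out to every destination queue."""
        for item in source_items:
            for dest in destinations:
                dest.put(item)
        for dest in destinations:
            dest.put(None)  # sentinel: no more items for this destination

    def writer(name, dest):
        """Drain a destination queue independently of the reader and other writers."""
        while True:
            item = dest.get()
            if item is None:
                break
            print(f"{name} wrote: {item}")

    onedrive_q, box_q = queue.Queue(), queue.Queue()
    writers = [
        threading.Thread(target=writer, args=("OneDrive", onedrive_q)),
        threading.Thread(target=writer, args=("Box", box_q)),
    ]
    for t in writers:
        t.start()
    reader(["mail-001", "mail-002", "file-003"], [onedrive_q, box_q])
    for t in writers:
        t.join()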

During a file migration, millions of instances may be running, but even if an instance encounters an exception, the overall process does not slow at all. The process with the exception attempts to use a new path based on past telemetry from instances with similar issues where a known path solved the issue.

If a path does not exist, a development “swim lane” is generated with captured telemetry from Application Insights. This detailed information includes the telemetry around the issue as well as the line of code that caused the exception. A developer can quickly find the issue and build a slightly different version of the microservice, then use a continuous deployment pipeline to add it to the possible options—without breaking any of the existing code. The excepted process can then find a new route with the new microservice. If successful, it becomes a model for other processes that may encounter a similar issue. If unsuccessful, the effectiveness rating of the new service drops, and the process is aged off the system if it does not provide any value. In this manner, the development team can dynamically optimize any workflow and take full advantage of Service Fabric and the AI.

Service Fabric allows SiriusIQ to build a real, managed microservice platform that can be scaled and provisioned easily with high availability both on premises and in the cloud.

Wallace Breza, SiriusIQ Partner

Service Fabric benefits

The team at SiriusIQ has been working with the latest technology for more than 30 years. Always on the lookout for value and more efficient and effective ways to achieve their goals, the team was impressed by what Service Fabric had to offer. It completely changed the way they developed, tested, deployed, and scaled their core technology. Service Fabric also delivered a valuable model for any development effort going forward, enabling SiriusIQ to bring more value to customers through dynamic services, improved costs and efficiencies, and a better user experience.

The Service Fabric platform provided the following key benefits:

  • Fewer dependencies: By breaking up the dynamic workflow process into smaller microservices, the SiriusIQ team removed widespread dependencies. Dependencies still exist but only for a focused microservice instead of a larger dependency matrix typical in monolithic applications. This change alone greatly increased performance and reduced testing and deployment timelines.
  • Mid-process branching at scale: Using the Service Fabric Actor pattern, SiriusIQ introduced mid-process branching, which opened up many possibilities for future products. For example, an added service is metadata tagging during content migrations. If access is granted by a customer, the contents of the object being moved can be tagged for PII, GDPR, or other custom requirements, all out of band while the payload is in transit. Tagging does not slow down the migration—it occurs without interruption to the dataflow underway.
  • Global scale: The Service Fabric microservice model gave the SiriusIQ team the global scalability and performance they needed. For example, in a simple migration where 1,000 users exchange email, the new platform easily supports more than seventy million messages on Service Fabric. SiriusIQ was also able to comply with data sovereignty laws of Australia while moving sensitive government email. They deployed a complete, dedicated solution in the Azure Australian data center using their AI bot and Azure Resource Manager in under thirty minutes.
  • Streamlined testing: SiriusIQ’s patent-pending workflow exception model allows dynamic workflow updates without code deployments. The new microservice-based process reduces the scope of the testing needed. The overhead of managing patches, scale, and deployments is no longer a burden.
  • Self-tuning systems: When a new service is introduced, SiriusIQ’s patent-pending workflow process compares its performance to others that perform a similar function and ranks them dynamically. The best options for a given scenario float to the top; likewise, the slower or non-performing options age off the system. This process is managed by SiriusIQ’s custom AI that considers rank among many other metrics when determining the most efficient path to reach goals.
  • Security: The compliance certifications of the Azure platform bring added value to SiriusIQ and their customers’ security teams. SiriusIQ performs regular service audits based on the Service Organization Control (SOC) reporting framework. Since SiriusIQ’s complete solution lives in Azure, many of the compliance checkboxes needed are provided to them by Microsoft.

Other Azure services

SiriusIQ works with a number of other Azure services, including:

  • LUIS: The AI bot can dynamically update the intent model in LUIS, growing as it is used.
  • Application Insights: One of the secrets to the success of the best practices model of developing in Service Fabric is to have the best possible telemetry. Application Insights gives the SiriusIQ team a level of visibility into everything that is happening, from one user’s file to billions of messages and migrations happening simultaneously. A small email migration of 1,000 users, for instance, can easily produce over 35 MB a day of telemetry data from Application Insights. SiriusIQ spent a fair amount of time designing how they could not just report status but use their AI and Azure deep learning to make the telemetry actionable.
  • Azure Redis Cache: The power and speed of Azure Redis Cache combined with the ability to secure the transport brings performance, security, and scalability for the countless services SiriusIQ provides to its customers.
  • Azure Service Bus: Leveraging Service Bus topics and subscriptions allows the pub/sub model to scale to support thousands of requests per second (a minimal pub/sub sketch follows this list).
  • Azure Cosmos DB: The solution uses Azure Cosmos DB for fast, reliable geo-redundant data storage across document and graph databases.
  • Microsoft Bot Framework: SiriusIQ’s bot communication channels seamlessly scale to the most popular messaging channels available. The framework offers capabilities for building their own channels as well.
  • Azure Key Vault: To keep SiriusIQ’s protected configuration data secure, they use Key Vault.
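As referenced in the Azure Service Bus item above, topics and subscriptions are what make the pub/sub model scale: each subscription receives its own copy of every message published to the topic. The sketch below uses the azure-servicebus Python SDK purely for illustration; the connection string, topic name, and subscription name are placeholders, not SiriusIQ's actual resources.

    # A minimal pub/sub sketch with azure-servicebus; connection string, topic,
    # and subscription names are placeholders.
    from azure.servicebus import ServiceBusClient, ServiceBusMessage

    CONN_STR = "<service-bus-connection-string>"
    TOPIC = "workflow-events"          # hypothetical topic name
    SUBSCRIPTION = "writer-service"    # hypothetical subscription name

    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        # Publish: any microservice can send an event to the topic.
        with client.get_topic_sender(topic_name=TOPIC) as sender:
            sender.send_messages(ServiceBusMessage("migration-item-42 ready"))

        # Subscribe: each subscription gets its own copy of the message stream.
        with client.get_subscription_receiver(
            topic_name=TOPIC, subscription_name=SUBSCRIPTION, max_wait_time=5
        ) as receiver:
            for msg in receiver:
                print(str(msg))
                receiver.complete_message(msg)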

Deciding to build new tech from scratch using Service Fabric at the core of our solution could not have been a better fit for us. We now start projects within a whole ecosystem of technology from within Azure that honestly gets us more than 80 percent of the way to delivery. The more microservices we build and accumulate, the more capabilities we are able to offer to our customers. There was no other alternative on the market that came close to Service Fabric.

—Mark Golden, SiriusIQ Partner

Summary

As the cloud landscape continues to evolve and customers ask for more services, the Service Fabric platform helps the SiriusIQ team adapt to change quickly. Although the team’s current focus is data migration, GDPR, and healthcare use cases, the company’s core dynamic workflow is very flexible and can be used to solve many business issues. By simply adding new microservices, SiriusIQ can take advantage of new functions in Service Fabric, release new services to its customers, and set the pace of the industry. The natural language capabilities enable users to continue to interact with the process while it runs.

SiriusIQ was recently approved as a member of the Co-Sell program of the Microsoft One Commercial Partner ISV program. This exclusive program highlights unique partner solutions that address critical enterprise IT pain points by making the most of Microsoft technologies. Learn more at the SiriusIQ website.

The SiriusIQ offering is intelligent, powerful, and flexible. And the leadership team has a stellar track record of successfully bringing next-gen tech to the enterprise.

Frank J. Casale, IRPAA Founder and Chairman


Deploying containerised ASP.NET Core Applications to Azure


The March 2018 issue of MSDN Magazine (msdn.com/magazine/mt845653) covered a scenario in which vehicle insurance policies generated by an agency could be stored securely in Azure Key Vault as secrets, and showed how additional features like asymmetric encryption with keys and built-in versioning could be used to maintain an audit trail of access to the policies. The applications that used Azure Key Vault were deployed to Azure App Service as web apps. In the msdn.com/magazine/mt846465 article, the same applications are containerized and deployed to Azure Container Service for Kubernetes (AKS).
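For readers who want a feel for the Key Vault usage the articles build on, the sketch below stores and reads back a policy document as a secret using the azure-keyvault-secrets and azure-identity Python packages (shown here only as an illustration; the original articles use ASP.NET Core). The vault URL and secret name are placeholders.

    # A minimal sketch: store and retrieve a policy as a Key Vault secret.
    # VAULT_URL and the secret name are placeholders, not the article's values.
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    VAULT_URL = "https://<your-key-vault-name>.vault.azure.net"

    client = SecretClient(vault_url=VAULT_URL, credential=DefaultAzureCredential())

    # Each set_secret call creates a new version of the secret, which is what
    # provides the built-in versioning and audit trail mentioned above.
    client.set_secret("policy-VEH-1001", "<serialized policy JSON>")

    policy = client.get_secret("policy-VEH-1001")
    print(policy.value)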

Microsoft partners play critical role in transforming national computing fabric with Microsoft’s Azure Australia Central


Today is a very exciting day for all of us here at Microsoft as we officially launch the Azure Australia Central Regions in Canberra. This launch will make Microsoft Australia the undisputed leader of digital transformation, by serving the needs of government, critical infrastructure, and their suppliers.

 

Microsoft partners play a critical role in transforming the national computing fabric, and I’m delighted to announce that 47 partners, both local and international, are leveraging the unique characteristics of the platform. Read my full blog post here.

 

This milestone opens the opportunity for partners to unlock new markets and drive innovation with government, critical infrastructure and their suppliers. While the Australia Central Regions are restricted to partners who service government and critical infrastructure, you can check your eligibility by applying to be whitelisted.

 

Together with our partners, we can make a real difference by bringing rapid innovation to the computing fabric of our nation.

Issue Mitigated: Admin portal failing to load properly in northern Europe


Earlier today (4/2/18), between approximately 08:00 UTC and 18:15 UTC, some customers using portal.azure.com in northern Europe may have experienced latency issues while loading the Azure AD B2C administrative experience. This issue, which was caused by long timeouts when retrieving tenant configuration data, has been mitigated. Any customers who may have previously experienced this issue should try again. 

Data integration capabilities are now available for Dynamics 365 Finance and Operations on-premises deployments


Today, we are happy to announce the availability of data integration capabilities for Dynamics 365 Finance and Operations on-premises customers. To use the data integration capabilities in an on-premises environment, you need to be on the latest 7.2 and Platform Update 12 build (version 7.0.4709.41184). For information on how to use this feature for on-premises environments, see the topic, Data Management Package API.  The latest 7.2 and Platform Update 12 build has now been made available through Lifecycle Services. In addition to the data integration capabilities, this new build also has other key features, including:

  • Enabled ISV licenses to be applied to an on-premises environment. 
  • Admin toggle button to turn off features dependent on internet connectivity. 

 

Going forward, all new deployments that are completed through LCS will receive the latest 7.2 and Platform Update 12 build (version 7.0.4709.41184). Existing customers that want to use the data integration feature will need to delete and redeploy their environments. If you want to use the data integration capabilities but do not want to delete and redeploy existing on-premises environments, we will be adding support for you in the next few weeks. We will also be making 7.3 and Platform Update 12 available for on-premises customers around the same time.

Release Notes for Field Service and Project Service Automation Update Release 5


Applies to: Field Service for Dynamics 365, Project Service Automation for Dynamics 365, and Universal Resource Scheduling (URS) solution on Dynamics 365 9.0.x

We’re pleased to announce the latest update to the Field Service and Project Service Automation applications for Dynamics 365. This release includes improvements to quality, performance, and usability, and is based on your feedback and requests.

This release is compatible with Dynamics 365 9.0.x. To update to this release, visit the Admin Center for Dynamics 365 online and go to the Solutions page to install the update. For details, refer to How to Install, Update a Preferred Solution.

Field Service enhancements (v7.4.1.31)

Improvements 

  • Performance improvement on create, update of account record
  • Added new validation string for validateSystemStatus in PurchaseOrder.Library.js
  • GDPR compliance

 

Bugs

  • Fixed: [Field Service solution] Woodford Solution and Woodford Project template links are not opening the right location
  • Fixed: form id added in SalesDocumentCustomFormIds.js should NOT be case-sensitive
  • Fixed: SalesDocumentFormLoader.Library.js Error on new custom Sales forms.

 

Project Service Automation enhancements (v2.4.1.46)

Improvements 

  • GDPR compliance

 

Bugs

 

  • Fixed: Assignments not updating when substituting resources
  • Fixed: Booking plugin causing performance degradation when creating a project team member in PSA
  • Fixed: Creating Team member with named resource throws script error when selecting a resource with a different view
  • Fixed: Dragging and dropping a requirement onto the schedule board and creating bookings did not update the assignment
  • Fixed: Move project fails with real (non-generic) resources assigned to the project
  • Fixed: Proposal Schedule Board shows nothing when switching to hourly view
  • Fixed: Corrupted Calendar Rules from Setting Calendar
  • Fixed: Deadlock / timeouts when trying to move a task with assignments - partial data corruption
  • Fixed: Team member has difference between assigned hours and required hours after substituting generic resource through schedule board
  • Fixed: Requirement detail create/update/delete should be suppressed in bulk project scenarios: MS Project publish
  • Fixed: Project Estimates do not display information of categories for line tasks
  • Fixed: Requirement detail create/update/delete should be suppressed in bulk project scenarios: project copy and create project from template
  • Fixed: Effort Hours less than expected for Calendar with Breaks
  • Fixed: Assigning work on non-working day in MS Project due to calendar differences throws argument not valid exception
  • Fixed: Rounding errors when publishing a task with contours with indefinite fractional values
  • Fixed: "Amount" in Expense entry can't exceed 3 digits figures when language is set to Finnish
  • Fixed: Time Entry Paste (Ctrl-C + Ctrl-V) msdyn_date time stamp is not set to 12:00pm UTC in classic calendar experience.

Universal Resource Scheduling Enhancements

NOTE: Improvements and bug fixes for Universal Resource Scheduling apply to Field Service and Project Service Automation, as well as to other schedulable entities in the Sales or Service applications.

Improvements 

  • Book based on estimated time of arrival
  • View more at once on the schedule board
  • Display up to 14 days on hourly schedule board
  • Display more resources on schedule board
  • Change booking statuses from multiday schedule boards
  • Leverage multi-select option set fields in extensibility
  • Change default value for ignoring proposed bookings
  • Use booking panel when dragging and dropping on multiday schedule boards
  • Selected resource renders in driving directions
  • Right click instead of hover to view resource card
  • Display day of the week on hourly vertical schedule board
  • Keep context when initiating substitution while searching for availability

 

 

Bugs 

 

  • Fixed: Object Instance error message which appeared when searching for availability if a resource location was set to location agnostic, yet they had onsite bookings in the search range.
  • Fixed: Minor localization issues
  • Plugin that updates the fulfilled, proposed, and remaining duration fields on Resource Requirements is now only executed when relevant fields change.
  • Fixed: When opening specify pattern window, requirements and requirement detail dates now match by default.
  • Fixed: Proposed Bookings appear on multiday schedule boards even if the only bookings a resource has are "proposed".
  • Fixed: Various alignment issues between demand panel and Schedule Board when searching for availability.
  • Fixed: Issues with Resource Substitution
  • Fixed: When searching for availability, if there are no resources that match, the board will not render results.
  • Fixed: Error and warning messages now clear when switching between different Schedule Board views.
  • Fixed: In specific circumstances, certain resources with availability were being removed from the available resource list when changing Schedule Board dates or when refreshing Schedule Board while searching for availability.
  • Fixed: While searching for availability for onsite requirement, resources that were booked on the same requirement or at the same location were sometimes not being returned as available.
  • Fixed: Using the "pop out" Schedule Board, when the default availability view is set to "grid", when searching for a requirement that is under 24 hours, changing from the grid view to another view did not render the demand panel.
  • Fixed: Latitude and Longitude are now copied from the Resource Requirement to the Booking when a booking is created.
  • Fixed: Schedule Board loading issues in IE11
  • Fixed: Issues when using the setting to dim unavailable resources when using Schedule Assistant instead of removing the resources.

For more information:

 

Feifei Qiu

Program Manager

Dynamics 365, Field & Project Service Team
