Channel: MSDN Blogs

SQL Server Linux: Directory fsync Activities


When creating, renaming, or deleting (removing/unlinking) a file, Linux requires the immediate parent directory to be synchronized.  As documented in the fsync man page, changes to a directory's entries require the directory itself to be synchronized:

“Calling fsync does not ensure that the entry in the directory containing the file has also reached disk. For that an explicit fsync on a file descriptor for the directory is also needed.”

The SQL Server Host Extension provides file-level integrity.  When a move (rename), create, or delete (remove/unlink) occurs, the Host Extension issues an fsync on the parent directory, as required by Linux.

Note: Activities such as changes to the file size do not require synchronization of the immediate parent directory.
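The requirement can be illustrated in user code. Below is a minimal Python sketch (not SQL Server's actual implementation) of a durable rename that also flushes the parent directory entry:

```python
import os

def durable_rename(src, dst):
    """Rename src to dst, then fsync the destination's parent directory
    so the new directory entry itself reaches stable storage."""
    os.rename(src, dst)
    # fsync on the file alone is not enough: the rename changed the
    # parent directory's metadata, so the directory must be flushed too.
    parent = os.path.dirname(os.path.abspath(dst))
    dir_fd = os.open(parent, os.O_RDONLY)
    try:
        os.fsync(dir_fd)
    finally:
        os.close(dir_fd)
```

The same pattern applies after creating or unlinking a file: open the parent directory and fsync its file descriptor.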

Bob Dorr - Principal Software Engineer SQL Server



It rather involved being on the other side of this airtight hatchway: Hanging the loader



A security vulnerability report pointed out that a malicious file can cause the module loader to enter an infinite loop, thereby causing a denial of service on the process doing the loading.



This was by itself not interesting. After all, if you have managed to get the system to attempt to load your DLL, and you want to use it to cause a denial of service, then you don't need to get this fancy. You can just put Sleep(INFINITE); in your DLL_PROCESS_ATTACH handler!



In other words, you're already on the other side of the airtight hatchway. And you're bragging that you can do something annoying like a denial of service, apparently unaware that being on the other side of the airtight hatchway gives you the ability to do far more interesting (and threatening) things.

WinDbg Preview 1.0.1812.12001 and new extensibility interfaces


Hi everyone and happy holidays!

We've got a more extension-focused release this time around, with a new C++ header, a new data model extension focused on enabling easier JavaScript extensions, and a bunch of new samples!

Feel free to leave any questions or comments below or reach out to me on Twitter @aluhrs13. If you have feedback on our samples feel free to open a GitHub issue, or a PR if you want to contribute a fix or change.

Debugger data model C++ header

Last month we released a C++ header, DbgModel.h, as part of the Windows SDK for extending the debugger data model via C++. You can find more information in our official docs - https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/data-model-cpp-overview

The main page includes a great overview of how the debugger data model works and how extensions built on the new C++ header, JavaScript, NatVis, and the 'dx' command all interoperate, and is worth a read for anyone interested in the internals of the debugger.

New Data Model API Extension

This release includes a new extension that adds more "API-style" features to the debugger data model that can be accessed through the 'dx' command, JavaScript, and the new DbgModel.h header. This extension extends the data model to include knowledge about assembly and code execution through the Debugger.Utility.Code namespace, and about the local file system through the Debugger.Utility.FileSystem namespace.

You can find the full docs for these new namespaces and the objects associated with them at:

Code

File System

The main goal of this extension is to improve the API surface available to JavaScript extensions, but all the functionality can be used via 'dx' and LINQ queries, for example:

0:000> dx Debugger.Utility.FileSystem.FileExists("C:\Users\aluhrs\Desktop\HelloWorld.txt")
Debugger.Utility.FileSystem.FileExists("C:\Users\aluhrs\Desktop\HelloWorld.txt") : false

Or

0:000> dx Debugger.Utility.FileSystem.TempDirectory.Files.Count(),d
Debugger.Utility.FileSystem.TempDirectory.Files.Count(),d : 110

This can be useful when writing scripts to quickly validate what a method might return in a REPL-like fashion.

Known issues

There are a couple known issues in this release that will be fixed in our next release:

  • Many of the iterable objects returned from properties on the disassembler can only safely be iterated once. It is perfectly fine to, in JavaScript, do: `var operandCount = instr.Operands.Count(); for (var operand of instr.Operands) { … }`. The following will have undefined results: `var operands = instr.Operands; var operandCount = operands.Count(); for (var operand of operands) { … }`.
  • There can be an incorrect flow graph produced from DisassembleFunction for some functions (overlapping basic blocks with incorrect flow links).

Synthetic types extension

With this new API extension, we have a new sample up on our GitHub repo here - https://github.com/Microsoft/WinDbg-Samples/tree/master/SyntheticTypes.

This JavaScript extension reads basic C header files and defines synthetic type information for the structures and unions defined in the header. Through the dx command, memory can then be viewed structured as if you had a PDB with type information for those types.

Other changes and bug fixes

    • WinDbg Preview will now more intelligently handle bringing source windows or the disassembly window to the foreground when stepping.
    • Re-arranged WinDbgNext's window title to have more important information at the start when kernel debugging.
    • The alternating background contrast in the command window should be slightly more noticeable.

Visual Studio Toolbox: Building Bots Part 2


A bot is software that interacts with humans to do things like chat, make recommendations, book travel, and more. This is the second of a two-part series where Sam Basu shows us how to build bots. In this episode, Sam gives the bots he built in Part 1 a modern UI using Telerik Conversational UI controls.

Cheers to New CodePush Features



 
 
As we close out 2018, our team is still busy delivering useful new additions to Visual Studio App Center for you. Today, I’ll be highlighting the addition of three all-new CodePush features: Install Metrics and Update Metadata in the App Center CLI, Deployment Management in the App Center Portal, and our re-try mechanism for CodePush rollbacks.

All three of these features were heavily requested by you in the community, bringing a better end-to-end experience to React Native developers and ultimately making distributing releases via CodePush a more polished process for you.

Here’s more information on how you can improve your release flow with the newest CodePush features and how to get started.

Install Metrics and Update Metadata in the App Center CLI

We’ve brought install metrics and update metadata to the App Center CLI, completing parity between the CodePush CLI and the App Center CLI. This feature enables you to see various statistics associated with your releases, such as the app version, whether a release is mandatory, the release time, and who distributed a particular release. In addition, you can see how many users are actively running a specific release. To use this feature, simply run either the `appcenter codepush deployment list` or the `appcenter codepush deployment history` command in the App Center CLI.

Deployment Management in the App Center Portal

You can now completely manage your deployments in the App Center Portal without needing to run deployment commands in the CLI. By navigating to the CodePush page within the Distribution service, you can add, rename, and delete deployments with the click of a button without ever leaving the App Center portal.

Re-Try Mechanism for CodePush Rollbacks

The re-try mechanism for CodePush rollbacks was one of our biggest feature requests. With this feature, it’s now possible to customize the sync flow so that previously unsuccessful updates are re-attempted, giving users who are stuck on older versions because of a rollback a chance to run the latest release. You configure this through `syncOptions` and then apply them with either `App = CodePush(syncOptions)(App)` or `CodePush.sync(syncOptions)`. For example, you can specify that in the case of a rollback, the app will re-attempt to download the release one hour after the rollback, at most one more time. These numbers can be adjusted to best fit your flow. This feature minimizes the time users spend on old versions and ensures your newly distributed code gets into the hands of your users.

To get started with these features, simply log in to App Center and/or download the App Center CLI. We hope you’re as excited about these features as we are. We appreciate your passion and great feedback this past year and look forward to another exciting year ahead with Code Push and App Center.

The Power BI World Tour is back in 2019 as the Power Platform World Tour!



It’s all the Power BI content you’ve come to expect from Microsoft team members, MVPs, Champions, and subject matter experts, but this year’s Power Platform World Tour will also include PowerApps and Microsoft Flow content.

Power Platform World Tour will bring together thousands of users in cities all around the world to learn and network so they can work smarter, streamline processes, and advance their skill sets. It’s a two-day learning opportunity that will teach you how to uncover insights in data, create applications in a no-code environment, and automate processes.

Why You Should Attend

Whether you are local to the area or traveling from near or far, the Power Platform World Tour will provide exclusive educational opportunities and a chance to network with familiar subject matter experts in the Power Platform ecosystem.

For those local to the area, the Power Platform World Tour will connect you with your local Power BI, PowerApps, or Microsoft Flow user group. You’ll have the opportunity to meet other users near you as you become part of a local community of Power Platform users. Connecting with those who are far away broadens your network and makes you part of a larger global community of product users. It’s a win-win.

There are multiple session tracks that provide content for new users, end users, and technical users. Twenty-four sessions across two days provide a valuable experience whether you’re new to the Power Platform or have already mastered some of the concepts. All content presented is available to attendees in the local user group library, so you have materials to reference after the World Tour.

Expand your world of experience, grow your network of connections, and get ready to become the Power Platform expert you were meant to be.

Visit powerplatformworldtour.com to view the first three cities announced and put your vote in for where these regional user group events should go next!

 

 

 

Start Free with Azure Stack Development Kit


Azure Stack is a hardware-based appliance, which means you need to buy the hardware to get hands-on with it. Hardware acquisition takes time, so you need a playground to test whether things work as you expect. Find some spare hardware and install the Azure Stack Development Kit (ASDK) to start playing from day one.

Download and extract the Azure Stack Development Kit (ASDK)

https://docs.microsoft.com/en-in/azure/azure-stack/asdk/asdk-download

Prepare the ASDK host computer

https://docs.microsoft.com/en-in/azure/azure-stack/asdk/asdk-prepare-host

Post ASDK installation configuration tasks

https://docs.microsoft.com/en-in/azure/azure-stack/asdk/asdk-post-deploy

Azure Stack Pluralsight Training

https://www.pluralsight.com/courses/microsoft-azure-stack-big-picture?twoid=26658a98-62ff-48e5-aa5f-2741523aea54


Free Training for Azure Certifications AZ-*

Donkey or sheep? Training a Microsoft Custom Vision AI model


Cognitive Services in Azure come with ready-made vision models that can tell whether a picture shows a sheep or a donkey (which may come in handy next week for machine evaluation of a live nativity scene). But what about plush ones? Those, of course, it won't recognize. I know almost nothing about machine learning, but the Custom Vision service looks so simple that even I could manage it, and so can you. Let's train a model that marks where in a picture the donkey is and where the sheep is.

Let's go to https://www.customvision.ai/

After signing in you'll have a trial, but I have my own Azure subscription and want to pay the literal few crowns for the full version. I could of course create the account in Azure, but it can also be done directly from this page.

The corresponding resources were created for us in Azure.

So let's create a new project.

We could do image classification, but I'd rather do object detection. I want the computer not only to recognize the toys, but also to tell me where in the picture they are.

First we need to train, so I photographed both plush toys from different angles, with different backgrounds, in different poses and lighting conditions. Just as a trial run I have about 25 photos, but a more accurate model would want more. I upload them to Custom Vision.

Now I need to tell the robot where each object is, i.e. which is the sheep and which is the donkey.

We do this for all the images, and then we train the model.

Done.

Now we can access the API directly and integrate donkey-and-sheep detection into our own application.
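As a sketch of what that integration could look like, here is a small Python client for the Custom Vision object-detection prediction REST endpoint. The endpoint, project ID, iteration name, and prediction key below are placeholders; substitute the values shown on your project's Prediction API page:

```python
# Sketch: calling the Custom Vision object-detection prediction API over REST.
# All identifiers (endpoint, project ID, iteration name, key) are placeholders.
import json
import urllib.request

def build_detect_url(endpoint, project_id, iteration_name):
    """Builds the prediction URL for a published object-detection iteration."""
    return (f"{endpoint}/customvision/v3.0/Prediction/{project_id}"
            f"/detect/iterations/{iteration_name}/image")

def detect(image_bytes, endpoint, project_id, iteration_name, prediction_key):
    """POSTs raw image bytes and returns the parsed JSON prediction result."""
    req = urllib.request.Request(
        build_detect_url(endpoint, project_id, iteration_name),
        data=image_bytes,
        headers={
            "Prediction-Key": prediction_key,
            "Content-Type": "application/octet-stream",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The JSON response contains a `predictions` array; for object detection each entry carries a tag name, a probability, and a bounding box you can draw over the image.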

But let's try straight from the GUI to see how it works. I have two photos the robot has never seen, so let's go.

It works beautifully!

And now something harder: a background with distracting images, the plush toys partly hidden behind each other… more on the author's blog at https://tomaskubica.cz

(The article was written by Tomáš Kubica, Microsoft Azure TSP.)

Customizing the warehouse mobile app: multi-scan pages


Introduction

This is another blog post in the series about warehouse mobile devices in Dynamics 365 for Finance and Operations. The last blog post discussed the difference between customizing for WMDP and for the warehouse mobile app. This post walks you through a recently released control scheme and explains how it unlocks new potential for partial offline processing in the warehouse mobile app. This new functionality is called “multi-scan”, and it enables a user to perform a series of offline scanning operations and then return them all to the server in one round trip. The goal of this control scheme is to allow very quick scanning operations (many scans per second, for example) in high-volume warehouses where the standard model of a server round trip after each scan will not scale, especially in sequential operations where the user does not need to look at and verify the device after each scan, but rather just needs to register all scans in one go.

Multi-scan Functionality

If you download the latest version of the warehouse mobile app, you will have some new capabilities in the demo-mode which shows how this new control pattern works.  Once you have enabled the demo mode you should see the following menu:

Cycle counting is the flow we have enabled in the demo with multi-scanning to demonstrate the new capabilities.  It is designed to simulate a user performing a spot cycle count at a location in a warehouse or retail store where there are many items to scan. Currently this is only available in the demo mode of the app; there is no support for this functionality when connected to a Dynamics 365 for Finance and Operations environment.

The first screen that is displayed is a location scanning screen – you can enter (or scan) anything in the demo mode here to move to the next screen.

Once you have scanned the location, the app enters the multi-scanning mode.  This is the new control that is being introduced in this release, so let’s go through the different UI elements that have been introduced to support this new flow.

This is the initial screen – you can tell it is the multi-scanning interface because of the new list icon in the bottom left corner; clicking the list icon will show you the list of items you have scanned so far. The checkbox icon in the bottom right is used to report to the app that you are done scanning and it is the only time the processing returns to Dynamics 365 – everything else will take place within the app locally on the device.

Once a worker starts to scan barcodes (or enter data manually into the app) the UI will change slightly.  Every item scanned will be added to an internal buffer and the number of items scanned will be displayed in the main UI.  For example – after a few scans the UI will now display the scanned count of three:

At any time, the user can click the list icon in the lower left, which will then display the list of items that have been scanned (in this example perhaps product barcodes in the location).  The UI for this looks like the following:

This lists the barcodes that have been scanned as well as a count of the times they have been scanned by the user.  This is very useful in the counting scenario, as a user can simply scan each product’s barcode to generate a count of items at that location.

You might note that there are two disabled buttons at the bottom of the screen.  These become active when a row is selected by the user in the list – as you can see below:

The edit icon on the left allows you to manually change the number of scans for the selected row. The icon on the right with the “X” deletes the selected row in case something was scanned accidentally.  

The edit icon will open a new screen with the numeric stepper UI allowing the user to quickly increment or decrement the number of scans or click on the value to open the numeric keyboard:

When clicking on the value, the numeric keyboard will open. As the number of scans cannot be negative and must be a whole number, buttons that aren’t relevant for this use case have been disabled:

Returning to the main screen (by clicking the back button in the upper left corner), we are ready to submit the scanned list of items and their counts to the server. We do this by clicking the checkbox button in the bottom right – this is when we finally make the round trip to the server and communicate with Dynamics 365. Later in the blog post, the API will be explained, including how to consume the scanned items and their quantities in X++ code.

In the demo flow, the next screen that is displayed is the following list of items which are not present in the location:

This is the second control pattern introduced as part of this release – it allows a workflow to display a list of items (for example barcodes or product UPCs) and then lets the user “scan to remove” items from the list.  In this demo example we are displaying the items that were found in the cycle count but are not currently registered as on-hand for this location; the intention is that the warehouse worker would double-check this list and scan any items that were indeed found, as an extra validation check.  Scanning “T0001” in the above screen would then remove it from the list – and remember that this is all done client-side at this point. It is also possible to click on any value in the list to remove it.  Then, when the user clicks the checkbox/submit button, the new list of items is submitted to the server for processing through an X++ workflow.

Custom Workflow

Hopefully that walkthrough gives you some idea of the capabilities we have added with these two new client-side control screens.  It is important to know that we have not yet added any multi-scan capabilities to the core product – the above cycle counting workflow is just a demo inside the app.  The goal of introducing these new control screens is to enable partners and customers to build new workflow-based solutions in the mobile app that support client-side driven scanning operations.  So let’s walk through a simple customization example to show how the new control screens can be utilized in a real workflow.

Page Patterns

The way to enable the multi-scan screen is through a Page Pattern.  This might not be something you are aware of in the mobile app, as most of the time this is handled for you by the standard framework.  The Page Pattern is what tells the mobile app what type of UI to display on the device itself.  If you look at the WHSMobileAppPagePattern Enum you can see the different options available:

  • Default
    • This is the page pattern used for 90% of the screens in the app. It displays a primary scanning UI and a set of controls in the secondary tab – of which a few can be promoted to the first screen.  An enter and cancel button and an optional set of additional buttons in the menu are supported.
  • Custom
    • This Page Pattern is not used in many places in the core mobile flows – it is designed to allow partners to convert their old WMDP pages to the new model. Using this pattern will render the controls as was done in WMDP – each control simply stacked vertically in a single screen.
  • Login
    • This is used for the initial login page.
  • Menu
    • The Menu screens are rendered with this Page Pattern.
  • Inquiry
    • This Page Pattern supports the workflows that allow the user to search for something and then see the results – such as the LP or Item lookup screens.
  • InquiryWithNavigation
    • This is the Page Pattern that supports the Worklist view in the app. It is similar to the Inquiry pattern, except that it includes some sorting options and the tiles are navigable.
  • MultiScan
    • This is the new pattern that has been added which will display the multi-scan UI shown in the demo above.
  • MultiScanResult
    • Note that as of the 8.1.1 release this value is missing and will be added in an upcoming release. If you want to enable a workflow to use the second screen described above – the “result list” of items – you would need to add a new enum value and return MultiScanResult.

The actual job of returning the Page Pattern to the app is done through a class that derives from WHSMobileAppServiceXMLDecorator. This abstract class has a “requestedPattern” method that can be overridden to return the specific Page Pattern that is necessary.  This is typically done through a workflow-specific factory class that understands the workflow’s steps and can therefore return the correct XMLDecorator class depending on the stage in the state machine.

For example – here is the standard factory class for the Work List functionality.  You can see that it typically will return the WHSMobileAppServiceXMLDecoratorWorkList object – which will render the work list Page Pattern as you would expect, however if the user has switched to the edit filter view then we need to display a different Page Pattern – thus the factory has the context to make this switch.

Multi-Scan API

Now that we know how to enable the multi-scan UI through a Page Pattern, we need to understand the basic API for passing the scanned items back and forth.  Once the MultiScan Page Pattern is requested, the first input control registered on the page will be used for the multi-scan input.  Remember that most of the UI interaction is done client-side – so the only thing the server-side X++ code needs to do is define this control and the data it contains.

When the user clicks that “submit” check box and sends the multi-scan data back to the X++ code, this is formatted in a very specific way.  The actual parsing of the data is done using the same interaction patterns as before – it will be stored in the result pass object for the specific control defined as the primary input of this page.  But the data will be passed in this format:

                         <scanned value>, <number of scans>|<scanned value>, <number of scans>|…

Thus, in my demo example above the data that the server would receive would be the following:

                         BC-001,2|BC-002,1|BC-003,1

In the X++ code you would then be responsible for parsing this string and storing the data in the necessary constructs.  We will see a simple example in a moment of how to parse this data.
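The downloadable sample does this parsing in X++; purely as a language-neutral illustration of the format (a hypothetical helper, not the shipped code), the same logic can be sketched in Python:

```python
def parse_multi_scan(payload):
    """Parses '<scanned value>,<number of scans>|...' into (value, count) pairs."""
    results = []
    for entry in payload.split("|"):
        if not entry:
            continue  # tolerate a trailing or empty segment
        # Split on the LAST comma so scanned values containing commas survive.
        value, count = entry.rsplit(",", 1)
        results.append((value, int(count)))
    return results

# parse_multi_scan("BC-001,2|BC-002,1|BC-003,1")
# -> [("BC-001", 2), ("BC-002", 1), ("BC-003", 1)]
```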

Asset Scanning

The workflow I will demonstrate is very similar to some of the WHS workflow demos we have described in previous blog posts.  In this flow we will be scanning a container and then capturing all the “assets” that are stored in that container.  Imagine that these assets are very numerous and thus are stored outside of the standard batch tracking mechanisms in AX – they are implemented in the sample as a simple asset table associated with a container.  The assumption here is that scanning assets needs to be extremely quick and thus the offline “multi-scan” mode is the perfect solution.

The state machine for this workflow is similar to our previous examples – we will have an initial scan container screen, which will transition into the multi-scan enabled “Scan Assets” state – and finally when the list is returned we process the assets and return to the initial state. 

You can see this reflected in the core displayForm method below.  We will not be covering some of the lower-level details of the code – please review the earlier blog posts for details on the enums and control classes necessary to facilitate the new workflow.  All the code necessary for the solution can be downloaded at the end of the post if you want to dig into the details.

The getContainerStep is identical to our previous examples – it shows a simple UI and grabs the container ID from the user.  The getAssetStep method validates this container ID and calls the buildGetAssets method – which is where the UI for the multi-scan screen is built.  This is copied below:

As you can see, this does not look much different than the standard WHS code we have written previously.  The first input control (in this case the Asset ID field) will be used as the multi-scan field, but this code does not need to be modified in any way to support the multi-scan Page Pattern.  Instead, what we need to do is ensure the correct Page Pattern is returned to the app during the correct workflow step.  To make this happen I have added a new DecoratorFactory class which will return a WHSMobileAppServiceXMLDecoratorMultiScan object at the appropriate step for my workflow – which in turn is what renders the Page Pattern to the app.

Please note the attribute at the top of this class – it is the same WHSWorkExecuteMode mapping attribute used for the WHSWorkExecuteDisplayAssetScan class in the code sample above.  This is how the framework knows that this specific decorator factory class is used for this work execute mode – the enum-based attribute ties these classes together through the sysExtension framework.  The key point here is that if you need a custom decorator factory to define when exactly to switch to multi-scan mode, the above example is how you will enable this.

In the final workflow step we need to process the incoming multi-scan results. As mentioned before these are returned to the server in the same way as normal data – it will simply look like a specially formatted string in the value of the input control.  Recall the discussion above with the format of the string being <scanned value>, <number of scans>|… In my simple example below, I am parsing this string using X++ and saving the assets to a new table associated with the Container.  In this case I am not making use of the second piece of information in the collection – the number of scans was not necessary in this case.

Hopefully it is clear how we loop through all the scanned assets and save each one to the new table.  After this is complete, we reset the workflow and move back to the first stage in the state machine.

Example Workflow

Now that you have seen the code to enable this in a custom workflow, let’s walk through this demo.  You can download the complete code for this project in the link at the bottom of this post – you just need to get it up and running on a dev environment and configure the necessary menu items to enable the workflow for your system.

The initial screen shows the Container ID scanning field.  Note that in the sample project I have included the necessary class to default this to scanning mode – however you will need to set these up in Dynamics as defined here.

Scanning a container ID (CONT-000000001 works if you are using USMF in the Contoso demo data) will navigate you to the next screen and enable the multi-scan Page Pattern.

Here you can enter any number of assets and the app will store them into the local buffer.  As we described above you can view the scanned assets by clicking the icon in the lower left.  After a few scans we would see the UI updated:

Clicking the list icon would show us the scans we have performed offline:

Finally, clicking the “submit” button on the main screen will push the items to the server, where they will be saved to our custom table, and the UI will display the success message.

Conclusion

Hopefully this helps you understand the new control scheme that was added and how it can enable fast scanning operations.  The code used for this demo is available to download here – please note that this code is for demonstration purposes only and should not be used in a production system without extensive testing.

Internet Explorer prompts for password after selecting Remember my credentials option


When you are working in the Unified Service Desk client application, Internet Explorer often prompts for a password even though you have selected the Remember my credentials option in the Windows Security dialog.

The issue lies with Internet Explorer, not with Unified Service Desk.

How do I verify that the issue is with Internet Explorer?

  1. Copy the Dynamics URL you are trying to open.
  2. Open the Internet Explorer web browser, paste the URL in the address bar, and press Enter.
  3. When the Windows Security dialog prompts for credentials, select the Remember my credentials option.
  4. Close the web browser.
  5. Launch the web browser again and navigate to the same URL.

If the Windows Security dialog prompts for the password again, then the issue is with Internet Explorer and not with Unified Service Desk.

Solution:

To work around the issue, you must add the Dynamics URL to the Trusted sites zone in the Internet Explorer options.

  1. Select the Windows Start button and type Internet Options.
  2. Select the Security tab.
  3. Select the Trusted sites zone.
  4. Select the Sites button.
  5. Specify the URL in the Add this website to the zone field and select Add. After adding, select Close.

Now, when you log in to Unified Service Desk, the Windows Security dialog does not prompt for a password.

(This blog has been authored by Deepa Patel and Karthik Balasubramanian with input from the USD feature team.)

Q# Advent Calendar 2018


To celebrate the festive season, our friends over in the Q# team are running a Q# Advent Calendar with a new post every day from 1st-24th December! Check it out to learn more about everything from quantum simulation to solving the Travelling Santa Problem to why amplitudes are complex!

Frances and I have also contributed posts for this festive event.

For other posts relating to Q# and quantum computing, check out the rest of the Quantum Adventures blog for intro-level posts and links to lots of useful content 😊

We hope you find them useful and wish you happy holidays!

- Anita and Frances

Build Visual Studio extensions using Visual Studio extensions


What if the community of extension authors banded together to add powerful features to Visual Studio that made it easier to create extensions? What if those features could be delivered in individually released extensions, but joined by a single installation experience that allows the user to choose which of the features to install? That’s the idea behind Extensibility Essentials – an extension pack that ships community-recommended extensions for extension authors.

Extension authors are usually interested in improving their own tooling in Visual Studio – either by installing extensions created by others or by building some themselves. By banding together, we can create the best and most comprehensive tooling experience for extension authoring. So, let’s test that theory by creating a range of extensions published to the Marketplace under our own accounts, and referencing them in Extensibility Essentials to provide a unified and simple installation experience.

The individual extensions can and probably should be single purpose in nature. This prevents feature-creep where additional features are added that may or may not be useful for extension authors. If additional features are not closely related to the extension, then simply create a new extension for them. That way it is up to the individual extension author to decide if they wish to install it. It is also crucial that the extensions follow all the best practices.

Once the individual extension is stable, it can be added to Extensibility Essentials.

The extension pack Extensibility Essentials doesn’t do anything by itself. It is a barebone extension pack that just references the individual extensions. When installing the extension pack, the user can choose which of the referenced extensions to install. At the time of this writing, there are 9 individual extensions.

How the individual extensions are listed before installing

Ideas for new extensions can be centralized to the GitHub issue tracker. By collecting ideas in a central location, it provides a single location to comment on and potentially design features ahead of implementation.

The issue tracker is for both bugs and suggested features

It would be cool if…

So next time you’re sitting in Visual Studio working on an extension, think about what feature you’d like that would make you more productive. If you can’t think of a feature, but feel there is a scenario that is particularly problematic, then open a bug on the GitHub issue tracker and let other people try to figure out how an extension could perhaps solve the issue.

Thinking “it would be cool if…” is the first step to make it possible and with the Extensibility Essentials, it might be closer to becoming reality than imagined.

Does this idea resonate with you? Let me know in the comments.

Mads Kristensen, Senior Program Manager
@mkristensen

Mads Kristensen is a senior program manager on the Visual Studio Extensibility team. He is passionate about extension authoring, and over the years, he's written some of the most popular ones with millions of downloads.

Who will be announced as the next Small Basic Guru? Read more about December 2018 competition!


What is TechNet Guru Competition?

Each month the TechNet Wiki council organizes a contest of the best articles posted that month. This is your chance to be announced as MICROSOFT TECHNOLOGY GURU OF THE MONTH!

One winner in each category will be selected each month for glory and adoration by the MSDN/TechNet Ninjas and community as a whole. Winners will be announced in a dedicated blog post published on the Microsoft Wiki Ninjas blog and in a tweet from the Wiki Ninjas Twitter account; links will be published in the Microsoft TNWiki group on Facebook, and other acknowledgement from the community will follow.

Some of our biggest community voices and many MVPs have passed through these halls on their way to fame and fortune.

If you have already made a contribution in the forums or gallery or you published a nice blog, then you can simply convert it into a shared wiki article, reference the original post, and register the article for the TechNet Guru Competition. The articles must be written in December 2018 and must be in English. However, the original blog or forum content can be from before December 2018.

Come and see who is making waves in all your favorite technologies. Maybe it will be you!


Who can join the Competition?

Anyone who has basic knowledge and the desire to share that knowledge is welcome. Articles can appeal to beginners or discuss advanced topics. All you have to do is add your article to TechNet Wiki in your own specialty category.


How can you win?

  1. Copy or write your Microsoft technical solutions and revelations on TechNet Wiki.
  2. Add a link to your new article on THIS WIKI COMPETITION PAGE (so we know you've contributed)
  3. (Optional but recommended) Add a link to your article at the TechNetWiki group on Facebook. The group is very active and people love to help, you can get feedback and even direct improvements in the article before the contest starts.

Do you have any question or want more information?

Feel free to ask any questions below, or join us at the official Microsoft TechNet Wiki groups on Facebook. Read more about the TechNet Guru Awards.

If you win, people will sing your praises online and your name will be raised as Guru of the Month.

PS: The above top banner came from Vimal Kalathil.


NEW REFERENCE ARCHITECTURE: Build a real-time recommendation API on Azure


We have a new AI Reference Architecture (on the Azure Architecture Center) from AzureCAT's Nikhil Joglekar, Miguel Fierro, and Max Kaznady. It was edited by Nanette Ray and Mike Wasson, and reviewed by AzureCAT's Tao Wu, Danielle Dean, Emmanuel Awa, and Le Zhang.

This reference architecture shows how to train a recommendation model using Azure Databricks and deploy it as an API by using Azure Cosmos DB, Azure Machine Learning, and Azure Kubernetes Service (AKS).

This reference architecture is for training and deploying a real-time recommender service API that can provide the top 10 movie recommendations for a given user.
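To make the serving step concrete, it can be sketched in a few lines of Python: given a precomputed user-to-item score matrix (the trained model's output), the API returns the highest-scoring unseen items for a user. The data and names below are purely illustrative, not part of the reference implementation.

```python
def recommend_top_n(scores, seen, user_id, n=10):
    """Return the n highest-scoring items the user has not already rated.

    scores: dict mapping user_id -> {item_id: predicted score}
    seen:   dict mapping user_id -> set of item_ids already rated
    """
    candidates = {
        item: score
        for item, score in scores[user_id].items()
        if item not in seen.get(user_id, set())
    }
    return sorted(candidates, key=candidates.get, reverse=True)[:n]

# Illustrative data: three movies scored for one user, one already seen.
scores = {"u1": {"m1": 4.8, "m2": 3.1, "m3": 4.2}}
seen = {"u1": {"m2"}}
print(recommend_top_n(scores, seen, "u1", n=10))  # ['m1', 'm3']
```

In the real architecture, the precomputed recommendations live in Cosmos DB and this lookup is served by the API hosted on AKS.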

This Reference Architecture includes the following information:

  1. Architecture - Explaining the different elements of the architectural diagram.
  2. Performance considerations - What to watch out for to maintain high levels of performance.
  3. Scalability considerations - A survey of a few Azure services to scale according to your unique needs.
  4. Cost considerations - How pricing works across the services.
  5. Deploy the solution - Instructions and access to our Microsoft Recommenders repository, with more information, instructions, scripts, and notebooks.

Head over to the Azure Architecture Center to learn more about this reference architecture, Build a real-time recommendation API on Azure.

 

See Also

Additional related Reference Architectures:

Find all our Reference Architectures here.

 

AzureCAT Guidance

"Hands-on solutions, with our heads in the Cloud!"

A Year in Visual Studio App Center Test

Our team is proud of what we brought you with Visual Studio App Center’s physical device testing service in 2018, which included greatly exceeding 3,000 physical devices in our lab, same-day operating system updates for iOS/Android, and seamless upgrades from Xamarin Test Cloud to App Center. If automated UI testing is not already part of your development pipeline, App Center can help. Log in or create your free App Center account to get started today!

New Devices

App Center is committed to providing access to cross-platform, multi-format devices, including the latest mobile phones and tablets so that you can test your app thoroughly on real devices, ensuring your users will have the best experience. This year we added new and popular devices such as iPhone XS/XS Max/XR, Samsung Galaxy, Tab S4, Note 9, Huawei Mate 20 Pro, LG G7 ThinQ, Google Pixel 3 / XL, Essential Phone and many more!

Operating System Updates

In today’s fast-moving mobile world, we know that it’s important to deliver a high-quality experience on the latest platforms and operating systems. Over the past year we’ve delivered same-day support for major iOS and Android versions: iOS 12.0 and Android Pie (9.0). Get ahead of any breaking changes to your app that could be caused by these major platform updates.

Extension-less XCUITest Support

To make it even easier to run your XCUITest tests in App Center, we’ve eliminated the need to modify your Xcode project. XCUITest tests can now run on App Center without linking the App Center XCUITest Extension framework. Removing the need for this extension greatly simplifies how end users can go from local testing to easily running on multiple devices in App Center Test Cloud.

Improved Test Notifications

When a test run is complete, we’ll send all collaborators of your app an email with the results of your test run. This report is now configurable by navigating to App -> Settings -> Email Notifications -> “When a test run is completed”.



Get Started Today

We have a lot of exciting plans for App Center Test in 2019. We look forward to providing even more hardware so you can continuously verify quality in your mobile apps. If you have not already done so, sign up for free today!

Remote Monitoring of IoT Devices Using Azure and HoloLens


App Dev Manager Richard Newell explores HoloLens as a remote monitoring tool for IoT devices using Azure.


In this post, we will cover all the steps: setting up the backend infrastructure, setting up your IoT environment, connecting it with Azure services, and deploying to a device. A brief overview of the different aspects of this blog is as follows:

  • Setting up the backend infrastructure
  • Building up the Azure IoT solutions
  • Connecting a holographic app with Azure

By the end of the post, we will have covered the complete end-to-end development process required to build connected devices using several Azure IoT services.

This is going to be an IoT solution that receives data from connected devices, stores the data, and makes it available for consumption by a holographic or mobile application. You will first set up the backend infrastructure: a device connects with a cloud gateway, data received by the cloud gateway is stored in persistent storage, and finally this data is made available to a holographic or mobile app through Web APIs.

In this solution architecture, we use an IoT Hub as a Cloud Gateway for event data ingress coming from the sensor.
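To make the data flow concrete before building it, here is a minimal in-memory analogy of the pipeline in Python. All names here are hypothetical; in the real solution these roles are played by the sensor device, IoT Hub, Cosmos DB, and the Web API.

```python
import json
import time

store = []  # stands in for persistent storage (Cosmos DB in the real architecture)

def device_reading(device_id, temperature):
    """A simulated sensor payload, as the device would send to the cloud gateway."""
    return {"deviceId": device_id, "temperature": temperature, "ts": time.time()}

def gateway_ingest(message):
    """Stands in for IoT Hub + Stream Analytics: accept an event and persist it."""
    store.append(message)

def api_get_readings(device_id):
    """Stands in for the Web API: return stored readings for one device as JSON."""
    return json.dumps([m for m in store if m["deviceId"] == device_id])

gateway_ingest(device_reading("AZ3166-01", 22.5))
gateway_ingest(device_reading("AZ3166-02", 19.8))
print(api_get_readings("AZ3166-01"))  # one reading for AZ3166-01
```

Each function above maps to one stage of the architecture we build in the rest of this post.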

Setting up the Infrastructure

First, we want to create an IoT Hub. You will need a valid Azure subscription; navigate to https://ms.portal.azure.com

  1. From New, select Internet of Things and create a new IoT Hub by providing a name, resource, and other required details, such as pricing tier or location.
    HL1
  2. Once the IoT Hub is created and deployed, open the IoT Hub and navigate to Settings | Shared Access Policies and select ‘iothubowner’.
  3. Write down its connection string (primary key) for later reference.
    hl2

Second, we want to create the Azure Cosmos DB

  1. From New, select Azure Cosmos DB service and create a new Cosmos DB by providing ID, resource group, location, and your subscription. You must make sure that you select SQL (DocumentDB) from the API Dropdown.
  2. Once "Azure Cosmos DB account" is created, add a collection by clicking on the "Add Collection" button and add a database with it.
  3. Write down the URI, Primary Key and the Primary connection string.
    hl3

Now that the IoT Hub and the Azure Cosmos DB are created, we can go ahead and create the Stream Analytics job, which will take the data input from the IoT Hub and store the data in the Azure Cosmos DB. Once the job is created, do the following configurations:

  1. Select the Input tab and add a new input of type IoT Hub, connecting to the IoT Hub which you have just created above.
  2. Select the Output tab and add a new output of type SQL-DocumentDB, connecting with SQL-DocumentDB.
  3. Select the query tab and add a simple query to connect the input and output stream.

SELECT *
INTO HoloLensDemoAOutput
FROM HoloLensDemoAInput
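This query simply forwards every input event to the output unchanged. As a plain-Python analogy (the names are illustrative):

```python
def stream_analytics_job(input_events):
    """SELECT * INTO output FROM input: every event passes through as-is.

    A real job could also filter (WHERE) or reshape (SELECT col, ...) events here.
    """
    output = []
    for event in input_events:  # events arriving from the IoT Hub input
        output.append(event)    # written unchanged to the Cosmos DB output
    return output

events = [{"deviceId": "AZ3166-01", "temperature": 22.5}]
assert stream_analytics_job(events) == events
```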


Now that we have the IoT Hub, the Cosmos DB, and the Stream Analytics job connecting them, we want to connect a device to the IoT Hub.

    1. Install Visual Studio Code if you haven’t already done so. Look for Azure IoT Workbench in the extension marketplace and install it.
      In this project we are using the IoT DevKit with the AZ3166
      hl4
      MXCHIP IoT DevKit – Where to buy on Amazon: http://a.co/d/6a7xmny
    2. Download and install Arduino IDE. It provides the necessary toolchain for compiling and uploading Arduino code.
    3. Open File > Preferences > Settings and add the following lines to configure Arduino:

· Windows:

"arduino.path": "C:\Program Files (x86)\Arduino",

"arduino.additionalUrls":"https://raw.githubusercontent.com/VSChina/azureiotdevkit_tools/master/

  1. Press F1 to open the command palette, then type and select Arduino: Board Manager. Search for AZ3166 and install the latest version.
  2. Download and install USB driver from STMicro.
  3. Install the Arduino extension in Visual Studio Code.

In the newly opened project window, press F1 to open the command palette, type and select IoT Workbench: Cloud, then select Azure Provision.

hl5

NOTE: Follow the prompts to select your subscription and the IoT Hub you created earlier.

  1. Press F1 to open the command palette, type and select IoT Workbench: Device, then select Config Device Settings > Select IoT Hub Device Connection String.
  2. On IoT DevKit, hold down button A, push and release the reset button, and then release button A. Your IoT DevKit enters configuration mode and saves the connection string.

Press F1 again, type and select IoT Workbench: Device, then select Device Upload.

hl6

That’s it! You have successfully connected your device to Azure. If you are looking to use a different board, please see the chart below; I have also included simulator example code if you don’t have a physical device.

IoT device | Programming language
Raspberry Pi | Node.js, C
IoT DevKit | Arduino in VS Code
Adafruit Feather HUZZAH ESP8266 | Arduino

Developing the Web API

The Web API consumes the Azure Cosmos DB - SQL (DocumentDB) data received from sensors and exposes a REST API to be consumed by other applications. Here are the steps to follow to build the service. The solution consists of three projects:

https://github.com/sysrichn/HoloLensDemo.IoT.git

  1. HoloLensDemo.Model - consists of all entities related to buildings, floors, and rooms
  2. HoloLensDemo.Simulator - a simulator for building sensors that pushes data to Azure Event Hub at periodic intervals
  3. HoloLensDemo.Web - Web API project; reads data from DocumentDB and exposes it as an API

To make it work, make the following configuration changes:

  1. Within the HoloLensDemo.Simulator project's Program.cs file, update the following Event Hub connection settings:
    private const string EhConnectionString = "[Connection string for Event Hub]";
    private const string EhEntityPath = "[Event Hub Entity Path Name]";
  2. Within the HoloLensDemo.Web project's DataBuildingCreator.cs, update the following settings related to DocumentDB:
    private const string EndpointUri = "https://[Name of DocumentDB].documents.azure.com:443/";
    private const string PrimaryKey = "[Primary or Secondary Key of DocumentDB]";
    private const string DatabaseName = "[Database name within DocumentDB]";
    private const string CollectionName = "[Collection name within DocumentDB]"

In the next step, deploy the services to Azure. There are several ways to do that; the easiest and fastest is from Visual Studio itself. Right-click on the Web API project and select Publish.

hl7

Publish the Web API

It will launch the Application Publish wizard, from where you can select the Microsoft Azure App Service option and follow the wizard steps to complete the deployment process.

hl8

Web API Publish Wizard

Once your service is hosted and deployed on Azure, you can hit the endpoints with any of the Rest API clients (such as the Postman REST API Client) or consume the data in your Holographic or Mobile app.

Connecting a holographic app with Azure

First, the data model in your holographic application will hold the data returned by the REST services you just created, so it should have the same structure. Using the data class in the example code I provided, add that same data model to the assets in your project.

Example: the Building class

[Serializable]
public class Building
{
    public string BuildingName;
    public string Id;
    public string Address;
    public List<Floor> Floors;
}
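The JSON returned by the Web API deserializes into this same shape. As a hedged illustration, the nested model could be mirrored and populated from a sample payload like this in Python (the Floor fields and the payload are made up for the example; only Building's fields come from the class above):

```python
import json
from dataclasses import dataclass, field
from typing import List

@dataclass
class Floor:
    FloorName: str = ""  # hypothetical field; the real Floor class may differ

@dataclass
class Building:
    BuildingName: str = ""
    Id: str = ""
    Address: str = ""
    Floors: List[Floor] = field(default_factory=list)

def parse_building(payload: str) -> Building:
    """Map a JSON service response onto the Building model."""
    data = json.loads(payload)
    floors = [Floor(f.get("FloorName", "")) for f in data.get("Floors", [])]
    return Building(data.get("BuildingName", ""), data.get("Id", ""),
                    data.get("Address", ""), floors)

sample = ('{"BuildingName": "HQ", "Id": "b1", "Address": "1 Main St",'
          ' "Floors": [{"FloorName": "Ground"}]}')
b = parse_building(sample)
print(b.BuildingName, len(b.Floors))  # HQ 1
```

The key point is that field names in the model must match the JSON keys the service emits, whatever language you deserialize in.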

Now we will create an Azure Bridge, which fetches the data from the services and maps it to the Building data model we defined in the preceding section:

  1. Navigate to the Assets| Scripts folder.
  2. Add a new script by navigating to the context menu | Create | C# Scripts and name it AzureBridge.
  3. Open the script file in Visual Studio.

As the first step, we will make this class a Singleton so that only one instance of it is used. You can achieve this by inheriting from the Singleton class, passing the same class type as a parameter:

hl9

Attach the script to the Root object of your holographic app by dragging the script onto the Root object, either in the Object Hierarchy or in the Inspector window.

With this, our Azure Bridge is ready to connect and fetch the live data. If you run the application by placing a breakpoint inside the GetBuilding() method, you should be able to explore the data retrieved from the services.

hl10

The objective of this blog was to build an enterprise HoloLens scenario with integrated IoT; there are several enhancements you can make to extend this solution.

Experiencing Data Access Issues for SEAU workspaces in Azure and OMS portal for Log Analytics – 12/21 – Resolved

Final Update: Friday, 21 December 2018 00:58 UTC

We've confirmed that all systems are back to normal with no customer impact as of 12/20, 23:15 UTC. Our logs show the incident started on 12/20, 09:25 UTC and that during the ~14 hours it took to resolve the issue, some customers with workspaces in SEAU would have experienced intermittent failure notifications while attempting to query data from the Azure and OMS portals.
  • Root Cause: The failure was due to an issue with one of our dependent backend services.
  • Incident Timeline: 13 hours & 50 minutes - 12/20, 09:25 UTC through 12/20, 23:15 UTC

We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Vishal Suram


Release Notes for Field Service Version 8, Update Release 3


Applies to: Field Service for Dynamics 365 (version 8.3.0.255) on Dynamics 365 9.0.x and all solutions that are released as part of the Field Service deployment package. 

We are pleased to announce the latest update to the Field Service and Project Service Automation applications for Dynamics 365. This release includes improvements to quality, performance, and usability, and is based on your feedback and requests. 

This release is compatible with Dynamics 365 9.0.x. To update to this release, visit the Solutions page in the Admin Center for Dynamics 365 online to install the update. For details, refer to How to Install, Update a Preferred Solution.

Field Service enhancements (v8.3.0.255) 

Enhancement 

  • Numerous performance improvements. 
  • Improved sitemap: The Field Service app module sitemap now includes Connected Field Service entities. 
  • Field Service SLA: Implemented SLA for Work Orders, which connects the long-standing SLA functionality to the Work Order Time From Promised and Time To Promised fields, ensuring that arrival-time-related SLA fulfillment is driven by existing Field Service scheduling tools. 

Note: SLA is not enabled on the Work Order entity out of the box. Enable it to use the two SLA KPIs that ship as part of the solution. 

Bug Fixes 

  • Fixed (upgrade bug): Booking is updated with groupId before the record is created. 

Connected Field Service enhancements 

Connected Field Service (CFS) and the IoT solution now deploy with Field Service. The CFS solution extends Field Service to cater to IoT device-driven scenarios so that organizations can respond to device anomalies. This allows customers to predict issues in advance and fix them remotely or schedule a service visit, proactively preventing failures. 

Enhancement 

  • CFS solution is now available out of the box with Field Services, eliminating the need to install the additional solution package. 
  • The sitemap for the Field Service app module now includes CFS entities as part of the default navigation. The IoT settings in the navigation have been merged with the settings section of Field Service. Note: Non-System Administrators may need additional permissions to see the CFS entities.
  • The CFS deployment app for PaaS customers (with Azure IoT Hub) has been re-architected for improved performance during initial setup. The deployment user experience has also been updated to provide a better experience for administrators.

Technician Productivity 

Enhancement 

  • New Field Service Mobile app: The new mobile application brings with it a plethora of new features. See https://aka.ms/fsmobile-docs for more information.  
  • Push Notifications: Send push notifications to the new Field Service Mobile application based on any conditions. Create a workflow and select the Field Service Mobile Entity Push Notification workflow action to use this feature.
  • See the out-of-the-box example workflow that will allow bookable resources to be notified when they are booked on a work order.  
  • Geofencing: Enable geofencing so that when a booking is scheduled for a work order, a geofence is created around the service account for that work order, and any exit or entry of that geofence by a bookable resource generates a geofence event record. Out-of-the-box workflows are provided that can act on these geofence events, such as sending a push notification to a bookable resource’s Field Service Mobile app when that resource arrives on-site for a work order.
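At its core, a geofence event is a distance test: compare the resource's reported position against the fence center and radius. A minimal sketch of that check using the haversine formula (the coordinates, radius, and function names are illustrative, not Field Service APIs):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in meters."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(pos, center, radius_m):
    """True if pos (lat, lon) lies within radius_m meters of center."""
    return haversine_m(pos[0], pos[1], center[0], center[1]) <= radius_m

service_account = (47.6423, -122.1391)  # fence center (illustrative)
technician = (47.6425, -122.1393)       # reported device position
print(inside_geofence(technician, service_account, 200))  # True: entry event fires
```

An entry or exit event corresponds to this boolean flipping between two consecutive position reports.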