
Performance Degradation in West Europe – 01/19 – Investigating


Initial Update: Friday, January 19th 2018 11:27 UTC

We're investigating performance degradation in Visual Studio Team Services in West Europe.

  • Next Update: Before Friday, January 19th 2018 12:00 UTC

Sincerely,
Ariel Jacob


Thank you for trying out Visio Online Public Preview


We announced the general availability of Visio Online creation at the Microsoft Ignite conference, and it has been available by subscription as part of Visio Online Plan 1 and Visio Online Plan 2. We are now ending the Public Preview of Visio Online as of 22nd January, 2018. We would like to thank every preview user for trying out Visio Online and giving valuable feedback.

 

You'll continue to have access to the diagrams you created using Visio Online Public Preview, and you'll still be able to view diagrams stored online using Visio Online. To create or edit diagrams online, you can purchase a Visio Online subscription from here.

 

To learn more about Visio Online you can visit: View, create, and edit a diagram in Visio Online.

 

We're constantly looking for ways to improve Visio and invite you to send us your ideas through our UserVoice site. For questions about Visio Online and other features, please email us at tellvisio@microsoft.com. Lastly, you can follow us on Facebook, YouTube, LinkedIn and Twitter for the latest Visio news.

 

Thanks again,

Team Visio

Experiencing Data Access Issue in Azure Portal for Many Data Types – 01/19 – Resolved

Final Update: Friday, 19 January 2018 12:33 UTC

We've confirmed that all systems are back to normal with no customer impact as of 01/19, 12:00 UTC. Our logs show the incident started on 01/19, 11:40 UTC, and that during the 20 minutes it took to resolve the issue, 11% of customers experienced data access issues in the Azure portal.

  • Root Cause: The failure was due to an issue in one of our dependent platform services.
  • Incident Timeline: 20 minutes - 01/19, 11:40 UTC through 01/19 12:00 UTC

We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Anmol

Top Reasons to visit the Microsoft Campus @ BETT 2018


The countdown is on for BETT 2018, when educators and technology unite for insights into how technology can be used in the classroom. There's never been a more exciting year for attendees to experience the latest trends and technology innovations for education.

The latest innovations and advances in technology, including cloud, big data and advanced analytics, artificial intelligence, and mixed reality, are opening new doors to do exciting things that weren't possible before. Imagine being able to predict what might happen, what should happen, and the best ways to optimize results.

All these things are possible now. Microsoft is showcasing the future of using Office 365 and Windows 10 to optimise institutions and ensure students have the greatest learning opportunities, from digital experiences with mixed reality to artificial intelligence in Office 365 applications.

Here are the top things you won’t want to miss in the Microsoft Campus this year!


Immerse yourself in the real possibilities of intelligent technology. This year, we are creating an educational journey in which attendees can experience the future of technology in the classroom. When you step into our Campus, you will experience our Microsoft Training Academy and Learn-Live Theatre, and discuss how we can help you on your digital transformation.

See the power of digital transformation in action. As you engage with our Campus, our team and partners will take center stage. You will see cutting-edge solutions you can take advantage of in your own classroom, and see how our Microsoft Learning Consultants use that technology in their classrooms!

Check out technology that will empower your institution to maximise productivity and enhance practice. See how advanced analytics and AI can deliver intelligent ways to help tackle core tasks like teacher workload. Explore new ways leaders are managing collaboration inside and outside the classroom to support educators and students.

Hear from our cutting-edge technology industry leaders. Don't miss our keynotes with Anthony Salcito, VP Microsoft Education, and Director of Education (UK) Ian Fordham.

We also encourage you to schedule time to meet with our Microsoft Education team to discuss your transformational journey and see how we can help you!

 

Experiencing Data Access Issue in Azure Portal for Many Data Types – 01/19 (second occurrence) – Resolved

Final Update: Friday, 19 January 2018 13:43 UTC

We've confirmed that all systems are back to normal with no customer impact as of 01/19, 13:15 UTC. Our logs show the incident started on 01/19, 12:50 UTC, and that during the 25 minutes it took to resolve the issue, 6% of customers experienced data access issues in the Azure portal.

  • Root Cause: The failure was due to an issue in one of our dependent platform services.
  • Incident Timeline: 25 minutes - 01/19, 12:50 UTC through 01/19 13:15 UTC

We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Anmol

January 2018 VSTS Hosted Agent Image Updates


The January 2018 updates are rolling out this week and should complete by the end of the day on Friday, January 19, 2018.


GitHub Release #1801

Visual Studio 2017 on Windows Server 2016

  • Disabled IE Welcome Screen #41
  • Updated .NET Core SDK to 2.1.4
  • Updated Node.js to 8.9.1 #14
  • Updated Visual Studio to 15.5.3
    • Added Visual Studio Components #41
      • Windows81SDK
      • NativeDesktop.Win81
      • NativeDesktop.WinXP
      • Blend.SDK.WPF
      • 4.6.2 Developer Tools
      • 4.7.1 SDK
      • 4.7.1 TargetingPack
      • 4.7.1 Developer Tools
  • Added Wix Vsix extension #41
  • Updated SSDT to 15.5.1
  • Added Android Components #37
    • Platform: android-27
    • Build-Tools: 27.0.1
    • Build-Tools: 26.0.3
    • Add-ons: Google api google-21
  • Added Azure PowerShell 5.1.1
  • Added the extended JAVA_HOME variables for Java 8 and 9 #20

Linux

  • Updated Docker daemon to 17.12
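
If you want to verify what your builds actually pick up once the rollout reaches your agent, a quick sanity-check script step works. This is a hedged sketch for a PowerShell step on the Windows Server 2016 image (exact output formats vary):

# Print the versions of a few of the tools updated in this release.
node --version                 # expect v8.9.1 after this update
dotnet --version               # expect .NET Core SDK 2.1.4
Get-ChildItem Env:JAVA_HOME*   # the extended JAVA_HOME variables for Java 8 and 9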

Top stories from the VSTS community – 2018.01.19


Here are top stories we found in our streams this week related to DevOps, VSTS, TFS and other interesting topics.

TOP STORIES

VIDEOS

TIP: If you want to get your VSTS news in audio form then be sure to subscribe to RadioTFS.

FEEDBACK

What do you think? How could we do this series better?
Here are some ways to connect with us:

  • Add a comment below
  • Use the #VSTS hashtag if you have articles you would like to see included

Using Azure Commercial AAD Authentication and Graph API in Azure Government Web App – PowerShell Automation


In a previous blog post, I showed how it is possible to use commercial/GCC Azure Active Directory (AAD) authentication for an Azure Web App deployed in Azure Government. This scenario is relevant for organizations that have a commercial/GCC AAD tenant for Microsoft Office 365 but also have a tenant in Azure Government for cloud computing. More broadly, it applies to organizations with multiple tenants, whether in the same or different clouds; e.g., an organization could have two tenants in the Azure Commercial cloud and want to mix and match which tenants they use for AAD authentication in different Web Apps.

Organizations may also want to access the Microsoft Graph API to query information about users, or even perform tasks such as sending emails on behalf of users. If the Web App is configured with "Easy Auth", developers will be able to leverage the AAD access token to interact with the Graph API as discussed in this blog. Critically, this requires:

  1. An AAD App Registration with appropriate delegated permissions set.
  2. The Web App needs to have the /config/authsettings set correctly. Specifically, in addition to the App registration ID and the client secret, it also needs to have additionalLoginParams set to an array containing ["resource=https://graph.windows.net"].

The latter point is especially tricky since there is no way of setting it directly in the Azure Portal. If the Web App is in Azure Commercial, one can use https://resources.azure.com to set it, but this tool is not available for Azure Government and consequently the parameter needs to be set with PowerShell.
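
To make the requirement concrete, here is roughly what has to happen; this is a minimal sketch using the AzureRm resource cmdlets, not the actual implementation of the tools introduced below, and the resource group and site names are placeholders:

# Log in to the cloud where the Web App lives first,
# e.g. Login-AzureRmAccount -Environment AzureUSGovernment.

# Read the current Easy Auth settings; /config/authsettings is only
# exposed through the "list" action, not a plain GET.
$auth = Invoke-AzureRmResourceAction -ResourceGroupName "RG-NAME" `
    -ResourceType Microsoft.Web/sites/config -ResourceName "WEB-APP-NAME/authsettings" `
    -Action list -ApiVersion 2016-08-01 -Force

# Copy the existing properties and add the resource= login parameter
# so Easy Auth requests a Graph API token.
$props = @{}
$auth.Properties.psobject.Properties | ForEach-Object { $props[$_.Name] = $_.Value }
$props["additionalLoginParams"] = @("resource=https://graph.windows.net")

New-AzureRmResource -PropertyObject $props -ResourceGroupName "RG-NAME" `
    -ResourceType Microsoft.Web/sites/config -ResourceName "WEB-APP-NAME/authsettings" `
    -ApiVersion 2016-08-01 -Force | Out-Null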

I have made a couple of tools that can be used to configure all of this correctly. The tools are included in my HansenAzurePS module, which you can find in the PowerShell Gallery. This module contains a number of convenience tools such as the Get-GitHubRawPath, which was discussed in a previous blog post. In this blog post, I will demonstrate two tools:

  1. New-AppRegForAADAuth, which is used to create the AAD App registration and enable Graph API permissions.
  2. Set-WebAppAADAuth, which is used to set AAD authentication on the Web App and configure the authsettings correctly for access to Graph API as discussed above.

Let's assume you have already set up a Web App in Azure Government and you would like to enable AAD auth and Graph access using a commercial AAD tenant. You can create the App registration with:

$siteUri = "https://WEB-APP-NAME.azurewebsites.us"
$appreg = New-AppRegForAADAuth -SiteUri $siteUri -Environment AzureCloud

You will be asked for your Azure Commercial credentials. The credentials used must be able to create an App registration. If you would like to add additional Graph API permissions, use the -GraphDelegatePermissions parameter. After this you can configure your Azure Government (or commercial) Web App:

Set-WebAppAADAuth -ResourceGroupName RG-NAME -WebAppName WEB-APP-NAME `
-ClientId $appreg.ClientId -ClientSecret $appreg.ClientSecret `
-IssuerUrl $appreg.IssuerUrl -Environment AzureUSGovernment

You will be asked for your Azure Government credentials if you are not signed in.

To use these tools you should make sure you have installed the modules AzureRm, AzureAD, and HansenAzurePS:

Install-Module AzureRm
Install-Module AzureAD
Install-Module HansenAzurePS

After this configuration, your Web App will be able to access the Graph API in Commercial Azure. As an example, this C# code snippet could be added to an ASP.NET controller (in an MVC app) to get the user information:

        public async Task<ActionResult> Me()
        {
            // Easy Auth puts the AAD access token in this request header.
            string accessToken = Request.Headers["x-ms-token-aad-access-token"];

            var client = new HttpClient();
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
            var response = await client.GetAsync("https://graph.microsoft.com/v1.0/me");
            var cont = await response.Content.ReadAsStringAsync();

            Me me = JsonConvert.DeserializeObject<Me>(cont);

            ViewData["UserPrincipalName"] = me.UserPrincipalName;
            ViewData["DisplayName"] = me.DisplayName;
            ViewData["Mail"] = me.Mail;
            ViewData["me"] = cont;

            return View();
        }
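
The snippet assumes a small Me class to deserialize into. A minimal version might look like the following (illustrative only; the sample repository mentioned below has the real one, and Json.NET matches the camelCased Graph properties case-insensitively):

public class Me
{
    // Matches userPrincipalName, displayName and mail in the Graph /me response.
    public string UserPrincipalName { get; set; }
    public string DisplayName { get; set; }
    public string Mail { get; set; }
}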

I have made a very simple ASP.NET Web App that implements this Graph Access in the HomeController and adds an example View. You can find the code for that application in this GitHub repository.

Putting all of this together, here is how you would a) create a new Web App (in Azure Government), b) deploy the example application, c) configure an App registration in Azure Commercial, and d) configure the auth settings of the Azure Government Web App.

#Configure your settings
$ResourceGroupName = "MY-WEB-APP-NAME"
$webAppName = $ResourceGroupName
$aspName = "$webAppName-asp"
$GitRepoUrl = "https://github.com/hansenms/GraphAPIAspNetExample.git"

#Where do you want your Web App:
$webAppEnvironment = "AzureUSGovernment"
$Location = "usgovvirginia"

#Where do you want your AAD App registration
$appRegEnvironment = "AzureCloud"

#Will prompt you to log in
Login-AzureRmAccount -Environment $webAppEnvironment

#If you need to select a subscription
#Select-AzureRmSubscription -SubscriptionName <NAME OF SUBSCRIPTION>

$grp = Get-AzureRmResourceGroup -Name $ResourceGroupName -ErrorVariable NotPresent -ErrorAction 0
if ($NotPresent) {
    $grp = New-AzureRmResourceGroup -Name $ResourceGroupName -Location $Location
}

$asp = Get-AzureRmAppServicePlan -Name $aspName -ResourceGroupName $ResourceGroupName -ErrorVariable NotPresent -ErrorAction 0
if ($NotPresent) {
    $asp = New-AzureRmAppServicePlan -Name $aspName -ResourceGroupName $ResourceGroupName -Location $Location -Tier Standard
}

$app = Get-AzureRmWebApp -Name $webAppName -ResourceGroupName $ResourceGroupName -ErrorVariable NotPresent -ErrorAction 0
if ($NotPresent) {
    $app = New-AzureRmWebApp -Name $webAppName -ResourceGroupName $ResourceGroupName -Location $Location -AppServicePlan $asp.Id
}

# Configure GitHub deployment from your GitHub repo and deploy once to web app.
$PropertiesObject = @{
    repoUrl = "$GitRepoUrl";
    branch = "master";
    isManualIntegration = "true";
}

Set-AzureRmResource -PropertyObject $PropertiesObject -ResourceGroupName $ResourceGroupName -ResourceType Microsoft.Web/sites/sourcecontrols -ResourceName $webAppName/web -ApiVersion 2015-08-01 -Force

$siteUri = "https://" + $app.HostNames[0]
$appreg = New-AppRegForAADAuth -SiteUri $siteUri -Environment $appRegEnvironment
Set-WebAppAADAuth -ResourceGroupName $ResourceGroupName -WebAppName $webAppName -ClientId $appreg.ClientId -ClientSecret $appreg.ClientSecret -IssuerUrl $appreg.IssuerUrl -Environment $webAppEnvironment

#All Done

After this you can browse to the URL of your Web App and you will be prompted to log in with your credentials. The Web App has a "Me" menu item at the top of the screen. If you navigate there, you should see the information pulled about you from the Graph API.

Notice that the Web App is running in Azure Government, but the authentication information and the Graph API information come from Azure Commercial.

And that's it. Let me know if you have comments/concerns/suggestions.

 


What’s new in Microsoft Social Engagement 2018 Update 1.1


Microsoft Social Engagement 2018 Update 1.1 is ready and will be released in January 2018. This article describes the fixes and other changes that are included in this update.

New and updated features

Improved accessibility across all interfaces

With this update, Social Engagement is more accessible for all users. These changes include support for keyboard navigation, screen reader support, and overall improvements to the user experience.

New capabilities for all users:

  • You can choose to represent the data from widgets in Social Engagement in a chart or a data table.
  • You can choose to render the charts with a fill pattern or solid colors. When selecting the fill patterns, data on maps will be represented in different shapes and not rely exclusively on color.
  • You can navigate the maps in Social Engagement with a keyboard by selecting a tab to reach the first data point on a map.
  • In activity maps, you can choose between a visual map and a data table. The map allows navigation by keyboard in cardinal directions.
  • You will experience improved navigation for post lists and post details.

Administrators now enabled to remove alert recipients

With an administrator configuration role in Social Engagement, you can now remove recipients from alerts that were configured by other users. You can search for a specific email address and then remove the recipient from all alerts that send email to that address. Additionally, you can export a list of alerts this recipient receives.

Changes for author information from Facebook pages

Starting February 6, 2018, Facebook is updating the API used to pull data for Facebook pages. From that date, author information for Facebook posts will only be available for pages that have been added as a social profile to your Social Engagement solution. The content of posts and comments, as well as enrichments such as sentiment, will continue to be available for posts and comments without author information. We recommend you add page access tokens for every Facebook page before this change on February 6.

Read more about the experience for Facebook Pages in Social Engagement.

Service and product improvements

In addition to the new features, Update 1.1 addresses the following issues:

  • Updated and translated UI text for several languages in various areas of Social Engagement.
  • Resolved limitations for Firefox users when editing text input. You can now place the text cursor as expected when writing messages.

The HMD Exerciser Kit - A Test Kit for VR HMDs


Authored by Matthew Hartman [MSFT] 

To support the wave of VR HMDs coming to market, Microsoft has developed the HMD Exerciser Kit. This kit is based on the MUTT ConnEx platform and is specifically tailored for HMD testing. 

 HMD Exerciser Kit

 

The HMD Exerciser Kit provides 

  • USB Plug/Unplug/Multiplexing 
  • HDMI Plug/Unplug/Multiplexing 
  • IR User Presence Detection Spoofing 
  • Independent Display Brightness/Color Detection
  • 2x Servo Control with Independent Servo Power 
  • HMD Audio Level Monitoring 
  • USB Voltage/Current Polling 

 

The setup is flexible and can be designed to meet your specific test requirements. Here's an example of how we used the HMD Exerciser Kit in our lab. In this configuration, we also have the two motion controllers and the HMD on turntables for movement/FOV testing.

Complete setup on lab bench 

 

HMD Exerciser Kit Hardware 

The kit includes two main components. The HMD Exerciser (left) and the HMD Board (right). 

HMD Exerciser and HMD Board

The HMD Exerciser is the main unit for all the connections to the HMD and PC. It handles all of the measurements, multiplexing, and PC communication. More details about the components that make up the HMD board are available in the Documentation. 

The HMD Board contains the hardware that interacts with the HMD’s displays and presence sensor. The two TCS34725 color sensors are placed to line up with each display. This allows independent brightness/color measurement. The IR photodiode and LED match the typical placement of IR user presence sensors. They are used in combination to spoof user presence or absence. The desired user presence state is controllable via software. 

 

 HMD Board on 3D printed mount

The HMD Board fits in a 3D printed mount which is designed to clip securely into the HMD. This mount is designed for the Acer Windows Mixed Reality HMD. 

 

HMD Board clipped into Acer HMD

 The HMD Board attaches to the HMD Exerciser using the flat ribbon cable shown above. Each HMD Exerciser can test up to two HMDs with a single PC. 

Find more details in the docs here. 

 

HMD Exerciser Kit Software 

The HMD Exerciser Kit is controlled either through a command-line executable or a managed class. The command-line utility is available in the MUTT tools, and the managed class is available in the BusIoTools Git repo. Look for more details in the Microsoft Docs. 

To get started with the command line tool, identify which ports your HMD is plugged in to on the HMD Exerciser and select those ports to connect the HMD. For this example, we’ll use USB and HDMI port 1. 

 

Next, tell the kit what port (1 or 2) your HMD Board is plugged in to. For this example, we’re using port 1. 

 

After this command, all the display/audio/presence commands will apply to the HMD on port 1. Now we can grab the HMD’s display brightness, display color and audio level. We can also set user presence spoofing if the HMD uses IR user presence detection. 

 

To disconnect the USB or HDMI ports, just set the port to '0' in the command.

 

More Info and Purchasing 

Check out the HMD Exerciser Kit documentation on Microsoft Docs and buy the hardware from MCCI. 

Error improvement for invalid X-AnchorMailbox in REST API calls


We wanted to give you a heads-up on a recent change to the error returned by the Outlook REST API when the value of the X-AnchorMailbox header is incorrect. If your app has error handling for this scenario based on the current behavior of the service, this may be a breaking change.

Starting immediately, the service will return a 421 Misdirected status instead of a 503 Service Unavailable. The intent of this change is to return a more appropriate HTTP status and to make it easier for developers to detect why the request failed and fix the problem.

Old behavior

Prior to this change, sending a REST API request with an incorrect X-AnchorMailbox header would result in the following response.

HTTP/1.1 503 Service Unavailable

{
  "error": {
    "code": "MailboxInfoStale",
    "message": "Mailbox info is stale."
  }
}

New behavior

With this change, if your app receives a 421 HTTP status from a REST call, you should check the response body for an error object, then check the code property. If it is ErrorIncorrectRoutingHint, the value you sent in the X-AnchorMailbox header is incorrect.

HTTP/1.1 421 Misdirected

{
  "error": {
    "code": "ErrorIncorrectRoutingHint",
    "message": "The x-anchor mailbox 'jason@contoso.com' does not match the target of the request."
  }
}

Handling the error

If your app gets this error, it is recommended that you get the user's mailbox GUID and use that for the X-AnchorMailbox value, instead of the user's SMTP address. You can get the user's mailbox GUID by making the following REST request (with no X-AnchorMailbox header):

GET https://outlook.office.com/api/v2.0/me

This will return the following response:

{
  "Id": "3c7f0e3a-623b-85ae-4032-07d41531beff@7aeb6117-c342-4861-bec7-f8803ae85e41",
  "EmailAddress": "jason@contoso.onmicrosoft.com",
  "DisplayName": "Jason Johnston",
  "Alias": "jason",
  "MailboxGuid": "fece2b65-3577-4972-bf3d-5594fc9c9f9e"
}

You would then use the value of MailboxGuid in the X-AnchorMailbox for subsequent REST calls.

GET https://outlook.office.com/api/v2.0/me/messages

X-AnchorMailbox: fece2b65-3577-4972-bf3d-5594fc9c9f9e
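
Putting the pieces together, a hedged sketch of the detect-and-retry flow in C# follows. This is not official sample code; token acquisition is assumed, and it retries exactly once with the GUID:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

static async Task<HttpResponseMessage> GetMessagesAsync(HttpClient client, string token, string anchor)
{
    var request = new HttpRequestMessage(HttpMethod.Get,
        "https://outlook.office.com/api/v2.0/me/messages");
    request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token);
    request.Headers.Add("X-AnchorMailbox", anchor);
    var response = await client.SendAsync(request);

    if ((int)response.StatusCode == 421)
    {
        var error = JObject.Parse(await response.Content.ReadAsStringAsync());
        if ((string)error.SelectToken("error.code") == "ErrorIncorrectRoutingHint")
        {
            // Look up the mailbox GUID (note: no X-AnchorMailbox header on this call)...
            var meRequest = new HttpRequestMessage(HttpMethod.Get,
                "https://outlook.office.com/api/v2.0/me");
            meRequest.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token);
            var me = JObject.Parse(await (await client.SendAsync(meRequest)).Content.ReadAsStringAsync());

            // ...and retry the original call once with the GUID as the anchor.
            var retry = new HttpRequestMessage(HttpMethod.Get,
                "https://outlook.office.com/api/v2.0/me/messages");
            retry.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token);
            retry.Headers.Add("X-AnchorMailbox", (string)me["MailboxGuid"]);
            response = await client.SendAsync(retry);
        }
    }
    return response;
}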

Updating “Select application” drop down in the Policy blade to only show B2C applications


To test Azure AD B2C policies during the development phase, administrators can generate test links that invoke and run a policy through the Azure Portal. To configure the test link, you need to select an application and a reply URL.

 

 

Over the next few weeks, we will be making a modification to the "Select application" drop down so that you will only be able to select applications created through the Azure AD B2C applications menu.

 

 

Azure AD applications were originally shown in the drop down for legacy reasons, and we are now hiding them to simplify the experience. The change will not have any impact on your existing applications that are running in production. It will only affect the test links generated through the portal.

Performance: Evaluate Data Skew


This topic applies to both Azure SQL Data Warehouse and Analytics Platform System.

Data skew occurs when one distribution has more data than others. When data is inserted into a distributed table, each row is assigned and sent to a distribution for storage. The distribution a row is sent to is determined by applying a hash function to the value in the distribution column specified at table creation. The same value will always go to the same distribution.

Skew comes into play when the data has a small portion of values with a large number of duplicates. While you can use DBCC PDW_SHOWSPACEUSED ( " [ database_name . [ schema_name ] . ] | [ schema_name . ] table_name " ) to get the row count per distribution, it will not give you insight into the density of the data. If you want to see the actual values the data is skewed on, you can run the following query against the instance:

 

SELECT COUNT([distribution_column]) AS [Row Count],
       [distribution_column]
FROM [factscadahistory2017]
GROUP BY [distribution_column]
ORDER BY [Row Count] DESC

 

If there is a large disparity in the row counts for a couple of values, the distributions those values land on will be skewed. You will have to decide whether this skew is acceptable for your application. In general, we say 20% is acceptable.
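
Once you have a candidate table, the DBCC command mentioned above gives the per-distribution totals directly. A minimal sketch, reusing the example table from the query above:

-- Row counts (and space) per distribution for the example table.
-- Compare the largest ROWS value with the average across all distributions;
-- a largest distribution more than roughly 20% above the average is worth revisiting.
DBCC PDW_SHOWSPACEUSED('dbo.factscadahistory2017');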

Using customvision.ai to build an offline image identification Android application



With the recent release of offline Custom Vision models for iOS and CoreML (https://blogs.msdn.microsoft.com/uk_faculty_connection/2017/09/14/microsoft-custom-vision-api-and-intelligent-edge-with-ios-11/), you can now also build offline models for Android devices: https://github.com/Azure-Samples/cognitive-services-android-customvision-sample

The following sample application demonstrates how to take a model exported from the Custom Vision Service in the TensorFlow format and add it to an application for real-time image classification.

Prerequisites

Quickstart

  1. Clone the repository and open the project in Android Studio: https://github.com/Azure-Samples/cognitive-services-android-customvision-sample
  2. Build and run the sample on your Android device

Replacing the sample model with your own classifier

The model provided with the sample recognizes some fruits. To replace it with your own model exported from the Custom Vision Service, do the following, and then build and launch the application:

  1. Create and train a classifier with the Custom Vision Service. You must choose a "compact" domain such as General (compact) to be able to export your classifier. If you have an existing classifier you want to export instead, convert the domain in Settings by clicking on the gear icon at the top right. In Settings, choose a "compact" model, Save, and Train your project.
  2. Export your model by going to the Performance tab. Select an iteration trained with a compact domain and an "Export" button will appear. Click Export, then TensorFlow, then Export. Click the Download button when it appears. A .zip file will download that contains the TensorFlow model (.pb) and labels (.txt).
  3. Drop your model.pb and labels.txt file into your Android project's Assets folder.
  4. Build and run.

Make sure the mean values (IMAGE_MEAN_R, IMAGE_MEAN_G, IMAGE_MEAN_B in MSCognitiveServicesClassifier.java) are correct based on your project's domain in Custom Vision:

Project's Domain      | Mean Values (RGB)
General (compact)     | (123, 117, 104)
Landmark (compact)    | (123, 117, 104)
Retail (compact)      | (0, 0, 0)

Resources

Calling C functions from Python – part 1 – using ctypes


Recently I've been evaluating Python interop technologies for a project at work, and I think it'll make an interesting blog series.

Let's say you have the following C code (add extern "C" if you are in C++ land) and compile it into a dynamic library (.dll/.so/.dylib):

    int Print(const char *msg)
    {
        printf("%s", msg);
        return 0;
    }

    int Add(int a, int b)
    {
        return a + b;
    }

    struct Vector
    {
        int x;
        int y;
        int z;
    };

    struct Vector AddVector(struct Vector a, struct Vector b)
    {
        struct Vector v;
        v.x = a.x + b.x;
        v.y = a.y + b.y;
        v.z = a.z + b.z;
        return v;
    }

    typedef struct Vector (*pfnAddVectorCallback)(struct Vector a, struct Vector b);

    struct Vector AddVectorCallback(pfnAddVectorCallback callback, struct Vector a, struct Vector b)
    {
        return callback(a, b);
    }

One of the ways to call C APIs from Python is to use the ctypes module. The tutorial at docs.python.org is fairly comprehensive and I certainly don't intend to cover everything in it.

Instead, I'll cover it in an exploratory style to show you what I did to understand these APIs, and add some fairly interesting details not quite covered by the tutorial (some of the behavior of the API is a bit obscure).

In a future post I'll also deep dive into the ctypes implementation in CPython, but to get there I need to cover the Python C API in part 2, which makes the deep dive part 3. 🙂

Anyway, let’s get started.

Getting started

First let’s import the ctypes module:

>>> from ctypes import *

To load a module, you can use the cdll, windll, or oledll library loader objects.

For example, to load kernel32, you can do:

>>> cdll.kernel32
<CDLL 'kernel32', handle 56930000 at 508eb70>
>>> print vars(cdll)
{'kernel32': <CDLL 'kernel32', handle 56930000 at 508eb70>, '_dlltype': <class 'ctypes.CDLL'>}

Basically, accessing an attribute automatically loads a DLL by that name. This is implemented in Python by overriding __getattr__ to do a LoadLibrary. Obviously this requires the DLL to be already loaded or searchable using the standard search rules. Since every process effectively has kernel32.dll loaded, you'll always load kernel32 successfully.
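
If you are curious what that looks like, here is a slightly simplified sketch of the loader from CPython's ctypes/__init__.py (the real one also rejects names starting with an underscore):

class LibraryLoader(object):
    def __init__(self, dlltype):
        self._dlltype = dlltype

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails, i.e. the
        # first time a given DLL name is accessed on this loader.
        dll = self._dlltype(name)   # CDLL.__init__ does the actual LoadLibrary
        setattr(self, name, dll)    # cache it so __getattr__ isn't hit again
        return dll

cdll = LibraryLoader(CDLL)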

Let’s say we built our dll as MyDll, and try to load it:

>>> cdll.MyDll
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:Python27libctypes__init__.py", line 436, in __getattr__
    dll = self._dlltype(name)
  File "C:Python27libctypes__init__.py", line 366, in __init__
    self._handle = _dlopen(self._name, mode)
WindowsError: [Error 126] The specified module could not be found

Well, that didn't work. This is because MyDll is not locatable in the path, the application directory, or system32.

OK. Let’s try again using cdll.LoadLibrary:

>>> cdll.LoadLibrary(r"D:ProjectsMyDllDebugmydll.dll")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:Python27libctypes__init__.py", line 444, in LoadLibrary
    return self._dlltype(name)
  File "C:Python27libctypes__init__.py", line 366, in __init__
    self._handle = _dlopen(self._name, mode)
WindowsError: [Error 193] %1 is not a valid Win32 application

Hmm.. that didn't work either. Unfortunately the error doesn't give a good description of the actual problem. The problem is that I compiled my DLL as 32-bit while Python.exe is 64-bit, so the loader doesn't think it's a valid (64-bit) application ("Win32 application" is just a general term for 32-bit/64-bit Windows applications, as opposed to 16-bit Windows).

Recompiling the DLL as 64-bit fixed it:

>>> cdll.LoadLibrary(r"D:ProjectsMyDllx64Debugmydll.dll")
<CDLL 'D:ProjectsMyDllx64Debugmydll.dll', handle 4cae0000 at 5064ac8>

Interestingly, it doesn’t really show up in cdll, until you access cdll.mydll:

>>> print vars(cdll)
{'kernel32': <CDLL 'kernel32', handle 56930000 at 508eb70>, '_dlltype': <class 'ctypes.CDLL'>}
>>> cdll.mydll
<CDLL 'mydll', handle 4cae0000 at 509d5f8>

This is because cdll.LoadLibrary only returns a new instance of a CDLL object. Because the garbage collector hasn't kicked in yet, the DLL is still loaded in this process, and therefore accessing cdll.mydll would "just work". However, do note that these two mydlls are separate Python objects (5064ac8 vs 509d5f8) pointing to the same library (handle 4cae0000).

The best practice, however, is to keep the instance in a variable - there is no point loading this library twice. (There is no harm either: the OS maintains a ref-count on the DLL, so you wouldn't load two copies - there is just one as long as it is the same DLL.)

>>> mydll = cdll.LoadLibrary(r"D:\Projects\MyDll\x64\Debug\mydll.dll")

Calling the function

Let’s try calling Print - just call it as a magic attribute:

>>> print vars(mydll)
{'_FuncPtr': <class 'ctypes._FuncPtr'>, '_handle': 140734480777216L, '_name': 'D:\\Projects\\MyDll\\x64\\Debug\\mydll.dll'}

>>> ret = mydll.Print("abc\n")
abc

>>> print vars(mydll)
{'Print': <_FuncPtr object at 0x0000000005501528>, '_FuncPtr': <class 'ctypes._FuncPtr'>, '_handle': 140734480777216L, '_name': 'D:\\Projects\\MyDll\\x64\\Debug\\mydll.dll'}

Note that calling mydll.Print magically inserts a new Print attribute on the mydll object. Again, this is achieved through __getattr__.

So how does ctypes call Print internally? A few things happens:

  • ctypes does a GetProcAddress (or dlsym) on Print to get the internal address
  • ctypes automatically recognizes that you are passing "abc\n" and converts it to a char *
  • ctypes uses FFI to make the call, using the cdecl calling convention. CDLL defaults to cdecl.

Now let’s try doing an Add:

>>> mydll.Add(1, 2)
3

There is a bit of ctypes magic at play: by default, ctypes assumes every function returns an int, so this works out fairly well. If you want a different return type, you can change it by assigning a type to the restype attribute. In this case, what we need is ctypes.c_char, which is the 1-byte char type in C.

>>> mydll.Add.restype = c_char
>>> mydll.Add(97, 1)  # this can be dangerous!
'b'

Now Add will interpret the returned int as a char. Note that this can be dangerous, as int and char aren't the same size. However, on most platforms / calling conventions the return value comes back in a register (EAX/RAX on Intel platforms), so this simply involves a truncation and works out fine. But again, you don't want to make such assumptions, so this is for illustration purposes only.

Besides CDLL, there are also windll and oledll. windll by default treats the function as stdcall, and oledll treats it as a COM function, which means accessing the function by a vtable offset, using stdcall, and returning an HRESULT.

Define your own struct

Let’s take a look at how to define your own struct. You can do that by deriving from ctypes.Structure type, and supply a set of fields through the magic _fields_ attribute:

>>> class VECTOR(Structure):
...     _fields_ = [("x", c_int), ("y", c_int), ("z", c_int)]
...

If you print out the individual fields in the VECTOR type, you’ll see magic attributes showing up:

>>> print VECTOR.x, VECTOR.y, VECTOR.z
<Field type=c_long, ofs=0, size=4> <Field type=c_long, ofs=4, size=4> <Field type=c_long, ofs=8, size=4>

Note that the individual fields are nicely laid out sequentially (ofs=0, 4, 8), just what you would expect from a good old C struct.

Now we can create new instances of VECTOR and return back VECTOR:

>>> vector_a = VECTOR(1, 2, 3)
>>> vector_b = VECTOR(2, 3, 4)
>>> mydll.AddVector.restype = VECTOR
>>> vector_c = mydll.AddVector(vector_a, vector_b)
>>> print vector_c.x, vector_c.y, vector_c.z
3 5 7

Calling Python code from C, and some surprises

Let's make this a bit more interesting: let's try to call AddVectorCallback, passing in a Python function. To do this you need to make a callback function type first:

>>> ADDVECTORCALLBACK = CFUNCTYPE(VECTOR, VECTOR, VECTOR)

With this type we can then define a Python function that does the add:

>>> def AddVectorImpl(a, b):
...     return VECTOR(a.x + b.x, a.y + b.y, a.z + b.z)
...
>>> mydll.AddVectorCallback(ADDVECTORCALLBACK(AddVectorImpl), VECTOR(1, 2, 3), VECTOR(2, 3, 4))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: invalid result type for callback function

Unfortunately, this doesn't work. Only simple data types like c_int are supported as callback return types. Complex data types like structs/unions are not, because they don't provide a setfunc. We'll cover more of these details in a future ctypes deep-dive post.

        StgDictObject *dict = PyType_stgdict(restype);
        if (dict == NULL || dict->setfunc == NULL) {
          PyErr_SetString(PyExc_TypeError,
                          "invalid result type for callback function");
          goto error;
        }

The workaround is to pass in a pointer instead:

    typedef void (*pfnAddVectorCallback)(struct Vector a, struct Vector b, struct Vector *c);

    struct Vector AddVectorCallback(pfnAddVectorCallback callback, struct Vector a, struct Vector b)
    {
        struct Vector c;
        callback(a, b, &c);
        return c;
    }
>>> ADDVECTORCALLBACK = CFUNCTYPE(None, VECTOR, VECTOR, POINTER(VECTOR))
>>> def AddVectorImpl(a, b, c):
...     c.contents = VECTOR(a.x + b.x, a.y + b.y, a.z + b.z)

And let’s see if it works:

>>> vector = mydll.AddVectorCallback(ADDVECTORCALLBACK(AddVectorImpl), VECTOR(1,2,3), VECTOR(2,3,4))
>>> print vector.x, vector.y, vector.z
-858993460 -858993460 -858993460

OK, so nope. It appears that setting contents doesn't do what we want. Reading the code, it actually just swaps the internal pointer of the pointer object and doesn't do any assignment!

    *(void **)self->b_ptr = dst->b_ptr;

The correct way is to assign it on the VECTOR object returned from contents attribute directly:

>>> def AddVectorImpl(a, b, c):
...     c.contents.x = a.x + b.x
...     c.contents.y = a.y + b.y
...     c.contents.z = a.z + b.z
>>> vector = mydll.AddVectorCallback(ADDVECTORCALLBACK(AddVectorImpl), VECTOR(1,2,3), VECTOR(2,3,4))
>>> print vector.x, vector.y, vector.z
3 5 7

The reason this works is that the VECTOR object's internal b_ptr pointer points directly to the Vector struct pointed to by the Vector *, so changing this VECTOR object changes the output Vector struct.

What’s next

As previously mentioned, I'll cover the Python C API in the next post and then dive into the ctypes implementation in CPython (which is written using the Python C API).

I'll update this post with links once they become available.

You can also find this post at http://yizhang82.me/python-interop-ctypes


Getting started building an iOS Offline App using Customvision.ai


This is a post based on my colleague Anze Vodovnik's demo at the Cambridge Hack: www.vodovnik.com/2018/01/20/a-look-at-computer-vision/

The following is a short step-by-step tutorial on how to build an offline image identification app and run it on an Apple iPhone X with no connectivity, plus a bonus .NET Core client for the prediction API.

Getting Started

To get started, go to https://customvision.ai. You’ll be greeted by a page allowing you to create a new model (or a list of models if you have them already).

Create a New Custom Vision Model

See https://blogs.msdn.microsoft.com/uk_faculty_connection/2017/09/07/using-microsoft-customvision-ai-service-for-image-recognition/ for a full step by step guide

For this demo we're going to build an app which identifies drinks.

Once the model is created, it's time to start uploading our images and tagging them. I've compiled (searched for and borrowed from the internet) photos of various drink types, like wine, beer, shots and cocktails.

Once you have uploaded all the photos, you will have the option to tag them. So, you start with some photos of wine, and you tag them with wine. You repeat the process for all the other classes and voila, your model is born.

Train the model

When we’ve uploaded the photos, we’re only half way through. Now we need to actually train the model to do something based on those images. There is a big green button at the top called Train.

The nice thing is that you also immediately see how good the training data seems to be. As you move forward with the model, you will likely end up with multiple iterations; the customvision.ai service allows a maximum of 20 iterations in its current preview mode.

You can go back to any of the previous iterations; some had a much better precision and recall rate, so I've elected to keep those. You'll notice there's an Export button on top of that page. And that is the next step…

Export the Model

When we click Export, we can choose either CoreML (iOS 11) or TensorFlow (Android). Because I'm writing an iOS app, the choice was obvious.

That downloads a file ending with .mlmodel. You need to drag and drop that model into Xcode and you’re good to go. But, more on that later…

Step 2: Build the iOS App

Next, I needed an iOS app. Because I'm not an iOS app developer, I've elected to stick to the sample that the product team built (and it does exactly what it says on the tin). It's available over on GitHub, and you can get started by simply cloning that repository, modifying the bundle identifier, and making sure you select the right team. Note: you still need your Apple Developer Account.

When you clone that, it comes with a pre-built model of fruit. But that's boring…

To make things more fun, we will drag in that .mlmodel file we downloaded earlier. Xcode is pretty good at making sure all of the settings are set correctly.

The important bit for us is that it automatically generates a class based on the name of the model – in my case, Drinks1. This is relevant for the next step.


Change the app to use the model

Now that the model is in our app, we need to tell the code to use it. To do that, we will be changing ViewController.

Specifically, there is a line of code that initialises the CoreML model and we need it to look like this:

let model = try VNCoreMLModel(for: Drinks1().model)

Obviously, the key thing for us is the Drinks1 name, representing the class generated from the model we've imported.

Step 3: Test the app

Once that’s changed, the app is good to go. I’ve run it on my iPhone X and pointed it towards an image of a wine glass and a shot. These are the results:

The important bit to grasp here is that this is fully offline; it doesn't need a connection to do this. So, we've trained our own model using Microsoft's pre-built and optimised networks, exported it to a CoreML model, and used it straight from our Swift app.

Bonus: REST API from a .NET Core App on a Mac

The above example is cool, but it doesn't cover everything, and your model may be evolving constantly. There is a prediction API available and exposed from the service as well, meaning that for each model you build you also get an API endpoint to which you can send either an image URL or the image itself, and get back a prediction.

Naturally, the only reasonable thing to do was to get down and dirty, and use this morning to quickly build an example app to showcase that as well.

Make sure your environment is set up by following the instructions here. Next, launch a terminal and create a new console app by running:

dotnet new console --name MyAwesomeName

Then, open Program.cs in your favourite editor and make it look something like this:

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
 
namespace Dev
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.Write("Enter image file path: ");
            string imageFilePath = Console.ReadLine();
 
            Task.Run(() => MakePredictionRequest(imageFilePath));
 
            Console.WriteLine("nnnHit ENTER to exit...");
            Console.ReadLine();
        }
 
        static byte[] GetImageAsByteArray(string imageFilePath)
        {
            FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read);
            BinaryReader binaryReader = new BinaryReader(fileStream);
            return binaryReader.ReadBytes((int)fileStream.Length);
        }
 
        static async void MakePredictionRequest(string imageFilePath)
        {
            var client = new HttpClient();
 
            // Request headers - replace this example key with your valid subscription key.
            client.DefaultRequestHeaders.Add("Prediction-Key", "your prediction key here");
 
            // Prediction URL - replace this example URL with your valid prediction URL.
            string url = "your prediction URL here";
 
            HttpResponseMessage response;
 
            // Request body. Try this sample with a locally stored image.
            byte[] byteData = GetImageAsByteArray(imageFilePath);
 
            using (var content = new ByteArrayContent(byteData))
            {
                content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
                response = await client.PostAsync(url, content);
                Console.WriteLine(await response.Content.ReadAsStringAsync());
            }
        }
    }
}

There are two placeholders for your prediction URL and prediction key. You get the latter when you open the model in Custom Vision and click on the little World icon.

You then need to open the Settings tab in the upper right corner and get the Subscription key. Once that’s updated in the code, you can build and run it, either from Visual Studio Code, or from the terminal.

You'll see this returns the tag 'Nike', which is great, because that's exactly the trainers I was wearing at the event:

The model used in this example is one that was pre-built and contains a lot of Adidas and Nike shoes: two tags, with 27 images of Adidas and 30 images of Nike trainers. The aim, of course, being that we are able to differentiate between them. The model looks like this:

So, with that, that should give you a quick and dirty start into the world of Computer Vision.

Microsoft DevOps tools win Tool Challenge @ Software Quality Days


Yes, we did it (again). Microsoft won the Tool Challenge at the Software Quality Days and was presented the BEST TOOL AWARD 2018. The beauty about this is that the conference participants voted for the best tool among vendors like CA Technologies, Micro Focus, Microsoft and Tricentis. Rainer Stropek presented for Microsoft on the future of Visual Studio & Visual Studio Team Services, covering topics like DevOps, mobile DevOps, Live Unit Testing, and how machine learning will affect testing.

 

During the conference we presented our DevOps solution based on Visual Studio Team Services, the new Visual Studio App Center service for mobile DevOps, and the Microsoft Azure cloud platform as a place for every tester and developer, regardless of platform or language used, to run their applications or test environments.

 

Software Quality Days is the brand of a yearly 2-day conference (+2 days of workshops) focusing on software quality and testing technologies, with about 400 attendees. The conference is held in Vienna, Austria, and celebrated its 20th anniversary in 2018. Five tracks - practical, scientific, and tool-oriented - make up the agenda of the conference. In the three practical tracks there are presentations of application-oriented experiences and lectures - from users, for users. The scientific track presents a corresponding level of innovation and research results, and how they relate to practical usage scenarios. The leading vendors of the industry present their latest services and tools in the exhibition and showcase practical examples and implementations in the Solution Provider Forum.

Tool challenge
As part of the Software Quality Days, the Tool Challenge is a special format on the first day of the conference. Participating vendors get questions or a practical challenge that needs to be "solved" during the day. In the late afternoon the solution is presented back to the audience of the conference. For the participating vendors, the challenge is developing the solution and content at the conference location with limited time available, as well as presenting to the audience in a slot of only 12 minutes. Each conference participant gets one voting card and selects their favorite solution or presentation. The vendor with the highest number of voting cards wins the Tool Challenge.

The slides of our contribution are posted on SlideShare: http://www.slideshare.net/rstropek/software-quality-days-2018-tools-challenge

Video of the Tool Challenge presentation: https://www.youtube.com/watch?v=STr0ZiBtfPQ

Special thanks go to Rainer Stropek for the superior presentation at the Tool Challenge!

 

Rainer Stropek, Regional Director & MVP, Azure (right in the picture)
Gerwald Oberleitner, Technical Sales, Intelligent Cloud, Microsoft (left in the picture)

ConfigMgr Current Branch – Software Update Delivery Video Tutorial


 

Check out this video tutorial I have posted over at the ConfigMgr blog here.

Description of the video:
The release of Windows 10 brought with it a change in the way updates are released – updates are now cumulative. Since the release of Windows 10, this same cumulative update approach has been adopted for the remainder of the supported operating systems. While this approach has significant advantages, there still remains some confusion about what it all means.

The video linked below was prepared by Steven Rachui, a Principal Premier Field Engineer focused on manageability technologies. In this session, Steven talks through the changes: why the decision was made to move to a cumulative approach to updating, how this new model affects software updating, how the cumulative approach is applied similarly and differently between versions of supported operating systems, and more.

Comments
Please share comments on the video. I am considering posting other videos like this on various ConfigMgr-related topics, and your comments will be valuable feedback for me to review.

Replace TypeScript with ES2015 for SharePoint Framework Applications


I am a huge fan of TypeScript. For large projects it is indispensable. However, most SharePoint Framework (SPFx) applications are not large by design. SharePoint Framework is intended for small, single-purpose applications that augment the functionality of SharePoint. That is not to say that you will never write an SPFx application with 100,000+ lines of code, but applications of that size are not common in the context of SPFx.

TypeScript comes with a cost. In my opinion the cost is worth it for large applications. The type safety will help you eliminate entire classes of bugs. However, this benefit comes with the cost of having to satisfy the compiler’s strict type checking for all of your code. With React applications, Props and State make this a bit more complicated.

For most SPFx applications, I prefer using es2015+ to build my React components. I don't like fighting with the TypeScript compiler for the bite-sized applications that SPFx was designed to build.

In this article, I will explain the steps to use es2015 or later to build SPFx applications.

There is no template for this, so we will use the React template that ships with the SPFx Yeoman generator. To get started create your application using:

mkdir my-project
cd my-project
yo @microsoft/sharepoint

You can accept all the defaults except for the question shown below. Choose React when asked “What framework would you like to use?”.

Next we need to add dependencies for Babel and friends to transpile our esNext code to JavaScript that most browsers can understand.

Go to a console window in the directory we created for our project and run the following command:

yarn add babel-loader babel-core babel-preset-env babel-preset-react babel-plugin-transform-class-properties -D

or with NPM:

npm install babel-loader babel-core babel-preset-env babel-preset-react babel-plugin-transform-class-properties --save-dev

All of these dependencies transpile our futuristic JavaScript. babel-plugin-transform-class-properties, however, deserves a special mention: it makes working with React easier. It implements a proposed addition to the EcmaScript specification that allows us to use class properties like this:

class MyComponent extends React.Component {
 handleSubmit = () => {
 // Code here
 }
}

With this syntax we can avoid cluttering the constructor with .bind(this) calls, as arrow functions automatically set the "this" context to the enclosing class.
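
For contrast, here is a minimal sketch of the same component without the class-properties transform, binding in the constructor the traditional way:

class MyComponent extends React.Component {
  constructor(props) {
    super(props);
    // Bind once here so `this` is the component instance inside the handler.
    this.handleSubmit = this.handleSubmit.bind(this);
  }

  handleSubmit() {
    // Code here
  }
}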

The webpack.config.js file is not exposed as part of the SPFx build process. However, an API is exposed that allows us to customize the build. We will leverage this from gulp. Add the following code to the gulpfile.js just above this line:

build.initialize(gulp);

The code to add:

build.configureWebpack.mergeConfig({
  additionalConfiguration: (generatedConfiguration) => {
    generatedConfiguration.module.rules.push({
      test: /\.js$/,
      exclude: /(node_modules|bower_components)/,
      use: {
        loader: 'babel-loader',
      }
    });
    return generatedConfiguration;
  }
});

The above code merges a new configuration object into the Webpack config. However, we still need to provide some configuration to Babel. To do that we will use the package.json file.

Add the following code to package.json:

"babel": {
    "presets": [
        "env",
        "react"
    ],
    "plugins": [
        "babel-plugin-transform-class-properties"
    ]
},

I prefer to add it right under the “scripts” section and above the dependencies, but you can put it anywhere at the top level of the object.

At this point we need to convert the TypeScript application to JavaScript.

First, rename all of the .ts or .tsx files to .js. Next, delete the "/app/components/IHelloWorldProps.ts" file and then delete the import statement for it in HelloWorldWebPart.js.

Next, go through and delete all of the type annotations from the .js files you renamed previously, and delete any import statements that pull in Props. After the above changes you should be able to run 'gulp serve'; at this point you can create your components using es2015+ and have them compile and work correctly in the browser. A sketch of what the converted component might look like follows.
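
For example, the main React component with the annotations and Props import removed might look roughly like this (a sketch based on the generator's HelloWorld template; class, file, and style names will vary with your project):

import * as React from 'react';
import styles from './HelloWorld.module.scss';

// Same component as the TypeScript template, minus type annotations
// and the IHelloWorldProps interface.
export default class HelloWorld extends React.Component {
  render() {
    return (
      <div className={styles.helloWorld}>
        <span>Welcome to SharePoint, {this.props.description}!</span>
      </div>
    );
  }
}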

Installing NuGet Packages in Azure Functions 2.X Runtime (PREVIEW)


Note: The 2.X version is currently in a preview phase, so please bear in mind that this information may eventually no longer be valid.

Hi folks,

In the current version of the Azure Functions runtime (1.x) we are able to install NuGet packages for our C# and F# functions. The way to do it is by adding the dependencies to a file named project.json. As you may be thinking, this is because when the 1.x version was designed everything was about project.json files. There's a really good documentation page where you can read about it: https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference-csharp

During the development of the .NET Core tooling there was a design change: the project.json project files are no longer supported, and the project system moved to XML files, which is the MSBuild-supported way. The new version of the runtime reflects this design change as well. You can find the differences between both systems here: https://docs.microsoft.com/en-us/dotnet/core/tools/project-json-to-csproj

Adding NuGet Packages to the Function

As we would do with 1.X, we need to create a file within our function folder to define the function's dependencies. Since we're using the new project system here, the file name needs to be function.proj.

In the file, inside an ItemGroup section, we add PackageReference items with the package name and the version. For example, here I'm installing Microsoft.Azure.Devices 1.5 and Newtonsoft.Json 10:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.Azure.Devices" Version="1.5.0"/>
    <PackageReference Include="Newtonsoft.Json" Version="10.0.3"/>
  </ItemGroup>
</Project>

Then just save the file (if you're editing it from the portal) and the Azure Functions runtime will install your dependencies and compile your function.
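
To check that the restore worked, a function that simply uses types from both packages will do. This is a minimal sketch of a C# script (run.csx) for an HTTP trigger; the IoTHubConnection app setting name is a placeholder assumption:

using System;
using System.Net;
using System.Net.Http;
using Microsoft.Azure.Devices;   // from the Microsoft.Azure.Devices 1.5.0 package
using Newtonsoft.Json;           // from the Newtonsoft.Json 10.0.3 package

public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
{
    // If the package restore succeeded, both usings above resolve and this compiles.
    var serviceClient = ServiceClient.CreateFromConnectionString(
        Environment.GetEnvironmentVariable("IoTHubConnection"));

    log.Info(JsonConvert.SerializeObject(new { packagesRestored = true }));
    return req.CreateResponse(HttpStatusCode.OK, "NuGet packages restored.");
}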

May the PaaS  Serverless be with you!

Carlos
