
Leveraging the new WinDbgX and Time-Travel-Trace –Script to list all access to files


 

WinDbg Preview, a.k.a. WinDbgX, aesthetically looks like a marriage between Visual Studio (VS) and WinDbg, even though VS and WinDbg have little in common. For me, this is the good news. The bad news is that the support for managed code is as limited as WinDbg’s. You can download the preview here: https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/debugger-download-tools

 

The most impressive feature of the new debugger is by far the ability to create trace files that can later be analyzed as if they were a live debugging session. This is called Time-Travel Debugging (TTD). The idea is that you record an actual live process (at a performance penalty) and later debug the recording, going back and forth in time, hence the name. A good use for this is to capture a recording on a production server and analyze it later in another environment with access to source code and symbols. If you are a software provider, this is golden. Sometimes only a particular environment triggers the error, and your logs do not show much that can help you identify the problem. Recording in one place and debugging in your development lab means you do not have to ship your private symbols and source code to a customer’s environment. You may learn more here: https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/time-travel-debugging-overview

 

The analysis is done via a LINQ-like syntax that can be used directly from the debugger command window or automated via a classic debugger extension or a JavaScript extension. In this post, I am presenting a proof-of-concept script that lists every instance where a file is accessed. More on WinDbg scripting here: https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/javascript-debugger-scripting
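
For example, once a trace is loaded you can query the recorded calls straight from the command window. A minimal sketch (these are the same TTD objects the script below relies on):

dx @$cursession.TTD.Calls("KERNELBASE!CreateFileW").Count()
dx -g @$cursession.TTD.Calls("KERNELBASE!CreateFileW")

The first command counts every recorded call to CreateFileW; the second renders the calls in a grid so you can inspect parameters and time positions.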

 

Proof of concept: OpenFileList.js (this script only works against TTD traces – .run files)

"use strict";
 
    var FileAccess =
    {
        FILE_READ_DATA: 0x0001,    /* file & pipe */
        FILE_LIST_DIRECTORY: 0x0001,    /* directory */
        FILE_WRITE_DATA: 0x0002,    // file & pipe
        FILE_ADD_FILE: 0x0002,    // directory
 
        FILE_APPEND_DATA: 0x0004,    // file
        FILE_ADD_SUBDIRECTORY: 0x0004,    // directory
        FILE_CREATE_PIPE_INSTANCE: 0x0004,    // named pipe
 
 
        FILE_READ_EA: 0x0008,    // file & directory
 
        FILE_WRITE_EA: 0x0010,    // file & directory
 
        FILE_EXECUTE: 0x0020,    // file
        FILE_TRAVERSE: 0x0020,    // directory
 
        FILE_DELETE_CHILD: 0x0040,    // directory
 
        FILE_READ_ATTRIBUTES: 0x0080,    // all
 
        FILE_WRITE_ATTRIBUTES: 0x0100,    // all
 
        DELETE: 0x10000,
        WRITE_DAC: 0x40000,   
        WRITE_OWNER: 0x80000, 
        SYNCHRONIZE: 0x100000, 
        ACCESS_SYSTEM_SECURITY: 0x11F01FF,
 
 
        GENERIC_READ: 0x80000000,
        GENERIC_WRITE: 0x40000000,
        GENERIC_EXECUTE: 0x20000000,
        GENERIC_ALL: 0x10000000
    };
 
    var FileShare = 
    {
        FILE_SHARE_NONE: 0,
        FILE_SHARE_READ: 0x00000001,
        FILE_SHARE_WRITE: 0x00000002,
        FILE_SHARE_DELETE: 0x00000004
 
    };
 
function PrintFileAccess(Access)
{
    var flag = "";
    if(Access == 0)
      return "None";
    if(Access & FileAccess.FILE_ADD_FILE)
       flag = "FILE_ADD_FILE";
    if(Access & FileAccess.FILE_ADD_SUBDIRECTORY)
       flag = flag.concat(" FILE_ADD_SUBDIRECTORY");
    if(Access & FileAccess.FILE_APPEND_DATA)
       flag = flag.concat(" FILE_APPEND_DATA");
    if(Access & FileAccess.FILE_CREATE_PIPE_INSTANCE)
       flag = flag.concat(" FILE_CREATE_PIPE_INSTANCE");
    if(Access & FileAccess.FILE_DELETE_CHILD)
       flag = flag.concat(" FILE_DELETE_CHILD");
    if(Access & FileAccess.FILE_EXECUTE)
       flag = flag.concat(" FILE_EXECUTE");
    if(Access & FileAccess.FILE_LIST_DIRECTORY)
       flag = flag.concat(" FILE_LIST_DIRECTORY");
    if(Access & FileAccess.FILE_READ_ATTRIBUTES)
       flag = flag.concat(" FILE_READ_ATTRIBUTES");
    if(Access & FileAccess.FILE_READ_DATA)
       flag = flag.concat(" FILE_READ_DATA");
    if(Access & FileAccess.FILE_READ_EA)
       flag = flag.concat(" FILE_READ_EA");
    if(Access & FileAccess.GENERIC_ALL)
       flag = flag.concat(" GENERIC_ALL");
    if(Access & FileAccess.GENERIC_EXECUTE)
       flag = flag.concat(" GENERIC_EXECUTE");
    if(Access & FileAccess.GENERIC_READ)
       flag = flag.concat(" GENERIC_READ");
    if(Access & FileAccess.GENERIC_WRITE)
       flag = flag.concat(" GENERIC_WRITE");
    if(Access & FileAccess.FILE_TRAVERSE)
       flag = flag.concat(" FILE_TRAVERSE");
    if(Access & FileAccess.FILE_WRITE_DATA)
       flag = flag.concat(" FILE_WRITE_DATA");
    if(Access & FileAccess.FILE_WRITE_EA)
       flag = flag.concat(" FILE_WRITE_EA");
    if(Access & FileAccess.DELETE)
       flag = flag.concat(" DELETE");
    if(Access & FileAccess.WRITE_DAC)
       flag = flag.concat(" WRITE_DAC");
    if(Access & FileAccess.WRITE_OWNER)
       flag = flag.concat(" WRITE_OWNER");
    if(Access & FileAccess.SYNCHRONIZE)
       flag = flag.concat(" SYNCHRONIZE");
    if(Access & FileAccess.ACCESS_SYSTEM_SECURITY)
       flag = flag.concat(" ACCESS_SYSTEM_SECURITY");
 
 
 
    return flag;
}
 
function PrintFileShare(Share)
{
    var flag = "";
    if(Share == 0)
      return "None";
    if(Share & FileShare.FILE_SHARE_DELETE)
       flag = "FILE_SHARE_DELETE";
    if(Share & FileShare.FILE_SHARE_READ)
       flag = flag.concat(" FILE_SHARE_READ");
    if(Share & FileShare.FILE_SHARE_WRITE)
       flag = flag.concat(" FILE_SHARE_WRITE");
    return flag;
       
}
 
function findFileHandle()
{
    var fileOper = host.currentSession
        .TTD.Calls("KERNELBASE!CreateFileW", "KERNELBASE!CreateFileA");
    var locations = [];
    for(var loc of fileOper)
    {
        var posObj = {Start: loc.TimeStart, FileNameAddr: loc.Parameters[0],
              Access: loc.Parameters[1], ShareMode: loc.Parameters[2],
              Result: loc.ReturnAddress, Ansi: loc.Function.includes("CreateFileA") };
        
        locations.push(posObj);
        
    }
    return locations;
}  
 
 
function printFileName(location)
{
    host.diagnostics.debugLog("------------------------------------------------\n");
 
    location.Start.SeekTo();
 
    host.diagnostics.debugLog((location.Result > 0 ? "(SUCCESS) " : "(Failure) ") +
                       (location.Ansi ? host.memory.readString(location.FileNameAddr) : host.memory.readWideString(location.FileNameAddr)) + "\n");
    var access = host.evaluateExpression(location.Access.toString(16)+" & 0xffffffff")
    var share = host.evaluateExpression(location.ShareMode.toString(16)+" & 0xffffffff")
 
    // host.diagnostics.debugLog(access.toString(16) + " " + share.toString(16)+" ");
                        
    host.diagnostics.debugLog("Access: ["+access.toString(16)+"] "+PrintFileAccess(access)+" Share: ["+
                    share.toString(16) + "] "+PrintFileShare(share) + "\n");  
    return;                   
}
 
function invokeScript()
{
    host.diagnostics.debugLog("Collecting File Access information...\n");
 
 
 
    var currentPosition = host.currentThread.TTD.Position;
 
    var locations = findFileHandle();
    for (var location of locations)
    {
        printFileName(location);
    }
 
    currentPosition.SeekTo();
}
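
To try it, save the script as OpenFileList.js, open the .run trace in WinDbg Preview, and load and run it from the command window – roughly as follows (the path is hypothetical):

.scriptload C:\scripts\OpenFileList.js
.scriptrun C:\scripts\OpenFileList.js

.scriptrun executes the script’s invokeScript function, which walks every recorded CreateFile call and produces the listing shown next.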
 
 
 

 

This is an example of the output:

Collecting File Access information...
------------------------------------------------
Setting position: 1DB4F:4D
(2370.2184): Break instruction exception - code 80000003 (first/second chance not available)
Time Travel Position: 1DB4F:4D
KERNELBASE!CreateFileW:
00007ffe`d0258460 4883ec58        sub     rsp,58h
Collecting File Access information...
------------------------------------------------
Setting position: 1DB4F:4D
(SUCCESS) C:\windows\Microsoft.Net\assembly\GAC_MSIL\Microsoft.SharePoint\v4.0_15.0.0.0__71e9bce111e9429c\Microsoft.SharePoint.dll
Access: [0x4c4b400]  WRITE_DAC ACCESS_SYSTEM_SECURITY Share: [0x1]  FILE_SHARE_READ
------------------------------------------------
Setting position: 42094:4D
(SUCCESS) C:\windows\Microsoft.Net\assembly\GAC_64\System.Web\v4.0_4.0.0.0__b03f5f7f11d50a3a\System.Web.dll
Access: [0x4c4b400]  WRITE_DAC ACCESS_SYSTEM_SECURITY Share: [0x1]  FILE_SHARE_READ
------------------------------------------------
Setting position: 4211F:4D
(SUCCESS) C:\windows\Microsoft.Net\assembly\GAC_64\System.Web\v4.0_4.0.0.0__b03f5f7f11d50a3a\System.Web.dll
Access: [0x4c4b400]  WRITE_DAC ACCESS_SYSTEM_SECURITY Share: [0x1]  FILE_SHARE_READ
------------------------------------------------
Setting position: 1DBCD:4D
(SUCCESS) C:\windows\Microsoft.Net\assembly\GAC_MSIL\Microsoft.SharePoint\v4.0_15.0.0.0__71e9bce111e9429c\Microsoft.SharePoint.dll
Access: [0x4c4b400]  WRITE_DAC ACCESS_SYSTEM_SECURITY Share: [0x1]  FILE_SHARE_READ
------------------------------------------------
Setting position: 414C5:4D
(SUCCESS) C:\windows\Microsoft.Net\assembly\GAC_MSIL\Microsoft.SharePoint.Publishing\v4.0_15.0.0.0__71e9bce111e9429c\Microsoft.SharePoint.Publishing.dll
Access: [0x4c4b400]  WRITE_DAC ACCESS_SYSTEM_SECURITY Share: [0x1]  FILE_SHARE_READ
------------------------------------------------
Setting position: 4219E:4D
(SUCCESS) C:\windows\Microsoft.Net\assembly\GAC_64\System.Web\v4.0_4.0.0.0__b03f5f7f11d50a3a\System.Web.dll
Access: [0x4c4b400]  WRITE_DAC ACCESS_SYSTEM_SECURITY Share: [0x1]  FILE_SHARE_READ
------------------------------------------------
Setting position: 1DC4B:4D
(SUCCESS) C:\windows\Microsoft.Net\assembly\GAC_MSIL\Microsoft.SharePoint\v4.0_15.0.0.0__71e9bce111e9429c\Microsoft.SharePoint.dll
Access: [0x4c4b400]  WRITE_DAC ACCESS_SYSTEM_SECURITY Share: [0x1]  FILE_SHARE_READ
------------------------------------------------
Setting position: 4221C:4D
(SUCCESS) C:\windows\Microsoft.Net\assembly\GAC_64\System.Web\v4.0_4.0.0.0__b03f5f7f11d50a3a\System.Web.dll
Access: [0x4c4b400]  WRITE_DAC ACCESS_SYSTEM_SECURITY Share: [0x1]  FILE_SHARE_READ
------------------------------------------------
(...)

Custom directory enumeration in .NET Core 2.1


As discussed in my previous post, this post will cover the new enumeration extensibility points.

The extensibility API we're providing is meant to allow building high-performance custom enumerators. The performance gains primarily come through utilizing the new Span types and ref structs to allow relatively safe access to native data, avoiding unnecessary allocations. The API has been designed to be as simple as possible while still maintaining the low-allocation, high-performance primary design goals.

FileSystemEntry

File system data is wrapped in the System.IO.Enumeration.FileSystemEntry struct. It provides the same set of data that the FileSystemInfo classes do, along with some enumeration specific data:
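
A rough sketch of what that looks like from inside an enumeration callback (a sketch; the members shown are illustrative, not exhaustive):

using System;
using System.IO;
using System.IO.Enumeration;

static class EntrySketch
{
    // A FindPredicate-shaped helper: decide whether an entry is interesting.
    static bool IsInterestingEntry(ref FileSystemEntry entry)
    {
        // Span-based name: nothing is allocated until you call ToString().
        ReadOnlySpan<char> name = entry.FileName;

        // Enumeration-specific conveniences (cheaper than going through Attributes).
        if (entry.IsDirectory || entry.IsHidden)
            return false;

        // Length, time stamps, etc. are fetched lazily on Unix.
        // entry.ToFullPath() / entry.ToFileSystemInfo() are available when you need the classic types.
        return entry.Length > 0 && !name.IsEmpty;
    }
}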

FileSystemEnumerable

There are a few ways to get FileSystemEntry data. The simplest way is through the System.IO.Enumeration.FileSystemEnumerable<TResult> class.

Basic usage isn't too complicated. Let's suppose you wanted to get just the names (not the full paths) of all the files and directories in a given directory:
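
A minimal sketch of that (the class and method names here are ours, not part of the library):

using System;
using System.Collections.Generic;
using System.IO;
using System.IO.Enumeration;

static class EnumerationSamples
{
    // Names only, no full paths, of every file and directory in 'directory'.
    public static IEnumerable<string> GetFileSystemNames(string directory)
        => new FileSystemEnumerable<string>(
            directory,
            (ref FileSystemEntry entry) => entry.FileName.ToString());
}

The remaining snippets below are written as additional members of this hypothetical EnumerationSamples class.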

You can, of course, write the equivalent functionality using the existing APIs that return FileSystemInfo objects (e.g. getting .Name off of each), but it comes with significant cost. Writing a custom enumerable only takes a single allocation for each MoveNext (the filename). Getting the Info objects via existing APIs will allocate much more (the Info class, the full path, etc.) and will be slower, particularly on Unix as it will cause another fstat call to fill out data you won't ever use.

To take it another step, let's filter to just files (no directories):
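
One way to do that is to attach a ShouldIncludePredicate (again, a sketch on the class above):

public static IEnumerable<string> GetFileNames(string directory)
    => new FileSystemEnumerable<string>(
        directory,
        (ref FileSystemEntry entry) => entry.FileName.ToString())
    {
        // Only plain files come back; directories are skipped.
        ShouldIncludePredicate = (ref FileSystemEntry entry) => !entry.IsDirectory
    };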

Here is another, potentially more useful, example. Let's say you want to create a helper that allows you to get all files with a set of given extensions:
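
A sketch of such a helper, matching extensions against the span-based file name to avoid extra allocations (argument validation omitted):

public static IEnumerable<string> GetFilesWithExtensions(
    string directory, bool recursive, params string[] extensions)
    => new FileSystemEnumerable<string>(
        directory,
        (ref FileSystemEntry entry) => entry.ToFullPath(),
        new EnumerationOptions { RecurseSubdirectories = recursive })
    {
        ShouldIncludePredicate = (ref FileSystemEntry entry) =>
        {
            if (entry.IsDirectory)
                return false;
            foreach (string extension in extensions)
            {
                // e.g. ".dll"; the span-based EndsWith avoids creating a string per entry.
                if (entry.FileName.EndsWith(extension, StringComparison.OrdinalIgnoreCase))
                    return true;
            }
            return false;
        }
    };

Usage would look something like GetFilesWithExtensions(@"C:\temp", recursive: true, ".dll", ".exe").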

Let's suppose you want to count the number of files in a directory. With the new APIs you can write a solution that cuts allocations by 200x or more (yes, by a factor of 200).
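
For example (int is used purely as a throwaway transform result; the next paragraph explains why):

public static int CountFiles(string directory, bool recursive)
{
    var enumerable = new FileSystemEnumerable<int>(
        directory,
        (ref FileSystemEntry entry) => 1,
        new EnumerationOptions { RecurseSubdirectories = recursive })
    {
        ShouldIncludePredicate = (ref FileSystemEntry entry) => !entry.IsDirectory
    };

    int count = 0;
    foreach (int one in enumerable)
        count += one;
    return count;
}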

The example is a little strange as we need some sort of output transform. I picked int as the type and returned 1, but it could be anything, including string and string.Empty or null.

Very similar to the above, we could total up file sizes.
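
Roughly along these lines, transforming each entry into its length instead of a constant:

public static long CountFileBytes(string directory, bool recursive)
{
    var fileSizes = new FileSystemEnumerable<long>(
        directory,
        (ref FileSystemEntry entry) => entry.Length,
        new EnumerationOptions { RecurseSubdirectories = recursive })
    {
        ShouldIncludePredicate = (ref FileSystemEntry entry) => !entry.IsDirectory
    };

    long total = 0;
    foreach (long size in fileSizes)
        total += size;
    return total;
}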

The performance characteristics of this last example differ quite a bit between Unix and Windows. Allocations are roughly equivalent, but Unix does not provide the length when enumerating directories, so we have to make another call to get the file length. .NET was designed on Windows, so the contents of FileSystemEntry mirror FileSystemInfo (and therefore match what Windows gives back during enumeration). Unix doesn't give back much more than the file name. To allow for this, a number of the properties are lazy in the Unix implementation. Time stamps, length, and attributes are all in this bucket. Even the filename is lazy, in that we don't convert the raw UTF-8 data to char until you access it. These details are important to know for multiple reasons:

  1. Calling properties you don't strictly need can have non-trivial cost
  2. Sitting on the struct without calling properties can give different results depending on how long you wait (notably with time stamps; we go out of our way to keep attributes constant)
  3. Using IsDirectory and IsHidden is better than using Attributes to check those states

FileSystemEnumerator

FileSystemEnumerable is a simple IEnumerable wrapper around FileSystemEnumerator. It is meant to be simpler to use and to limit the number of types you need to touch, thanks to its delegate-based model. FileSystemEnumerator has a few more complicated options that can be used for even more advanced scenarios.

Using the enumerator directly you can (if desired) more easily track when directories finish and have more control over errors. As the errors are native error codes, you would need to check the platform in addition to the code to know how to respond. It is not intended to be easy or commonly needed.
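
A skeleton of a derived enumerator, showing the overrides relevant to the directory-completion and error-handling points above (same usings as the earlier samples; treat it as a sketch):

class FullPathEnumerator : FileSystemEnumerator<string>
{
    public FullPathEnumerator(string directory, EnumerationOptions options = null)
        : base(directory, options) { }

    protected override string TransformEntry(ref FileSystemEntry entry)
        => entry.ToFullPath();

    protected override bool ShouldIncludeEntry(ref FileSystemEntry entry)
        => !entry.IsDirectory;

    // Called when every entry in a given directory has been handed out.
    protected override void OnDirectoryFinished(ReadOnlySpan<char> directory)
        => Console.WriteLine($"Finished with {directory.ToString()}");

    // 'error' is a native error code; return true to ignore it and keep enumerating.
    protected override bool ContinueOnError(int error) => false;
}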

FileSystemName

This helper class provides filename matching methods.

Both Matches* methods allow escaping with the forward slash character '/'.

MatchesWin32Expression() is a little complicated. It matches according to [MS-FSA] 2.1.4.4, "Algorithm for Determining if a FileName Is in an Expression". This is the algorithm that Windows actually uses under the covers (see RtlIsNameInExpression). If you want to match the way Win32 does, you first have to call TranslateWin32Expression() to get your '*' and '?' translated to the appropriate '>', '<', and '"' characters. It is there if you want to match in the Win32 style. MatchesSimpleExpression() is the recommended style of matching; Win32 rules aren't easy to intuit.
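
For instance, combined with the predicate approach above (the "*.log" pattern and method name are just for illustration):

public static IEnumerable<string> GetLogFiles(string directory)
    => new FileSystemEnumerable<string>(
        directory,
        (ref FileSystemEntry entry) => entry.ToFullPath())
    {
        // Simple matching: '*' and '?' behave the way most people expect.
        ShouldIncludePredicate = (ref FileSystemEntry entry) =>
            !entry.IsDirectory
            && FileSystemName.MatchesSimpleExpression("*.log", entry.FileName)
    };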

That's the summary of the changes we've introduced in 2.1.

FAQ

Why is this so complicated?

This isn't intended to be a common-use API. The existing APIs will be kept, maintained, and extended based on customer demand. We don't want to:

  1. Have scenarios blocked waiting for new APIs to work their way through the system
  2. Have to write "normal" APIs to address more corner cases

In order to make this a usable extension point we have to sacrifice some usability to get the necessary characteristics. Note that what people build on this will directly impact our future designs of the standard, usability focused, APIs.

Why are you using Linq in your examples?

For example clarity. Some of the examples above could be optimized further.

Why aren't you providing an X matcher?

We want to only provide matchers that have broad applicability. Based on feedback we can and will consider adding new matchers in the future.

How do I get other platform specific data?

This is something we're investigating for future improvements. We might, for example, expose the UTF-8 data via another interface off of the entry data (or some other mechanism).

How to Connect Your Bot App with Microsoft Cortana


You can create and deploy a Bot app, connect it to Cortana, and test the app with Cortana, all through the Azure portal and your Windows 10 PC. Check out the steps outlined in “Create your first skill”.

For .NET developers using Visual Studio, you can develop a Bot app by following instructions outlined in “Create a bot with the Bot Builder SDK for .NET” and  “Develop bots with Bot Builder”. With “Bot Builder SDK v4”, which is being actively developed, developers can create bots in other popular programming languages like JavaScript, Python, and Java.

Once you’ve finished testing with the BotFramework-Emulator running locally, you will need to publish the Bot app to Azure, create a Web App Bot or Bot Channels Registration, and then connect it to Cortana or other channels.


 

Publish your Bot to Azure

With Visual Studio 2017, you can publish your Bot directly to your Azure portal. You can either create a new Azure App Service or deploy over an existing one.


 

Create Bot Channels Registration

On the Azure portal, search on “Bot” and select Bot Channels Registration. Make sure that you enter the Messaging endpoint. Note: If you haven’t published the app to the portal yet, you can leave it blank and complete it later on.


 

Update your Visual Studio Bot project with Microsoft App ID and Password

Select the Bot Channel Registration (listed under Type in the resource group of your Azure portal), and select Settings as shown below. If you create the web app bot directly from the Azure portal, the service is listed under “Web App Bot” type.

From the Setting page, you can update the Messaging endpoint. Also, you can see the Microsoft App ID here.

To find the Microsoft App Password for the Bot, click on Manage. This opens the application registration portal, which is a separate portal but works seamlessly with the Azure portal now.


 

Because the password is partially hidden, you will need to create a new one, make a note of it, and delete the old one.


Now, go back to Visual Studio and update the App ID and Password values in the web.config file. Then re-publish the app to Azure.


 

Connect your Bot app to Cortana

Test your bot app with web chat, and fix any issue you find.


Then select Channels, select Cortana, and register the bot with the organization you previously created in the Knowledge Store.


If this is the first time you are using the knowledge store portal, you can create an organization there. Also, you can review the Cortana channels you have created.



Test your Bot with Cortana on your PC or Harman Kardon Invoke device

You can either speak to Cortana or type in something like “Ask BenjaminChat” to initiate your Bot app. From there, you can speak or type in your questions and commands that your Bot app knows how to process.


That’s all. It’s fun!

Capture a NETSH network trace


Here are the official details on this one.  I needed to do this recently and realized that I had never written a post about it.  Although we are moving into the cloud and this isn’t needed as much anymore, IT pros who continue to work with Windows Server in their own data centers might find it useful.

See also these articles:

In my scenario there is an outgoing request, server side, that is failing.  I.e., a client calls an API on the server, and that API makes a request that leaves the server and is having some problems.  I access the server and execute this command.  All commands are shown in Figure 1.

netsh trace start scenario=InternetClient,InternetServer,NetConnection globalLevel=win:Verbose capture=yes report=yes traceFile=C:\temp\trace\trace001.etl


Figure 1, capturing a NETSH TRACE to find out why there is a network connection issue

Here are the details of the scenarios I used, see Figure 2 for a complete list.

  • InternetClient –> Diagnose web connectivity issues
  • InternetServer –> Troubleshoot server-side web connectivity issues
  • NetConnection –> Troubleshoot issues with network connections

Here are some other optional parameters I used:

    • capture –> Specifies whether packet capture is enabled in addition to trace events. If unspecified, the default entry for capture is no.
    • persistent –> Specifies whether the tracing session resumes upon restarting the computer, and continues to function until the “netsh trace stop” command is issued. If unspecified, the default entry for persistent is no.
    • maxSize –> The default is roughly 250 MB; if set to 0, there is no maximum. An example using these parameters follows this list.
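
For example, a capture that survives a reboot and has no size cap might be started like this (the path is hypothetical):

netsh trace start scenario=NetConnection capture=yes persistent=yes maxSize=0 traceFile=C:\temp\trace\trace002.etl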

Next, after the NETSH TRACE is started, reproduce the issue.  Then execute the following command:

netsh trace stop

To read about how I analyzed the trace see here.

To view all the NETSH TRACE scenarios enter the following command, see Figure 2.

netsh trace show scenarios


Figure 2, how to find NETSH TRACE scenarios

To find the values for setting the global-level verbosity, execute the following command, see Figure 3.

netsh trace show globalkeywordsandlevel


Figure 3, how to find NETSH TRACE verbosity settings

Analyze NETSH traces with Wireshark or Network Monitor, convert ETL to CAB


I wrote a post about how I captured a NETSH trace here: "Capture a NETSH network trace".  I like to use Wireshark to analyze my network traces, and this post describes how I analyzed a NETSH .ETL trace file in Wireshark.

NOTE:  Wireshark is not a Microsoft product; it is a third-party tool.

Basically, I exported the .ETL file into a .CAP file using Microsoft Message Analyzer, downloadable from here.

Here is a link to the archived version of Network Monitor 3.4.

First, after installing Microsoft Message Analyzer, open it and select File –> Open –> From File Explorer, as seen in Figure 1.


Figure 1, how to open a NETSH .ETL trace in Microsoft Message Analyzer for export to Wireshark or Network Monitor

Select the ETL trace and open it in Microsoft Message Analyzer.  Once loaded, select File –> Save As and then Export, as shown in Figure 2.


Figure 2, how to export a NETSH .ETL trace for Wireshark or Network Monitor

Once exported, open the .CAP file in Wireshark or Network Monitor.  See my other post here that explains how I analyzed the trace.

How to analyze a trace taken using NETSH TRACE


I wrote the article "Capture a NETSH network trace" here, where I discussed how to capture a NETSH trace; now I will discuss how I analyzed it.

I wrote another here that explains how to convert the ETL into a CAP file so it can be analyzed in Wireshark or Network Monitor.  "Analyze NETSH traces with Wireshark or Network Monitor, convert ETL to CAB"

Here is also a good article on analyzing the trace –> Introduction to Network Trace Analysis Using Microsoft Message Analyzer.

Here are the code snippets I am tracing.  I know they are invalid and will not work, which makes the failures easier to spot, but the symptoms would be similar in this context.

using (TcpClient client = new TcpClient())
{
  try
  {
    client.Connect("123.123.123.112", 80);
    client.Close();
  }
  catch (Exception ex)
  {
    //do something with the ex.message and stack trace  
  }
}

using (var client = new HttpClient())
{
  try
  {
    var result = await client.GetStringAsync("http://www.contoso.com");
  }
  catch (Exception ex)
  {
    //do something with the ex.message and stack trace
  }
}

The first failures I expect to see in my NETSH trace happen when I try to make a TCP connection to an invalid or unavailable IP address.  When I look in Wireshark I see Figure 1.


Figure 1, Wireshark, netsh trace, TCP

I know that in my code I actually loop 5 times, and therefore I see 5 TCP connection starts, the GREEN lines in Figure 1.  After each of the GREEN lines I see 2 attempted retransmissions; then it fails out, and we conclude the resource at the provided IP is not available or accessible.  I can see a similar pattern in Message Analyzer, Figure 2.


Figure 2, Message analyzer, netsh trace, TCP

Next for the HTTP Client calls I see Figure 3 in Wireshark.


Figure 3, Wireshark, netsh trace, HTTP/DNS

The reason is right there in the Info column: the DNS lookup resulted in ‘No such name’.  In Message Analyzer, I had to compare a good DNS lookup with the bad one and look at the differences.  What I found, and I was happy to learn something on this one (Figure 4), is that there are codes called RCode and NXDomain codes.  You can read RFC 1035 here and search for RCODE; you will find out that RCode=3 and NXDomain(3) mean:  “Name Error - Meaningful only for responses from an authoritative name server, this code signifies that the domain name referenced in the query does not exist.”, and that makes 100% sense.


Figure 4, Wireshark, netsh trace, HTTP/DNS

It looks like, just as when debugging code with WinDbg, the analysis requires some interpretation, and the root cause does not always jump right out.  However, the trace can at least tell you whether or not there is an issue with the network communications.  Both of the above examples are network failures, but they are caused by my code calling invalid URLs and IPs.  Perhaps in the future, now that I have added this skill to my repertoire, I will use it more, be able to match more patterns, and share more on this topic.

Partner Map for the Retail and Marketing Sector


A small map of the partners we work with in the retail and marketing sector.

This list is not exhaustive; feel free to contact me if you would like to be included.

List of companies mentioned:

ABTasty
Adobe
AFS
Alkemics
AP2S/Openfield
Aprimo
Aras
Azalead
Beezup
Blue-yonder
Braineet
Brainsonic
Cegid
cognitivescale
Conexance
Damco
Digitaleo
DocuSign
Energisme
EPIServer
Esri
Externis
FlintFox
Freedomplay
Genetec
Geoconcept
Hitachi
HMY
Icertis
Iconics
IER
Ingenico
Instore
Intershop
Kantar retail
Lakeba / 360
Lakeba / Shelfie
LiveTiles
Lokad
Manahatan Associate
Mojix
Moskitos
Multivote
NealAnalytics
Opencell
OpenText
Orckestra
Osisoft
Platform.sh
Plexure
Powell365
Powershelf / Avaretail
Pros
RateGain
Rayonnance
Realytics
SAP
SBSoft
Schneider Electric
SES Imagotag
SiteCore
Sprinkler
Tilkee
TokyWoky
TransparencyOne
TxTSoftware
Vekia
WelcomeTrack / Vigicolis
SouthPigalle.io
Dotsoft
Orchestra

STEP Program on the Microsoft Educator Community


STEP offers the gold standard in teacher preparedness for the use of technology in the classroom.

The Student Teacher Education Program (STEP) has been developed to:

  • Prepare pre-service teachers for a career that is resilient to the ever-changing landscape of technology in education
  • Encourage learning design that will future-proof their own students
  • Provide them with the tools to manage workload by working smarter

 


This program is intended for pre-service teachers in any form of initial teacher training:

  • Degree or Post-Graduate program at a university
  • Teacher training college
  • On-the-job training program

The robust curriculum is also appropriate for newly qualified teachers in their first placements or as a complete, school-wide Continuing Professional Development (CPD) program.


Top tips:

  • Pace Yourself! It can be tempting to rush through to get your badge but for the learning to be meaningful, it must be applied.
  • Use it or lose it! Use the technology for your coursework, assignments and plan it into the learning activities you create for your students
  • Eye on the Prize! Don't lose sight of the end goal of learning how to effectively integrate technology into teaching and learning.

 


Leeds Beckett University's STEP Model

Microsoft partnered with Leeds Beckett University's Carnegie School of Education to ensure that the curriculum on the Microsoft Student Teacher Education Program is rigorous and relevant to the needs of student teacher training programs.

At Leeds Beckett, the STEP program will be implemented over a span of 3 years, beginning with the BA Primary Education courses with recommendation for Qualified Teacher Status (QTS). Find more information about Leeds Beckett's education courses on their website.

If you have inquiries regarding this implementation model, please contact msftSTEP@microsoft.com.

 


To achieve the STEP badge, you need to complete the 40 hours of learning in the curriculum and pass the associated assessments built into the courses.

 

Click here to complete the course.


Late night Blockchain thoughts


The beginning

In the past few months I’ve been involved in several activities whose subject was one of the most trending and discussed topic in the tech world: Blockchain.

To be honest, the first time I was asked to work with this “new” technology, my first thought was:

Blockchain? That stuff behind the hacker crypto-currency? COOL!

Obviously I had no clue about the technology behind Bitcoin, and my knowledge at that time was based on some lectures and chats with colleagues.

Many people talk about blockchain thinking they have a clear vision and awareness of it, but the (sad) truth is that most of them have just a slight knowledge built from the internet. Well, just to clarify, I’m not saying that I am an expert or that only experts can talk and discuss it, but keep in mind that complex topics like blockchain require at least some basic learning, which is, in my opinion, more than reading blogs or websites on the internet.

Why this post

The reason why I've decided to write a blog post about blockchain is not that I want to add a(nother) quick-start or easy explanation of the technology to the internet archive. So why should you continue to read it?

As I said before, I’ve had the chance to study, work, and learn from the field what this technology may offer (a lot of potential opportunities), but of course also when it would be better to invest in something different.

So my goal is to give you some food for thought and to feed your curiosity by sharing my honest feedback and thoughts.

The Five Ws

OK, let’s start the discussion using the journalistic rule of the Five Ws, which, I hope, will give us a more critical approach:


WHO?

The first questions you should ask when you start studying or working with a technology are:

Who decided to use it?

You and your team, both technical and business, are the ones responsible for the difficult decision of which technology to invest in, and to use, during the analysis and architecture process. Even if you or someone on your team is not 100% confident in the chosen technology, there is a common thought that everything is mutable or replaceable; well, consider that if you decide to remove or replace the blockchain components from your ecosystem, you will probably need to re-engineer your whole ecosystem. So yes, you can break your relationship with blockchain, but it will be an expensive divorce 🙂

Who is going to use it? 

The simplest and most abstract definition of a blockchain is “a distributed database of transactions where each transaction (or block) encapsulates some information from the previous transaction, forming a chain of transactions”. Keep in mind that even if the literature definition contains the word database, a blockchain is built to be an irreversible database! The most correct term to use is ledger, and the “right to be forgotten” is incompatible with the immutable laws of ledgers! So be sure that the users of your system agree with these laws.

 

WHAT? 

Looking at the second “Who” question, you will probably better understand why it’s crucial to have a deep knowledge of what exactly blockchain is before starting any project that is more than an experiment. So here's a list of questions you should be able to answer:

  1. What is hashing?
  2. What is a Ledger?
  3. What is a distributed database/ledger?
  4. What is a shared database/ledger?
  5. What does mining mean?
  6. What is consensus?

If you can’t answer all of the questions above in detail, it’s clear that you need to study this topic more deeply.


WHEN?

Blockchain is not as young as you may believe! The first work on a cryptographically secured chain of blocks was described in 1991, and the most famous public blockchain project (Bitcoin) was conceptualized in 2008. This is just to say that if you feel some pressure about timing, or you can’t wait to start because tomorrow “it will be too late”, relax and take your time to form a complete picture and to avoid bad situations. Remember that it’s never too late to make the right decision.


WHERE? 

When talking about a technology like blockchain, distributed and decentralized, it’s obvious (at least to me) that the best place to deploy it is the cloud. If you plan to design and distribute a blockchain on your computer or your local server, you are probably wasting your time in the worst way I’ve ever seen 🙂

Furthermore, all of the most important cloud providers are investing heavily in this technology, and you should invest in the cloud too, because it will save you a lot of time and energy!

Another important consideration is that blockchain solutions are more than just Distributed Ledger Technology, which is actually just one piece of the whole puzzle! Imagine, for example, all the components involved in a supply chain scenario like the one below:

So when you ask “where” it’s very important to imagine where to place your whole system, and I think that Microsoft has one of the best Blockchain solution approach on the market.

WHY?

And here’s the last, but not least, question: why should I use blockchain?

Before showing you a very useful graphic, I want to share with you the most important thing I’ve learnt in these months: even if you are able to use blockchain in every market industry, this does not mean that it’s always the best choice! So, how do you make the right decision? Try to check how many of the following requisites you need for your project:

Conclusion

I hope you enjoyed my post, and I'm open to your feedback and opinions. As I've already said, the goal of this discussion is to share my experience with you and, hopefully, to give you some hints about this topic.

Why is the daylight saving time cutover time 1 millisecond too soon in some time zones?



If you dig into the Windows time zone database, you'll see that some time zones list the moment when the time zone transitions into or out of daylight saving time as 23:59:59.999 instead of midnight. Why a millisecond too soon?


My colleague Matt Johnson spends a lot of time working with time zones and explains many of the strange and wacky artifacts of time zones in this StackOverflow answer.

In it, I learned these fascinating facts:




  • If the official time zone change occurs at midnight, Windows intentionally misreports the change as occurring at 23:59:59.999 to work around code which "incorrectly used <= instead of < to evaluate the transition", which means that they would unwittingly bounce the date back and forth, resulting in much havoc.

  • Governments will from time to time announce time zone changes on very short notice,¹ leaving computer programmers scrambling to get the time zone data updated in time, or applying workarounds (like "Use this other time zone for now") until the real fix can be deployed.

  • I was aware that Brazil and Israel change their time zone rules every year, but I wasn't aware that Morocco changes its clocks four times a year, and this Antarctic research station changes its clocks three times a year.



Read Matt's StackOverflow answer for all the juicy details. Note in particular his conclusion:



That all said, if you are able to avoid using Windows time zone data in your application, I recommend doing so. Prefer IANA data sources, or those derived from them. There are many routes to working with IANA time zone data. Newer Windows APIs like Windows.Globalization.Calendar and Windows.Globalization.DateTimeFormatting.DateTimeFormatter in WinRT/UWP do indeed use IANA time zones, and that is clearly the path forward. In the standard C++ space, I highly recommend using Howard Hinnant's date/tz libraries, or those provided by the ICU project. There are several other viable choices as well.


If you want to keep your finger on the pulse of Microsoft's responses to time zone changes around the world, you should check out the Daylight Saving Time and Time Zone Hot Topics page and the Microsoft Daylight Saving Time and Time Zone Blog.

Bonus reading: How Microsoft's 'Time Lords' Keep the Clocks Ticking.



Bonus watching: How to Have the Best Dates Ever! by Matt Johnson.



¹ In one notable case, a government announced a change to the date the country would change to daylight saving time, and then less than two weeks before the change was scheduled to take effect, the legislature passed a law abolishing daylight saving time outright, upon which the legislature and the government got into a shouting match over who was right, and then, confusing the matter even further, another government representative mentioned a third cutover date, later reaffirmed by the state news agency, and then four days before the third cutover date, the state news agency announced that the change passed by the legislature is the one that would be implemented after all. Temporary workaround: Disable daylight saving time adjustments until the time zone information can be updated.

Choosing the right icon for the Store in a UWP or Desktop Bridge app


The manifest editor included in Visual Studio 2017 for UWP or Desktop Bridge apps is a great starting point for handling the various assets of your application. Thanks to an option added in the 2017 release, you can automatically generate all the required assets (including support for the various scaling factors) starting from a single high-resolution image.

However, if you want a more polished result, you can also differentiate the various assets, by choosing to use different images for the various sizes which are required.

One of the assets that you can customize is the one used for the Store listing.


Often, developers try to handle this asset by setting the one called Package Logo in the manifest editor.


However, if you set it and then upload the application to the Store, you will notice that it isn’t actually used. The name of the file is StoreLogo.png, and this has led many developers to believe that this is the image used by the Store when it lists your application. Unfortunately, the name is misleading, because this asset is actually used in other scenarios, like:

  • The App Installer (the screen you see when you sideload an application by manually double clicking on an AppX package)
  • The Dev Center
  • The report an app option on the Store
  • The write a review option on the Store

The real asset used for the Store listing is the same one used for the Medium tile, whose file name is Square150x150Logo.png.
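
For reference, this is roughly where the two assets live in Package.appxmanifest (a trimmed sketch; the names and paths are the Visual Studio defaults):

<Properties>
  <!-- The misleadingly named StoreLogo / Package Logo asset -->
  <Logo>Assets\StoreLogo.png</Logo>
</Properties>
...
<Applications>
  <Application Id="App">
    <uap:VisualElements
      DisplayName="MyApp"
      Description="MyApp"
      BackgroundColor="transparent"
      Square44x44Logo="Assets\Square44x44Logo.png"
      Square150x150Logo="Assets\Square150x150Logo.png" />
  </Application>
</Applications>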


What if I want to have a different asset for the Medium tile and for the Store listing? Good news: for a while now, the Dev Center has offered an option to further customize the Store logo by uploading a custom image during the submission instead of using the asset included in the package.

You’ll find this option in the section called Store logos, inside the Store listing step of the submission process:


Here you have the option to upload:

  • One image with a resolution of 1080 x 1080 or 2160 x 2160 pixels, which will be used on every Windows 10 device
  • One image with a resolution of 720 x 1080 or 1440 x 2160 pixels, which will be used on the Xbox Store. If you don’t upload this image, the previous one will be used.
  • One image with a resolution of 300 x 300 pixels, which applies only to Windows Phone 8.1 applications.

Once you have uploaded the images, you also have to check the option below titled For customers on Windows 10 and Xbox, display uploaded logo images instead of the images from my packages. This way you’ll be sure that the Store will use the images you have provided instead of the Medium Tile asset from your package.

Wrapping up

In this post we have seen a great way to polish the Store listing of your application even further, making it more appealing for your customers. My suggestion is to use this option to provide assets of better quality, more polished than the ones in the package. It shouldn’t be used to provide an asset completely different from the one that you’re going to use as the tile or in the Start menu. That could confuse the final user, who expects to find on his computer the same icon (or a similar one) as the one he saw on the Store.

You can find more details on the official documentation https://docs.microsoft.com/en-us/windows/uwp/publish/app-screenshots-and-images#store-logos

Happy coding!

API customer text translations no longer logged by Microsoft Translator


Microsoft Translator Text API translations are no longer logged for training purposes to improve the quality of the Translator service. No data sent for translation through the Microsoft Translator Text API will be kept, and no record of the submitted text will be saved in persistent storage in any Microsoft data center.

Previously, Microsoft Translator recorded a small sample (10%) of random, non-sequential, anonymized data coming through the API. This was used for training purposes to improve translation quality. Paid subscribers, subscribing to a minimum of 250 million characters per month, were able to request what was known as the "no trace option" to ensure that their data was not used. Now, all traffic using the free or paid tiers through any Azure subscription is no trace by design.

This also means that if you access Microsoft Translator through one of our many partners and customers offering services such as website translation, social media and communication tools, games, etc., the translations are also not recorded on our servers. 

This change comes as a result of Translator’s commitment to privacy and security, and the overall move towards GDPR compliance. The Microsoft Cognitive Services Trust Center also reflects this mid-January 2018 change. 

However, because all API calls are now no trace, the ability for Microsoft technical support to investigate any specific Translator Text API translation issues you experience with your subscription is no longer available. For instance, if an error in your code were to request the same translation multiple times, we would only be able to report the total number of characters used, and not what text was sent, when, and from which IP address. 

Additionally, please note that text translation in Microsoft Office products is already no trace by default. Specifically, text translation in the following Microsoft Office products is no trace: Excel, OneNote, Outlook, PowerPoint, Publisher, SharePoint, Visio, Word, and Yammer.

The Translator Speech API is not yet no trace, and some data will be kept for training purposes. No trace translation for the Presentation Translator add-in for PowerPoint is available for translation of slides, but is not available for speech translation. 

A small portion of text translations in free Microsoft Translator end-user products, including the Microsoft Translator apps, Translator for Bing, and Translator for Microsoft Edge, is potentially kept for training purposes and service improvements. You can learn about the protections for your data that are in place even without the no trace option in the Microsoft Translator Privacy Statement.

 

Learn more: 

All good things…


After more than 23 years at Microsoft, I’ve decided it’s time for me to take a break. Starting March 12th 2018, I’ll be taking a leave of absence for a year. Deciding to do this has been one of the most gut-wrenching decisions of my life. As someone who has largely defined myself by the work I’ve done, it’s incredibly hard to imagine life without going to work and working 10 hours every day. But, after a few years of debate with my wife, we’ve decided that it’s time to take a break and dedicate more time to home and family for a while.

I have no fear of having nothing to do. I have learned over the past 10 years that a farm is an endless source of work. I have a farm backlog so long I’m not sure I will be able to finish half of it in a year (and yes, I use VSTS to manage it). It feels like an infinitely long list. While farming will likely be the lion’s share of my time investment while I’m away, I’m planning a bunch of other things too. We’re going to do some traveling that we’ve never gotten around to. I’m looking forward to that. There are also some “hobbies” that I’d like to spend some more time on. I haven’t had much time in a while for woodworking and I’m looking forward to getting some more time for that – particularly lathe work. I’ve also, recently, picked up baking (the Great British Baking Show was the clincher for me) and I’ve really been enjoying it. As usual, I suspect my eyes are bigger than my stomach and I’ll never get to everything I imagine I’d like to do, but I can dream, right?

Taking a year off, of course, means that I will no longer be leading the TFS/VSTS team. That’s a very hard decision for me to make. I started the team about 15 years ago and have grown it from 2 people working in a “spare room” to almost 800 people spread across the globe. I’m incredibly proud of what we’ve built and very fond of the team I’ve had the privilege to work with. Stepping away from both is a big decision. It’s reassuring to me that we’ve built a very strong and talented team. I know the product is in good hands and it will keep getting better.

Nat Friedman (of Xamarin fame) will be assuming the leadership role for TFS/VSTS. I’ve had the opportunity to work with Nat over the past couple of years (since the Xamarin acquisition) and I’ve always been incredibly impressed with him. I really admire the principled way that he works and the great culture that he builds. He’s a very clear thinker and an excellent communicator. He understands development and developers deeply. I’m confident that Nat is going to do an excellent job leading the team and continuing to advance the product. Please join me in welcoming Nat.

My 23 years at Microsoft have been some of the best of my life. I simply can’t express enough how grateful I am for the opportunities I’ve been given. I know there are those who don’t always think incredibly fondly of Microsoft. Although I can’t say that I agree with everything we’ve ever done, I can say, I have worked with a tremendous number of people across Microsoft and they are terrific people who want nothing more than to create great products and make customers happy. I wish everyone could see Microsoft the way I see it. I am proud to have been a small part of what we have become. It is a great place to work and a great place to do good work. I think things have gotten even better in the past few years with some of the cultural and strategic changes that Satya has brought. I’m looking forward to coming back to Microsoft and finding a new challenge in a year.

My expectation is that I’ll be fully stepping away for a year – meaning, among other things, that I will discontinue posting to this blog. I have really enjoyed sharing my thoughts over the years and engaging in vigorous debate. Through my blog, I have attempted to put a humble and understanding face on Microsoft. I’ve tried to provide my perspective on some of the things we do and why I think they make sense, while acknowledging when they don’t. I’ve also tried to be an available ear for problems and to help get them routed to someone at Microsoft who can resolve them – no matter how big or small. There are countless good leaders at Microsoft who will, no doubt, continue to do that.

There’s no good time to leave a team and a product you love. But sometimes you gotta do what you gotta do. I have to say that I’m incredibly excited about the future of TFS and VSTS. We’ve made a lot of progress over the last year or two and the next year is going to be one of the most exciting yet. We’ve been hard at work on some really cool investments that I think are going to significantly improve the experience. It’s disappointing to not be able to see those changes through to the end but I know Nat and the team will do a great job carrying forward. I encourage you to keep an eye out for great news in the coming months. The best way to continue to track the latest and greatest on TFS and VSTS is the DevOps blog.

For those of you who have been with me on this journey, thank you. I’ve enjoyed it and I hope you have too. Good luck over the next year and I hope our paths cross again. I think I will create a personal blog somewhere to, at least, continue to share thoughts and stories on farming and whatever else crosses my mind. Once I get that sorted out, I’ll post a link to it on this blog.

Thank you very much,

Brian

 

Top stories from the VSTS community–2018.03.09


Here are top stories we found in our streams this week related to DevOps, VSTS, TFS and other interesting topics.

TOP STORIES

(A FEW) SUMMIT TWEETS

VIDEOS

  • DevOps Lab new episode: Deploying Database changes alongside your code with SSDT projects and VSTS - Damian Brady and Abel Wang
    Damian is joined by Abel Wang to show us one way of tracking and deploying database changes alongside your code. The SQL Server Data Tools (SSDT) project type in Visual Studio allows you to keep an up to date version of your code in source control. Visual Studio Team Services (VSTS) natively supports deploying any changes to an SSDT project to your SQL Server instance.

TIP: If you want to get your VSTS news in audio form, then be sure to subscribe to RadioTFS.

FEEDBACK

What do you think? How could we do this series better?
Here are some ways to connect with us:

  • Add a comment below
  • Use the #VSTS hashtag if you have articles you would like to see included

Save 40% on pre-orders


New titles are coming soon to the Microsoft Press Store! Buy now to save 40% on pre-order books or ebooks: https://www.microsoftpressstore.com/store/browse/coming-soon.

Note: new titles become available for pre-order 90 days prior to availability. Some of the books shown above will become available during the sale.


Enhanced Knowledge Base Usage Analytics with Azure Application Insights and Power BI


When managing a self-service knowledge base, understanding how customers and employees are searching for and consuming self-service knowledge is an important part of ensuring your content remains relevant and useful for end users.

Dynamics includes a number of in-built capabilities to help analyze the usage of the knowledge base, including:

  • Tracking of article views by source
  • Tracking of article ratings and feedback
  • Tracking of case deflections from knowledge
  • Tracking of articles associated with, or used in resolving cases

In some instances, we may wish to extend beyond the in-built capabilities, to understand usage patterns to a greater depth: what your users are searching for, which searches are not returning any results, and how users are navigating through your Dynamics portal, for example. These metrics can help you create targeted content to meet the needs of your users.

In this post, we will walk through an example of how we can track and analyze portal-based knowledge base usage patterns in more detail, with Azure Application Insights and Power BI.

We will augment the in-built analytics with reporting on:

  • Search patterns – including top searches and top failed searches
  • Article View patterns – including views by user and views by referring page

 

Our goal will be to empower knowledge managers with a visualization such as this:

 

Prerequisites

The prerequisites for building and deploying our enhanced analytics include:

  • An instance of Dynamics 365 for Customer Service (Online)
    • You can request a trial of Dynamics 365 for Customer Service here
  • A Dynamics 365 Portal connected to your Dynamics 365 instance
    • In this walkthrough, we will use a portal configured with Portal Audience: Customer, and type of portal as Customer Self-service Portal
  • A Microsoft Azure subscription for enabling our Application Insights resource
  • Power BI Desktop, and a Power BI account, which can be obtained here

 

Setting Up Application Insights

Azure Application Insights is an extensible Application Performance Management (APM) service that works across multiple platforms. Among its capabilities is the ability to track and understand what users are doing with web applications, in order to continually improve performance and usability.

We will use it to track and report on specific user actions within the Dynamics 365 portal, leveraging Application Insights for web pages, and the Application Insights SDK JavaScript API.

First, we authenticate to the Azure Portal, and create a new Application Insights resource. This link will take us directly to the Application Insights resource creation blade. We complete the required information, specifying the Application Type as ASP.NET Web Application, and click the Create button:

 

Once the Application Insights resource has finished deploying, navigate to the resource. Scroll down and click the Getting Started button in the resource blade, click MONITOR AND DIAGNOSE CLIENT SIDE APPLICATION, then click to copy the JavaScript code snippet to your clipboard:

 

In the Dynamics 365 Web Client, we can paste this code into a Content Snippet named Tracking Code, which is suited for this purpose. The code we are pasting includes our Application Insights key. This code will be included in each page that is rendered on our portal, enabling us to track portal usage:

 

We will now leverage the Application Insights SDK JavaScript API to track specific events that occur on the portal. Within the JavaScript SDK, we are able to make use of the trackEvent method to log instances of specific user actions, including our own properties that we can specify. We will use this to log user searches and knowledge base article page views. We can also ensure that our events are tied to the specific portal user by using the setAuthenticatedUserContext method to associate searches and views with individual authenticated portal users.

 

Tracking Searches

When users search on our portal, we will track what query the user entered when searching, as well as how many results were returned. If we have an authenticated portal user, we will also include that in the context of our event. We do this by editing the default Web Template called Faceted Search – Results Template, which is used to render the results of the search on the portal. The portal leverages Handlebars, the popular JavaScript templating engine, to render the results.

The updated template code below shows demo-grade updates which will register a Handlebars helper JavaScript function called logresults, which can be called from within the Handlebars template, where we have access to the count of search results that are being rendered. The helper will:

  • Receive the result count as a parameter (note that we are tracking the number of results across all types; articles, forums, portal pages, etc.)
  • Ensure that the count is a numeric value
  • Set the authenticated user’s GUID into context, if we have an authenticated user (making use of Liquid templating to inject the authenticated user’s ID)
  • Log a custom event named search, with the results count and the query phrase (also injected via Liquid) included as custom properties
  • Use a counter to ensure that the search event is logged only once per page render, and not logged again on subsequent uses of the search faceting

 

[snippet slug=faceted-search-results-template line_numbers=false lang=js]

 

Tracking Article Views

Although Dynamics 365 portals track article views as an in-built capability, we can gain more insight into usage patterns by logging custom events. We can track page views by specific users, track how they arrived at the page (by search vs navigation), and more.

To do this, we will navigate to the portal Web Page called Knowledge Base – Article Details (drilling down into the Localized Content for the page, if applicable) and add some Custom JavaScript to the page, found under the Advanced tab in the default Web Page form.

The following demo-grade JavaScript code will:

  • Use a JavaScript regular expression to obtain the Article ID from the page URL (ensure that the pattern used matches your article public number convention)
  • Check the referrer URL against a number of the Site Settings pages that we anticipate article traffic to come from, and set a source variable accordingly
  • Set the user’s GUID into event context, if the user is authenticated
  • Log a custom event named view, with the articleId and source included as custom properties

 

[snippet slug=knowledge-base-article-details-custom-javascript line_numbers=false lang=js]
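Again, the embedded snippet holds the complete code; a rough sketch of the logic described above might look like this (the regular expression, the referrer check, and the {{ user.id }} Liquid placeholder are assumptions to adjust for your own portal; the id and src property names match the Analytics query used later):

 window.addEventListener("load", function () {
     // Pull the article's public number (e.g. KA-01234) out of the page URL;
     // adjust the pattern to match your own article numbering convention.
     var match = window.location.pathname.match(/(KA-\d+)/i);
     var articleId = match ? match[1] : "unknown";

     // Infer how the visitor arrived, based on the referring portal page.
     var source = "navigation";
     if (document.referrer.indexOf("/search") !== -1) {
         source = "search";
     }

     // "{{ user.id }}" stands in for the Liquid-injected authenticated user GUID.
     if ("{{ user.id }}" !== "") {
         appInsights.setAuthenticatedUserContext("{{ user.id }}");
     }

     appInsights.trackEvent("view", { id: articleId, src: source });
 });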

 

After saving these changes, we can open up our portal in a browser, and start searching and viewing knowledgebase articles, to generate some data for our reporting and analytics.

 

Viewing and Exporting the Event Data in Application Insights

Back in the Azure portal, on our Application Insights resource blade, we navigate to the Overview, which allows us to click the Analytics button to open Analytics:

 

Analytics is the search and query tool that accompanies Application Insights. It has its own query language, which can be used to run queries against the data. Queries we write can also be used to export data to Power BI.

In Analytics, we open a new tab to query our data, and type in a query that will return all of our search events. Note how we retrieve the query term and result count from our custom parameters:
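It might look something like the following, mirroring the syntax of the article view query shown further down (the custom dimension names query and resultcount are assumptions based on the properties our search helper logs):

  customEvents | extend query = customDimensions.query, resultcount = customDimensions.resultcount | where timestamp > ago(90d) and name == "search" | order by timestamp desc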

 

Note that we also retrieve all data from the last 90 days. Application Insights stores analytics data for up to 90 days. If we wish to report on usage patterns going back further than 90 days, we could make use of the Continuous Export feature of Application Insights to export the data to a storage account, and into SQL Azure for continued availability in our analytics reports (not covered in this post).

We will now export the query for use in Power BI, by selecting Export to Power BI from the Export menu:

 

We will be prompted to open or save the text-based exported query. We can open and then copy the query to our clipboard, for use in Power BI:

 

We then open Power BI Desktop, and click the Get Data button, and choose Blank Query:

 

We insert our search query, and click the Done button:

 

We can repeat these steps for our article view query, using the following query syntax:

  customEvents | extend source = customDimensions.src, id = customDimensions.id | where timestamp > ago(90d) and name == "view" | order by timestamp desc

 

Building our Power BI Dashboard

We can now start building a dashboard in Power BI to help us visualize and understand knowledgebase usage trends.

For example, we can use our Search Query data in a Bar Chart visualization, selecting itemCount and query, with a filter in which resultcount is equal to 0, to create a visualization of top failed searches:

 

We can continue to build out a dashboard, creating visualizations that will help us understand our users’ search and article viewing patterns, using attributes such as authenticated user, timestamp, user location, and more. We can also pull data from Dynamics 365 into Power BI to augment our visualizations further:

 

Finally, we can use the process outlined here to embed our dashboard in Dynamics 365:

 

We can now analyze the patterns of usage in our knowledge base on an ongoing basis, to make informed decisions about how best to augment our knowledge, and how to optimize our end-users' self-service experiences.

These same techniques can be extended to track and analyze many other aspects of user behavior on Dynamics 365 portals as well.

Physical Data Center Security


I don't spend a lot of time talking to customers about physical data center security.

As a developer using mostly PaaS or IaaS compute platforms, I just assumed that the cloud provider had taken care of it. Helping customers with Data Security (Data at Rest, Data in Transit, Secure Compute) and Application Code security takes up most of my time.

In this post, I wanted to step back a bit and look at the bigger picture. Let's toss out that assumption and look at the entire stack where we run our applications. With the proliferation of cloud platforms, you now have a wide choice of IT infrastructure to meet your business needs. A typical IT infrastructure today can encompass a mixture of SaaS, PaaS, IaaS and on-prem solutions.

With On-prem infrastructure, an IT Manager is responsible for the core capabilities and tenets below to make sure they have a secure infrastructure. As you move left in the diagram below, you notice that responsibilities for those core capabilities start to shift towards your cloud provider (in this case Microsoft).

The first shift in accountability that a cloud provider takes on is hosting the physical infrastructure and providing Compute/Storage as a Service (i.e. typical IaaS scenarios). Along with hosting the physical hardware and making sure that it is operational, the cloud provider also takes on the responsibility of making sure that the physical hardware is secure.

Obviously, you don't want someone walking into a data center and walking out with a disk containing your data. So what does it take to physically secure a data center?

  • Restrict access to white-listed, authorized users with prior, time-boxed approvals
  • Fences, Gates to prevent unauthorized entry
  • 24/7 Security to monitor internal and external environments
  • Multi Factor Authentication to establish visitor access
  • Security searches of personnel/bags on entry and exit to make sure nothing unauthorized enters or leaves the building.
  • Etc.

The above is not meant to be an exhaustive list but to help you get started thinking about Physical Datacenter Security. Layering in multiple levels of access controls and measures helps protect against an adversary who may defeat a particular layer.

How does Microsoft do it? Here is a link to a detailed post by Ryan describing the efforts of his team to make sure that Microsoft data centers are physically secure. It is a great behind-the-scenes look at how Microsoft physically secures our global data centers. Microsoft has invested over a billion dollars into making sure that Azure is the most secure it can be, so that we can earn our customers' trust. Without that trust, customers would never be comfortable handing over their data and infrastructure. Here are some more resources if you would like to dig deeper.

Microsoft Trust Center - Design and Operational Security

Microsoft Azure Security, Privacy and Compliance Whitepaper

 

Do all customers need to physically secure their data centers to the same level as Microsoft? Well...it depends. What is the business impact and cost of a security incident to your business? Does that justify implementing some or all of the measures above to protect your infrastructure? Hopefully the resources in this article help you understand how to better protect your infrastructure. If you would like to learn how Microsoft Services helps our customers secure their infrastructure, here is a great overview by our Secure Infrastructure team.

Microsoft Cloud Security for Enterprise Architects Whitepaper

 

Change PHP_INI_SYSTEM configuration settings


PHP_INI_SYSTEM level settings cannot be changed from .user.ini or the ini_set function. To change PHP_INI_SYSTEM settings on an Azure web app, follow these steps:

1. Add an App Setting to your Web App with the key PHP_INI_SCAN_DIR and the value d:\home\site\ini

2. Create a settings.ini file using the Kudu Console (https://<site-name>.scm.azurewebsites.net) in the d:\home\site\ini directory.

3. Add configuration settings to the settings.ini file using the same syntax you would use in a php.ini file. For example, if you wanted to point the curl.cainfo setting to a *.crt file and set the wincache.maxfilesize setting to 512 KB, your settings.ini file would contain this text:

 ; Example Settings
 curl.cainfo="%ProgramFiles(x86)%\Git\bin\curl-ca-bundle.crt"
 wincache.maxfilesize=512

4. Restart your Web App to load the changes.

Reference: https://docs.microsoft.com/en-us/azure/app-service/web-sites-php-configure

Supporting Micro-frontends with ASP.NET Core MVC


In this post, App Dev Manager John Abele explores micro-frontend design with ASP.NET Core and MVC.


Many development teams have spent the last few years organizing and empowering cross-functional teams, building independently managed microservices, and implementing DevOps pipelines to go faster than ever!

These industry shifts, critical for organizations to plan less and react more, solved old problems while creating new ones. As we focused on designing domain-aligned microservices, we also engineered JSON-hungry responsive UIs, Single Page Apps, and portals to consume them. A ton of client-side code has been thrown into our frontend layers, creating monoliths that are often maintained by a different team. Frontends have become increasingly complex, interdependent, and highly coupled to whatever Angular-React-Ember-Vue framework was cool when they were built.

clip_image002

As a result, Micro-frontend strategies and patterns have emerged to break up the monolith and promise independent, frictionless, end-to-end control of feature code. Rather than writing large interconnected front-end UIs, a Micro-frontend design decomposes your application into smaller, isomorphic functions. Here is a quick run-down of the general implementation approaches - each comes with its own tradeoffs.

Composition UI – Microservices contain backend and frontend display logic, returning HTML and JS/CSS dependency references to the consumer.

Multiple single-page apps – Fully independent microsites living at different URLs.

Integration at the Code Level – A more traditional approach that uses shared code or an “app shell” with componentized, team-owned functionality added to pages.

clip_image004

Choosing the most appropriate implementation depends on your tolerance for WET (Write Every Time) autonomy versus DRY (Don’t Repeat Yourself) co-dependency.

For those already using an ASP.NET Core MVC frontend, we can leverage framework features to support code-level integrated micro-frontends.

clip_image006

View Components

If you need a way to bundle up bits of UI and related behind-the-scenes logic, chances are you're looking for View Components in ASP.NET Core MVC.

View Components don’t use model binding and depend only on the data you provide, making them an ideal choice for rendering logic such as shopping carts, content lists, or any componentized feature. View Components support parameters and have a backing class, making them suitable for complex functionality. They share the same separation-of-concerns and testability benefits found between views and controllers. Additionally, View Components can be externalized, loaded from a class library, packaged via NuGet, and shared across multiple applications, making them an excellent ownership boundary for feature teams.

clip_image008

These characteristics allow feature teams to independently manage microservices and their frontends by deploying parameterized View Components for consuming applications.

Invoking View Components is easy within an MVC view. Tag Helpers provide an HTML-friendly experience with IntelliSense support.

FrontendView.cshtml

<!--other shared code-->
<div class="row">
    <div class="col-md-12">
        @await Component.InvokeAsync("GoldTeam.BannerAds")
    </div>
</div>

<div class="row">
    <div class="col-md-4">
        <!--tag helper-->
        <green-team-related-items itemCount="4"></green-team-related-items>
    </div>
    <div class="col-md-8">
        @await Component.InvokeAsync("BlueTeam.ProductDetail", new { displayType = "simple" })
    </div>
</div>
<!--other shared code-->

Summary

Microservice architectures have created new challenges in unexpected places. ASP.NET Core MVC View Components provide a mechanism for teams to isolate and manage frontend feature code, clearly define ownership, and enhance agility. Regardless of your implementation strategy, breaking down frontend monoliths into independently testable and deployable features will continue to be a growing trend.


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

Daylight Saving Time Arrives Sunday in the States: Prepare to Spring Forward and lose some sleep


If you have nothing to say, say nothing. -- Mark Twain

Twain was so smart.

I've returned to post after a long, self-imposed break from blogging, as so many great folks inside the company now cover, as part of their day jobs, several of the areas of interest I've written about on my blog. Over the last year or so, I've spent much of my time being more social, mostly internal to the company (with the occasional musings on Twitter), whilst working on a smattering of things that (when successful) just work and required no extraordinary post.

TL;DR: As Douglas Adams said, I may not have gone where I intended to go, but I ended up where I needed to be. 😉

The first rule of Daylight Saving Time is that there is no Daylight Saving Time (at least, in Hawaii and a few parts of North America). And if legislators in Florida have their way, residents in the state will make a move in the opposite direction and enjoy daylight saving time year round. (As of now, Gov. Rick Scott plans to “review the bill.” One can trust that they've seen our policy and recommendations at http://microsoft.com/time.)

The second rule of Daylight Saving Time is that there is no “s” at the end of “Saving”: it's “Daylight Saving Time,” not “Daylight Savings Time.”

Yes, that's right: daylight saving time (aka DST) is here once again, which means it's time to change your clocks this Sunday, March 11, 2018, as much of the United States and Canada will “Spring Forward” at 2:00 AM.

If you're at SXSW over the next week, please keep this change in mind. (and that time is an illusion. Lunchtime doubly so -- particularly at conferences.)

What to do

If you're a frequent visitor to the Microsoft Daylight Saving Time & Time Zone Blog, you'll note there hasn't been much fanfare on the semi-annual clock changes, apart from the out-of-band shifts that various countries and territories make from time to time. For Windows 10, you usually don't have to do anything: updates automatically download and install whenever they're available.

So what should you do to make sure that your computers are ready for the change? If you use Microsoft Update on your PC at home, chances are you're already covered. The latest cumulative updates should already be installed on your PC. If you're not sure, visit Microsoft Windows Update to check your PC and install important updates. At work, if an IT Pro (aka 'hero') manages your network, chances are good that the needed updates have already been installed on your computers and devices automagically.

Be sure to visit the support web sites of any other software companies to see if you need to apply any updates - it's not just Microsoft software that may require updates. Keep in mind that it's not just the US and Canada that made changes to DST and time zones.

I tend to agree with Angela Chen over at The Verge that we should do away with the transition and remain perpetually on daylight saving time. Just think of all the train schedules that wouldn't need to be updated, the elimination of confusing airline arrival and departure times, and even better: state and federal legislatures focusing on things of even greater importance.

Well, remember: time is a precious thing. Never waste it.
