Channel: MSDN Blogs

Common approaches for enhancing the programmatic accessibility of your Win32, WinForms and WPF apps: Part 2 – Win32


This post describes some of the steps you can take to enhance the programmatic accessibility of your Win32 app.

 

Introduction

By default, use standard Win32 controls in your app, and leverage the work that the Win32 UI framework can do to provide the foundation for an accessible experience.

 

Way back, when work was done to make Win32 UI accessible, the API powering programmatic accessibility in Windows was Microsoft Active Accessibility (MSAA). Two key components of MSAA were the IAccessible interface, which enabled an assistive technology (AT) app to learn about some other app’s UI, and WinEvents, which enabled the AT app to react to changes in that UI. Support for MSAA still exists in Windows, but most of us focus solely on MSAA’s successor, UI Automation (UIA). UIA provides a richer feature set, and in some situations significantly improves performance.

For some Win32 UI, support for accessibility is still based on MSAA. So in order to interact with a UIA client app like Narrator, UIA itself performs a conversion from the MSAA data being exposed by the Win32 UI, to UIA data. The UI properties exposed through the IAccessible interface, and the notifications raised via WinEvents, are converted into UIA data by UIA’s internal MSAAProxy component.

One example of the MSAAProxy in action is when a UIA client interacts with the ribbon in the Windows Explorer UI. The Inspect SDK tool is a UIA client app, (just like Narrator is,) and it can report the UIA ProviderDescription property associated with the UI that it’s interacting with. The ProviderDescription string includes details of which component is providing the UIA data associated with the UI element. The screenshot below shows the Inspect SDK tool reporting the UIA properties of the “New folder” button on the Windows Explorer Ribbon.

 

Figure 1: The Inspect SDK tool reporting UIA properties of the “New folder” button on the Windows Explorer ribbon.

 

The full text of the UIA ProviderDescription property for the “New folder” button is:

[pid:18236,providerId:0x0 Main(parent link):Microsoft: MSAA Proxy (unmanaged:uiautomationcore.dll)]

 

It’s usually the last portion of the ProviderDescription string that I’m most interested in, as that often shows which DLL contains the UIA provider exposing the UIA information that Inspect is reporting. Some UI frameworks implement the UIA Provider API themselves. For example, WPF, UWP XAML, and Edge do that. The table below lists the UIA FrameworkId properties when Inspect examines related UI, along with the DLL name included in the UIA ProviderDescription property.

 

| UIA FrameworkId property | DLL shown in the UIA ProviderDescription property |
| --- | --- |
| Win32 | MSAA Proxy (unmanaged:uiautomationcore.dll) |
| WinForm | MSAA Proxy (unmanaged:uiautomationcore.dll) |
| WPF | PresentationCore |
| XAML | Windows.UI.Xaml.dll |
| Edge/HTML | edgehtml.dll |

 

So for both Win32 and WinForms UI, UIA’s MSAAProxy is converting MSAA data into UIA data for UIA client apps like Inspect and Narrator. This means that when using standard Win32 controls, the Windows platform can provide the foundation for an accessible experience for your customers using Narrator.

 

Enhancing the default accessible experience

Below are some of the ways that you can enhance the programmatic accessibility of your Win32 app.

 

Giving your standard controls helpful accessible names

Wherever practical, the Windows platform will make your Win32 UI accessible by default. For example, if you use a standard Button control that shows the text “Save” on it, then this control will automatically be exposed by UIA as an element whose ControlType property is Button, and whose Name property is “Save”. But what happens when a control has no visual text label associated with it?

For example, say I present an Edit control or ComboBox control, which has no visual label. There’s no visual text describing the purpose of the control that can be repurposed as the accessible name of the control. This means your customer using Narrator will be told that they’ve reached an Edit or ComboBox control, but not the purpose of the control. In some situations this could render the app unusable.

A great way to fix this problem is by adding a visual label which precedes the control in question. By default, Win32 will repurpose the text on a label that lies just before the Edit or ComboBox control as the accessible name of the control. By adding a visual label in this way, you’ve unblocked your customers using Narrator, and in many cases you’ll have improved the usability of the app for your sighted customers too.

In some UI designs you may feel that for some reason, you don’t want a visual string to appear before the Edit control or ComboBox control. So assuming this is a valid design, (and somehow it’s still efficient for your sighted customers to determine exactly what the purpose of the control is,) you can add a hidden label before the control. Win32 will still repurpose the label text as the accessible name of the control, despite the label having no visual representation on the screen.

For example, say I add the following to a dialog box template in my resource file:

 

LTEXT "First name",IDC_STATIC,8,100,0,8,NOT WS_VISIBLE
EDITTEXT IDC_FIRSTNAMEEDIT,70,100,100,10

 

I’ve prevented the label from having any visual representation on the screen by setting its width to zero, and making it not visible. Either of those steps would be sufficient for this demonstration.

When I point the Inspect SDK tool to the Edit control, I find that the control has a UIA Name property set from the text label, as shown in the screenshot below. And if I now tab to the control when Narrator’s running, I hear Narrator announce “First name, editing”.

 

Figure 2: The Inspect SDK tool reporting that the UIA Name property of the Edit control is “First name”.

 

Interestingly I also find that the zero-width, hidden label is not exposed through UIA at all.

Definitely use the Inspect SDK tool to determine whether any of your standard Win32 controls have no accessible name by default, and consider whether adding a label before the control can resolve that issue.

 

Explicitly setting some UIA property on your Win32 control

Sometimes the accessibility of your UI can be significantly enhanced by customizing a specific UIA property associated with a control. A common way of doing this with hwnd-based Win32 UI is to use SetHwndProp() and SetHwndPropStr(), available through the IAccPropServices interface. This interface was originally built to provide a way to set custom MSAA properties on UI, but it can also be used to customize some UIA properties too.

Explicitly setting properties in this way is known as Direct Annotation, and that’s a type of Dynamic Annotation.

 

The general approach is shown below.

 

// Near the top of the file…
#include <initguid.h>
#include "objbase.h"
#include "uiautomation.h"

IAccPropServices* _pAccPropServices = NULL;

// When the UI is created…
HRESULT SetCustomUIAProperties(HWND hDlg)
{
    HRESULT hr = CoCreateInstance(
        CLSID_AccPropServices,
        nullptr,
        CLSCTX_INPROC,
        IID_PPV_ARGS(&_pAccPropServices));
    if (SUCCEEDED(hr))
    {
        VARIANT var;
        // Initialize var with the type and value required here!
        hr = _pAccPropServices->SetHwndProp(
            GetDlgItem(hDlg, IDC_MYCONTROL),
            OBJID_CLIENT,
            CHILDID_SELF,
            <UIA property guid of interest>,
            var);
    }
    return hr;
}

 

// When the UI is destroyed…
void ClearCustomUIAProperties(HWND hDlg)
{
    if (_pAccPropServices != nullptr)
    {
        // Clear all the properties we set on the hwnd.
        MSAAPROPID props[] = { <UIA property guid of interest> };
        _pAccPropServices->ClearHwndProps(
            GetDlgItem(hDlg, IDC_MYCONTROL),
            OBJID_CLIENT,
            CHILDID_SELF,
            props,
            ARRAYSIZE(props));
        _pAccPropServices->Release();
        _pAccPropServices = NULL;
    }
}

 

The above steps assume the control of interest has its own hwnd.

If the property being customized is a string, then I’d use the very handy SetHwndPropStr() rather than SetHwndProp().

 

Example

I’ve seen SetHwndProp() and SetHwndPropStr() used for a number of different reasons. The example below sets custom Name, HelpText, and ItemStatus properties on a Button. It’s pretty uncommon for all these properties to be set on a single Button, but you might sometimes need to set at least one of them. The code below also turns a text label into a LiveRegion, so that a screen reader is informed when the text on the label changes.

 

// At the top of the file…
#include <initguid.h>
#include "objbase.h"
#include "uiautomation.h"

IAccPropServices* _pAccPropServices = NULL;

// When the UI is created…
void SetCustomUIAProperties(HWND hDlg)
{
    HRESULT hr = CoCreateInstance(
        CLSID_AccPropServices,
        nullptr,
        CLSCTX_INPROC,
        IID_PPV_ARGS(&_pAccPropServices));
    if (SUCCEEDED(hr))
    {
        // First customize the UIA properties of the Button.
        WCHAR szButtonName[MAX_LOADSTRING];
        LoadString(
            hInst,
            IDS_CONNECTION,
            szButtonName,
            ARRAYSIZE(szButtonName));
        // Set the Name on the button.
        hr = _pAccPropServices->SetHwndPropStr(
            GetDlgItem(hDlg, IDC_BUTTON_CONNECTION),
            OBJID_CLIENT,
            CHILDID_SELF,
            Name_Property_GUID,
            szButtonName);
        if (SUCCEEDED(hr))
        {
            WCHAR szButtonHelp[MAX_LOADSTRING];
            LoadString(
                hInst,
                IDS_CONNECTION_HELP,
                szButtonHelp,
                ARRAYSIZE(szButtonHelp));
            // Set the HelpText on the button.
            hr = _pAccPropServices->SetHwndPropStr(
                GetDlgItem(hDlg, IDC_BUTTON_CONNECTION),
                OBJID_CLIENT,
                CHILDID_SELF,
                HelpText_Property_GUID,
                szButtonHelp);
        }
        if (SUCCEEDED(hr))
        {
            WCHAR szButtonStatus[MAX_LOADSTRING];
            LoadString(
                hInst,
                IDS_CONNECTION_STATUS,
                szButtonStatus,
                ARRAYSIZE(szButtonStatus));
            // Set the ItemStatus on the button.
            hr = _pAccPropServices->SetHwndPropStr(
                GetDlgItem(hDlg, IDC_BUTTON_CONNECTION),
                OBJID_CLIENT,
                CHILDID_SELF,
                ItemStatus_Property_GUID,
                szButtonStatus);
        }
        // Now set the LiveSetting property on the label.
        if (SUCCEEDED(hr))
        {
            VARIANT varLiveSetting;
            varLiveSetting.vt = VT_I4;
            varLiveSetting.lVal = Assertive; // From UIAutomationCore.h.
            hr = _pAccPropServices->SetHwndProp(
                GetDlgItem(hDlg, IDC_LABEL_CONNECTIONSTATUS),
                OBJID_CLIENT,
                CHILDID_SELF,
                LiveSetting_Property_GUID,
                varLiveSetting);
        }
    }
}

 

// When the LiveRegion label data is changing…
WCHAR szFullConnectionStatus[MAX_LOADSTRING];
LoadString(
    hInst,
    IDS_LABEL_CONNECTIONSTATUS_UNAVAILABLE,
    szFullConnectionStatus,
    ARRAYSIZE(szFullConnectionStatus));
// Set the new status text on the label.
HWND hWndStatusLabel = GetDlgItem(hDlg, IDC_LABEL_CONNECTIONSTATUS);
SetWindowText(hWndStatusLabel, szFullConnectionStatus);
// Raise an event to let Narrator know that the LiveRegion data has changed.
NotifyWinEvent(EVENT_OBJECT_LIVEREGIONCHANGED, hWndStatusLabel, OBJID_CLIENT, CHILDID_SELF);

 

// When the UI is destroyed…
void ClearCustomUIAProperties(HWND hDlg)
{
    if (_pAccPropServices != nullptr)
    {
        // Clear all the properties we set on the controls.
        MSAAPROPID propsButton[] = {
            Name_Property_GUID,
            HelpText_Property_GUID,
            ItemStatus_Property_GUID };
        _pAccPropServices->ClearHwndProps(
            GetDlgItem(hDlg, IDC_BUTTON_CONNECTION),
            OBJID_CLIENT,
            CHILDID_SELF,
            propsButton,
            ARRAYSIZE(propsButton));
        MSAAPROPID propsLabel[] = { LiveSetting_Property_GUID };
        _pAccPropServices->ClearHwndProps(
            GetDlgItem(hDlg, IDC_LABEL_CONNECTIONSTATUS),
            OBJID_CLIENT,
            CHILDID_SELF,
            propsLabel,
            ARRAYSIZE(propsLabel));
        _pAccPropServices->Release();
        _pAccPropServices = NULL;
    }
}

 

If I then point the Inspect SDK tool to the Button, I find the custom Name, HelpText and ItemStatus properties set as required, as shown in the screenshot below.

 

Figure 3: The Inspect SDK tool reporting custom UIA Name, HelpText and ItemStatus properties set on a Button control.

 

I can also set up the AccEvent SDK tool to report UIA LiveRegionChanged events raised by my UI, and show me the UIA LiveSetting property of the element that raised the event. The event reported by AccEvent in response to executing the code snippet above is as follows:

UIA:AutomationEvent    [LiveRegionChanged] Sender: ControlType:UIA_TextControlTypeId (0xC364), Name:”No connection is available. Try again later.”, LiveSetting:Assertive (2)

 

Below are the announcements made by Narrator when I interact with the UI. First Narrator tells me I’ve reached the Connection button, and then after a brief pause, it announces the custom HelpText and ItemStatus of the Button. When I invoke the Button with a press of the Spacebar, the status label changes, and Narrator announces the new value of the status label.

 

Connection button,

Establish a network connection if possible, Disconnected,

Space

No connection is available. Try again later.,

 

All in all, this is pretty powerful customization of the accessibility of the Win32 UI.

Other UIA properties you might customize on a Win32 control include:

  • FullDescription_Property_GUID
  • IsControlElement_Property_GUID, IsContentElement_Property_GUID: By setting both of these to false, the control will only be exposed through the Raw view of the UIA tree, and as such Narrator will ignore the element. So these properties might be set on some custom container control which you didn’t intend to expose to any of your customers.
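As a sketch of that second bullet, the two boolean properties can be set with SetHwndProp() in the same way as the LiveSetting property shown earlier. This assumes the _pAccPropServices pointer has already been created with CoCreateInstance as in the earlier snippets; IDC_MYCONTAINER is a hypothetical control id.

```cpp
// Hide a container hwnd from the Control and Content views of the UIA tree,
// so that screen readers like Narrator will ignore it. Assumes
// _pAccPropServices was initialized as shown earlier in this post.
VARIANT varFalse;
varFalse.vt = VT_BOOL;
varFalse.boolVal = VARIANT_FALSE;
HRESULT hr = _pAccPropServices->SetHwndProp(
    GetDlgItem(hDlg, IDC_MYCONTAINER),
    OBJID_CLIENT,
    CHILDID_SELF,
    IsControlElement_Property_GUID,
    varFalse);
if (SUCCEEDED(hr))
{
    hr = _pAccPropServices->SetHwndProp(
        GetDlgItem(hDlg, IDC_MYCONTAINER),
        OBJID_CLIENT,
        CHILDID_SELF,
        IsContentElement_Property_GUID,
        varFalse);
}
```

Remember to clear both properties with ClearHwndProps() when the UI is destroyed, just as with the other annotated properties.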

 

Note that for any property values that change while the app is running, you’ll want to make sure that a related UIA PropertyChanged event is raised. Sometimes this can be done by calling NotifyWinEvent() with a WinEvent, and UIA will convert the event into a UIA event. In other cases, NotifyWinEvent() might be called with a UIA event directly. For example, with UIA_ItemStatusPropertyId.
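For example, a sketch of the ItemStatus case might look like the following, reusing the IDC_BUTTON_CONNECTION button from the earlier example. Here the UIA property id is passed directly to NotifyWinEvent() after the annotated property value has been updated.

```cpp
// Sketch: update the ItemStatus string on the button, then let UIA clients
// know that the property has changed by passing the UIA property id directly
// to NotifyWinEvent(). Assumes _pAccPropServices was initialized as shown
// earlier; the L"Connected" string is just an illustration.
HWND hWndButton = GetDlgItem(hDlg, IDC_BUTTON_CONNECTION);
HRESULT hr = _pAccPropServices->SetHwndPropStr(
    hWndButton,
    OBJID_CLIENT,
    CHILDID_SELF,
    ItemStatus_Property_GUID,
    L"Connected");
if (SUCCEEDED(hr))
{
    // Raise the related PropertyChanged event.
    NotifyWinEvent(UIA_ItemStatusPropertyId, hWndButton, OBJID_CLIENT, CHILDID_SELF);
}
```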

Think very carefully before customizing the UIA ControlType of a control, (and by default, don’t do it). Typically a control of a particular type supports UIA patterns associated with that ControlType. For example, a CheckBox supports the UIA Toggle pattern, and a ComboBox supports the UIA ExpandCollapse pattern. Customizing the ControlType property will not add support for the associated patterns, and so if you do change the exposed ControlType, your customers might be left wondering why they can’t perform the expected actions at the control.

And by the way, trying to set one of the Is*PatternAvailable properties on the control in the hope that this will add all the pattern support associated with the custom control type, won’t work. SetHwndProp() will return failure in that case.

So while you can’t completely customize the accessibility of your Win32 controls by using SetHwndProp() and SetHwndPropStr(), you can do a great deal of useful stuff by calling them.

 

Incrementally adding UIA support to a control that already supports MSAA

If you really want to, you could turn a control into a native UIA provider. When a UIA client like Narrator wants to interact with your hwnd-based control, UIA will ask your control whether it supports UIA natively. It does this by sending WM_GETOBJECT with UiaRootObjectId, and if you’ve implemented all the required UIA provider interfaces, UIA will call all those interfaces directly.
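That handshake can be sketched as a window procedure fragment like the one below. The g_pProvider name is mine, standing in for whatever object in your app implements the UIA provider interfaces.

```cpp
// Hypothetical window procedure for a control that implements the UIA
// provider interfaces natively. g_pProvider is a placeholder for the
// control's provider object.
extern IRawElementProviderSimple* g_pProvider;

LRESULT CALLBACK MyControlWndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_GETOBJECT:
        // UIA asks whether this hwnd supports UIA natively by sending
        // WM_GETOBJECT with lParam set to UiaRootObjectId.
        if (static_cast<LONG>(lParam) == static_cast<LONG>(UiaRootObjectId))
        {
            return UiaReturnRawElementProvider(hWnd, wParam, lParam, g_pProvider);
        }
        break;
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}
```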

In practice, I don’t tend to get asked questions about that. More commonly for devs with Win32 UI, I find that they already have an existing MSAA implementation, and they need to make some small but important enhancement to the Narrator experience at their UI. They understandably don’t want to have to completely replace the existing MSAA implementation with a new UIA implementation. So what they can consider using here is IAccessibleEx.

The IAccessibleEx interface exists to provide a way to add support for specific UIA functionality to an existing MSAA provider. (Technically the UI implementing IAccessible is known as an MSAA “server”, rather than a “provider”. But I tend to use “provider” for both MSAA and UIA these days, to avoid making the subject even more complicated than it already is.)

While supporting UIA functionality through IAccessibleEx can initially seem like a lot of work, in some cases it can be the most efficient way to add the support that your customer needs.

 

Example

Say I have a custom control whose visuals can be expanded and collapsed. I’ve already implemented IAccessible, and given it appropriate MSAA properties for such things as name and role, which map to UIA’s Name and ControlType. While I’ve also set MSAA state values as appropriate using STATE_SYSTEM_EXPANDED and STATE_SYSTEM_COLLAPSED, this does not lead to full support of the UIA ExpandCollapse pattern. A Narrator customer expects to be told the current expanded state of the control, and on a touch device, to be able to change the state through touch gestures. This is only possible if the control supports the UIA ExpandCollapse pattern.

With the code below, the following actions are possible:

  • UIA determines that an IAccessibleEx object is available. It determines this by querying for that interface through IServiceProvider on the original MSAA object which implements IAccessible. The code below has a single object implementing all interfaces, but the object that implements IAccessible and IServiceProvider does not have to be the same object that implements the other interfaces.
  • Having determined that an object is available which supports IAccessibleEx, UIA gets an IRawElementProviderSimple interface from the object supporting IAccessibleEx.
  • UIA then calls IRawElementProviderSimple::GetPatternProvider() to get an object that implements IExpandCollapseProvider.
  • UIA calls into the IExpandCollapseProvider.

 

The code below does not include an IAccessible implementation, as we’re interested here in adding the UIA support to an existing MSAA provider. (No implementation of IUnknown is included either, but some QueryInterface will have to handle all the interfaces of interest.)

The code assumes that the custom control does not have any child IAccessible elements.

 

class MyMSAAProvider : public
    IAccessible,
    IServiceProvider,
    IAccessibleEx,
    IRawElementProviderSimple,
    IExpandCollapseProvider
{
public:
    MyMSAAProvider(HWND hWnd);

    // IUnknown.
    HRESULT STDMETHODCALLTYPE QueryInterface(REFIID riid, void ** ppvObject);
    ULONG STDMETHODCALLTYPE AddRef(void);
    ULONG STDMETHODCALLTYPE Release(void);

    // IAccessible.
    HRESULT STDMETHODCALLTYPE GetTypeInfoCount(UINT * pctinfo);
    HRESULT STDMETHODCALLTYPE GetTypeInfo(UINT iTInfo, LCID lcid, ITypeInfo ** ppTInfo);
    HRESULT STDMETHODCALLTYPE GetIDsOfNames(REFIID riid, LPOLESTR * rgszNames, UINT cNames, LCID lcid, DISPID * rgDispId);
    HRESULT STDMETHODCALLTYPE Invoke(DISPID dispIdMember, REFIID riid, LCID lcid, WORD wFlags, DISPPARAMS * pDispParams,
        VARIANT * pVarResult, EXCEPINFO * pExcepInfo, UINT * puArgErr);
    HRESULT STDMETHODCALLTYPE get_accParent(IDispatch ** ppdispParent);
    HRESULT STDMETHODCALLTYPE get_accChildCount(long * pcountChildren);
    HRESULT STDMETHODCALLTYPE get_accChild(VARIANT varChild, IDispatch ** ppdispChild);
    HRESULT STDMETHODCALLTYPE get_accName(VARIANT varChild, BSTR * pszName);
    HRESULT STDMETHODCALLTYPE get_accValue(VARIANT varChild, BSTR * pbstrValue);
    HRESULT STDMETHODCALLTYPE get_accDescription(VARIANT varChild, BSTR * pszDescription);
    HRESULT STDMETHODCALLTYPE get_accRole(VARIANT varChild, VARIANT * pvarRole);
    HRESULT STDMETHODCALLTYPE get_accState(VARIANT varChild, VARIANT * pvarState);
    HRESULT STDMETHODCALLTYPE get_accHelp(VARIANT varChild, BSTR * pszHelp);
    HRESULT STDMETHODCALLTYPE get_accHelpTopic(BSTR * pszHelpFile, VARIANT varChild, long * pidTopic);
    HRESULT STDMETHODCALLTYPE get_accKeyboardShortcut(VARIANT varChild, BSTR * pszKeyboardShortcut);
    HRESULT STDMETHODCALLTYPE get_accFocus(VARIANT * pvarChild);
    HRESULT STDMETHODCALLTYPE get_accSelection(VARIANT * pvarChildren);
    HRESULT STDMETHODCALLTYPE get_accDefaultAction(VARIANT varChild, BSTR * pszDefaultAction);
    HRESULT STDMETHODCALLTYPE accSelect(long flagsSelect, VARIANT varChild);
    HRESULT STDMETHODCALLTYPE accLocation(long * pxLeft, long * pyTop, long * pcxWidth, long * pcyHeight, VARIANT varChild);
    HRESULT STDMETHODCALLTYPE accNavigate(long navDir, VARIANT varStart, VARIANT * pvarEndUpAt);
    HRESULT STDMETHODCALLTYPE accHitTest(long xLeft, long yTop, VARIANT * pvarChild);
    HRESULT STDMETHODCALLTYPE accDoDefaultAction(VARIANT varChild);
    HRESULT STDMETHODCALLTYPE put_accName(VARIANT varChild, BSTR szName);
    HRESULT STDMETHODCALLTYPE put_accValue(VARIANT varChild, BSTR szValue);

    // IServiceProvider. Provides access to an IAccessibleEx interface.
    HRESULT STDMETHODCALLTYPE QueryService(REFGUID guidService, REFIID riid, LPVOID *ppvObject);

    // IAccessibleEx interface.
    HRESULT STDMETHODCALLTYPE GetObjectForChild(long idChild, IAccessibleEx ** pRetVal);
    HRESULT STDMETHODCALLTYPE GetIAccessiblePair(IAccessible ** ppAcc, long * pidChild);
    HRESULT STDMETHODCALLTYPE GetRuntimeId(SAFEARRAY ** pRetVal);
    HRESULT STDMETHODCALLTYPE ConvertReturnedElement(IRawElementProviderSimple * pIn, IAccessibleEx ** ppRetValOut);

    // IRawElementProviderSimple interface.
    HRESULT STDMETHODCALLTYPE get_ProviderOptions(ProviderOptions * pRetVal);
    HRESULT STDMETHODCALLTYPE GetPatternProvider(PATTERNID patternId, IUnknown ** pRetVal);
    HRESULT STDMETHODCALLTYPE GetPropertyValue(PROPERTYID propertyId, VARIANT * pRetVal);
    HRESULT STDMETHODCALLTYPE get_HostRawElementProvider(IRawElementProviderSimple ** pRetVal);

    // IExpandCollapseProvider interface.
    HRESULT STDMETHODCALLTYPE Expand();
    HRESULT STDMETHODCALLTYPE Collapse();
    HRESULT STDMETHODCALLTYPE get_ExpandCollapseState(ExpandCollapseState * pRetVal);

private:
    LONG _cRef = 0;
    HWND _hWnd = NULL;
    BOOL _fExpanded = FALSE;
};

 

MyMSAAProvider::MyMSAAProvider(HWND hWnd)
{
    // Cache the hwnd associated with this custom Win32 control.
    _hWnd = hWnd;
}

 

// IServiceProvider implementation. Provides access to an IAccessibleEx interface.
HRESULT STDMETHODCALLTYPE MyMSAAProvider::QueryService(REFGUID guidService, REFIID riid, LPVOID *ppvObject)
{
    if (ppvObject == NULL)
    {
        return E_INVALIDARG;
    }
    *ppvObject = NULL;
    if (guidService == __uuidof(IAccessibleEx))
    {
        // This object implements both IServiceProvider and IAccessibleEx.
        return this->QueryInterface(riid, ppvObject);
    }
    return E_NOINTERFACE;
}

 

// IAccessibleEx interface.
HRESULT STDMETHODCALLTYPE MyMSAAProvider::GetObjectForChild(long idChild, IAccessibleEx ** pRetVal)
{
    // This implementation does not have any children.
    *pRetVal = NULL;
    return S_OK;
}

HRESULT STDMETHODCALLTYPE MyMSAAProvider::GetIAccessiblePair(IAccessible ** ppAcc, long * pidChild)
{
    // This element is not a child element.
    *pidChild = CHILDID_SELF;
    return this->QueryInterface(IID_IAccessible, (LPVOID*)ppAcc);
}

HRESULT STDMETHODCALLTYPE MyMSAAProvider::GetRuntimeId(SAFEARRAY ** pRetVal)
{
    // MSDN states that it’s ok to not implement this method, but UIA is less
    // efficient as a result.
    return E_NOTIMPL;
}

HRESULT STDMETHODCALLTYPE MyMSAAProvider::ConvertReturnedElement(
    IRawElementProviderSimple * pIn, IAccessibleEx ** ppRetValOut)
{
    *ppRetValOut = NULL;
    return E_NOTIMPL;
}

 

// IRawElementProviderSimple interface.
HRESULT STDMETHODCALLTYPE MyMSAAProvider::get_ProviderOptions(ProviderOptions * pRetVal)
{
    // This example assumes that we’re always running in the provider process.
    // If in some cases the code can run in the UIA client process, this will
    // need to be updated.
    *pRetVal = ProviderOptions_ServerSideProvider | ProviderOptions_UseComThreading;
    return S_OK;
}

HRESULT STDMETHODCALLTYPE MyMSAAProvider::GetPatternProvider(PATTERNID patternId, IUnknown ** pRetVal)
{
    // This custom control only adds support for the UIA ExpandCollapse pattern.
    if (patternId == UIA_ExpandCollapsePatternId)
    {
        return this->QueryInterface(IID_IUnknown, (LPVOID*)pRetVal);
    }
    *pRetVal = NULL;
    return S_OK;
}

HRESULT STDMETHODCALLTYPE MyMSAAProvider::GetPropertyValue(PROPERTYID propertyId, VARIANT * pRetVal)
{
    // This IRawElementProviderSimple only supplies a custom UIA pattern implementation,
    // and does not supply any custom UIA properties. Do NOT return E_NOTIMPL here.
    pRetVal->vt = VT_EMPTY;
    return S_OK;
}

HRESULT STDMETHODCALLTYPE MyMSAAProvider::get_HostRawElementProvider(IRawElementProviderSimple ** pRetVal)
{
    return UiaHostProviderFromHwnd(_hWnd, pRetVal);
}

 

// IExpandCollapseProvider interface.

// IMPORTANT: The sample code adds support for the ExpandCollapse pattern. Typically apps
// also have visual UI which must be kept in sync with the programmatic state. This means
// that whenever either the visual state or the programmatic state changes, the other
// must be updated. The related implementation depends on the design of the control.
// For example, while this sample code caches the current state through _fExpanded, you
// could decide to have no cached state. Instead, whenever get_ExpandCollapseState()
// is called, you query the control for its current expanded state. (Also, when your
// Expand() and Collapse() are called, you call into the control to tell it to change
// its visual state.)

// IMPORTANT: Whenever the expanded state of the control changes, an ExpandCollapseState
// property changed event must be raised to let UIA client apps know of the change.
// Care must be taken to make sure the event is only raised once when the state changes,
// despite it needing to be raised in response to action through either UIA or visual UI.

 

HRESULT STDMETHODCALLTYPE MyMSAAProvider::Expand(void)
{
    _fExpanded = TRUE;
    // Always raise the event AFTER setting the new state on the element. (Some UIA
    // clients ask UIA to cache the current state of the element beneath the call to
    // raise the event.)
    NotifyWinEvent(UIA_ExpandCollapseExpandCollapseStatePropertyId, _hWnd, OBJID_CLIENT, CHILDID_SELF);
    return S_OK;
}

HRESULT STDMETHODCALLTYPE MyMSAAProvider::Collapse(void)
{
    _fExpanded = FALSE;
    NotifyWinEvent(UIA_ExpandCollapseExpandCollapseStatePropertyId, _hWnd, OBJID_CLIENT, CHILDID_SELF);
    return S_OK;
}

HRESULT STDMETHODCALLTYPE MyMSAAProvider::get_ExpandCollapseState(ExpandCollapseState * pRetVal)
{
    *pRetVal = (_fExpanded ? ExpandCollapseState_Expanded : ExpandCollapseState_Collapsed);
    return S_OK;
}

 

Other options

The above approaches for enhancing the accessibility of Win32 UI are the ones I’ve hit most commonly in practice.

Another option I’ve implemented once, a long time ago, is Value Map Annotation, which is another type of Dynamic Annotation. I used it to map the values on a slider to strings which the customer will find more meaningful. So instead of the customer using Narrator hearing “one”, “two”, “three” and “four” as they interact with the slider, they might hear (say) “Tiny”, “Small”, “Medium” and “Large”. It worked great, so I’d recommend it if you have need of such a thing.
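From memory, a sketch of that annotation might look like the following, reusing the IAccPropServices setup from earlier. PROPID_ACC_VALUEMAP comes from oleacc.h; the semicolon-separated "value:name" string format shown here is my recollection of the map format, so confirm it against the Dynamic Annotation documentation before relying on it.

```cpp
// Sketch of Value Map Annotation on a slider hwnd. Assumes _pAccPropServices
// was created with CoCreateInstance as shown earlier; hWndSlider is a
// hypothetical slider hwnd. The map string format is an assumption to verify.
HRESULT hr = _pAccPropServices->SetHwndPropStr(
    hWndSlider,
    OBJID_CLIENT,
    CHILDID_SELF,
    PROPID_ACC_VALUEMAP,
    L"1:Tiny;2:Small;3:Medium;4:Large");
```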

 

Summary

Say you have existing Win32 UI, and you want to enhance its accessibility. Often you don’t want to invest in building a full native UIA implementation, given that you only need to address specific issues with the UI. (Note that if you did have a full native UIA implementation, you could call UiaRaiseAutomationEvent() to make UIA client apps like Narrator aware of changes in your UI, rather than calling the NotifyWinEvent() mentioned below.)

 

So consider the following options first:

  • Replace custom UI with a standard Win32 control which is accessible by default. Perhaps the custom UI isn’t really essential.
  • If some control has no accessible name, consider whether a visible or hidden label added immediately before the control could be used to provide the control with an accessible name.
  • Consider whether use of SetHwndProp() or SetHwndPropStr() could be used to customize specific UIA properties on a control.
  • If you already have an IAccessible implementation, consider whether support for targeted UIA patterns could be added through use of IAccessibleEx.
  • If you need to raise an event to make a screen reader aware of a change in your UI, call NotifyWinEvent, passing in WinEvent ids or in some cases UIA event ids, as listed at Event Constants.

 

Thanks for helping everyone benefit from all the great features of your Win32 app!

Guy

 

Posts in this series:

Common approaches for enhancing the programmatic accessibility of your Win32, WinForms and WPF apps: Part 1 – Introduction

Common approaches for enhancing the programmatic accessibility of your Win32, WinForms and WPF apps: Part 2 – Win32

Common approaches for enhancing the programmatic accessibility of your Win32, WinForms and WPF apps: Part 3 – WinForms

Common approaches for enhancing the programmatic accessibility of your Win32, WinForms and WPF apps: Part 4 – WPF


Common approaches for enhancing the programmatic accessibility of your Win32, WinForms and WPF apps: Part 1 – Introduction


This series of posts describes some of the steps you can take to enhance the programmatic accessibility of your Win32, WinForms and WPF apps.

 

Introduction

Well, it’s been a while since I’ve had a chance to share some of the things I’ve learnt around building accessible apps. I’ve been working closely with a number of teams who are working hard to enhance the accessibility of their apps. This has been a fascinating time for me, because of the wide range of UI frameworks being used across the various teams. It’s been a reminder for me that while the principles around accessibility are the same regardless of the UI framework you’re using, the implementation details are quite different.

And sometimes this is a source of confusion for devs, who are trying to figure out exactly what they need to do in code to improve the accessibility of their apps. Someone contacted me a few weeks ago, saying that they’d heard they could fix a particular accessibility-related bug in their app, through use of an AutomationPeer, but they’d not been able to figure out how to do that. It turned out that their app was a Windows Forms (WinForms) app, and AutomationPeers are not related to WinForms apps. The dev had lost valuable time trying to figure out how to leverage something that they can’t leverage.

So in this series of posts, I’ll call out some of the specific classes and functions that might be able to help you enhance the programmatic accessibility of your app, based on the UI framework you’re using.

After my recent experiences, the first thing I ask when someone sends me a question about how they might fix an accessibility-related bug in their app, is “What UI framework are you using?”. The answer’s usually one or two of Win32, WinForms, WPF, UWP XAML, or HTML. I had one large team say they have a mix of UI built with Win32, WinForms, WPF, and HTML. I’ve yet to find a team that owns UI built with all five of the UI frameworks I deal with, but maybe there’s one out there somewhere…

 

Big quiz

After getting the reminder that the term “AutomationPeer” gives no indication to someone new to accessibility as to what UI framework it relates to, I thought about how the same issue applies to a number of accessibility-related terms. So here’s a big quiz.

Which of the terms below relate to which of Win32, WinForms, WPF, UWP XAML and HTML?

 

  • AccessibilityNotifyClients
  • AccessibilityObject
  • AccessibleName
  • ARIA
  • AutomationPeer
  • AutomationProperties
  • IAccessible
  • IAccessibleEx
  • IRawElementProviderSimple
  • NotifyWinEvent
  • RaiseAutomationEvent
  • SetHwndPropStr
  • UiaRaiseAutomationEvent

 

If you know the answer, then you’ve already won the prize. That prize being the power to reach out and help many devs build accessible apps, and so indirectly help many of their customers.

For those of us who aren’t so familiar with all those terms, I’ll connect them to related UI frameworks in this series of posts, particularly focusing on Win32, WinForms and WPF.

 

What about UWP XAML?

As far as accessibility goes, UWP XAML evolved from WPF. Many of the details around the accessibility of WPF also apply to UWP XAML. However, with each release of Windows, the support for accessibility in UWP XAML is enhanced, and so there are some very handy things you can do with UWP XAML apps that aren’t practical with a WPF app. But the fundamentals of how to expose certain UI Automation (UIA) properties, or add support for certain UIA patterns, are the same for both WPF and UWP XAML. Details on building accessible UWP XAML apps can be found at UWP App Accessibility.

 

New for the Windows 10 Creators Update

Since we’re on the topic of UWP XAML, I can’t resist the urge to mention one of the things that I find most exciting about the enhancements to accessibility in the Windows 10 Creators Update.

Providing a way for your customers to use the keyboard to efficiently leverage all the functionality of your app, is of huge value to your customers. Your app may show a busy page full of useful controls, and your customer might be able to use the keyboard to tab through all the controls today. Some people might claim the app is keyboard accessible, and indeed, it is essential that your customer can leverage all the great functionality in your app through use of keyboard alone.

But really – why would your customer want to have to move through all the controls in your app before they reach the control that they want to interact with? After all, you don’t make your customers who use a mouse move the mouse cursor to all the controls between the control they last worked with and the control they want to reach, before they can continue with their task. All customers want to progress through the steps in the task at hand, with no delays whatsoever.

You can satisfy your customers’ desires here by adding access keys to your app. Access keys are those keyboard shortcuts you can assign to controls, which enable your customer to trigger action at the control by pressing the Alt key plus some control-specific character. That’s the functionality that you’ve always been able to add to your Win32 and WinForms apps with no effort at all, (by adding an ampersand somewhere in the text set on the control). Well, now, it’s a piece of cake to do the same thing with UWP XAML apps. For example, if you want to set an access key of “C” on a Button or TextBox, add this to your control’s XAML:

AccessKey="C"

 

This is so little work for devs, and provides so much power to their customers, that I’d say every team shipping UWP XAML apps should consider leveraging this cool new functionality.
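As a minimal sketch of what that might look like in context — the control names, labels, and event handler here are illustrative, not from the original post — pressing Alt shows the key tips, Alt+C invokes the Button, and Alt+N moves focus to the TextBox:

```xml
<!-- Hypothetical UWP XAML fragment (Creators Update or later). -->
<!-- AccessKey is a UIElement property, so it can be set on most controls. -->
<StackPanel>
    <Button Content="Copy" AccessKey="C" Click="OnCopyClick" />
    <TextBox Header="Name" AccessKey="N" />
</StackPanel>
```

The default behavior when an access key is pressed depends on the control: a Button is invoked, while a TextBox receives keyboard focus.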

 

What about HTML hosted in Edge?

The accessibility of HTML is a huge subject, as indicated by the great deal of discussion on the subject on the web. In this series of posts, I can only cover whatever I have time for in flights back and forth over the Atlantic. So I’m going to concentrate on trying to help with the confusion around what options a dev has with their Win32, WinForms and WPF apps.

That said, to give an introduction into the programmatic accessibility of HTML hosted in Edge, it’s all about UIA. Whatever UI framework you’re using to build your UI, be it Win32, WinForms, WPF, UWP XAML or Edge-hosted HTML, an assistive technology (AT) app like the Windows Narrator screen reader uses the UIA API to interact with your UI. Narrator is a UIA client app, and if it’s interacting with your UI, then something somewhere has implemented the UIA Provider API.

Often the UI framework itself will implement the UIA Provider API on your behalf. You don’t want to have to do all that work yourself unless you really have to. And sure enough, Edge is doing all that work on your behalf, so that Narrator and other UIA client apps can interact with your HTML-based UI.

Edge will expose data about your UI, through the UIA API, based on how you defined your UI. For example, if you added a button tag, Edge will expose a related UIA element whose UIA ControlType property is UIA_ButtonControlTypeId, and whose UIA Name property is whatever text is shown on the button.

And where you need to enhance the default accessibility of your UI, it can sometimes be appropriate to do this with ARIA. For example, say your button shows no text string on it, and instead shows some cool glyph from some font, and you specified this by referencing the associated Unicode value. In this situation Edge has no friendly string to repurpose as the UIA Name property, so you can help by adding the string that your customers need through use of the aria-label attribute. Edge will expose that data as the UIA Name property of the button.
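As a sketch of that glyph-button scenario — the glyph code, class name, and label below are illustrative — the markup might look like this:

```html
<!-- A button showing only a glyph from an icon font. Without help, Edge has
     no friendly text to expose as the UIA Name property, so aria-label
     supplies the name that UIA clients such as Narrator will report. -->
<button aria-label="Print" class="icon-font">&#xE749;</button>
```

Inspect would then report the element's UIA Name property as "Print", rather than leaving it empty or exposing the raw glyph character.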

By using semantic HTML, Edge will expose your UI through UIA in an intuitive way, and where necessary, use of ARIA can further influence that UIA representation. Details on building accessible Edge-hosted HTML can be found at Edge Accessibility.

 

By default, use standard controls and widgets

The above example around using an HTML button tag touches on one of the most important points relating to building accessible UI.

All the UI frameworks that I deal with do a ton of work on your behalf to provide the foundation for an accessible experience. And in some cases, portions of your UI can be accessible by default. Wherever practical, you really want to leverage the help that the UI framework can provide. By leveraging that help, it can be much quicker for you to build accessible UI, and it reduces the risk that you ship some severe accessibility bugs.

This point applies to all of Win32, WinForms, WPF and UWP XAML, but let’s consider an HTML example. Say I want to show something in my app that looks like a bird feeder, and when invoked, some action occurs. It might be tempting for me to think that I won’t add a button to my HTML, because my UI doesn’t look like a typical button at all. So perhaps I’d add a clickable div, and style it such that it looks like a bird feeder.

But that’s exactly what I don’t want to do.

If I do that, Narrator won’t be told that the UI has a ControlType of Button. The ControlType sets my customer’s expectations around what actions they can take at the element. Ok, fair enough, maybe I could add a role of “button” to the div, and so have the ControlType exposed through UIA as Button. Well, that still won’t make the element keyboard accessible. Use of ARIA, (including the role attribute,) won’t affect how the UI reacts to keyboard input. Ok, fair enough, maybe I could add a tabindex attribute to the div, which would mean I could tab to it. Well, that still doesn’t mean my customer can invoke the element once it has keyboard focus. Ok, fair enough, maybe I could add key event handlers to mimic the behavior of standard HTML buttons. (Hmm, should those react to the Spacebar or Enter, and on keydown or keyup?)

Well, yes, perhaps in some cases it is possible to patch up the inaccessible UI. But really – why would you want to spend time doing that? If you use a button tag in the first place, it’ll be exposed through UIA as having a ControlType of Button, and your customer can tab to it, and invoke it through the keyboard, all by default. The only work on your plate is to account for whatever custom visual styling you’ve done, such that keyboard focus feedback is clear when the UI has focus, and the UI is as usable as possible when a high contrast theme is active.

So by default, use standard controls and widgets.

 

Accessibility is more than just programmatic accessibility right?

Mentioning keyboard accessibility and use of colors above is a reminder that there are a number of topics relating to accessibility. I tend to group all these topics into a few areas, as mentioned at Building accessible Windows Universal apps. In practice, the three areas of colors & contrast, keyboard accessibility and programmatic accessibility still need very close attention if we’re to avoid shipping some severe bugs.

And while your customers depend on you to build apps which are accessible in all these areas, this series of posts focuses on programmatic accessibility.

 

And finally…

If you’re to deliver a great experience for all your customers, then you need to feel confident that your app is programmatically accessible. Today, that means its representation through UIA is rock-solid. That is, the UIA properties it exposes, the control it provides through UIA patterns, and the notifications it provides through UIA events, all work together to deliver a full and efficient experience to your customers.

As part of the process for me to learn about an app’s UIA representation, I use the Inspect and AccEvent SDK tools. I couldn’t do my job without these tools. So I can’t recommend enough that you, or someone on your team, gets familiar with these tools. They’re not exactly the most intuitive of tools, but once you’ve got the hang of Inspect, it can quickly draw your attention to the sorts of bugs which could render your app unusable to many people. Figure 1 below shows the Inspect tool reporting the UIA properties of a button in Visual Studio’s WPF UI.

Note also that Inspect can be extremely helpful for providing information on the hierarchy of the elements exposed through the UIA tree. An unexpected order of elements in the UIA tree can be a source of severe bugs. I call this out explicitly in Part 3 of this series, because it’s so easy to hit the problem when building WinForms apps. But it applies to all UI, and in fact I hit a bug related to this just a few days ago with Win32 UI, because the order of the controls in a dialog box as defined in the .rc file was very different from the visual layout of the UI.

For an introduction into UIA and the related SDK tools, take a look at this training video, Introduction to UIA: Microsoft’s Accessibility API.

I’m sure there are a few options around accessibility that I’ve not mentioned in this series of posts. For example, how to write a full UIA Provider API implementation. In some situations, you may get involved with writing a full UIA implementation, but in general, you’ll not want to have to. Rather you’ll want to leverage all that the Windows platform can do on your behalf, and deliver a great accessible experience to all your customers with as little work on your part as possible. So this post concentrates on some of the more commonly used options for enhancing accessibility that I’ve seen used in practice.

 

Thanks for helping everyone benefit from all the great features of your apps!

Guy

 

Posts in this series:

Common approaches for enhancing the programmatic accessibility of your Win32, WinForms and WPF apps: Part 1 – Introduction

Common approaches for enhancing the programmatic accessibility of your Win32, WinForms and WPF apps: Part 2 – Win32

Common approaches for enhancing the programmatic accessibility of your Win32, WinForms and WPF apps: Part 3 – WinForms

Common approaches for enhancing the programmatic accessibility of your Win32, WinForms and WPF apps: Part 4 – WPF

 

 

Figure 1: The Inspect SDK tool showing that the “New Project” item in the Visual Studio toolbar supports being both programmatically invoked and expanded.

 

 

 

 

[The Front Lines of Manufacturing!] Bringing You Great Information from Hannover Messe! – DevWire (2017/6/26)

DevWire Newsletter — the free newsletter that is essential reading for the embedded industry — June 2017 issue
Index
Hot Topics
Edge solutions keep coming! The latest Azure IoT update information
A report on the still-buzzing de:code 2017!
A year and a half since the launch of the "IoT Business Co-creation Lab": a drone working group is born!
Introducing the DevWire back numbers
Authorized distributor information
Seminar and training information
Column
Visualizing IoT device data (Time Series Insights edition)
A Quick Break
"The Meijin has lost, but..." by Daisuke Kato, DevWire editorial team
Hot Topics
Edge solutions keep coming! The latest Azure IoT update information
This is our first update in two months. This time we focus mainly on what happened at Hannover Messe, the international industrial trade fair held in Germany in April.
Connected Factory
A new member called Connected Factory has joined the Microsoft Azure IoT Suite. The system architecture, shown in the figure below, is a simple layout split between the factory on the left and the cloud on the right.
Figure 1: Azure IoT Suite – Connected Factory architecture
Storage account (Standard-LRS)
Virtual Machine (Standard D1 v2 (1 core, 3.5 GB memory))
IoT Hub (Standard S1, 3 units)
Key Vault (Standard)
Azure Time Series Insights (Standard S1)
Web App Service (Standard S1)

The left-hand side simulates a factory line on a virtual machine. If you replace it with equipment actually running an OPC UA server, Connected Factory can be used as-is, so if you already have OPC UA-capable equipment running in your factory, you can realize Factory IoT with very little effort.

IoT Edge
IoT Edge is provided to connect the factory to Azure (the cloud). It is open source, so you can download the source code from GitHub. The operating systems verified in advance for running IoT Edge are:
Ubuntu 14.04
Ubuntu 15.10
Yocto Linux 3.0 on Intel Edison
Windows 10
Wind River 7.0

IoT Edge adds an OPC Publisher module and an OPC Proxy module. The Publisher module connects to the OPC servers in the factory, converts the telemetry data they send into JSON, and forwards it to IoT Hub. Telemetry data sent to IoT Hub via the Publisher module can easily be visualized and analyzed with Time Series Insights, described below. The Proxy module creates a tunnel to Azure, acting as a bridge so that OPC clients deployed on Azure can exchange control commands with the OPC servers deployed inside the factory.

Data you want as telemetry can be monitored by publishing it remotely (see the figure below).

Publish

You can also operate machinery remotely by calling methods. In this example, a command to open a pressure valve is sent. As described above, by monitoring the Pressure telemetry data you can confirm in real time whether the pressure was actually released.

Calling a method

Time Series Insights
The newly released Time Series Insights connects directly to IoT Hub, so near-real-time data analysis has become very easy.

Time Series Insights
Because it is designed to process large volumes of incoming data at high speed, operators can analyze big data comfortably. During the current preview period, Time Series Insights is available only in the West US, East US, West Europe, and North Europe regions. We are working hard to make it available in Japan soon, so stay tuned.

For more on Time Series Insights, see this issue's column, in which Mr. Nakata of Yaskawa Information Systems gives an accessible introduction.

A report on the still-buzzing de:code 2017!
A little while back now, on May 23–24, we held de:code 2017, our conference for IT engineers!
This year many technologies were introduced, built around AI and cloud computing. What particularly caught the eye of Kato from the editorial team was MR (Mixed Reality). The Oyanagi Construction case study presented in the keynote was memorable, so do take a look. In the EXPO hall (the solution showcase) many partners exhibited solutions using HoloLens.
The keynote is available on Channel 9. It is in Japanese, so feel free to watch, and session videos are being published in sequence as well. For the atmosphere at the venue, the #decode17 timeline tells the story far better than this article can — you can really feel the excitement!
A year and a half since the launch of the "IoT Business Co-creation Lab".
A drone working group is born!
You already know about the IoT Business Co-creation Lab, right? It is a community where experts in the IoT and big data fields share know-how through joint validation of IoT projects built on the Microsoft Azure platform. It was founded in February 2016 by 10 companies, each an expert in its own area of IoT. One year on, no fewer than 318 companies have become members, and lively study sessions and events are being held.

Within the lab's activities, six working groups had been formed so far, jointly building IoT scenario validations and POCs:

  1. Business WG
  2. Manufacturing WG
  3. Distribution WG
  4. Healthcare WG
  5. Robotics WG
  6. Analytics WG

In addition to these six, a seventh working group, the Drone WG, has now been launched.

Drones are attracting great attention in the IoT field because they can collect data in places and situations where acquiring data used to be difficult. The working group is led by Hironobu Imamura, CEO of DroneWorks Inc., which develops and sells flight controllers for drones and provides cloud services for drones.
As its first joint validation, the group will use Skype for Business on drones to validate infrastructure maintenance and inspection over real-time video linking multiple remote sites.
One of the big advantages of the IoT Business Co-creation Lab is business matching between members, and this working group is already making use of it. Using WingArc1st Inc.'s MotionBoard together with Microsoft Azure, a service was built to visualize sensor data from drones in real time. With fellow member Avanade Japan, the group is examining business scenarios for drones, such as image analysis at work sites and visualization of data that combines information captured by drones with surrounding systems, running proofs of concept, and applying them to real operations.
Realtime Drone Status
If you are interested in drone-based business, please join the next IoT Business Co-creation Lab study session.

You can join the IoT Business Co-creation Lab and follow its activities here at any time.

Introducing the DevWire back numbers
The ever-useful DevWire back numbers that everyone loves.

The DevWire back-number site is here

[Authorized Distributor Information]
Advantech Co., Ltd.
Integrated IoT solutions
To help grow the IoT industry, Advantech has developed the WISE-PaaS IoT software platform service in cooperation with Microsoft, and provides one-stop, all-in-one SRP (Solution Ready Package) services that let customers build IoT applications quickly.
Seminar and Training Information
We hold many seminars and training sessions.
Please take advantage of them.

● Avnet K.K.: training
● Okaya Electronics: seminar and training information
● Tokyo Electron Device:
training / seminars and events
● Ryoyo Electro: event and seminar information

Column
Visualizing IoT device data (Time Series Insights edition)  by Yoshitaka Nakata, Yaskawa Information Systems Corporation
Did you know that in April 2017 Microsoft released a new service for IoT systems called Time Series Insights (currently in public preview)?
In a system that makes use of IoT technology, the first step is to collect and visualize data. Considering the data items, data volume, and future scale-up, you have probably been weighing various system architectures for visualization.

Time Series Insights connects to IoT Hub or Event Hubs on Azure and lets you ingest and display time-series data right away. Previously, to visualize data sent from IoT devices using the services provided on Azure, you would build a system like (1) in the figure below: receive the data with IoT Hub, process it with Stream Analytics, store it in Azure SQL Database or Blob storage, and then visualize it with a custom application or with Power BI (Microsoft's self-service BI tool). With this approach, even displaying a trend graph for a handful of data series meant wiring together multiple services, which took considerable setup effort.

With Time Series Insights, roughly all you need to do is configure the connection to IoT Hub or Event Hubs, and you can immediately pull data from those services and visualize it as line charts or heat maps (note that the data must be in JSON format).

(1) Example IoT system built from individual Azure services; (2) Example IoT system using Time Series Insights

Time Series Insights also analyzes the incoming data automatically and enumerates the fields it contains, so you can easily pick and choose which data series to display.

For example, in an IoT proof of concept (PoC) it is common to collect and visualize time-series data such as temperature, humidity, and current. Simply use an Azure Certified for IoT device on the device side and send JSON data to IoT Hub, and you can stand up a visualization system with Time Series Insights right away.

Selecting the data series to display and setting the display time range

Time Series Insights pricing depends on the level you choose (the amount of data handled). There are currently two levels, with the following specifications:

  • S1: up to 1 GB or 1,000,000 events ingested per day; up to 30 GB or 30,000,000 events stored in total; 31-day data retention
  • S2: up to 10 GB or 10,000,000 events ingested per day; up to 300 GB or 300,000,000 events stored in total; 100-day data retention

As of 2017/6/12, Time Series Insights is offered only in the following four regions. Note that if you create your IoT Hub or Event Hubs in a different region, data sent between regions is billed according to the amount transferred.

  • West US
  • East US
  • West Europe
  • North Europe

Finally, as mentioned above, this service is currently offered as a public preview. Please be aware that specifications, pricing, and other details may change when it becomes generally available.

Link to the reference page
A Quick Break
"The Meijin has lost, but..." by Daisuke Kato, DevWire editorial team
This month it's shogi again. The second game of the Den'o-sen was played on May 20, and, following the first game, the reigning Meijin lost to the AI software. The Den'o-sen ends this year, making this a symbolic moment: AI software has surpassed humans in both name and fact. To be honest, though, there is not much sense of tragedy. I did think the Meijin might just win, but I never assumed a win was a given.

Looking beyond shogi for a moment: in the world of chess, the then world champion lost to software a full 20 years ago. That does not mean professional chess players went out of business. World championships are still held regularly, and players who dislike studying (practicing) with AI software have even become world champion, so studying with AI is not necessarily always the best approach, although most players do seem to use it. In chess, humans still sometimes play against AI software, but it no longer makes news: the software is so overwhelmingly strong that a straight game is not interesting, so the human plays with a handicap.

"AI surpassing humans" is something unavoidable that will happen not just in shogi and chess but much closer to home. Rankings of the jobs AI will take away are being published everywhere, but speaking of Japan in particular, I think it could become a way to make up for the labor shortage. What do you think?

The July issue of DevWire will take a break; the next issue goes out on Monday, August 28. Look forward to it!

For inquiries about Windows Embedded DevWire:
kkoedadmin@microsoft.com
 

Dashboard Designer error: “The document library no longer exists or you do not have permissions to view it. The data source cannot be saved.”


You may see the following error when trying to connect to a SharePoint list data source in PerformancePoint Dashboard Designer:

 

The document library no longer exists or you do not have permissions to view it. The data source cannot be saved. Additional details have been logged for your administrator.

 

The SharePoint ULS log and Event Viewer may show the following error:

 

The user “<domain\username>” attempted to save an item in the following location: https://<SharePoint web app>/sites/<site name>/Data Connections. Verify that the location exists and that the user has the “Add Items” permission.

 

The error may occur if the SharePoint database is using remote blob storage. The error can be fixed by giving the PerformancePoint Services account the db_rbs_reader and db_rbs_writer permissions in the SharePoint content database.
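A sketch of granting those permissions with T-SQL — the content database name and service account below are placeholders for your own environment, and you should confirm the change against your organization's SharePoint support policy before applying it:

```sql
-- Run in the SharePoint content database that uses remote blob storage.
USE [WSS_Content];  -- placeholder content database name
GO
-- Create a database user for the PerformancePoint Services account if one
-- does not already exist, then add it to the RBS reader/writer roles.
CREATE USER [CONTOSO\ppsService] FOR LOGIN [CONTOSO\ppsService];
GO
EXEC sp_addrolemember N'db_rbs_reader', N'CONTOSO\ppsService';
EXEC sp_addrolemember N'db_rbs_writer', N'CONTOSO\ppsService';
GO
```

After granting the roles, retry the data source connection from Dashboard Designer.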

SourceLink – creating exciting new mixed-reality shopping experience with information about the producers behind the food we buy


Ever wondered who grows the food you buy in your local supermarket, delicatessen or farm market outlet?  Ever wanted to support local producers, and share their experiences? Ever wanted to visit their farm without ever having to leave the store?

All of this may soon be possible. SourceLink, a team of graduate and undergraduate computer science students from Melbourne University, has developed a new app that will provide the background information on the products on the shelves, whether grown or manufactured, in a mixed-reality experience.

The SourceLink team members – Karen Zhang, Jack Qian and Matilda Stevenson – are off to Seattle to compete as the Australian entry in Microsoft’s global Imagine Cup competition this coming July.

Matilda is a second year computer science undergraduate. Jack and Karen are second year Masters post-graduate students. They all met at the CodeBrew hackathon held by Melbourne University’s Computing and Information Systems Students Association (CISSA) in March this year. In an example of how exciting new concepts can appear when clever people meet and discuss challenges and ideas for the first time, the concept for the app was scoped and conceived at this hackathon.

Microsoft Australia has a partnership with CISSA, and the hackathon acted as a preliminary step to the Imagine Cup Australian finals. The SourceLink team had a pretty tight turnaround between the CISSA hackathon and the Australian Imagine Cup finals, only about five weeks, to get their app developed to a point where they could enter the Imagine Cup. It’s been an impressive effort by Matilda, Jack and Karen, and we’re very excited about the Imagine Cup finals in July!

 

Sharing new experiences: from the store to the field without taking a step

Developed on Unity, a development tool known for gaming and augmented/virtual reality applications, the app in its current concept will enable consumers in-store to use their mobile phone’s camera, or a HoloLens or other mixed-reality headset, to scan items of interest. The cameras will identify the product and present information about the producer or farmers, where the produce was sourced, the journey it’s taken to get to the store, and more, as a mixed-reality experience. This will allow ethically-minded consumers, who don’t necessarily have the time to attend farmers’ markets, to make informed decisions. The consumer can also provide instant feedback to the producer. The focus is on local Australian producers, bringing their stories to the person in the shop.

“Our aim with the app is to create a very different kind of immersive experience”, say Jack, Matilda and Karen. “For specialty stores in particular, we think this is something consumers will love – being able to really share something of the people and companies who produce what they are about to buy, at the point of purchase.”

What excited us about the SourceLink app was how it promised to change the buying experience completely. The team could have gone down the route of connecting data to a barcode and simply provide consumers with information about the product. What they have created is something completely different, that promises something unique. Basically, the app will take the consumer out of the store and into the world of the producer – without having to take a step.

There are two parts to the app: a data management back-end hosted on Microsoft Azure cloud, and the software required to drive the mixed-reality experience.

The app is in its early stages of development and the Imagine Cup will be an important boost to any next steps, but the SourceLink team is taking care not to constrain the app’s potential. Food supply chain provenance is just one example of future data that can be included in the user mixed-reality experience. Another is closing the gap between the information required by law, and what consumers actually want to know. And the app can be used with other products and items such as locally-made clothing.

Data management is, unsurprisingly, one of the largest challenges of the project. Caching data store by store is one solution. Another is to use Azure’s data scaling capabilities to manage the large amounts of data that any future development of the app would require.

 

The journey to Seattle

The SourceLink team will be competing against more than 50 other teams from around the world for a first prize of US$100,000 and access to mentoring and guidance from a number of senior Microsoft global executives. For us here at Microsoft Australia, the Imagine Cup is part of a larger, longer-term commitment to student startups, acting as a catalyst to help them achieve their objectives.

The SourceLink team will be flown to Seattle for the Imagine Cup by Microsoft Australia.

As the team members themselves describe it, their app is about bridging the divide between makers and consumers, to restore the social, human side of shopping. Win or lose, their app shows that future shopping experiences are set to change forever.

Create Bot for Microsoft Graph with DevOps 7: BotBuilder features – Dialog 101


Now that I have set up the basic DevOps CI/CD pipeline, I will focus on BotBuilder features from here on. Let’s start with the Dialog system.

Overview

What makes a chatbot intelligent is understanding conversation, or dialog. However, implementing such functionality yourself — remembering previous conversations with users and handling them later — is tedious and troublesome. BotBuilder provides Dialog, which already has these capabilities. The official documentation contains good enough information, so I won’t duplicate the effort; please read it first. https://docs.microsoft.com/ja-jp/bot-framework/bot-design-conversation-flow

Use Dialog in O365Bot

By default, the Bot Application template already implements the RootDialog concept; in MessagesController.cs, it always calls RootDialog only. Last time, I implemented the code for getting events inside RootDialog, but I should have created a child dialog for it.

Create a child Dialog

1. Add GetEventsDialog.cs to the Dialogs folder and replace its code with the following.

using Autofac;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;
using O365Bot.Services;
using System;
using System.Threading.Tasks;

namespace O365Bot.Dialogs
{
    [Serializable]
    public class GetEventsDialog : IDialog<bool> // The type of the returned value.
    {
        public Task StartAsync(IDialogContext context)
        {
            context.Wait(MessageReceivedAsync);
            return Task.CompletedTask;
        }

        private async Task MessageReceivedAsync(IDialogContext context, IAwaitable<object> result)
        {
            var message = await result as Activity;

            using (var scope = WebApiApplication.Container.BeginLifetimeScope())
            {
                // Resolve IEventService by passing IDialog context for constructor.
                IEventService service = scope.Resolve<IEventService>(new TypedParameter(typeof(IDialogContext), context));
                var events = await service.GetEvents();
                foreach (var @event in events)
                {
                    await context.PostAsync($"{@event.Start.DateTime}-{@event.End.DateTime}: {@event.Subject}");
                }
            }

            // Complete the child dialog
            context.Done(true);
        }
    }
}

2. Replace the RootDialog.cs code with the following.

  • Call GetEventsDialog to get the events.
  • Add a callback method for when the child dialog completes.
  • Remember the original message when redirecting to authentication.
    *I utilize the State service to remember/restore the message, which I will explain in a later article.
using AuthBot;
using AuthBot.Dialogs;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;
using System;
using System.Configuration;
using System.Threading;
using System.Threading.Tasks;

namespace O365Bot.Dialogs
{
    [Serializable]
    public class RootDialog : IDialog<object>
    {
        public Task StartAsync(IDialogContext context)
        {
            context.Wait(MessageReceivedAsync);
            return Task.CompletedTask;
        }

        private async Task MessageReceivedAsync(IDialogContext context, IAwaitable<object> result)
        {
            var message = await result as Activity;

            // Check authentication
            if (string.IsNullOrEmpty(await context.GetAccessToken(ConfigurationManager.AppSettings["ActiveDirectory.ResourceId"])))
            {
                // Store the original message.
                context.PrivateConversationData.SetValue<Activity>("OriginalMessage", message as Activity);
                // Run authentication dialog.
                await context.Forward(new AzureAuthDialog(ConfigurationManager.AppSettings["ActiveDirectory.ResourceId"]), this.ResumeAfterAuth, message, CancellationToken.None);
            }
            else
            {
                await DoWork(context, message);
            }
        }

        private async Task DoWork(IDialogContext context, IMessageActivity message)
        {
            // Call child dialog
            await context.Forward(new GetEventsDialog(), ResumeAfterGetEventsDialog, message, CancellationToken.None);
        }

        private async Task ResumeAfterGetEventsDialog(IDialogContext context, IAwaitable<bool> result)
        {
            // Get the dialog result
            var dialogResult = await result;
            context.Wait(MessageReceivedAsync);
        }

        private async Task ResumeAfterAuth(IDialogContext context, IAwaitable<string> result)
        {
            // Restore the original message.
            var message = context.PrivateConversationData.GetValue<Activity>("OriginalMessage");
            await DoWork(context, message);
        }
    }
}

Dialog 101

I am sharing what I feel important to utilize Dialog.

Serializable attribute

BotBuilder serializes the entire Dialog to manage its state, so you have to mark the class as Serializable. The same rule applies to its members, too. I often forget this rule, and BotBuilder yells at me at runtime.

IDialog<T> inheritance

A dialog implements the IDialog<T> interface directly or indirectly. T is the type of the returned object, so if you know the type, specify it rather than leaving it as ‘object’.

StartAsync method

This method is called at the beginning of the dialog. It is like a class constructor, though you can still use a constructor as well. Use the StartAsync method to prepare things for the conversation.

Context.Done method

Whenever a child dialog completes its job, you need to call the context.Done method, which hands control (and the result) back to its parent.

Summary

Next time I will explain more advanced topics of the Dialog system. Don’t forget to check in the code; all the tests should pass, as no logic has changed.

GitHub: https://github.com/kenakamu/BotWithDevOps-Blog-sample/tree/master/article7

Ken

How to restore a database with Azure SQL Database Point in Time Restore, and points to note


Hello, everyone. This is the BI Data Platform support team.

One feature whose convenience many SQL Database users will have come to appreciate is Point in Time Restore.
It is a very handy feature, but there are a few points we would like you to keep in mind when using it, so we will explain them along with how to use Point in Time Restore.

What is Point in Time Restore?

Because SQL Database is a managed service, you do not need to take backups manually.
Instead, SQL Database itself internally takes full database backups (once a week), differential backups (once every few hours), and transaction log backups (roughly once every 5 to 10 minutes).
Within the backup retention period, you can restore a SQL Database to any point in time.

The backup retention period differs depending on your service tier; see the following for details:

Detailed information about SQL Database backups
https://docs.microsoft.com/ja-jp/azure/sql-database/sql-database-automated-backups

 

How to use it

Scenarios where Point in Time Restore comes in handy include, for example:

  • You deleted important data (rows or tables) in a SQL Database by mistake, such as through an operational error, and want to go back to before the deletion
  • You want to restore the SQL Database as of a specific point in time and run validation work against that environment

In the former case, you would restore the database to a point before the accidental deletion and then either switch your client applications to the restored database, or rename the original database and give the restored database the original name.
In the latter case, simply point the application you want to validate at the restored database.

Restoring with Point in Time Restore is easy.
You can restore in three ways: from the Azure portal, with PowerShell, or with the REST API.
For example, to restore from the Azure portal, open the [Overview] page of the SQL Database you want to restore, select the [Restore] button, and specify the date and time to restore to, the pricing tier of the restore target, and the new database name, as shown in the figure below.


<Figure 1. Azure portal>

This creates a new database containing the data up to the specified point in time.
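As a sketch of the PowerShell route using the Azure PowerShell cmdlets of this era — the resource group, server, database names, and timestamp below are placeholders:

```powershell
# Get the source database so we can pass its ResourceId to the restore cmdlet.
$db = Get-AzureRmSqlDatabase -ResourceGroupName "MyResourceGroup" `
        -ServerName "myserver" -DatabaseName "MyDatabase"

# Restore to a NEW database containing the data as of the specified UTC time.
Restore-AzureRmSqlDatabase -FromPointInTimeBackup `
    -PointInTime "2017-06-20T09:00:00Z" `
    -ResourceGroupName $db.ResourceGroupName `
    -ServerName $db.ServerName `
    -TargetDatabaseName "MyDatabase-Restored" `
    -ResourceId $db.ResourceId `
    -Edition "Standard" -ServiceObjectiveName "S1"
```

Note that the point in time must fall within the retention period of the source database's backups, and the restore always produces a new database on the same server.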

 

Points to note

One thing to be careful about is that Point in Time Restore always creates a brand-new SQL Database containing the data as of the specified point in time.
In other words, a database restored with Point in Time Restore holds no backup data from before the restore; if you then try to restore that database again with Point in Time Restore, the earliest available restore point is at or after the time the database was restored.
For example, as in the figure below, database B, which was restored using Point in Time Restore, cannot be taken back to before the time database B was created; to go back further than that, you must run Point in Time Restore against database A.

<Figure 2. Point in Time Restore backup retention periods>

So when does this behavior require care?
It is when you delete the original database A after creating database B with Point in Time Restore.
In that case, even if you want to go back to a point before database B was created, database B's backups contain only data from after database B was created, so you cannot restore to an earlier state. SQL Database does have a feature that lets you restore a deleted database within a certain period after deletion, but that feature can restore only the database as it was at the moment database A was deleted; the backup files associated with it cannot be restored, and you cannot use Point in Time Restore to go back to before the deletion.

For these reasons, before deleting the database that a Point in Time Restore was created from (database A), please either consider carefully whether you will ever need to go back to an earlier point in time, or delete the original database only after enough time has passed since the restore that a sufficient backup history has accumulated for the newly created database (database B).

* The content of this blog is current as of June 28, 2017.

Guest post: Measuring the profitability of marketing campaigns


We have a new guest post, this time from José Ángel Fernández Ortiz, a .NET developer at Ilitia Technologies. José Ángel's team recently worked with Codere to help them analyze the profitability of their advertising campaigns, relying on Application Insights. In this article he tells us about the experience.

Back at Ilitia

As I climbed the stairs to the Ilitia office, I mentally reviewed the obstacles we had faced in our last project. Nothing out of the ordinary: as sometimes happens, what was initially going to be a development with interesting technology and moderate business complexity had, for very good reasons (they are always good reasons), gotten complicated and turned out to be unexpectedly hard. Thanks to the professionalism and technical skill of my colleagues, the difficulties were handled elegantly, but I kept wondering how we could have detected them earlier and spared ourselves a few upsets and headaches. Then I remembered what one of my mentors, a great guy, said in a similar situation: "Jose, if everyone involved in the project, developers, managers and customer alike, could install a DLL in our heads that told us what we were thinking every time we messed up, and warned us every time our minds went off track, it would be a different story. But this isn't the Matrix, so all we can do is solve it with sweat and good will. You big crybaby."

Lost in that memory, with a half-nostalgic smile on my face, I rang the Ilitia doorbell.

After the usual greetings to colleagues and bosses, I found a free desk, booted up the computer and started looking for courses to brush up on one of the technologies I had on my radar: it seemed like the best way to recover from my wounds without feeling I was wasting time. I was halfway through a video on load balancers in Azure when Jon, the company's general manager, came over to my desk:

- Jose, go with Rubén to one of the meeting rooms; he is going to tell you about a small collaboration we are carrying out at a customer. Why are you looking around?

I was looking for a way out, but there wasn't one.

Application Insights

Once in the room, Rubén decided to find out whether I was familiar with the technologies to be used in the project:

- What do you know about Application Insights?

After swallowing hard and clearing my throat, I answered with one of my best smiles:

- I've seen that, when you create a new web project in Visual Studio, there's a checkbox offering to add Application Insights to the solution.

- So you have no idea.

- So I have no idea.

Rubén looked up at the ceiling like a man appealing to the heavens, took a breath, and gave me an introduction:

Application Insights is a Microsoft service for monitoring applications. It lets you measure things like the number of sessions and users, web requests, response times, errors... and also server performance indicators such as CPU usage, RAM and network traffic. Better than any logging system we have used so far, and much more complete. It also shows the metrics almost live: you can build dashboards in the Azure portal with the data that interests you most, have alerts sent to you when certain events occur, and even run ad hoc queries with the Analytics language, which is somewhat reminiscent of T-SQL and LINQ.

 

- And is it hard to install in an application?

- Not at all! Whether in plain HTML pages or in ASP.NET MVC or SharePoint views, you just paste in a JavaScript snippet that the Azure portal gives you once you have registered your application in Application Insights. And for the server side there is a NuGet package, so installation is quite simple.

When I heard it could be installed both on the client side and on the server side, the memory of the DLL my mentor had talked about came back to me:

- So it can be installed both on the client and on the server... Could it be installed in people's heads?

With a very serious face, looking me straight in the eye, Rubén answered:

- For some people in particular it would come in very handy. By the way, what do you know about the sports betting industry?

Sports betting

When I thought about sports betting, the first thing that came to mind was what I had seen in the movies: gangsters, debt collectors, fixed boxing matches, always staying one step ahead of the police. On the way to the first meeting with the customer, I could not contain my excitement:

- I can't believe I'm going to meet Al Capone himself! When I tell them at home they won't believe it.

Jon took his eyes off the road for a moment to look at me: I was riding in the passenger seat. After a couple of seconds of silence, he began to speak:

- No, Jose, we are not going to meet Al Capone. Forget what you have seen in the movies: sports betting is legal in Spain, and it moves many millions of euros every year. A large part of those millions is wagered online, through web and mobile applications, and that is where we come in: we are going to help the people at Codere get value out of the data that their Apuestas Deportivas application is storing in Application Insights.

Advertising is a bet

At Codere, Carlos García Sánchez, project manager, and José Antonio Esteban Sánchez, technology director, gave us the details of the data they wanted to exploit. José Antonio began:

- In the sports betting industry the competition is fierce. That means a great deal of money goes into advertising campaigns. The problem is that it is very hard to know how much of that money paid off and how much was thrown away.

- In fact -Carlos pointed out-, there is an old quote about this: "Half the money I spend on advertising is wasted; the trouble is I don't know which half."

That quote, famous in the advertising world, is often attributed to John Wanamaker, an American businessman who made his fortune in the 19th and early 20th centuries with a chain of department stores, and who is considered one of the fathers of modern marketing. In Wanamaker's day, placing an ad in the newspaper felt like placing a bet, since no one ever knew for certain which campaigns worked or why. The quote remains famous because that feeling of gambling every time a campaign is launched has never left advertisers.

José Antonio continued laying out his needs:

- What we want to know is which half is the good half, that is, which campaigns are being effective. And the reports we get from the companies we contract the advertising through are no use to us: they calculate the value and effectiveness of a campaign in a rather... esoteric way. And when you ask for explanations, they tend not to be transparent.

- Fortunately -Carlos went on-, we have the means to know whether a campaign has worked or not. For us it is clear when a campaign is a success: when it brings in new user registrations, and those users deposit money into their virtual wallets so they can start betting. The flow is as follows: someone outside our system, browsing the internet, comes across an ad from a Codere campaign; they click on it and are taken to the campaign's landing page, which is already inside our system; if, once in our system, they register and, better still, deposit money, and this happens not just for one person but for a large number of people, the campaign has been a success. If, on the contrary, few people reach the landing page, even fewer register, and fewer still deposit money, the campaign has been a failure.

- And are you storing all this information in your application? -I asked.

- We are storing all this information in Application Insights. Fortunately, Application Insights is extensible, so besides the default usage and performance data it generates, we can store our own custom data through custom events. So, for each user or, rather, for each session identifier, we are recording: arrival at a landing page, start of user registration, registration completed, and money deposited. Since each landing page is tied to a particular campaign, and each campaign is contracted with an agency, we can tell whether an agency's work has been profitable by comparing what the campaign cost us with the money deposited by the users who entered the system through that campaign's landing page. And, going a step further, since the URL referrer is also stored every time a landing page is visited, we could find out which kinds of sites are the most profitable places for our ads: newspapers, social networks, forums, and so on.
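The client-side instrumentation described here can be sketched roughly as follows. This is a hypothetical illustration, not Codere's actual code: the event names, property names and the trackFunnelStep helper are invented; only appInsights.trackEvent(name, properties) is the real Application Insights JavaScript API.

```javascript
// In a real page, window.appInsights is created by the snippet pasted from the
// Azure portal. Here we fall back to an in-memory queue so the shape of the
// telemetry can be inspected outside a browser.
var queue = [];
var appInsights = (typeof window !== "undefined" && window.appInsights) || {
  trackEvent: function (name, properties) {
    queue.push({ name: name, properties: properties });
  }
};

// One custom event per funnel step, keyed by session and campaign.
function trackFunnelStep(step, sessionId, campaignId, extra) {
  var props = Object.assign({ sessionId: sessionId, campaignId: campaignId }, extra || {});
  appInsights.trackEvent(step, props);
}

// One user's path through the funnel (all names and values are illustrative):
trackFunnelStep("LandingPageVisit", "s-001", "summer-promo");
trackFunnelStep("RegistrationStarted", "s-001", "summer-promo");
trackFunnelStep("RegistrationCompleted", "s-001", "summer-promo");
trackFunnelStep("Deposit", "s-001", "summer-promo", { amount: 50 });

console.log(queue.length);               // 4
console.log(queue[3].name);              // Deposit
console.log(queue[3].properties.amount); // 50
```

Counting distinct sessions per step then gives the per-campaign funnel described in the text.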

- And will we query the data directly in Application Insights?

- In principle, no. Application Insights supports continuous export, and we have built a process that lands that data in an Azure SQL database. We keep it open so the agencies can query it, and the idea is that you extract the data from there. The problem is that the database's performance is not good, and we do not have time to analyze why.

With things much clearer thanks to José Antonio and Carlos, and with one risk on the radar, we went back to Ilitia to get down to work.

Technical solution

The technical approach for the project was clear: a webjob running periodically would extract the information from the database holding the Application Insights data and aggregate it so that, for each campaign, we knew how many users had reached the landing page, how many had started registration, how many had completed it, and how many had deposited money; the aggregated data would be stored in a secondary database and shown to stakeholders through a web application. Since Codere already had a dashboard with numerous charts built with Canvas.js, we decided to use a similar JavaScript library in our application, Chart.js, building a widget that could be embedded in Codere's dashboard, plus a page, reachable from the widget, with the detailed breakdown.
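As an illustration of the kind of chart involved, here is a hedged sketch of a Chart.js (2.x-era) grouped-bar configuration; the labels and figures are invented, and in a real page the object would be passed to new Chart(ctx, config):

```javascript
// Hypothetical data: funnel counts for one campaign. In the browser this
// config would be rendered with:
//   new Chart(document.getElementById("funnel").getContext("2d"), config);
var config = {
  type: "bar",
  data: {
    labels: ["Landing page", "Registration started", "Registration completed", "Deposit"],
    datasets: [
      { label: "This month", data: [1200, 430, 310, 95] },
      { label: "Last month", data: [1500, 600, 400, 150] }
    ]
  },
  options: {
    // Chart.js 2.x scale syntax.
    scales: { yAxes: [{ ticks: { beginAtZero: true } }] }
  }
};

console.log(config.data.datasets.length); // 2
```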

With the help of our very talented designer Isaac, we did some proofs of concept with Chart.js and started making very good progress on the data extraction and its visualization. We solved a few minor problems, with the unconditional help of José Antonio and Carlos, who were always ready to clear up any doubt. Unfortunately, as the delivery date approached, we found that the queries against the database holding the Application Insights data were getting slower and slower, until they reached an unsustainable point.

- Carlos, we have a problem.

- The database, right?

- That's it.

As he had accustomed us to in the face of any difficulty, Carlos had a plan ready:

- We knew it had to happen, so I have done a proof of concept against the Application Insights REST API. You can send it queries written in Analytics, and it instantly returns the information that our database, for some strange reason, takes minutes to return. The best thing will be to change the data access layer so that it stops hitting the database and starts consuming the REST API. I'll send you an example.
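The shape of such a request can be sketched like this. The helper, application id, API key and Analytics query below are placeholders, while the query endpoint and x-api-key header come from the public Application Insights REST API:

```javascript
// Build (but do not send) a GET request against the Application Insights REST API.
function buildAnalyticsRequest(appId, apiKey, query) {
  return {
    url: "https://api.applicationinsights.io/v1/apps/" + appId +
         "/query?query=" + encodeURIComponent(query),
    headers: { "x-api-key": apiKey }
  };
}

var req = buildAnalyticsRequest(
  "00000000-0000-0000-0000-000000000000", // placeholder application id
  "<api-key>",                            // placeholder API key
  "customEvents | where name == 'Deposit' " +
  "| summarize count() by tostring(customDimensions.campaignId)"
);

console.log(req.headers["x-api-key"]); // <api-key>
```

The response is JSON (tables of columns and rows), which the webjob can aggregate into the secondary database.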

We changed the data extraction to consume the API, and the webjob's execution times dropped considerably, so we deployed the project to the customer's Azure account and finished with the impression of having solved a considerable problem with a relatively simple solution.

- This is going to be very useful to us -Carlos said as we parted.

- At last we are going to know which half of the advertising spend is the good half -added José Antonio.

Back at Ilitia, we had a short meeting to talk about the project.

Conclusions

- What did you get out of all this? -Jon asked me.

- To start with, I have become somewhat familiar with the world of sports betting, which does no harm. I did not know it was such an established business or that it moved so much money. And we have built a solution for assessing the profitability of a marketing campaign, which is a problem advertisers have always had, so I would say the project delivers real value. I have also had the chance to get to know technologies I had not touched before: Chart.js is an easy library to use, so we should keep it in mind in the future; but what really impressed me was the power and versatility of Application Insights. On top of that, it is very, very easy to install in applications, both on the server side and on the client side. There is one thing I am still not clear on, though.

- What's that?

- Can it or can it not be installed in people's heads?

- Go study Azure.

At your orders.

 

José Ángel Fernández Ortiz

Ilitia Technologies

 


Announcing: BizTalk Server Migration tool


We are very happy to announce a great tool provided by MSIT. The tool helps in multiple scenarios, from migrating your environment to taking a backup of your applications.

It comes with some built-in intelligence, such as:

  • Connectivity test of the source and destination SQL instance or server
  • Identifying the BizTalk application sequence
  • Retaining file share permissions
  • Ignoring zero KB files
  • Ignoring files that already exist in the destination
  • Ignoring BizTalk applications that already exist in the destination
  • Ignoring assemblies that already exist in the destination
  • Backing up all artifacts to a folder.
Features Available:
  • Windows Service
  • File Shares (without files) + Permissions
  • Project Folders + Config file
  • App Pools
  • Web Sites
  • Website Bindings
  • Web Applications + Virtual Directories
  • Website IIS Client Certificate mapping
  • Local Computer Certificates
  • Service Account Certificates
  • Hosts
  • Host Instances
  • Host Settings
  • Adapter Handlers
  • BizTalk Applications
  • Role Links
  • Policies + Vocabularies
  • Orchestrations
  • Port Bindings
  • Assemblies
  • Parties + Agreements
  • BAM Activities
  • BAM Views + Permissions

Features Unavailable:

  • SQL Logins
  • SQL Database + User access
  • SQL Jobs
  • Windows schedule task
  • SSO Affiliate Applications

 

Download the tool here

For a short guide, take a look here

Introducing Windows Template Studio 1.1


We are very excited to announce the release of Windows Template Studio 1.1. In collaboration with an engaged community, we have put the delivery of new features and overall functionality on a regular release cadence. We are always looking for contributors, and if this topic interests you, please visit GitHub: https://aka.ms/wts.

Windows Template Studio

How to get the update:

There are two ways to get the latest build.

  • If already installed: Visual Studio should update automatically. To trigger the update yourself, open Tools -> Extensions and Updates, then go to the Update expander on the left; you will see Windows Template Studio there. Click Update.
  • If not yet installed: go to https://aka.ms/wtsinstall, click "download", and double-click the VSIX installer.

Wizard improvements:

  • page reordering;
  • the first page will no longer be blank;
  • renaming of pages and background tasks;
  • improved offline behavior;
  • initial localization work;
  • added code analysis.

Page updates:

  • added a Grid page;
  • added a Chart page;
  • added a Media/Video page;
  • improved the Web View page.

Feature updates:

  • added Store SDK Notifications;
  • SettingStorage can now save binary data (not just strings).

Template improvements:

  • the navigation panel has moved to the UWP Community Toolkit;
  • styling adjustments;
  • improved ResourceLoader performance.

For the full list of issues resolved in release 1.1, head over to GitHub.

What's coming in future versions

We deeply appreciate the community's support and involvement. We are collaborating with the Caliburn.Micro framework and have created a development branch with Nigel Sampson. We are in discussions with the Prism and Template 10 teams to work out how to add those frameworks as well. Below is a list of what we plan to add:

  • Fluent design in the templates;
  • Project Rome features as optional project capabilities;
  • right-click support for existing projects;
  • wizard localization;
  • accessibility support in the wizard and the templates.

If you would like to help us, please visit https://aka.ms/wts.

Source: https://blogs.windows.com/russia/2017/06/25/windows-template-studio-1-1/#6dj81QoGGOfd6LRL.97

Capture a StackOverflowException (0xc00000fd) and make a dump


I read in this article that “Starting with the .NET Framework 2.0, you can’t catch a StackOverflowException object with a try/catch block, and the corresponding process is terminated by default. Consequently, you should write your code to detect and prevent a stack overflow.”  That is the reason why the following code was crashing my process instead of the exception being caught within my try{}…catch{}.

private void ThisIsARecursiveFunctionUsedToTriggerAStackOVerflow()
{
  try
  {
    for (int i = 0; i < 1000; i++)
    {
      AnotherFunctionToMakeTheStackLookCooler();
      ThisIsARecursiveFunctionUsedToTriggerAStackOVerflow();
    }
  }
  catch(StackOverflowException ex)
  {
    // Never reached: the CLR terminates the process before this handler runs.
    lableMessage.Text = ex.Message + "<-*******->" + ex.StackTrace;
  }
  catch(Exception ex)
  {
    lableMessage.Text = ex.Message;
  }
}

But I needed to capture the exception because I wanted to look at it in a memory dump.  Had I been able to capture the exception in the code then I could dump out the stack into a log and see what was going on.

The StackOverflowException was happening in my W3WP process, and I used procdump to capture the exception. Here is the command I used, also shown in Figure 1, where I ran it via KUDU/SCM on an Azure App Service:

procdump -accepteula -e 1 -f C00000FD.STACK_OVERFLOW -g -ma 9400 d:\home\DebugTools\Dumps


Figure 1, capture a stackoverflowexception, 0xc00000fd, C00000FD.STACK_OVERFLOW memory dump

Then navigate to the directory where the dump was created, Figure 2, download it and open it in WinDbg.


Figure 2, download a stackoverflowexception, 0xc00000fd, C00000FD.STACK_OVERFLOW memory dump

When I open the dump in WinDbg, the tool recognizes the first chance exception C00000FD.STACK_OVERFLOW, dumps out the method (so!so._default.ThisIsARecursiveFunctionUsedToTriggerAStackOVerflow()+0xa) and changes focus to the thread (~28s) which triggered the exception, see Figure 3.


Figure 3, analyze a stackoverflowexception, 0xc00000fd, C00000FD.STACK_OVERFLOW memory dump

I also have WinDbg configured to show source code, so when I originally opened the dump and WinDbg found the exception, it was able to open the source code at the line where it happened. See Figure 4. Also, enter k to view the stack of the thread which caused the exception.


Figure 4, analyze a stackoverflowexception, 0xc00000fd, C00000FD.STACK_OVERFLOW memory dump

The code above is a bad pattern, used only to trigger the exception. If you use recursive methods, you need to protect yourself by adding a counter of some kind, because, as you now know, you cannot catch this kind of exception with a try{}...catch{} block, and it therefore becomes an unhandled exception that crashes the process.
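A hedged sketch of such a counter-based guard (the limit and names below are illustrative):

```csharp
private const int MaxDepth = 500; // illustrative limit, far below stack exhaustion

private void GuardedRecursion(int depth)
{
    // Fail with a catchable exception instead of overflowing the stack.
    if (depth > MaxDepth)
        throw new InvalidOperationException("Recursion depth limit exceeded");

    GuardedRecursion(depth + 1);
}
```

Unlike StackOverflowException, the InvalidOperationException thrown here can be handled in an ordinary catch block.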

Guest post: Hands-on with Power BI


If you are here, Power BI interests you: you already know the main introductory tutorials and have managed to get some reports working. Now you want to start working on something real, where you KNOW not everything will be so easy.

In this article, Javier Fernández, consultant at ilitia Technologies, explains, step by step, how to create a custom visual based on the imported data and the report's filters:

  • Importing and modeling the data
  • Defining dimensions
  • Creating tables
  • Creating measures with DAX
  • Creating reports
  • Creating dynamic report titles
  • Developing a custom visualization
  • Publishing the report

The example is based on a real case in which the customer wanted to analyze the expenses and budgets of several marketing campaigns in relation to the geographic area of influence, the product, the department and the media involved. The report shows cost information for the campaigns over the current year.

Importing and modeling data

First, we open Power BI Desktop and connect to a data source. For this example we get the information from three Excel files: Campañas (campaigns), Gastos (expenses) and Presupuestos (budgets).

Power BI data import options

 

Information to import from the Excel file

 

Once we have obtained all the data, we proceed to model it.

 

Imported data panel

 

First, we have to review the format of the columns. We select the Campaña column of the Gastos table. On the Modeling tab of the top menu, we select "Don't summarize" under "Default summarization"; this way it will not sum the identifiers, which could cause problems when we bring the data into the report.

 

Options on the Modeling tab

 

Next, to get the total expense, we select the Gasto column. In this case, we check that "Sum" is selected under "Default summarization".

 

 

 

We select Format -> Currency -> General currency, to show the information in euros.

 

Selecting the number format

 

To be able to compare the expense and budget data, we apply the same transformations to the Campaña and Presupuesto columns of the budgets table.

Then we want to relate the amounts in both tables to the concepts they are associated with:

 

Relationships view in Power BI Desktop

 

We select the Código column of the Campaña table and drag it onto the Gastos and Presupuestos tables, which we imported and transformed in the previous steps.

 

Relationships between the tables, already created

 

Defining a dimension

Our customer needs to be able to filter and analyze how costs evolve over the year, and to represent that in our report we need to create the time dimension. In this case, to define it, we must design a date table.

On the Modeling tab, we select New table and add the following expression, which creates a Fechas (dates) table filled with the dates from January 1 of the current year through December 31.

 
Fechas = CALENDAR(CONCATENATE(YEAR(TODAY());"/01/01");CONCATENATE(YEAR(TODAY());"/12/31"))

 

We must create this expression in order to represent our data over time.

 

 

Once the table is created, we add the Año, NMes, Mes and Día columns. On the Modeling tab, we select New column and add:

Año = 'Fechas'[Date].[Año]

NMes = 'Fechas'[Date].[NroMes]

Mes = 'Fechas'[Date].[Mes]

Día = 'Fechas'[Date].[Día]

Then, in the Relationships view, we create the relationship between the Fecha field of the Gastos table and the Date field of the Fechas table.

 

 

Once the time dimension is created, the next step is to create the measures we will need in the report.

Creating measures with DAX

We need two measures, to calculate the accumulated expense and the percentage of expense over budget.

We select the Gastos table and the New measure option on the Modeling tab. For this step it helps to have some notions of DAX; if you are familiar with PowerPivot and SQL Server Analysis Services, it will ring a bell.

Creating measures with DAX lets us work with relational data dynamically.

We add the following DAX expression, which calculates the accumulated expense month by month:

 
Gasto acumulado = CALCULATE(SUM(Gastos[Gasto]); FILTER(ALL(Fechas[Date]); Fechas[Date] <= MAX (Fechas[Date])))

 

We repeat the operation and add the following expression, which calculates the total percentage of the accumulated expense against the budget:

 
Porcentaje total = IF(SUM(Presupuestos[Presupuesto]) = 0; 0; CALCULATE('Gastos'[Gasto acumulado] / SUM(Presupuestos[Presupuesto]) * 100 ))

 

We create a measure on the Fechas table that calculates the selected month:

 
Mes seleccionado = MAX(Fechas[NMes])

 

We create a measure on the Títulos (titles) table that shows one of the titles from the table together with the selected month. Later, in the section "Creating dynamic report titles", we will see how to create this table.

Gastos vs. Presupuesto = CONCATENATE(CONCATENATE(CONCATENATE(LOOKUPVALUE('Títulos'[Título]; 'Títulos'[ID]; 1); " "); FORMAT(DATE(1; Fechas[Mes seleccionado];1); "MMMM")); " ") 

Creating the report

Now we are going to chart how expense compares to budget over the months of the current year.

To represent time, we download the Timeline Slicer visual from the Office Store.

Once downloaded, we select Import a custom visual in the Visualizations pane of Power BI Desktop, and add the visual.

 

Importing a custom visual

 

We select it and add it to the report like any other visual.

 

 

In the Time section of the pane, we add the Date field of the Fechas table. The visual will then display the date filter.

 

How the visual looks with the Date field of the Fechas table

 

Next, to show the evolution of expenses against the budget, we choose a clustered column chart. Onto this chart we add the Nombre field of the Campañas table to the Axis, and the Gasto acumulado measure of the Gastos table together with the Presupuesto field of the Presupuestos table to the Value.

 

 

As a result, we can see how the expense evolves over time by selecting different months in the time filter.

To complete the report, we add a card to show the percentage of expense over the total budget. We select the Card visual and the Porcentaje total measure of the Gastos table.

 

 

Creating dynamic report titles

As a next step, we need a title for our report, and we need the title to be dynamic so that it always shows the concepts and filters we have selected: "Gastos vs. Presupuestos enero 17" (or febrero 17, or marzo 17...).

First, we get the titles from an Excel file.

 

Second, we model the data so that all the columns have the correct format. In this case, we select "Don't summarize" for the ID column.

In our case, we want the title to be dynamic. To make that possible, we build on the measures we have already created and develop a custom visual. Let's go through the process step by step.

Developing a custom visualization

First of all, we have to install NodeJS (https://nodejs.org) and the Power BI Visuals Tools (npm install -g powerbi-visuals-tools).

Then we must enable the developer tools in Power BI, by going to the settings of our account at http://powerbi.microsoft.com

 

 

On the General tab, in the Developer section, we check the option "Enable developer visual for testing".


Once that is done, we open a Command Prompt console and create a new project named "titlebox", using the command:

pbiviz new titlebox

Next, we open the new project in Visual Studio Code, by opening the titlebox folder generated in the previous step.

 

 

On the Extensions tab, we look for the PBIViz CLI Control extension and install it.

 

 

Then we go back to the Explorer tab and open the pbiviz.json file. In this file we fill in the name, the display name, the version, the developer's name and email, and so on.

 

 

Next, from the Explorer tab we open the capabilities.json file.

We add the dataRoles and the dataViewMappings. Here we define the value we are going to show in the visual, its name, and its kind (Grouping, Measure, GroupingOrMeasure). We also add a condition so that only one value can be selected.

 

 

When we deploy the solution package, in the Power BI Desktop pane we will see the following:

 

 

Next, still in capabilities.json, we add a set of properties that will let us format the text.

 

 

When we deploy the solution, the visual in Power BI Desktop will have new configuration options for our control:

 

 

Now we want to add the layout styles for the control, so we go to the Explorer tab and edit the visual.less file.

 

The next step is to create a method to get the values of the text, size, color, font and so on; we add this in the objectEnumerationUtility.ts file of the src folder, where we introduce the following method:

 

 

In the Explorer, we replace the icon in the assets folder. It will be the application icon shown in the visuals gallery.

The next step is to go to the Explorer tab of Visual Studio Code and open the visual.ts file, where we add the following interfaces:

 

 

Next, in the same file, we add the visualTransform method, which applies the settings to our text:

 

 

Then, in the visual.ts file, in the Visual class, we add the following:

 

 

The update method, which refreshes the format of the text:

 

 

And in the enumerateObjectInstances method, we add the properties:

 

 
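Since the screenshots of the visual.ts code are not reproduced here, the following is only a rough, hypothetical sketch of the settings plumbing that update and enumerateObjectInstances rely on; every name in it is illustrative, not the article's actual code:

```typescript
// Illustrative text settings for the title visual.
interface TextSettings {
  text: string;
  fontSize: number;
  color: string;
}

// Defaults used when the format pane has no value for a property.
const defaults: TextSettings = { text: "", fontSize: 24, color: "#333333" };

// Merge whatever the user set in the format pane over the defaults.
function mergeTextSettings(fromFormatPane: Partial<TextSettings>): TextSettings {
  return Object.assign({}, defaults, fromFormatPane);
}

const merged = mergeTextSettings({ text: "Gastos vs. Presupuestos junio", color: "#0078D7" });
console.log(merged.fontSize); // 24
console.log(merged.color);    // #0078D7
```

In the real visual, update would read these values out of the DataView and apply them to the rendered text element.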

Finally, in the tsconfig.json file, we add all the files in the src folder, so that the project recognizes them and can compile.

 

 

In Visual Studio Code, we press Ctrl + F8 to start execution; we can follow its progress in the output console.

In the web environment (http://powerbi.microsoft.com) we can see a simulation of our visual and verify that everything works.

To do so, we go to the dataset tab and add the following visual.

Once it is placed in the report, we add a value from our data source:

This way, we can verify that our visual works correctly.

We go back to Visual Studio Code and press Ctrl + F9 to stop execution.

Next, we package the solution to import it into Power BI Desktop. From Visual Studio Code, we press Ctrl + Shift + B to generate the package:

As a result, we will find our package in the "dist" folder of the solution:

Back in Power BI Desktop, we select "Import a custom visual" under Visualizations and pick our package from the hard drive.

We add the new visual to our report.

In the Title field, we drag the "Gastos vs. Presupuestos" measure from the Títulos table we created earlier.

Under "Text settings", we fill in the values:

The result is the following title, which will change dynamically when we select different months in the date filter:

It will look like this in our final report.

Publishing the report

To finish, we will publish the report to our workspace. To do so, we press the Publish button on the Home tab.

We select the workspace where we want to publish it and press Select.

Released: System Center Management Pack for SQL Server, Replication, AS, RS, Dashboards (6.7.31.0)


We are happy to announce new updates to SQL Server Management Pack family!

  • Microsoft System Center Management Pack for SQL Server enables the discovery and monitoring of SQL Server Database Engines, Databases, SQL Server Agents, and other related components.

Microsoft System Center Management Pack for SQL Server

Microsoft System Center Management Pack for SQL Server 2014

Microsoft System Center Management Pack for SQL Server 2016

  • Microsoft System Center Management Pack for SQL Server Replication enables the monitoring of Replication as a set of technologies for copying and distributing data and database objects from one database to another and then synchronizing between databases to maintain consistency.

Microsoft System Center Management Pack for SQL Server 2008 Replication

Microsoft System Center Management Pack for SQL Server 2012 Replication

Microsoft System Center Management Pack for SQL Server 2014 Replication

Microsoft System Center Management Pack for SQL Server 2016 Replication

  • Management Pack for SQL Server Analysis Services enables the monitoring of Instances, Databases and Partitions.

Microsoft System Center Management Pack for SQL Server 2008 Analysis Services

Microsoft System Center Management Pack for SQL Server 2012 Analysis Services

Microsoft System Center Management Pack for SQL Server 2014 Analysis Services

Microsoft System Center Management Pack for SQL Server 2016 Analysis Services

  • Management Pack for SQL Server Reporting Services (Native Mode) enables the monitoring of Instances and Deployments.

Microsoft System Center Management Pack for SQL Server 2008 Reporting Services (Native Mode)

Microsoft System Center Management Pack for SQL Server 2012 Reporting Services (Native Mode)

Microsoft System Center Management Pack for SQL Server 2014 Reporting Services (Native Mode)

Microsoft System Center Management Pack for SQL Server 2016 Reporting Services (Native Mode)

  • Management Pack for SQL Server Dashboards

Microsoft System Center Management Pack for SQL Server Dashboards

Please see below for the new features and improvements. More detailed information can be found in guides that can be downloaded from the links above.

New SQL Server 2008-2016 MP Features and Fixes (6.7.31.0)

  • Added new “Availability Database Backup Status” monitor in Availability Group to check the existence and age of the availability database backups (this monitor is disabled by default)
  • “Database Backup Status” monitor has been changed to return only “Healthy” state for the databases that are Always On replicas, since availability database backups are now watched by the dedicated monitor
  • Improved performance of DB Space monitoring workflows
  • Added new “Login failed” alerting rule for SQL Server event #18456
  • Fixed issue: “Active Alerts” view does not show all alerts
  • Fixed issue: DB space monitoring scripts fail with “Cannot connect to database” error.
  • Fixed issue: PowerShell scripts fail with “Cannot process argument because the value of argument ‘obj’ is null” error
  • Fixed issue: Alert description of “Disk Ready Latency” and “Disk Write Latency” monitors displays the sample count instead of the performance value that was measured
  • Fixed issue: Different file location info from “sys.master_files” and “sysfiles” causes error when Availability Group secondary database files are in different path
  • Fixed issue: “DB Transaction Log Free Space Total” rules return wrong data
  • Introduced minor updates to the display strings
  • Deprecated “Garbage Collection” monitor and the appropriate performance rule
  • Resource Pool Discovery is disabled by default for pools not containing databases with Memory-Optimized Tables
  • “XTP Configuration” monitor now supports different file path types (not only those starting with C:, D:, etc.)
  • Fixed issue: “Resource Pool State” view shows incorrect set of objects
  • Fixed issue: Invalid group discovery in SQL Server 2016 Always On
  • Updated the visualization library

New SQL Server 2008-2016 Replication MP Features and Fixes (6.7.31.0)

  • Added Distributor name caching to Subscription discovery
  • Restricted the length of some string class properties
  • Improved the internal structure of SQL scripts storage
  • Fixed variable types in SQL scripts
  • Fixed connectivity issues in SmartConnect module
  • Introduced minor updates to the display strings
  • Updated the visualization library

New SQL Server 2008-2016 Reporting Services MP Features and Fixes (6.7.31.0)

  • Reimplemented Instance seed discovery: replaced the managed module with a PowerShell script
  • Reimplemented Deployment seed discovery: added a retry policy and improved error handling
  • Updated the visualization library

New SQL Server 2008-2016 Analysis Services MP Features and Fixes (6.7.31.0)

  • Restricted length of some string class properties
  • Updated the visualization library

New SQL Server Dashboards MP Features and Fixes (6.7.31.0)

  • Increased the version number to comply with the current version of SQL Server MPs

We are looking forward to hearing your feedback. This release is in English only. If you are an international customer looking for a localized version of this release, we would like to hear from you at sqlmpsfeedback@microsoft.com.

SQL Server Workgroup Cluster FCM Errors


Background

One of the new features of SQL Server 2016 is the ability to use SQL Server with Failover Cluster in a workgroup rather than joined to Active Directory. When working with SQL Server and Failover Clustering in a workgroup, many of the abilities that are normally used with Active Directory are no longer available, for example using Windows Authentication.

When using SQL Server Availability Groups in a workgroup cluster, some administrative items such as creating new listeners need to be completed manually.

The Problem

Attempting to use Failover Cluster Manager (FCM) to create resources such as a Client Access Point ([CAP], also known as a Network Name) may result in an error such as: "Error in Validation. Unable to determine if the computer '<CAPName>' exists in Domain '<WorkgroupName>'. The server is not operational." The picture below shows a sample of the error that may occur.

Resolution

Utilizing PowerShell to work with the Workgroup Cluster will allow for the customization of each resource instead of the defaults that the GUI (FCM) may impose.

Below is an example of creating a CAP/Network Name utilizing PowerShell for use as a listener. Please note that to reuse the script below, you'll need to change the values to match your environment. This example creates the same listener that is shown in the error above when using FCM.

Add-ClusterResource -Name "IPAddress1" -ResourceType "IP Address" -Group "WGAG"
Get-ClusterResource -Name IPAddress1 | Set-ClusterParameter -Multiple @{"Network" = "Cluster Network 1";"Address" = "20.250.250.9";"SubnetMask" = "255.0.0.0";"EnableDHCP" = 0}
Add-ClusterResource -Name "IPAddress2" -ResourceType "IP Address" -Group "WGAG"
Get-ClusterResource -Name IPAddress2 | Set-ClusterParameter -Multiple @{"Network" = "Cluster Network 2";"Address" = "30.250.250.9";"SubnetMask" = "255.0.0.0";"EnableDHCP" = 0}
Add-ClusterResource -Name "TestName" -Group "WGAG" -ResourceType "Network Name"
Get-ClusterResource -Name "TestName" | Set-ClusterParameter -Multiple @{"DnsName" = "TestName";"RegisterAllProvidersIP" = 1}
Set-ClusterResourceDependency -Resource TestName -Dependency "[IPAddress1] or [IPAddress2]"
Start-ClusterResource -Name TestName -Verbose

Microsoft’s Custom Vision Cognitive Service


I’m in Sydney, Australia this week to speak at the Build Tour. Besides the keynote, I’m presenting a session on Artificial Intelligence and Microsoft’s new Custom Vision Service. Custom Vision has received significant developer interest since Cornelia Carapcea introduced it in the Build keynote this year. As I prepared for my Australia trip, I thought I’d build something neat with the Custom Vision Service and document my findings/experience in this blog post.  Let me introduce the Snakebite Bot…a bot that can help visually identify venomous snakes in North America.

Figure 1: Snakebite Bot (https://snakebite.azurewebsites.net)

Background

Cornelia demonstrated the use of the Custom Vision Service to identify plants by their leaves. I immediately saw the value in identifying poisonous plants like Poison Ivy. When I heard I would be presenting Custom Vision in Australia, I immediately thought…everything in Australia can kill you…I’ll start with snakes. Why snakes? When I was a little kid, I was bitten by a poisonous Copperhead snake at Possum Kingdom Lake (any Toadies fans out there?). You would think this would make me scared of snakes, but it did the opposite…dreams of becoming a herpetologist quickly emerged. Fast-forward and my snake career never took off, but becoming a developer did. That said, I never lost my fascination with reptiles. Enter the Snakebite bot…Custom Vision, bots, and snakes…what could be better?

Building the model

Microsoft’s AI stack includes a number of Cognitive Services that are more commodity than “custom”…you just call the services with your data (ex: facial recognition). Custom Vision is a true “custom” service…you build and train a model specific to your needs. “Custom” doesn’t mean complex as building and consuming a Custom Vision model is as easy as 1-2-3**.

  1. Upload/tag photos
  2. Train and test the model
  3. Call into the model to get the most probable tag matches for a photo

**An important fourth step would be to continually enhance the model by evaluating photos with low probability matches. The Custom Vision Service will improve in this area as the service matures.

Upload/tag photos

While in preview, a model in the Custom Vision Service supports 1000 photos and 50 tags. You can work with Microsoft to expand these quotas, but this is at Microsoft’s discretion during preview. I would caution that multiple small models might be better than one monolithic model. For example, if I wanted to identify any venomous species, I might create specialized models for spiders, snakes, frogs, etc and use the Computer Vision cognitive services to determine which vision model to use.

I worked with Microsoft to expand my model to 10000 photos and 100 tags…and it still wasn’t enough as North America has 129 species of snakes. What did I do? I consolidated very similar species (ex: there are 15 different Garter Snake species that all look very similar so I consolidated them into one “Garter Snake” tag). Although my model will identify the most probable snake species, the more important identification is if a snake is dangerous or not and that can be accomplished with two tags (venomous vs non-venomous). After consolidating common species I ended up with 84 tags (81 species and 3 venomous classifications).

Photos should also be at least 256px on the shortest edge but no larger than 6MB in size. Additionally, Microsoft recommends at least 30 photos for every tag. The ideal photos for a Custom Vision model capture the subject(s) from multiple perspectives against a solid background (ex: white background). Backgrounds proved particularly challenging with snake photos. Not only are solid backgrounds impossible, many snakes are camouflaged against their background. Any unique background attributes could cause a false positive between species. Consider training a model with photos of snakes being held…Custom Vision algorithms might falsely match on hands instead of snakes. Rather than trying to crop out backgrounds, I concentrated on image volume with minimal background patterns/accents. My initial model was loaded with about 3000 snake photos from Bing/Google image searches.

Figure 2: Example of human/holding causing false positive match

Train and test the model

Model training is a bit of a black box in the Custom Vision Service. You don't have an opportunity to manipulate the algorithm, only to train on the existing one. The final trained model has a 61.8% precision across all species. However, I've found it to be incredibly accurate considering only the venomous vs non-venomous tags (which was the ultimate goal).

Figure 3: Trained model precision and recall

Call into the model

Like the other Microsoft Cognitive Services, the Custom Vision Service provides a REST endpoint secured by a subscription key. Specifically, you POST an image in the body of the REST call to get tag probabilities back. The Custom Vision Service will use the default training iteration unless an iteration ID is specified in the API call. The Custom Vision documentation shows how to do this in .NET, but here is the same call in Node/TypeScript. I'm offering this code because I spent the good part of a day getting the correct format for sending the image into the service as a multipart stream.

Figure 4: Code sample for calling the Custom Vision Service from Node/TypeScript

// Assumes the npm "request" and "restler" packages (rest.post / rest.data below).
var request = require('request');
var rest = require('restler');

function (session, results) {
  var attachment = session.message.attachments[0];
  request({url: attachment.contentUrl, encoding: null}, function (error, response, body) {
    // Take the image and post to the custom vision service
    rest.post(process.env.IMG_PREDICTION_ENDPOINT, {
      multipart: true,
      headers: {
        'Prediction-Key': process.env.IMG_PREDICTION_KEY,
        'Content-Type': 'multipart/form-data'
      },
      data: {
        'filename': rest.data('TEST.png', 'image/png', body)
      }
    }).on('complete', function(data) {
      let nven = 0.0; //non-venomous
      let sven = 0.0; //semi-venomous
      let fven = 0.0; //full-venomous
      let topHit = { Probability: 0.0, Tag: '' };
      for (var i = 0; i < data.Predictions.length; i++) {
        if (data.Predictions[i].Tag === 'Venomous')
          fven = data.Predictions[i].Probability;
        else if (data.Predictions[i].Tag === 'Non-Venomous')
          nven = data.Predictions[i].Probability;
        else if (data.Predictions[i].Tag === 'Semi-Venomous')
          sven = data.Predictions[i].Probability;
        else {
          if (data.Predictions[i].Probability > topHit.Probability)
            topHit = data.Predictions[i];
        }
      }
      let venText = 'Venomous';
      if (nven > fven)
        venText = 'Non-Venomous';
      if (sven > fven && sven > nven)
        venText = 'Semi-Venomous';

      session.endDialog(`The snake you sent appears to be **${venText}** with the closest match being **${topHit.Tag}** at **${topHit.Probability * 100}%** probability`);
    }).on('error', function(err, response) {
      session.send('Error calling custom vision endpoint');
    }).on('fail', function(data, response) {
      session.send('Failure calling custom vision endpoint');
    });
  });
}

Conclusions

In all, I was incredibly impressed by the ease and accuracy of Microsoft’s Custom Vision Service (even with poor backgrounds). I envision hundreds of valuable scenarios the service can help deliver and I hope you will give it a test drive.


Running Dinner – Still working, or already cooking?


It could have been a Wednesday evening like any other. Instead, that Wednesday at 6 p.m. marked the kick-off of the first Microsoft Running Dinner! As part of the monthly Social Get Together, people met in teams of two, which took turns serving up a starter, main course, or dessert of their own creation. The cooking locations were limited to Munich's city center so that the main course and dessert could be reached in time, and the plan usually worked out. And when it didn't, new stairwell acquaintances were made on the spot. Inspired by "Rudi rockt" and similar formats, two teams at a time were treated to culinary delights by a third STEP cooking team. Over homemade pita bread and plum sparkling wine, the various roles and connections within the company were already being explored during the starter. Fortified with food, new acquaintances, and refreshed knowledge, everyone could immediately place guests and cooks during the main course and dessert, and gather interesting ideas for future projects. After dessert, all participating STEPs got together at 11 p.m. in a bar in Schwabing and wound down the evening together.

But no competition without a winner: at the next STEP lunch, everyone gathered on the roof terrace for a vote. Following the "Das perfekte Dinner" principle, each person awarded a score between one and ten. The tension rose, as almost every team would have deserved to win the prize. In the end a runoff decided it, and the sweet tooth won: "A trio of strawberry tiramisu, homemade almond-cinnamon ice cream, and seasonal fruits" was the 10-point dessert, making Nicolas and Fabian the winners of the evening.

 

Article written by Marc Senninger, a current working student in Account Management in Enterprise Sales at Microsoft.

Using PowerShell as an Azure ARM REST API Client


Yes, this is probably another post explaining how to use the Azure ARM REST API with PowerShell, I'm aware of this, but what I would like to show you is something deeper in the Azure platform that you may not have noticed or seen before. Fortunately, Microsoft provides many SDKs for almost all your favorite languages, so how the platform works at the lower level, from an API perspective at least, is almost entirely managed for you and thus invisible. I'm assuming here that you are using (or going to use) the Azure Resource Manager (ARM) API; the Azure Service Management (ASM) APIs are not recommended today, since they are obsolete and no longer targeted by new platform feature development.

Introduction and Goals

As mentioned above, there are already several tools available on the Internet that will let you use the ARM REST API directly by making web requests; let me first mention the most popular ones, and my favorites:

  • Postman: you can download versions for Windows, Linux, macOS, and as a Chrome extension from here. A nice tool, available in a free basic version and a paid pro version: a powerful GUI platform to make your API development faster and easier, from building API requests through testing, documentation, and sharing.
  • HTTP Master: grab your free basic or paid professional version from here. In addition to the basic features, it provides almost complete coverage for development and testing of API applications and services, including an HTTP tool to simulate client activity.
  • ARM Client: it is not an official Microsoft tool; instead, it is an OSS project you can find on GitHub or install via Chocolatey. There is a nice article from David Ebbo if you want to know more.

There are also many articles that will explain how to use the Azure ARM REST API, so why did I decide to publish this post focused on PowerShell? Long story short, I recently worked on two different projects aiming at the same kind of final result: write everything necessary to create and manage a series of Azure resources using plain ARM REST API calls. In the first case, my partner wanted to include these REST calls in their Java application. In the second case, my other partner wanted to include the REST API in their deployment engines using different components written in different languages (Python, .NET, PowerShell), thus requiring a common denominator to avoid rewriting code. Another important aspect of the first project was the requirement to have access to new Azure features immediately, or at least as soon as possible once the new feature is announced.

How are new Azure APIs published?

This is an important first detail to be aware of: practically, there is no new Azure feature if there is no REST API available to use it, and obviously you need the API reference and documentation to use it. What normally happens when Microsoft releases a new feature is that the REST APIs are built first, and described here on GitHub:

What you will find in that repository are the full REST API details for all Azure things, decorated with Swagger specs. If you don't know what Swagger is and want to know more, you can go here, or take my short and trivial scholastic definition: Swagger is a specification for machine-readable interface files for describing, producing, consuming, and visualizing RESTful web services.

Finally, what Microsoft internally does is use a tool called AUTOREST (plus something more, called extensions) to automatically generate the client SDKs and make them available for you to consume. You can find it here on GitHub:

Swagger (OpenAPI) Specification code generator featuring C# and Razor templates.

Supports C#, Java, Node.js, TypeScript, Python and Ruby.

Let me emphasize this: the official ARM REST API documentation should be updated to reflect the change soon, but it is worth noting that this may require several weeks, since human intervention is necessary to write and publish the content:

Azure Resource Manager

https://docs.microsoft.com/en-us/rest/api/resources

Then, if you want to learn as quickly as possible how to use a new Azure feature, you should go to GitHub and look at the REST API specs. Documentation will follow soon.

NOTE: At the time of writing, there are no Swagger specs for Azure Storage API.

If you are a Visual Studio 2017 user, and interested in building Web API using Swagger, you may want to review the article below:

Visual Studio 2017 and Swagger: Building and Documenting Web APIs

https://www.simple-talk.com/dotnet/net-development/visual-studio-2017-swagger-building-documenting-web-apis

PowerShell Pre-Requisites

Now that you are a bit familiar with the Azure REST API and how it is published, and have heard why I used PowerShell to consume these APIs, let me start showing you a practical example of how to build REST API calls, invoke them, and manage the results. But before going into the details, you need to prepare a few things at the beginning of your PowerShell script; here is the list of logical steps:

  • Initialize your script with the common variable values you will use all over the script. This will eventually include subscription details (ID and name), the Azure region, common objects like a default Resource Group, and your Azure Active Directory (AAD) tenant.
  • Authenticate to your Azure subscription using an Azure Active Directory (AAD) account: this step uses interactive authentication (Login-AzureRmAccount) just to create the Application and Service Principal objects in AAD. This is required only once: after the necessary objects for application logon are created (see later), your application/script will be able to log on to AAD programmatically without any user intervention.
  • Create an Application object in AAD (New-AzureRmADApplication). Azure offers developers the possibility to build applications that can be integrated with Azure Active Directory (AAD) to provide secure sign-in and authorization for their services. To integrate an application or service with AAD, a developer must first register the details about the application with Azure AD. This can be done using the Azure Portal or a PowerShell script as in my code sample below. The important piece of information you will need to obtain from this step is the ApplicationID, which you will use as a surrogate user name for creating logon credentials.

Use portal to create an Azure Active Directory application and service principal that can access resources

https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal

 

  • Once you have created the Application object in AAD and obtained its ApplicationID, you need to create a Service Principal object using the New-AzureRmADServicePrincipal cmdlet, associated with the ApplicationID. Why do you need this object? It is important to understand the difference between an Application and a Service Principal in AAD. You can read more at the link below, but in short, you can consider the application object as the global representation of your application for use across all tenants, and the service principal as the local representation for use in a specific tenant.

Application and service principal objects in Azure Active Directory (Azure AD)

https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-application-objects

 

  • With the Service Principal object in your hands, you can now use it to assign permissions in the current AAD tenant. In the example below you can see both supported scopes: the entire Azure subscription, or a particular Resource Group (recommended). You can use the default built-in AAD roles or create new custom ones, depending on your requirements.
  • The next step consists of creating a new key in AAD, associated with the Application you just created, and then storing it for later use in your script. You can read instructions here on how to do this using the Azure Portal (key chars scrambled).

  • Using nice PowerShell mechanisms, as you can see in the example below, you can now create an encrypted credential set (user name and password) to pass to your subsequent cmdlets. Be aware that the ApplicationID is your user name and must be used in the form "username@domain-name", where the domain name looks like "<something>.onmicrosoft.com". The last piece of information you need to retrieve is the AAD tenant name for your subscription, which you can easily obtain using the last PowerShell script line shown below.
  • From now on, you can log on using the newly created Application identity and secret; you don't need interactive logon anymore. After executing the new instance of Login-AzureRmAccount with the application parameters, you will be authenticated as an application in the script.

We are now at the final preparation step, probably the most important one: getting a Bearer authentication token from the AAD endpoint (https://login.windows.net/72f988bf-86f1-41af-91ab-2d7cd011db47/oauth2/token) using OAuth2:
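The code for this step appears only as an image in the original post; here is a minimal sketch of the request, assuming $TenantID, $ApplicationID and $Key hold the values created in the previous steps:

```powershell
# OAuth2 client-credentials request against the AAD token endpoint.
$TokenEndpoint = "https://login.windows.net/$TenantID/oauth2/token"
$Body = @{
    grant_type    = "client_credentials"
    resource      = "https://management.core.windows.net/"
    client_id     = $ApplicationID   # AAD Application ID created earlier
    client_secret = $Key             # AAD key created earlier
}
$Token = Invoke-RestMethod -Uri $TokenEndpoint -Method Post -Body $Body
# $Token.access_token is the Bearer token; $Token.expires_in is its lifetime in seconds.
```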

As you can see above, I used the Invoke-RestMethod PowerShell cmdlet to send HTTPS requests to Representational State Transfer (REST) web services, in this case a POST to the Azure AAD OAuth2 endpoint passing the ApplicationID and key, for the ARM resource endpoint (https://management.core.windows.net). The token has a form similar to the one below (text is scrambled). Be aware that the default lifetime of the token is 60 minutes (3600 seconds); after that you will need to generate another request and acquire a new one.

 

Azure recently introduced the possibility to change the default lifetime of an AAD token. This feature is still in preview and I have not used it personally yet, but you may want to read the article below:

Configurable token lifetimes in Azure Active Directory (Public Preview)

https://docs.microsoft.com/en-us/azure/active-directory/active-directory-configurable-token-lifetimes

Building and Executing REST calls

Now we have a valid AAD token and can finally do something interesting with the Azure ARM REST API. It is important to highlight that every REST call must include the authentication token in the header, but I am sure this will be clear once you see the first trivial example below, that is, how to retrieve details about your Azure subscription with a simple GET verb:
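The original code screenshot is missing; a sketch of the call, assuming $SubscriptionID and $Token come from the previous steps (the api-version value here is an assumption; use the one documented for the resource you target):

```powershell
# Every ARM call carries the Bearer token in the Authorization header.
$Headers = @{ Authorization = "Bearer $($Token.access_token)" }
$Uri = "https://management.azure.com/subscriptions/$SubscriptionID" +
       "?api-version=2016-09-01"
$Response = Invoke-WebRequest -Uri $Uri -Method Get -Headers $Headers
# The content comes back as a JSON string and must be converted explicitly.
$Response.Content | ConvertFrom-Json
```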

Please be aware that the result is not JSON formatted by default; you need to explicitly convert it, and then you will see output similar to the picture below. It is also worth mentioning that specifying the API version is mandatory. Finally, you can see there is no body defined in this simple GET request.

The sample above is pretty simple; in the end it is a GET request getting a synchronous answer. But what happens if I want to write something (PUT, for example)? Look at the example below, used to create a storage account:
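The screenshot with the PUT request is not available; here is a sketch of what it could look like (resource names, location, SKU and api-version are illustrative assumptions, and $Headers is assumed to carry the Bearer token as before):

```powershell
# PUT creates (or updates) the storage account; the body describes the resource.
$Uri = "https://management.azure.com/subscriptions/$SubscriptionID" +
       "/resourceGroups/$ResourceGroupName/providers/Microsoft.Storage" +
       "/storageAccounts/$StorageAccountName?api-version=2016-12-01"
$BodyJson = @{
    location = "westeurope"
    sku      = @{ name = "Standard_LRS" }
    kind     = "Storage"
} | ConvertTo-Json
$Response = Invoke-WebRequest -Uri $Uri -Method Put -Headers $Headers `
                              -Body $BodyJson -ContentType "application/json"
$Response.StatusCode   # 202 (Accepted): a long running asynchronous operation
```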

NOTE: in the code snippet above, I started using Invoke-WebRequest instead of Invoke-RestMethod because I had problems correctly parsing and processing the response headers with Invoke-RestMethod; I now normally prefer Invoke-WebRequest.

In the response content above there are several interesting things. First, the status code returned in StatusCode is (202), that is, Accepted (StatusDescription) in the HTTP response code vocabulary. It indicates that the request you submitted is a long running asynchronous operation: the system returned a result to your code, but the backend operation is still running in the Azure ARM infrastructure. For synchronous operations, like typical GET methods reading objects and resources, you normally receive 200 (OK).

Microsoft REST API Guidelines

https://github.com/Microsoft/api-guidelines/blob/master/Guidelines.md

According to the excellent documentation above, the "Retry-After" HTTP header indicates the minimum number of seconds that clients SHOULD wait before attempting the operation again. Why is this important? The Azure REST API infrastructure tends to protect itself from being overloaded, so this value is returned in all async responses. But if you do not want to obey this hint, there is another safe limit that Azure enforces, as you can guess by looking at the x-ms-ratelimit-remaining-subscription-writes element. For each subscription and tenant, Azure Resource Manager limits read requests to 15,000 per hour and write requests to 1,200 per hour. When you reach the limit, you receive HTTP status code (429), that is, "Too many requests". You can see all the limits in the article below, along with additional details.

Throttling Resource Manager requests

https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-request-limits

Now, if you want to check the status of your running async operation, you need to execute some code like the example below:
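The original code screenshot is missing; the logic it implements can be sketched as follows (header names are as described in the text, and $Response is assumed to be the PUT response from earlier):

```powershell
# Derive the URL to poll: check "Location" first, then "Azure-AsyncOperation".
$URIAsyncCheck = $Response.Headers["Location"]
if ([string]::IsNullOrEmpty($URIAsyncCheck)) {
    $URIAsyncCheck = $Response.Headers["Azure-AsyncOperation"]
}
# Minimum number of seconds to wait before polling again.
$RetryAfter = [int]$Response.Headers["Retry-After"]
```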

 

And here is the content of the variable $URIAsyncCheck (single line scrambled):

The general rule to track an async operation with the Azure ARM REST API requires you to check a specific, dynamically crafted URL for the status: this URL is generated depending on the ARM Resource Provider (RP) you previously sent the async request to, in this case the Microsoft.Storage RP, and the OperationID of that request. You can see these elements highlighted in red in the text above. Unfortunately, there is no single place to retrieve that OperationID: as you can see in the code snippet above, you first have to check for a value in the Location attribute of the response header, and if nothing is there, you need to check Azure-AsyncOperation. With this knowledge in mind, let us check the long-running operation using the code snippet below:
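As a sketch of what that check can look like (simplified with respect to the GitHub sample; $response and $token are assumed to come from the earlier PUT):

```powershell
# Sketch: derive the polling URL (Location first, then Azure-AsyncOperation),
# then poll until the operation leaves the 202 (Accepted) state.
$pollUri = $response.Headers['Location']
if (-not $pollUri) { $pollUri = $response.Headers['Azure-AsyncOperation'] }

do {
    Start-Sleep -Seconds 5
    $check = Invoke-WebRequest -Uri $pollUri -Method Get `
        -Headers @{ Authorization = "Bearer $token" }
} while ($check.StatusCode -eq 202)

# Once a 200 arrives the body is populated; inspect provisioningState.
($check.Content | ConvertFrom-Json).provisioningState
```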

This is the output you should see:

As you can see, this is a long-running async operation (Response Code = 202 Accepted). You can also note that until the Response Code changes to 200 (OK), there is no body content returned (Content Length = 0). You should then wait in a loop and check periodically for this condition; once it happens, check the provisioningState attribute value returned: when, and only when, the operation is complete, the possible values are “Succeeded“, “Failed“ or “Canceled“. Depending on the Resource Provider, you may see different values, but any other value indicates that the request is not completed yet. Some operations that you might think of as asynchronous are, in reality, not, as in the example below related to the deletion of a storage account (Sample[5] in the sample code):

Here the only difference is the “Method = ‘Delete’” part, and the resulting HTTP code will be 200 (OK), provided the storage account exists and you have permissions. No “Location” or “Azure-AsyncOperation” attribute will be generated to check for an async operation. You can see the details for this particular REST API at the location below:

Storage Accounts ARM REST API

https://docs.microsoft.com/en-us/rest/api/storagerp/storageaccounts
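For completeness, the synchronous DELETE discussed above can be sketched as follows (same placeholder variables as before):

```powershell
# Sketch: delete a storage account; ARM answers 200 (OK) directly, no async tracking.
$response = Invoke-WebRequest -Uri $uri -Method Delete `
    -Headers @{ Authorization = "Bearer $token" }
$response.StatusCode   # 200 when the account exists and you have permissions
```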

Going through the examples, I refined and sometimes modified the logic to track async long-running operations; a pretty complete example (extracted from the GitHub code) is below:

Building the REST API request header and body can become difficult for very large objects, such as a Virtual Machine. You can see an example below; please note that only the minimum required parameters are used, and there are many more optional ones that I did not include. If you want to work using this approach, you need to keep at hand the Azure ARM REST API reference pages below, as well as the -Debug switch in PowerShell; you can read more details on the latter in the next section.

Azure REST API Reference

https://docs.microsoft.com/en-us/rest/api
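To give an idea of the shape of such a request body, here is a heavily simplified, hypothetical fragment; the real VM schema has many more properties, and $nicId would reference a network interface created beforehand:

```powershell
# Simplified sketch of a VM PUT body (see the ARM REST API reference for the full schema).
$vmBody = @{
    location   = 'westeurope'
    properties = @{
        hardwareProfile = @{ vmSize = 'Standard_DS1_v2' }
        storageProfile  = @{ imageReference = @{
                publisher = 'MicrosoftWindowsServer'; offer = 'WindowsServer'
                sku = '2016-Datacenter'; version = 'latest' } }
        osProfile       = @{ computerName  = 'myVM'
                             adminUsername = 'azureuser'
                             adminPassword = '<password>' }
        networkProfile  = @{ networkInterfaces = @( @{ id = $nicId } ) }
    }
} | ConvertTo-Json -Depth 6
```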

 

Code Samples on GitHub

Here on GitHub, you can find a complete example of how to deploy, end to end, an Azure Virtual Machine, including NSG, VNET and IPs, in a new Resource Group, all using direct REST API calls in PowerShell. Code for Application and Service Principal creation in Azure Active Directory is also provided. Here is the sample list, divided into PowerShell regions:

The purpose of this code sample and blog post is to show how to work directly with the Azure ARM REST API using PowerShell as an ARM client. The code is highly de-normalized to make each section pretty autonomous and self-contained, to facilitate reuse. At the end, what you should be able to obtain is the following list of resources, all of them built using plain REST API requests built and executed in PowerShell.

PowerShell Hidden Gems

I have often been surprised that most people don’t know the power of the Azure cmdlets when adding the -Debug switch at the end of the command string. The Azure PowerShell module is a wrapper built on top of the Azure ARM (in this case) REST API, and if you add this switch, you will see the raw underlying REST API request built for you, along with the corresponding result:

Get-AzureRmStorageAccount -ResourceGroupName $rgname -Debug

Then, if you want to work at the pure REST API level but want your life a bit easier, you can use the higher-level PowerShell cmdlets and then reverse-engineer them using the -Debug switch. Another very useful feature you can use in PowerShell is the .NET Stopwatch class. Starting with Sample[6] in my GitHub code example, I used it extensively to monitor the execution time of my REST API calls; below you can find a very simple general structure for its usage.
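A minimal sketch of that structure:

```powershell
# Sketch: time a REST call with the .NET Stopwatch class.
$sw = [System.Diagnostics.Stopwatch]::StartNew()
# ... Invoke-WebRequest / Invoke-RestMethod call goes here ...
$sw.Stop()
Write-Host "Elapsed: $($sw.Elapsed.TotalSeconds) seconds"
```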

Java Sample

If you want a Java version of something similar to what I have shown you here, I would encourage you to look at Silvano Coriani’s (AzureCAT) sample work on GitHub here; very nice if this is your favorite development language.

https://github.com/scoriani/ServiceBuilder/blob/master/src/com/microsoft/azure/provisioningengine/ProvisioningEngine.java

References

GitHub Code Sample Repository: https://github.com/igorpag/PSasRESTclientAzure

Azure REST API specs: https://github.com/Azure/azure-rest-api-specs

Azure Resource Manager: https://docs.microsoft.com/en-us/rest/api/resources

Azure REST API Reference: https://docs.microsoft.com/en-us/rest/api

Track asynchronous Azure operations: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-async-operations

Throttling Resource Manager requests: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-request-limits

Resource providers and types: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-supported-services

Use portal to create an Azure Active Directory application and service principal that can access resources: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal

Use Azure PowerShell to create a service principal to access resources: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authenticate-service-principal

Use Resource Manager authentication API to access subscriptions

https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-api-authentication

 

Getting the most out of your Premier Support for Developers Contract


In this post, Application Development Manager, Deepa Chandramouli shared some tips on getting the most of your Premier Support for Developers contract.


Microsoft Premier Support manages the highest-tier support programs from Microsoft. Premier Support for Developers (PSfD) empowers developers and enterprises to plan, build, deploy and maintain high-quality solutions. When you purchase a Premier Support for Developers contract from Microsoft, an Application Development Manager (ADM) is assigned. He or she will guide you to use the contract in an efficient way that will benefit your developers and the business.

Premier Support for Developers and your ADM do not replace a development team; rather, they complement your team and help with best practice guidance, product and technology roadmaps, and future-proofing your solutions. Your ADM becomes a trusted advisor and a persistent point of contact into Microsoft, with the technical expertise to understand your development needs and pain points and recommend services that are right for you.

A Premier Support contract can be leveraged to validate architecture, perform design/code reviews against best practices, and help teams ramp up on new technology as needed. As with any Premier Support relationship, customers have two ways to engage support: Reactive and Proactive.

Reactive Support – Reactive, or Problem Resolution Support, provides a consistent way to engage Microsoft and open support cases when you run into issues with any Microsoft product or service still covered under the Lifecycle Policy. You can use http://support.microsoft.com or call 1-800-936-3100 to open a support case with Premier.

Proactive Support – Proactive or Support Assistance is used for advisory consulting engagements and trainings. Examples would be best practice guidance, code reviews, migration assessments, trainings etc…

A common misconception about proactive support is that it is only meant to be used for training and workshops. It is also common practice to use proactive hours for remediation work that comes out of critical reactive support issues. There are many types of services and engagements customers can leverage through proactive hours to reduce the likelihood of reactive issues in the future. We understand ONE SIZE DOESN’T FIT ALL, so most of the services can be customized to fit your needs. As with any successful project, the key to getting the most out of your investment in Premier Support is Planning, Planning and Planning ahead of time with your ADM.

Premier proactive services can be grouped into 3 broad categories.

  • Assess – Assessments are a great place to start since the results drive other engagements and services. If you don’t know where to start using Premier, start with an assessment of your most critical workload that has pain points. These findings can help align and prioritize next steps and Premier Services that can help.
  • Operate – Operate is the next step after assessments, helping address issues with applications and infrastructure, whether front-end, middle tier or database. For example, a performance assessment could lead to optimizing stored procedures. The SQL Performance and Optimization Clinic is a huge favorite of a lot of Premier customers because it addresses performance issues as well as educates developers on how to address bottlenecks in the future.
  • Educate – Educate is focused on empowering developers with the skills and the tools they need to deliver successful applications. You have access to Premier open enrollment workshops and webcasts that you can register for at any time. There is a broad list of topics available that your ADM can share with you on a regular basis. You can also plan custom trainings that are more focused and targeted to your needs and relate to the projects your team is currently working on.


This is only a small subset of services, to give you an idea of how best to use the Premier Support for Developers contract. Application Development Managers (ADMs) can provide more information on each of these topics and the full list of services that applies to your specific needs and environment. Another strong value proposition of Premier Support for Developers is custom engagements that cater to your needs and help achieve your goals.


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.  For more information on Premier Support for Developers, check out https://www.microsoft.com/en-us/microsoftservices/premier-support-developers.aspx

Insider Fast release notes: 15.37 (170627)


This week brings our first version 15.37 update to reach Insider Fast. Here’s a quick look at the update:

 

Top improvements and fixes:

  • Favorites: Folders can now be added or removed by clicking the star icon displayed when hovering over a folder
  • Favorites: The top-level Favorites group is now only displayed if at least one folder has been favorited
  • Google Calendar: You can now view free/busy information for attendees in Scheduling when creating a meeting
  • Account setup: Improved error messages for IMAP accounts, including configuring IMAP access and two-step auth
  • When adding an Outlook.com account, calendars and contacts are now selected and properly displayed

 


Hover to add to Favorites or remove from Favorites

 

Other notes:

  • For any issues, please use Help > Contact Support
  • For feature requests, please use Help > Suggest a Feature
  • For weekly updates with the latest features and fixes listed here, join Insider Fast!
  • Lastly, the Outlook Preview is concluding this week; thanks for all the valuable feedback!

 

New Innovations for SharePoint Developers


This post is provided by Senior App Dev Manager, Ed Tovsen who highlights some of the new innovations available for SharePoint developers.


SharePoint development has evolved over the years since its initial release in 2001. Customization options have varied from fully trusted code, to sand-boxed solutions, to add-ins. With each new option, Microsoft has intended to simplify and control how customers customize their SharePoint environments. This past year Microsoft has continued innovating and introduced some new services and frameworks to expedite the customization of SharePoint environments. Two of the new services, PowerApps and Microsoft Flow, are designed for rapid custom development and deployment. Microsoft is finding that these types of services help developers and power users quickly tackle SharePoint requests, making SharePoint more useful and valuable for corporations. Additionally, Microsoft recently introduced a new client-side development model called SharePoint Framework (SPFx). Below are overviews for each of these new innovations and links to provide you with the specifics.

PowerApps

PowerApps is a service built for developers and analysts to create online forms that connect to numerous data sources, including SharePoint Online and SharePoint on-premises (via a data gateway). Using either PowerApps Studio or the PowerApps web designer, you can quickly create apps that address specific needs without writing code or struggling with integration issues. PowerApps generates custom apps that run on all devices, including mobile, as well as the web. PowerApps is designed for a corporate environment where apps can be shared with employees.

PowerApps is tightly integrated with the new modern SharePoint experience. The modern SharePoint List menu includes a PowerApps button to create a new app for the current list. At this time, the modern SharePoint experience is only available in SharePoint Online. It will be included as part of an upcoming feature pack for SharePoint 2016.


When you click Create an app, a pop-up appears allowing you to name the app. After the name is entered and the Create button is clicked, the PowerApps web designer opens in the browser. Because the PowerApps web designer knows the context of the SharePoint list, it automatically creates a default app based on the schema and data of the list. You can then customize the app to meet your business requirements.


Apps created using these steps will be listed as a view and can be shared or launched from within the SharePoint modern list experience. This allows you to leverage PowerApps to build custom, mobile-optimized views of SharePoint lists and share them with co-workers. Lastly, PowerApps is a cross-platform service which allows apps to run on all devices including Windows, iOS, and Android.

Microsoft Flow

Microsoft Flow is a service that allows developers and analysts to create automated workflows between applications and services, which can synchronize files, send notifications, collect data, and more. Using templates or starting from scratch, developers create flows to turn repetitive tasks into multistep workflows. For example, you could get an email notification every time a new item is added to a SharePoint list. Microsoft Flow connects to both SharePoint Online and SharePoint on-premises using the same data gateway as PowerApps.

Like PowerApps, Microsoft Flow is also integrated into the modern SharePoint List menu, as you can see in the image above. When you click Create a flow, a pop-up appears to create a flow for the SharePoint List. After selecting a template, the Flow web designer opens in the browser. Because the Flow web designer knows the context of the SharePoint list, it prefills the steps in the flow. You can then customize the flow to meet your business requirements.


Microsoft Flow is the successor to SharePoint Designer for common business scenarios such as approvals, document review, and onboarding/offboarding. Going forward, it will be the default tool for building business automation in SharePoint.

SharePoint Framework

The SharePoint Framework (SPFx) is a web part and page model that enables fully supported client-side development as well as support for open source tooling. Introduced in May 2016, the SharePoint Framework is initially focused on extending the SharePoint user interface using client-side web parts. SPFx aims to solve the difficulty of keeping up with the evergreen model of SharePoint Online. SPFx provides a standardized framework to create custom user interface extensions as well as building applications on top of SharePoint Online.

Microsoft built the SharePoint Framework from the ground up using a modern web stack including TypeScript/JavaScript, HTML, and CSS. All of the generated artifacts are executed locally in the browser. SPFx comes with a completely new set of tooling that is platform agnostic and works on either PC or Mac. It is based on open source technologies such as Node.js, Gulp, Webpack, and Yeoman. The SharePoint Framework and tools are used at build time to streamline the developer experience for building, packaging, and deploying.
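For reference, getting started with this toolchain takes only a few commands, shown here from a PowerShell prompt (the generator package is the standard @microsoft/generator-sharepoint):

```powershell
npm install -g yo gulp                            # Yeoman and the Gulp task runner
npm install -g @microsoft/generator-sharepoint    # SPFx Yeoman generator
yo @microsoft/sharepoint                          # scaffold a client-side web part
gulp serve                                        # build and host in the local workbench
```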


The SharePoint Framework runs in the context of the current user and connection in the browser, not using iFrames. The controls are rendered in the normal page Document Object Model (DOM) and are responsive and accessible in nature.


The SharePoint Framework reached General Availability (GA) in February 2017. Currently, the SharePoint Framework is only applicable for web parts running in SharePoint Online. Microsoft is planning to bring the SharePoint Framework to SharePoint 2016 on-premises during 2017, as part of a future feature pack. The SharePoint Framework roadmap also includes full page apps, which will render in full page mode and not as web parts in SharePoint.

Additional Information


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.
