
Using SQLPackage to import or export Azure SQL DB


Purpose:

Explain how to easily and quickly use SQLPackage to import or export your Azure SQL Database.

Note that SQLPackage does not export a transactionally consistent package, so a good approach is to export from a database copy. (Thanks to ErikEJ for his comment.)
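As a hedged illustration of that approach (the database names are placeholders; run this against the master database of your Azure SQL server), a transactionally consistent copy can be created with T-SQL and then exported instead of the live database:

-- Illustrative only: create a copy of the database and export the copy instead.
CREATE DATABASE AdventureWorks_CopyForExport AS COPY OF AdventureWorks;
-- Once the copy completes, point sqlpackage.exe /sdn: at AdventureWorks_CopyForExport,
-- then drop the copy when the export is done:
-- DROP DATABASE AdventureWorks_CopyForExport;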

Locating the SQLPackage.exe:

SQLPackage.exe is part of the DacFramework, which is installed with SSMS or with SQL Server Data Tools.

Alternatively, you can download only the DacFramework if you do not already have the management tools.

Once the DacFramework / SSMS / SSDT is installed, you can locate SQLPackage.exe in this path:

C:\Program Files (x86)\Microsoft SQL Server\130\DAC\bin

* Change the drive letter if you installed it to a different location.

 

Export from Azure SQL DB to bacpac file:

To export a database from your Azure SQL DB to a bacpac file, use this command:

sqlpackage.exe /Action:Export /ssn:tcp:<ServerName>.database.windows.net,1433 /sdn:<DatabaseName> /su:<UserName> /sp:<Password> /tf:<TargetFile> /p:Storage=File

Example:

sqlpackage.exe /Action:Export /ssn:tcp:MyOwnServer.database.windows.net,1433 /sdn:AdventureWorks /su:AdminUser /sp:AdminPassword1 /tf:C:\Temp\AW.bacpac /p:Storage=File

 

Import from bacpac file to Azure SQL DB:

To import a database from a bacpac file to your Azure SQL DB, use this command:

sqlpackage.exe /Action:Import /tsn:tcp:<ServerName>.database.windows.net,1433 /tdn:<TargetDatabaseName> /tu:<UserName> /tp:<Password> /sf:<Path to bacpac file> /p:DatabaseEdition=Premium /p:DatabaseServiceObjective=P4 /p:Storage=File

Example:

Import a database to the default target service tier (S0):

sqlpackage.exe /Action:Import /tsn:tcp:MyServer.database.windows.net,1433 /tdn:AdventureWorks /tu:AdminUser /tp:AdminPassword1 /sf:C:\temp\AW.bacpac /p:Storage=File

Import a database and set the target SLO to P2:

sqlpackage.exe /Action:Import /tsn:tcp:MyServer.database.windows.net,1433 /tdn:AdventureWorks /tu:AdminUser /tp:AdminPassword1 /sf:C:\temp\AW.bacpac /p:DatabaseServiceObjective=P2 /p:Storage=File

 

 

Full documentation for SQLPackage.exe

 

 


Responding through delegates


A typical extension point is when extensible logic delegates the responsibility for a certain operation and expects extensions to provide a result. It could be a conversion, a calculation, class construction, or similar.

 

Let’s look at the pattern, and how to be a good-citizen.

 

Example

The Batch table has a few delegates; one of them is classDescriptionDelegate.

The delegate’s signature is:

delegate void classDescriptionDelegate(Batch _batch, EventHandlerResult _ret)
{
}

The EventHandlerResult parameter enables subscribers to return a class description for a given Batch record. A good citizen would do something like this:

[SubscribesTo(tableStr(Batch), delegateStr(Batch, classDescriptionDelegate))]
public static void MyBatch_classDescriptionDelegate(Batch _batch, EventHandlerResult _ret)
{
    if (_batch.ClassNumber == classNum(MyClass))
    {
        _ret.result("@MyLabelFile:MyClassDescription");
    }
}

To be a good citizen, it is important to provide a result only when it relates to your extension.

A bully would set the result unconditionally, and thus overwrite other extensions. The order event handlers are invoked is arbitrary – so with two bullies you’d get arbitrary results.

The pattern

You can recognize events expecting a result:

  • Their name is suffixed with “Delegate”, as in “I’m delegating the responsibility of …”
  • They have a parameter that allows returning a value, for example, EventHandlerResult. Usually this is the last parameter.
  • XML documentation should describe how to use the delegate – if it doesn’t, “Find references” and a bit of reverse engineering are your friends.

The delegating code will typically look like this:

delegate void myOperationDelegate(<input parameters>, EventHandlerResult _result)
{
}    
public <result> myOperation()
{
    EventHandlerResult ret = new EventHandlerResult();    
    this.myOperationDelegate(<input parameters>, ret);
    if (ret.hasResult())
    {
        return ret.result();
    }
    throw error(…);
}

 

Please follow this pattern when implementing extensible code – remember others might be extending your solution!

 

THIS POST IS PROVIDED AS-IS; AND CONFERS NO RIGHTS.

Extending class state


A new and tremendously powerful feature was introduced in the Fall Release ’16. Now you can extend class instances, including adding state. This is available for any class in the system.

 

We already know we can extend class types, which in essence allows us to introduce new methods that consumers of the class can benefit from. That was little more than compiler magic; now we have true class extension capabilities.

Example

Suppose you want to extend the SysUserLogCleanup class. Out of the box, this class deletes records from the SysUserLog table. Let’s imagine you want to archive these records to a different table before they are deleted.

The SysUserLogCleanup class is a RunBase class, so you want to add a check box to the dialog, get the result of that check box, act on it in the run method, handle pack/unpack, etc. Here is how the state can be extended, and how to act on the dialog() and getFromDialog() methods.

[ExtensionOf(classStr(SysUserLogCleanup))]
final class mfpSysUserLogCleanup_Extension
{
    // Extending class state…
    private boolean mfpArchive;
    private DialogField mfpDialogArchive;
   
    // Adding new instance methods…
    private void mfpDialog(Dialog _dialog)
    {
        mfpDialogArchive = _dialog.addField(extendedtypestr(NoYesId), "Archive");
        mfpDialogArchive.value(mfpArchive);
    }
    private void mfpGetFromDialog()
    {
        mfpArchive = mfpDialogArchive.value();
    }


    // Wiring up event handlers…
    [PostHandlerFor(classStr(SysUserLogCleanup), methodStr(SysUserLogCleanup, dialog))]
    public static void mfpSysUserLogCleanup_Post_Dialog(XppPrePostArgs _args)
    {
        Dialog dialog = _args.getReturnValue();
        SysUserLogCleanup instance = _args.getThis();
        instance.mfpDialog(dialog);
    }


    [PostHandlerFor(classStr(SysUserLogCleanup), methodStr(SysUserLogCleanup, getFromDialog))]
    public static void mfpSysUserLogCleanup_Post_GetFromDialog(XppPrePostArgs _args)
    {
        SysUserLogCleanup instance = _args.getThis();  
        instance.mfpGetFromDialog();
    }
}

Please notice, to be a good citizen, I applied these practices:

  • Prefixed the added members and methods. I used “mfp” as the prefix. This is important to avoid name clashes with other extensions and future versions of the class being extended.
  • Hooked up post-method event handlers for the methods needed.

Other interesting aspects:

  • This also works for forms.
  • It even works for tables – except you cannot add state. Tables don’t have a class declaration, so that is fair.
  • This way of extending a class will not break the extended class’s encapsulation. I.e., you will not have access to any private fields or methods.

 

THIS POST IS PROVIDED AS-IS; AND CONFERS NO RIGHTS.

Guest post: Azure Machine Learning, “Quest for Fire”


We have a new guest post, whose authors are Eugenio López de Elorriaga and Pedro Serrano of Ilitia Technologies.

A curious name for an article…

We wrote this article to share the experience of developing a system to predict the number of colds diagnosed at each health center, using AzureML.

Why “Quest for Fire”?

Well, because in certain ways our development process resembled the journey of the hominids in the film of the same name: from darkness into light.
Let’s begin the journey!

 

 


 

Darkness

Ilitia (our company) is running a series of proofs of concept for the Basque public health service (Osakidetza) to demonstrate the potential of a wide range of Azure technologies, Machine Learning among them.

Two business application developers with no previous Machine Learning experience embarked on this journey.

In short:

  • PoC = very little development time.
  • ML = ?? Mobile Launcher?? Main Library?? Memory Load?? … ah no, it’s Machine Learning!!

It is important to understand this context because it shows that AzureML makes it possible to “democratize” machine learning for small, non-specialist teams with few resources and little time.

 

 

 

“Stonehenge” by Les Haines (licensed under CC BY 2.0).

A spark is lit

Then the idea strikes: let’s correlate weather data with the incidence of some common illnesses (colds, flu, or gastroenteritis).

A quick bit of research (Wikipedia) tells us that the risk of the common cold depends, in some way, on certain weather variables (relative humidity, temperature). It is important to start from a “plausible scenario” rather than flailing around blindly. If we already know there is some kind of weather-colds relationship, that is better than merely suspecting it, and suspecting it is better than going in blind. This also translates into huge time savings (we aim a little before shooting, since bullets are expensive).

On top of that, we are in luck: searching the internet, we find that the weather data from the stations in the Basque Country (Euskadi) is freely available; this looks promising.

We also find aggregate data on the number of cold cases treated at each health center over the last few years.

We start to look at what Azure offers us to build this: HDInsight-Spark, HDInsight-R-Server…
Every option means learning programming languages, different paradigms, frameworks, tools… and we love learning, but the project has to be delivered in days. Then, quite naturally, AzureML comes to the rescue: there is no environment to set up; you sign up for the service and log in. It promises to be approachable for people like us (developers). In fact, we watch a 60-minute demo, sign up for the service, and start playing with the samples.

 

An example in Microsoft Azure Machine Learning Studio

 

And it delivers on what it promises! With the documentation, the help content, and the interface, in a matter of hours we are building our first experiments. We already see ourselves as “true experts”.

 


And with the spark comes smoke: the first problems

But of course, it is not all going to be rosy. We are using real-world data, and neither the real world nor its data is perfect. Weather stations sometimes break down and, on those days, they return no data or incorrect data.

We also do not have the geographic position of the health centers. The fastest way to get it is by scraping the web, combined with a long and tedious manual effort (with two official languages, Basque and Spanish, some municipalities and health centers have different names).

Problems with the data

In addition, the station readings are hourly, and to get daily values we must apply different processing depending on the figure we are after (maximum and minimum temperatures, daily average, accumulated precipitation, …).

 


At last, a flame

As a result of all this work we end up with a database, a single table, in SQL Azure, where each record contains the number of patients treated for a cold at each health center, together with the maximum, average, and minimum temperature, relative humidity, and 24-hour accumulated precipitation (extrapolated from the nearest weather stations that have data for those days).

 

The different steps as AzureML experiments

Now we are really making progress. We start training with the various regression algorithms, using the different variables. In each run we train on four years of data and then check the results against the actual data from the fifth year. Once again, whenever doubts arise, the documentation and the samples come to our aid.


Aahhh, this burns!!

But watch out: each run takes hours to execute, and the account’s consumption skyrockets. That path leads nowhere good. We start testing with smaller subsets of the data.

We put together a single experiment that tries all the algorithms and writes the results of each one to a different file, so we can launch that experiment and forget about it until it finishes.

With a few runs we fine-tune the algorithm, the input data to use, the required parameterization… and all of that with hardly any knowledge of Machine Learning, climatology, or medicine.

At last, fire is ours: we have it. The parameters will be the average temperature, the day, and the month.

The algorithm used will be “Decision Forest Regression”. We publish it as a web service (trivial; the wizard works wonderfully), so we can ask it for the prediction for a health center on a given day, passing it the expected average temperature for that day.

 


And we fill the night with torches

The rest is a walk down familiar paths. We build a .NET service that collects from AEMET (the Spanish State Meteorological Agency) the temperature forecasts per municipality, calls the experiment’s web service to calculate the number of patients per health center, and stores the results in Azure SQL.

We schedule this process through the Azure Scheduler Jobs service, and we have it running, integrated into our application, on Power BI.

 

 

 

Power BI showing the predicted number of cases per health center for the upcoming December 11, 2016

Power BI showing the evolution of cold cases and temperature (actual values versus predicted values).

Epilogue

Now comforted by the warmth and light of the fire, it is time to gather some of the lessons learned along the way:

  • Work on “plausible” scenarios: knowledge of the business domain can save you a great deal of work and headaches here.
  • Go to the documentation: in this case, as in so many others, it is your best travel companion.
  • Devote as much work as necessary to obtaining, cleaning, and preparing the data: more than anything else, this will determine whether you can get the results you want.
  • Run your initial tests on subsets of the data: by reducing the size of the problem, you reduce the time each test takes.
  • Test and tune: it will help you choose the right algorithm as well as its parameterization.
  • Knowledge never hurts: although Azure Machine Learning considerably lowers the barrier to entry, any knowledge you have or acquire about machine learning will be more than useful.

 

Custom client policies are now available in Skype for Business Online


Good evening, this is Watson from SfB support.

Until now, Skype for Business Online did not allow you to customize meeting or client policies to match your organization’s usage patterns or compliance requirements; you had to apply one of the pre-defined, generic policies.

To meet the compliance requirements of every organization and to promote the use of Office 365 and Skype for Business Online, Skype for Business Online now supports customization of client policies as well as meeting policies.

A client policy is a policy that controls which features are available in the client.

After a Skype for Business client signs in, the configured settings are pushed from the Skype for Business Online servers to the client and enforced through a mechanism called in-band provisioning.

The client policy parameters that can currently be customized in Skype for Business Online are as follows.

Parameters customizable with Set-CsClientPolicy and New-CsClientPolicy

Parameter name – Customizable?
PolicyEntry – Not customizable
Description – Not customizable
AddressBookAvailability – Not customizable
AttendantSafeTransfer – Not customizable
AutoDiscoveryRetryInterval – Not customizable
BlockConversationFromFederatedContact – Not customizable
CalendarStatePublicationInterval – Not customizable
ConferenceIMIdleTimeout – Not customizable
CustomizedHelpUrl – Not customizable
CustomLinkInErrorMessages – Not customizable
CustomStateUrl – Not customizable
DGRefreshInterval – Not customizable
DisableCalendarPresence – Customizable
DisableContactCardOrganizationTab – Not customizable
DisableEmailComparisonCheck – Customizable
DisableEmoticons – Customizable
DisableFeedsTab – Not customizable
DisableFederatedPromptDisplayName – Not customizable
DisableFreeBusyInfo – Customizable
DisableHandsetOnLockedMachine – Customizable
DisableMeetingSubjectAndLocation – Not customizable
DisableHtmlIm – Customizable
DisableInkIM – Customizable
DisableOneNote12Integration – Not customizable
DisableOnlineContextualSearch – Not customizable
DisablePhonePresence – Not customizable
DisablePICPromptDisplayName – Not customizable
DisablePoorDeviceWarnings – Customizable
DisablePoorNetworkWarnings – Customizable
DisablePresenceNote – Customizable
DisableRTFIM – Customizable
DisableSavingIM – Customizable
DisplayPhoto – Customizable
EnableAppearOffline – Customizable
EnableCallLogAutoArchiving – Customizable
EnableClientAutoPopulateWithTeam – Customizable
EnableClientMusicOnHold – Customizable
EnableConversationWindowTabs – Customizable
EnableEnterpriseCustomizedHelp – Customizable
EnableEventLogging – Not customizable
EnableExchangeContactSync – Customizable
EnableExchangeDelegateSync – Not customizable
EnableFullScreenVideo – Not customizable
EnableHighPerformanceConferencingAppsharing – Not customizable
EnableHotdesking – Not customizable
EnableIMAutoArchiving – Customizable
EnableMediaRedirection – Not customizable
EnableMeetingEngagement – Not customizable
EnableNotificationForNewSubscribers – Not customizable
EnableServerConversationHistory – Customizable
EnableSkypeUI – Customizable
EnableSQMData – Not customizable
EnableTracing – Not customizable
EnableURL – Customizable
EnableUnencryptedFileTransfer – Customizable
EnableVOIPCallDefault – Not customizable
ExcludedContactFolders – Customizable
HotdeskingTimeout – Not customizable
IMWarning – Customizable
MAPIPollInterval – Not customizable
MaximumDGsAllowedInContactList – Not customizable
MaximumNumberOfContacts – Not customizable
MaxPhotoSizeKB – Not customizable
MusicOnHoldAudioFile – Customizable
P2PAppSharingEncryption – Not customizable
EnableHighPerformanceP2PAppSharing – Not customizable
PlayAbbreviatedDialTone – Customizable
RequireContentPin – Not customizable
SearchPrefixFlags – Not customizable
ShowRecentContacts – Customizable
ShowManagePrivacyRelationships – Customizable
ShowSharepointPhotoEditLink – Customizable
SPSearchInternalURL – Not customizable
SPSearchExternalURL – Not customizable
SPSearchCenterInternalURL – Not customizable
SPSearchCenterExternalURL – Not customizable
TabURL – Not customizable
TracingLevel – Customizable
TelemetryTier – Not customizable
PublicationBatchDelay – Not customizable
EnableViewBasedSubscriptionMode – Not customizable
WebServicePollInterval – Not customizable
HelpEnvironment – Not customizable
RateMyCallDisplayPercentage – Not customizable
RateMyCallAllowCustomUserFeedback – Not customizable
IMLatencySpinnerDelay – Not customizable
IMLatencyErrorThreshold – Not customizable
SupportModernFilePicker – Not customizable

For example, if an administrator wants to disable saving users’ IM conversation history, saving call logs, and publishing free/busy information from the client’s Options menu, and have those options grayed out on the user side, the commands are as follows.

New-CsClientPolicy -identity custompolicy1 | Set-CsClientPolicy -EnableIMAutoArchiving $false -EnableCallLogAutoArchiving $false -DisableFreeBusyInfo $true

Assign the policy above to a user as follows:
Grant-CsClientPolicy -PolicyName custompolicy1 -Identity sipaddress@contoso.com

With the settings above, the client’s Options screen looks like the following, and the user cannot change these settings.

For the meaning of each parameter, please refer to the following public documentation.
Site: Set-CsClientPolicy
URL: https://technet.microsoft.com/ja-jp/library/gg398300.aspx

Note that for parameters configured with Set-CsClientPolicy, the new policy information is applied to the client about an hour later, once the user signs out of and back into the client.

Because a client cannot be forced to sign out, you will need to wait until the next time the user signs in.

We look forward to your continued use of SfB Online and the various SfB clients.

.Net Core Project changes in Visual Studio 2017 RC (build 26127.00)


Any developer who was using the previous Visual Studio 2017 RC build (26020.00) or earlier may encounter issues when upgrading to the latest release, which was made available on January 27, 2017 (build 26127.00).

Two issues affected me today, which I have documented below along with their fixes. They can be fixed easily, but it is sometimes nice to have them pointed out. Obviously, please back up and test in your own environments!

Error  Duplicate ‘Compile’ items were included.

You may encounter the following error in your projects –

Error  Duplicate ‘Compile’ items were included. The .NET SDK includes ‘Compile’ items from your project directory by default. You can either remove these items from your project file, or set the ‘EnableDefaultCompileItems’ property to ‘false’ if you want to explicitly include them in your project file. The duplicate items were: ‘xxxx.cs’ xxx.yyy.zzz C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.Sdk.DefaultItems.targets 

Workaround

In previous versions of the project file, you would have found a section that instructs the compiler to include the source files within the project. These files now seem to be included by default, so the original entry creates duplicate items.

For example, previously you may have had the following within your project .csproj file –

<ItemGroup>
<Compile Include="***.cs" />
<EmbeddedResource Include="***.resx" />
</ItemGroup>

Simply remove the entire section above (the ItemGroup that includes the Compile and EmbeddedResource items), which will stop the duplicate error. Files will be included by default.

Warning  A PackageReference for ‘NETStandard.Library’ was included in your project.

The second common issue is that one of the following packages (Microsoft.NETCore.App, NETStandard.Library, or Microsoft.NET.Sdk) is referenced in your project, which will raise the warning below.

Warning  A PackageReference for ‘NETStandard.Library’ was included in your project. This package is implicitly referenced by the .NET SDK and you do not typically need to reference it from your project. For more information, see https://aka.ms/sdkimplicitrefs xxx.yyy.zzzz C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\Sdks\Microsoft.NET.Sdk\build\Microsoft.NET.Sdk.DefaultItems.targets 

Workaround

Simply edit the associated project .csproj file and remove the following line –

<PackageReference Include="NETStandard.Library" Version="1.6.1" />

The reference is now implicit for .NET Core project files, so it no longer needs to be included in the project references.
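For reference, after applying both workarounds a minimal new-style project file might look something like the following (a sketch only – the TargetFramework and any remaining package references will vary per project):

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard1.6</TargetFramework>
  </PropertyGroup>

  <!-- Compile/EmbeddedResource globs and the NETStandard.Library reference are now implicit -->

</Project>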

As with any update, it is worth visiting the Release Notes before applying it and referring to them for any other breaking changes.

Comments welcome below

 

PowerShell Open Source Community Dashboard

Since going cross-platform and open source on GitHub, I’ve wanted to know how we are doing as a community and who the top contributors we should recognize are.
The available GitHub graphs are not sufficient as they focus on commits, and there are many other ways for the community to contribute to PowerShell.
Certainly receiving Pull Requests (PRs) has a direct impact on the code base, but opening issues, commenting on issues, and commenting on PRs (aka code reviews) are also immensely appreciated and valuable to help improve PowerShell. In addition, PowerShell is not a single repository, but several repositories that help to make PowerShell successful:
  • PowerShell-RFC where we do design work for new proposed features
  • PowerShell-Docs which contains all the PowerShell help and documentation
  • platyPS: tooling for our help documentation that enables authoring and editing of docs in Markdown instead of XML
  • Microsoft.PowerShell.Archive: a built-in module for creating and expanding ZIP archives (in the future we plan to move other built-in modules to their own repositories like this)
  • ODataUtils: a module to generate PowerShell cmdlets from an OData REST endpoint
  • JEA where we store samples and resources associated with Just Enough Administration (JEA)
  • PSL-OMI-Provider: an optional component for Linux/Mac to enable PowerShell remoting over WS-Man protocol (both client and server)
  • PSReadline: the default interactive command line experience for PowerShell

Although most of the contributions happen in the PowerShell/PowerShell repo, I want to ensure we recognize contributions in these other repositories (and new ones in the future).

To get a more holistic view, I decided to create a dashboard in PowerBI.
A follow-up blog post will go into some of the technical details and challenges to having an Azure Function execute a PowerShell script calling GitHub REST APIs and storing the result in an Azure StorageTable queried by PowerBI.
The PowerShell scripts I used for this dashboard will be published to the PowerShell Gallery.
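Purely as an illustration of the kind of GitHub REST call involved (this is not the author’s script; the query and token below are placeholders), counting the PRs opened against PowerShell/PowerShell in a given window might look like this:

# Illustrative sketch only - the real scripts will be published to the PowerShell Gallery.
$headers = @{ Authorization = 'token <personal-access-token>' }   # placeholder token to raise the API rate limit
$query   = 'repo:PowerShell/PowerShell+type:pr+created:2017-01-01..2017-01-31'
$result  = Invoke-RestMethod -Uri "https://api.github.com/search/issues?q=$query" -Headers $headers
$result.total_count   # number of PRs opened in the window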

You can access the dashboard at http://aka.ms/PSGitHubBI.

The first page, Top Community Contributors, recognizes individuals outside of Microsoft for their contributions for each of the 4 types of contributions described previously.
Two things to note:

  • the rankings are based on a moving 30 day window
  • ties for the rank are due to individuals having exactly the same count for a contribution type

The second page, Top Microsoft Contributors, is the same as the first table but for Microsoft employees who are members of the PowerShell organization on GitHub.

The third page, Contributions over Time, has two graphs:

  • The first graph compares community contributions and Microsoft contributions.
    It really shows that going open source was the right decision as the community has provided lots of contributions and helped to move PowerShell forward more than what the PowerShell Team could have done alone!
  • The second graph shows a comparison over time of the different types of contributions, but is not separated out between the community and Microsoft.

The last page, Downloads, shows a trend of the cumulative downloads of our official releases with a view comparing the different operating systems and the different release versions.
Eventually, I would like to replace the download numbers with usage numbers based on Census Telemetry, which is a much more accurate representation of growth of PowerShell adoption.

I intend to iterate and improve upon this dashboard to make it useful not only to the PowerShell Team, but also to the PowerShell community.
I plan to provide similar dashboards for some of our other projects such as DSC resources, ScriptAnalyzer, Editor Services, OpenSSH on Windows, and others.

Please leave any suggestions or feedback as comments to this blog post. If you find this dashboard useful, or you believe we can improve upon it in some way, please let us know!

Steve Lee
Principal Software Engineer Manager
PowerShell Core

Released: Microsoft Kerberos Configuration Manager for SQL Server v3.1


We are pleased to announce the latest generally available (GA) release of Microsoft Kerberos Configuration Manager for SQL Server.

Get it here: Download Microsoft Kerberos Configuration Manager for SQL Server

Kerberos authentication provides a highly secure method to authenticate client and server entities (security principals) on a network. To use Kerberos authentication with SQL Server, a Service Principal Name (SPN) must be registered with Active Directory, which plays the role of the Key Distribution Center in a Windows domain. In addition, many customers also enable delegation for multi-tier applications using SQL Server. In such a setup, it may be difficult to troubleshoot the connectivity problems with SQL Server when Kerberos authentication fails.

The Kerberos Configuration Manager for SQL Server is a diagnostic tool that helps troubleshoot Kerberos related connectivity issues with SQL Server, SQL Server Reporting Services, and SQL Server Analysis Services. It can perform the following functions:

  • Gather information on OS and Microsoft SQL Server instances installed on a server.
  • Report on all SPN and delegation configurations on the server.
  • Identify potential problems in SPNs and delegations.
  • Fix potential SPN problems.

This release (v 3.1) adds support for SQL Server 2016.


What do Madagascar and How to Train Your Dragon have in common with OMS?


A case study, of course. JumpStart, also known as Knowledge Adventure, is a gaming company built on a foundation of learning!

JumpStart has discovered that part of the power of OMS is that it doesn’t care where the Linux machines are running. Even though OMS is on Azure, it can monitor LAMP stack apps running on either AWS-based or Azure-based servers. OMS is, in this sense, cloud platform–agnostic. The architecture it supports can span a heterogeneous mix of data centers and cloud providers.

https://customers.microsoft.com/en-US/story/jumpstart

 

Subscribing to onValidatingWrite


Most event handlers are straightforward. One exception is when subscribing to a table’s onValidated and onValidating events.

The trick is to realize that the DataEventArgs instance passed to the event handler is a ValidateEventArgs – a specialization of DataEventArgs.

Here is a template to use:

[DataEventHandler(tableStr(<TableName>), DataEventType::ValidatingWrite)]
public static void <TableName>_onValidatingWrite(Common _sender, DataEventArgs _e)
{
    boolean result = true;
    ValidateEventArgs validateEventArgs = _e as ValidateEventArgs;
    <TableName> <table> = _sender as <TableName>;
    if (<validation>)
    {
        result = checkFailed("Validation failed");
    }
    validateEventArgs.parmValidateResult(result);
}

 

The platform will keep raising the onValidating events until an event handler returns a negative result.

Here is a great post with more examples: Access stuff in the new event subscriptions

 

THIS POST IS PROVIDED AS-IS; AND CONFERS NO RIGHTS.

SQL Server on Linux: Scatter/Gather == Vectored I/O


Scatter/gather capabilities allow more efficient memory-to-disk transfers, reducing the redundant memory copies, sorting, and other activities applications may otherwise require to gain improved I/O performance.

If my memory serves me correctly SQL Server started using the ReadFileScatter and WriteFileGather APIs in SQL Server 6.5 SP3.  It may not have been this exact build but as the running joke around here goes “That information was saved to my offsite backup!”

Scatter/gather is not limited to the Windows platform. On Linux the capability is called “vectored I/O” – for example, writev. SQL Server on Linux takes full advantage of vectored I/O, retaining the performance and design benefits.

On Linux there is one small twist. The Windows API set does not impose a specific limit on the number of buffers (read/write) for a request. On Linux the total number of buffers that can be read or written in a single API request is capped at 2048. The buffers must be the operating system page size (4K). There are a few locations (zero file, column store, …) in SQL Server which attempt to read or write more than 1024 database pages in a single request. (Remember, SQL Server database pages are 8K, taking up 2 positions in the read or write request.)

Within LibOS, when we detect a request exceeding 2048 OS pages, we split the request into multiple 2048-page chunks, deemed sub-I/O requests. The logic attempts to issue as few I/O requests as possible, using vectored I/O for each of the sub-requests and maximizing I/O performance. Once all sub-I/O requests for the original parent request are complete, the original I/O request is considered complete, maintaining application compatibility.
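The splitting idea itself is simple. Here is a much-simplified C# sketch of the chunking logic (illustrative only, not SQL Server’s actual LibOS code; the 2048-page cap and 4K page size come from the description above):

using System;
using System.Collections.Generic;

static class VectoredIoSketch
{
    const int MaxPagesPerRequest = 2048;   // per-request cap on Linux, as described above
    const int OsPageSize = 4096;           // 4K OS pages; an 8K SQL Server page occupies 2 entries

    // Split one large parent request into sub-I/O requests of at most 2048 pages each.
    static List<ArraySegment<byte>[]> Split(ArraySegment<byte>[] pages)
    {
        var subRequests = new List<ArraySegment<byte>[]>();
        for (int i = 0; i < pages.Length; i += MaxPagesPerRequest)
        {
            int count = Math.Min(MaxPagesPerRequest, pages.Length - i);
            var chunk = new ArraySegment<byte>[count];
            Array.Copy(pages, i, chunk, 0, count);
            subRequests.Add(chunk);   // each chunk becomes one vectored readv/writev call
        }
        return subRequests;           // the parent request completes when every sub-request completes
    }
}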


 

Bob Dorr – Principal Software Engineer SQL Server

Unicode – Nemeth Character Mappings


In addition to handling 2D arrangements such as fractions, root, subscripts and superscripts, math layout programs need to be able to display the myriad math symbols discussed in Unicode Technical Report #25 Unicode Support for Mathematics. To interoperate with Nemeth braille, such programs need to map between Unicode characters and Nemeth braille sequences. Since Unicode and Nemeth braille were developed independently of one another, it’s not surprising that each can represent symbols that the other doesn’t have. For example, Nemeth doesn’t have Unicode’s reversed tilde ∽ and Unicode doesn’t have Nemeth’s extended tilde (like ∼∼ with no intervening space). Fortunately, all common math symbols have well-defined mappings. Nemeth braille has rules that guide reasonable choices for many Unicode math symbols not mentioned explicitly in the Nemeth specification, for example, see that specification’s §139 on negation and §147 on comparison signs compounded vertically. The present post discusses some of the Nemeth methodology and gives representative mappings for some symbols. If you’re interested in the full table, email me and I’ll send you the Word document containing the current mapping collection. Because of the size of this undertaking, only Nemeth math braille is considered. It would be worthwhile for someone to undertake a similar effort for Unified English Braille (UEB) math braille. Discussion on how Nemeth represents 2D layouts such as fractions is given in Nemeth Braille—the first math linear format.

Math zones

The focus here is on math zones, which are text ranges that have math typography, rather than normal typography. Natural language contractions are not used in math zones, so hopefully we can get general globalized math-symbol mappings. When math zones are embedded in UEB, a math-zone start delimiter would be ⠸⠩ and the math-zone end delimiter would be ⠸⠱ in accord with Using the Nemeth Code within UEB contexts. Math zones are key to working with technical documents since math-zone typography and conventions differ from those for normal text. So, a user needs to know when a math zone starts and ends.

Braille symbol construction techniques

The Nemeth specification describes several symbol construction techniques. Some very productive ones are illustrated in the following table, which also includes the section number in the Nemeth specification.

[Table: Nemeth symbol-construction techniques]

Mapping origins

The mappings given in the table below resulted from scouring the Nemeth specification, which is a pdf file of scanned images. As such it offers no search or link capabilities, and a paper version is more useful than the electronic version. It’s the first document I’ve printed in years, other than occasional tickets, boarding passes, and legal authorizations. There’s also a nicely formatted Nemeth specification in French complete with a navigation pane with links to all the rules. You can search the text including finding braille sequences, since the sequences are encoded in Nemeth Ascii braille. This is valuable in learning about sequences in general and whether a potentially new sequence is already defined. The combination ⠄⠱ doesn’t appear as math in the French specification, so maybe that’s a good candidate for encoding the missing reversed tilde ∽ (∼ is given by ⠈⠱). The French version’s content has differences from the original English version, so I checked both in creating the table entries. It would be nice if someone would enter the 1972 Nemeth specification into Word so that a more accessible pdf could be created in English. A partial version with MathSpeak functionality is given in The Nemeth Braille Code for Mathematics and Science. An ASCII braille version (brf) can be downloaded from here.

Challenging mappings

Unicode 9.0 has a total of 2310 characters that have the math property (see Math property in DerivedCoreProperties.txt). Of these, many of the more advanced symbols don’t have unambiguous Nemeth representations. In particular, Nemeth doesn’t distinguish between slanted and vertical bar overlays, e.g., 219A ↚ and 21F7 ⇷ are both given by ⠳⠈⠫⠪⠒⠒⠻ . Nemeth doesn’t have white arrows like ⇦, and white/black arrow heads like ⭠. Unicode has symbols like the bowtie ⧑ that don’t have apparent Nemeth representations. One possibility for ⋈ is as a shape with a suggestive name like “bt” as in ⠫⠃⠞, but one still needs to encode whether the sides are black or white since Unicode encodes all four possibilities.
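To make the mapping idea concrete, here is a tiny illustrative C# sketch of such a lookup table, using only mappings mentioned in this post (a real table would cover far more of the 2310 math characters):

using System.Collections.Generic;

static class NemethMapSketch
{
    // Unicode math character -> Nemeth braille sequence (examples taken from the text above).
    static readonly Dictionary<char, string> UnicodeToNemeth = new Dictionary<char, string>
    {
        ['∼'] = "⠈⠱",           // U+223C TILDE OPERATOR
        ['↚'] = "⠳⠈⠫⠪⠒⠒⠻",  // U+219A LEFTWARDS ARROW WITH STROKE
        ['⇷'] = "⠳⠈⠫⠪⠒⠒⠻",  // U+21F7 - Nemeth does not distinguish the bar overlays, so same sequence
    };
}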

Conversely, Nemeth has quite a few symbols not in Unicode. Often these can be constructed with a combination of Unicode symbols or with math layout objects in applications like Microsoft Word. We can submit proposals to add characters given in the Nemeth specification but not yet in Unicode provided the characters occur in journals or books. Nemeth’s extended tilde is one such character to research. Since Nemeth braille is based on a productive syntax, many symbol combinations can be created. Eventually I hope to collect all Unicode math characters that have reasonable Nemeth braille sequences and add their mapping data to the information associated with Unicode Technical Report #25, Unicode Support for Mathematics.

Sample mappings

The table below lists some representative Unicode characters used in mathematical text along with their Unicode names and the corresponding Nemeth math braille sequences. The table doesn’t include any mappings of the Unicode math alphanumerics, since they are defined in the post Nemeth Braille Alphanumerics and Unicode Math Alphanumerics. Relational operators (Nemeth calls them “signs and symbols of comparison”) need to be surrounded by spaces. The spaces are not included in the table since the relational operator property is defined in the MathClass data file and software can insert spaces programmatically. It’d be easy to add the math behavior column in MathClass.txt for quick reference. The full mapping table isn’t included since I don’t know how to convince the MSDN blogging facility to use a braille font that displays nondot place holders and braille is hard to read without them.

[Table: sample Unicode math characters with their Unicode names and Nemeth braille sequences]

How to tank your Power BI performance


Getting ready for Australia Ignite, I have been sitting in on Sirui’s Performance Best Practices prep sessions and realized that my “go to” performance list is an INTERNAL document that our IT group put together.

(Thanks Maria and Amy!)

I have asked Jessica to get it on the Power BI blog… but I am posting it here until she gets a chance:

The BI at Microsoft group (Mario and Amy <grin>) recently held a Performance Optimization webinar to educate internal Power BI users on how to best optimize the performance of their dashboards and reports for end-users. 

During the webinar, it was clear that knowing what NOT to do is just as important as knowing what you SHOULD do. We guarantee you will have bad performance if you incorporate the below items into your work.

  • In the spirit of satire,  here’s a top “10” list. 

    Top 10 (or 15) things to do to guarantee bad performance:

    1. Put too many visuals in your dashboards  (>4-8 visuals)
    2. Put too many visuals in your reports (>4-8 visuals)
    3. Don’t set filters in the filter pane in reports to limit the data you bring into Power BI
    4. Use a Live Connection or Direct Query to a poorly performing backend server (hand in hand with #12)
    5. Ensure your backend data source performs poorly and has inadequate hardware to support your reporting
    6. Ensure your backend data source is not SQL Server Analysis Services 2016
    7. Don’t tune, monitor or understand the performance characteristics of your backend data source
    8. Use RLS for all your users so that Power BI has to query the backend server separately and cache separate reports for every user
    9. Build many, complicated measures and aggregates in your data models
    10. Bring large volumes of unused or seldom-used data into your reports so that refresh and loading of reports and tiles are exceptionally slow
    11. Don’t learn DAX
    12. Deploy a Power BI solution to your users without testing it
    13. Embed >4 tiles in your application or website
    14. Add lots of custom visuals without testing the performance of the custom visuals with your data first
    15. Don’t check performance of your reports in the Power BI desktop first to compare it to the performance of the same reports in the Service.

    Enjoy these quick tips!   This is an example of an anti-solution brainstorming session on Power BI performance to help you understand what not to do.

Backing up SQL Server (IaaS) on Azure (part 2)


Microsoft Japan Data Platform Tech Sales Team

Nakagawa

In the previous post, we organized the options for taking backups of SQL Server in an IaaS environment on Azure from two viewpoints: the backup destination and the backup method. Among the backup methods, we introduced file-snapshot backups for database files stored in Azure Blob Storage (hereafter, the snapshot backup method); in this post we focus on that method and dig deeper.

The snapshot backup method achieves fast backup and restore by combining SQL Server’s backup functionality with the snapshot feature of Azure Blob Storage. We will first explain what the Azure Blob Storage snapshot feature actually is, and then move on to the specifics of backup and restore.

[What is an Azure Blob Storage snapshot?]

An Azure Blob Storage snapshot is a read-only version of a blob captured at a point in time. In the figure below, snapshots are created at times t1 and t3. By keeping the pre-change image of subsequently modified data at page granularity (Azure Blob Storage pages, not SQL Server pages), the blob image as of t1 and as of t3 can still be read, read-only. The key points are: the full blob image at t1 or t3 is not copied and kept – only the pre-change pages are retained, so you are billed only for the base blob plus those pre-change pages – and because no full copy of the blob is made elsewhere when the snapshot is created, creating a snapshot takes almost no time.


Figure 1: Creating snapshots


Figure 2: Reading a snapshot

However, if you keep creating snapshots, the pre-change pages naturally accumulate, and so does the corresponding cost, so at some point you will want to delete old snapshots. Here there is one point to be careful about. For example, in Figure 3 below, snapshot t1 is deleted, and the pre-change pages held between t1 and t3 (➀ in Figure 3) are no longer needed, so they are released. By contrast, in Figure 4 snapshot t3 is deleted, but because snapshot t1 has not been deleted, the pre-change images retained since t1 that are needed to read snapshot t1 (➀ and ➁ in Figure 4) are kept and not released. Deleting an intermediate snapshot like this may not be a common case, but keep in mind that deleting a snapshot in the middle does not release the accumulated pre-change pages.


Figure 3: Deleting snapshot t1


Figure 4: Deleting snapshot t3

[The snapshot backup method]

Now to the main topic. As mentioned in the previous post, this method can only be used when the database files (data files and log files) are placed directly in Azure Blob Storage via URLs, rather than on VHD files attached to the IaaS environment as disks; in return, it delivers very fast backup and restore for SQL Server on Azure.

As mentioned last time, backups are fast because the data does not have to be transferred to the backup destination every time; restores are fast as well. With the traditional method, simplifying somewhat, a restore required two or three steps:

1. Restore from the backup set taken with a full backup

2. (If one was taken) restore from the backup set taken with a differential backup

3. Restore the transaction log taken with transaction log backups

In addition, full backups had to be taken periodically: if you stopped taking full backups and kept taking only transaction log backups, the restore time would grow in proportion to the accumulated amount of transaction log.

 


Figure 5: Traditional backup and restore

With the snapshot backup method, however, you need to take a full backup only once at the beginning, to establish the starting point of the backup chain; after that, you simply keep taking transaction log backups. This is because, with this method, each transaction log backup also takes a point-in-time snapshot not only of the transaction log file but of the data files as well.
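As a rough illustration (the database name and URLs below are placeholders modeled on the restore examples later in this post), the initial full backup and the subsequent snapshot transaction log backups look roughly like this:

-- Illustrative sketch: requires the database files to be stored directly in Azure Blob Storage.
-- One-time full backup to establish the backup chain:
BACKUP DATABASE mydb01
TO URL = 'https://sqlbackupstorage.blob.core.windows.net/mydb01/mydb01-fullbackup-T0.bak'
WITH FILE_SNAPSHOT;
GO
-- After that, only snapshot transaction log backups are needed:
BACKUP LOG mydb01
TO URL = 'https://sqlbackupstorage.blob.core.windows.net/mydb01/mydb01-tranbackup-T1.bak'
WITH FILE_SNAPSHOT;
GO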


Figure 6: Backup and restore with the snapshot backup method

For example, if you want to restore to time T4 in Figure 6, the snapshots of the data files and the transaction log file as of T4 are retained, so the T4 snapshot alone is enough to restore.

RESTORE DATABASE mydb01
FROM URL = 'https://sqlbackupstorage.blob.core.windows.net/mydb01/mydb01-tranbackup-T4.bak'
WITH RECOVERY, REPLACE;
GO

If you want to restore to time T3 (a point between T2 and T4, at which snapshot transaction log backups were taken), you first restore from the T2 snapshot, which becomes the starting point of the restore, and then read the transaction log from the T4 snapshot to roll forward to time T3.

RESTORE DATABASE mydb01
FROM URL = 'https://sqlbackupstorage.blob.core.windows.net/mydb01/mydb01-tranbackup-T2.bak'
WITH NORECOVERY,REPLACE;
GO
RESTORE LOG mydb01
FROM URL = 'https://sqlbackupstorage.blob.core.windows.net/mydb01/mydb01-tranbackup-T4.bak'
WITH RECOVERY,STOPAT = T3;
GO

* Specify the T3 part in date/time format.

Note that this kind of point-in-time restore always requires two adjacent snapshot backups. For example, if the T2 snapshot in Figure 6 has been deleted, you cannot restore to time T3 by using T1 and T4, because the backup chain has been broken.

As described above, the snapshot backup method simplifies backup design and, by speeding up backup and restore, also makes operations easier, so we encourage you to give it a try.

 

Related articles

The difference between adding Safe and Blocked senders in Outlook, vs. Outlook.com


I’m currently doing a bunch of work around making Outlook.com better, and one of the things I’ve noticed is different is how you add to your Safe and Blocked senders lists when you use a desktop client like Outlook vs. when you use the web UX in either Outlook.com (our consumer email product) or Outlook Web Access (for your Office 365 account).

Outlook + Office 365

In Outlook, if you right-click on a message, or click the “Junk” option at the top, you have several options:

2017-01-31-outlook-blocked-senders

You can add to your Blocked and Safe senders list by picking one of those first three options. If your organization uses Directory Sync, this is pushed to Office 365 and then your list is respected by the service. Blocked senders go to your Junk folder, Safe senders are delivered to your inbox.

These right-click and drop-down actions are available regardless of whether or not the message is in the Inbox, or in the Junk Email folder.

Outlook.com

The situation is different in Outlook.com. In Outlook.com, if you are in the Inbox, there is no option for “Block sender”. There is only the option to mark as Junk or Phishing along the top status bar:

2017-01-31-outlook-dot-com-top

Or if you right-click, you can pick Mark as Junk, or select Move > Junk Email:

2017-01-31-outlook-dot-com-right-click

There isn’t currently a single-click option to add to the Blocked senders list. To do that:

  1. First, you have to move the message to Junk using one of the options above
  2. Second, navigate to the Junk Email folder. The option to “Block” will appear along the top status bar

2017-01-31-outlook-dot-com-block-option

Adding to Blocked senders in Outlook.com deletes messages from that sender instead of delivering them to Junk the way it does in Office 365. So beware – if you subscribe to a newsletter or get a message from some other sender, and you block them and then forget you did, you will be wondering why you’re not getting email from that sender.

The other option for adding to Blocked senders is to do it manually. You’ll need to copy the sender address you want to block, then select the gear icon along the top, and then select Options:

2017-01-31-gear-icon

Then navigate to Mail > Blocked senders and paste it in (or type it in).

2017-01-31-outlook-dot-com-blocked-senders-ux

Using Blocked senders isn’t that effective at stopping spam because most spammers will use throwaway email addresses; thus, what you add to your list will probably not stop the next spam attack. However, blocked senders does work well on some sketchy newsletters that you don’t want to hear from as long as they have a stable sending address. And it also works on some people you know that you don’t want to hear from either.

For spam, the best way to have it stop is to report it, per above. That goes into our automated training system so it can learn to recognize current and future spam attacks.

Outlook Web Access + Office 365

The web UX for OWA looks the same as Outlook.com; however, it behaves the same way as Outlook + Office 365. That is, adding to Blocked senders sends the message to your Junk Email folder.

Outlook + other email services

What about Outlook + other email services?

I haven’t tested it so I’m not sure, but I think adding to Safe and Blocked senders will send those messages to your Junk Email folder. They don’t get synced to a free email service (e.g., Gmail, Yahoo Mail) so any blocking would be done in Outlook.


So, that’s how managing Safe and Blocked senders using Outlook or the web UX works in Outlook.com vs. Office 365.


Dissecting the new() constraint in C#: a perfect example of a leaky abstraction


Most likely you’ve heard about The Law of Leaky Abstractions, coined by Joel Spolsky. Even if you have never heard of it, you have definitely faced it in your day-to-day job. The “law” is pretty simple: “All non-trivial abstractions, to some degree, are leaky”. And this is 100% true. But sometimes even abstractions that are not that complicated can leak their internal details.

Let’s consider the following code snippet:

public class NodeFactory
{
    public static TNode CreateNode<TNode>()
        where TNode : Node, new()
    {
        return new TNode();
    }
}

Do you see any issues with it? Will it pass a thorough code review? Of course, you need to know the context. For instance, you need to know what the TNode types are, whether the constructor of those types can throw exceptions and whether the method can be called on a hot path of an app.

But first of all, you need to know what the compiler and the runtime will do with a method like this.

Once a user calls the CreateNode method, the C# compiler checks that the given type has a default constructor and, if it does, emits a call to it. Right? Not exactly. The compiler doesn’t know upfront what constructor to call, so it delegates all the work to a helper method – Activator.CreateInstance<T> (*).

(*) This statement is not 100% correct. Different C# compilers emit different code for new T(). The C# compiler starting from VS2015 emits a call to Activator.CreateInstance(), but older versions are “smarter”: they return default(T) for value types and call Activator.CreateInstance() only for reference types.
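In other words, for the CreateNode<TNode> factory above, the newer compilers effectively lower the body to a call like this (a rough sketch of the generated call, not literal compiler output):

public static TNode CreateNode<TNode>()
    where TNode : Node, new()
{
    // What 'return new TNode();' effectively becomes with the VS2015+ compiler:
    return System.Activator.CreateInstance<TNode>();
}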

Ok, and what’s wrong with the Activator? Nothing, if you know how it’s implemented.

Implementation details of the Activator.CreateInstance

The non-generic version, Activator.CreateInstance(Type), was first introduced in .NET Framework 1.0 and was based on reflection. The method checks for a default constructor of a given type and calls it to construct an instance. We can even implement a very naïve version of this method ourselves:

public static class NaiveActivator // class declaration restored for readability; the name is illustrative
{
    public static T CreateInstance<T>() where T : new()
    {
        return (T)CreateInstance(typeof(T));
    }

    public static object CreateInstance(Type type)
    {
        var constructor = type.GetConstructor(new Type[0]);
        if (constructor == null && !type.IsValueType)
        {
            throw new NotSupportedException($"Type '{type.FullName}' doesn't have a parameterless constructor");
        }

        var emptyInstance = FormatterServices.GetUninitializedObject(type);

        return constructor?.Invoke(emptyInstance, new object[0]) ?? emptyInstance;
    }
}

As we’ll see shortly, the actual implementation of Activator.CreateInstance is a bit more complicated and relies on some internal CLR methods for creating an uninitialized instance. But the idea is the same: get a ConstructorInfo, create an uninitialized instance, and then call the constructor to initialize it, similar to the placement new concept in C++.

But the generic version “knows” the type being created at compile time, so the implementation could be way more efficient, right? Nope. The generic version is just a façade that gets the type from its generic argument and calls the old method – the reflection-based Activator.CreateInstance(Type).

You may wonder: “Ok, for new T() the C# compiler calls Activator.CreateInstance<T>() that calls Activator.CreateInstance(Type) that uses reflection to do its job. Is it a big deal?” Yes, it is!

Concern #1. Performance

Using reflection to create a frequently instantiated type can substantially affect the performance of your application. Currently I work on a build system, and one of the components is responsible for parsing build specification files. The first implementation of the parser used a factory method that created every node using new TNode() as shown above. The very first profiling session showed a sizable impact of the factory on the end-to-end performance. Just by switching to a more expression-based implementation of the node factory, we gained a 10% performance improvement for one of our end-to-end scenarios.

To be more specific, let’s compare different ways of creating a Node instance: explicit construction, using Func<Node>, Activator.CreateInstance and a custom factory based on the new() constraint.

public static T Create<T>() where T : new() => new T();
public static Func<Node> NodeFactory => () => new Node();


// Benchmark 1: ActivatorCreateInstance
var node1 = System.Activator.CreateInstance<Node>();
// Benchmark 2: FactoryWithNewConstraint
var node2 = Create<Node>();
// Benchmark 3: ConstructorCall
var node3 = new Node();
// Benchmark 4: FuncBasedFactory
var node4 = NodeFactory();

Here are the perf numbers obtained using BenchmarkDotNet:

                         Method |        Mean |    StdDev |  Gen 0 |
------------------------------- |------------ |---------- |------- |
        ActivatorCreateInstance |  98.6628 ns | 3.0845 ns |      - |
 FactoryMethodWithNewConstraint | 103.0030 ns | 4.2670 ns |      - |
                ConstructorCall |   2.4361 ns | 0.0430 ns | 0.0036 |
               FuncBasedFactory |   6.8369 ns | 0.0436 ns | 0.0034 |

 

As we can see, the difference is pretty drastic: a factory method based on the new() constraint is 15 times slower than a delegate-based solution and 50 times slower than manual construction. But performance is not the only concern.

Correctness

Reflection-based method invocation means that any exception thrown from the method will be wrapped in a TargetInvocationException:

class Node
{
    public Node()
    {
        throw new InvalidOperationException();
    }
}

public static T Create<T>() where T : new() => new T();

try
{
    var node = Create<Node>();
    Console.WriteLine("Node was created successfully");
}
catch (InvalidOperationException)
{
    // Handling the error!
    Console.WriteLine("Failed to create a node!");
}

Is it obvious to everyone that the code shown above is incorrect? Reflection-based object construction “leaks” through the generics implementation. And now every developer needs to know how new T() is implemented and the consequences it has in terms of exception handling: every exception thrown from the constructor will be wrapped in a TargetInvocationException!

You may fix the issue if you know that the type’s constructor may throw an exception. Starting from .NET 4.5 you can use ExceptionDispatchInfo class to rethrow an arbitrary exception object (an inner exception in this case) without altering the exception’s stack trace:

public static T Create<T>() where T : new()
{
    try
    {
        return new T();
    }
    catch (TargetInvocationException e)
    {
        var edi = ExceptionDispatchInfo.Capture(e.InnerException);
        edi.Throw();
        // Required to avoid compiler error regarding unreachable code
        throw;
    }
}

This code solves one issue with Activator.CreateInstance, but as we’ll see in a moment, there are better solutions that fix correctness as well as performance issues.

Correctness (2)

Activator.CreateInstance is implemented in a more complicated way than I mentioned before. Actually, it has a cache that holds constructor information for the last 16 instantiated types. This means that the user won’t pay the cost of getting the constructor info via reflection every time, although they will still pay the cost of a slow reflection-based constructor invocation.

A more accurate description of the algorithm used by Activator.CreateInstance is as following:

  1. Create a raw instance using RuntimeTypeHandle.Allocate(this)
  2. Get the ConstructorInfo for the given type’s parameterless constructor
    1. If the constructor information is already in the cache, get it from there
    2. If the constructor information is not in the cache, get a ConstructorInfo via reflection and put it into the cache
  3. Call the constructor on the newly created instance and return a fully constructed instance to the caller

But unfortunately, this optimization has an issue (reproducible in .NET 4.0 – 4.6.2): it doesn’t handle structs with a parameterless constructor properly. The current C# compiler doesn’t support custom default constructors for structs, but the CLR and some other languages do: you may create a struct with a default constructor using C++/CLI or IL directly. Moreover, this feature was added to C# 6, but was removed from the language 3 months before the official release, and the reason is this bug in Activator.CreateInstance. Today there is a hot discussion on GitHub about this feature, and it seems that even the language authors can’t agree on whether default constructors on structs are a good thing or not.

The issue is related to a caching logic in Activator.CreateInstance: if it gets the constructor information from the cache it doesn’t call the constructor for structs assuming, apparently, that they don’t exist (see InitializeCacheEntry method). And this means that if you have a struct with a default constructor, and you create an instance of that type multiple times, the constructor will only be called for the first instance.

We can’t easily fix the issues in Activator.CreateInstance and we definitely can’t change the existing behavior of new T() without breaking the world. But we can avoid using it and create our own generic factory that won’t suffer from the aforementioned issues.

Solution #1: using expression trees

Expression trees are a good tool for lightweight code generation. In our case, we can use an expression tree that creates a new instance of type T, and then compile it to a delegate to avoid the performance penalty.

Lambda-expressions are special in the C# language because they’re convertible by the compiler to a delegate (DelegateType) or to an expression (Expression<DelegateType>). The compiler can convert an arbitrary expression to a delegate but only a limited set of language constructs can be converted to an expression. In our case the expression is very simple, so the compiler can cope with it:

public static class FastActivator
{
    public static T CreateInstance<T>() where T : new()
    {
        return FastActivatorImpl<T>.NewFunction();
    }

    private static class FastActivatorImpl<T> where T : new()
    {
        // Compiler translates 'new T()' into Expression.New()
        private static readonly Expression<Func<T>> NewExpression = () => new T();

        // Compiling the expression into a delegate
        public static readonly Func<T> NewFunction = NewExpression.Compile();
    }
}

FastActivator.CreateInstance is conceptually similar to Activator.CreateInstance but avoids its two main issues: it doesn't suffer from the exception-wrapping problem and it doesn't rely on reflection during execution (it does rely on reflection during expression construction, but that happens only once per type).
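
Usage mirrors Activator.CreateInstance (a minimal sketch; Node stands for any type with an accessible parameterless constructor):

// Drop-in replacement for Activator.CreateInstance<T>()
var node = FastActivator.CreateInstance<Node>();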

Let’s compare different solutions and see what we get:

                      Method |       Mean |    StdDev |  Gen 0 |
—————————- |———– |———- |——- |
     ActivatorCreateInstance | 94.6173 ns | 0.5036 ns |      – |
            FuncBasedFactory |  6.5049 ns | 0.0551 ns | 0.0034 |
FastActivatorCreateInstance | 22.2258 ns | 0.2240 ns | 0.0020 |

 

FastActivator is almost 5 times faster than the default one, but still 3.5 times slower than the func-based factory. I've intentionally removed the other cases we saw at the beginning; the func-based solution is our baseline, because no custom solution can beat an explicit constructor call for a known type.

The question is, why is the compiled delegate way slower than a manually-written delegate? Expression.Compile creates a DynamicMethod and associates it with an anonymous assembly to run it in a sandboxed environment. This makes it safe for a dynamic method to be emitted and executed by partially trusted code but adds some run-time overhead.

The overhead can be removed by using a constructor of DynamicMethod which associates it with a specific module. Unfortunately, Expression.Compile doesn’t allow us to customize the creation of a dynamic method and the only other option is to use Expression.CompileToMethod. CompileToMethod compiles the expression into a given MethodBuilder instance. But this won’t work for our scenario because we can’t create a method via MethodBuilder that has access to internal/private members of different assemblies. And this will restrict our factory to public types only.

Instead of relying on Expression.Compile we can “compile” our simple factory manually:

public static class DynamicModuleLambdaCompiler
{
    public static Func<T> GenerateFactory<T>() where T : new()
    {
        Expression<Func<T>> expr = () => new T();
        NewExpression newExpr = (NewExpression)expr.Body;

        var method = new DynamicMethod(
            name: "lambda",
            returnType: newExpr.Type,
            parameterTypes: new Type[0],
            m: typeof(DynamicModuleLambdaCompiler).Module,
            skipVisibility: true);

        ILGenerator ilGen = method.GetILGenerator();

        // Constructor for value types could be null
        if (newExpr.Constructor != null)
        {
            ilGen.Emit(OpCodes.Newobj, newExpr.Constructor);
        }
        else
        {
            LocalBuilder temp = ilGen.DeclareLocal(newExpr.Type);
            ilGen.Emit(OpCodes.Ldloca, temp);
            ilGen.Emit(OpCodes.Initobj, newExpr.Type);
            ilGen.Emit(OpCodes.Ldloc, temp);
        }

        ilGen.Emit(OpCodes.Ret);

        return (Func<T>)method.CreateDelegate(typeof(Func<T>));
    }
}

The GenerateFactory method creates a DynamicMethod instance and associates that method with a given module. This immediately gives the method access to all internal members of the current assembly. We also specify skipVisibility, because the factory method should be able to create internal/private types declared in other assemblies too. The name 'lambda' is never used and would be visible only during debugging.

This method creates an expression tree just to get the constructor information, even though we could obtain it manually. Note that the method checks newExpr.Constructor and uses different logic when the constructor is missing (i.e. for value types without a default constructor defined).

With the new helper method, FastActivator will be implemented in the following way:

public static class FastActivator
{
    public static T CreateInstance<T>() where T : new()
    {
        return FastActivatorImpl<T>.Create();
    }

    private static class FastActivatorImpl<T> where T : new()
    {
        public static readonly Func<T> Create =
            DynamicModuleLambdaCompiler.GenerateFactory<T>();
    }
}

Let’s compare the new implementation (FastActivatorCreateInstance) with the expression-based one (CompiledExpression):

                      Method |       Mean |    StdDev |  Gen 0 |
—————————- |———– |———- |——- |
     ActivatorCreateInstance | 93.8858 ns | 1.2702 ns |      – |
            FuncBasedFactory |  6.4719 ns | 0.0640 ns | 0.0033 |
FastActivatorCreateInstance | 11.6035 ns | 0.0774 ns | 0.0030 |
          CompiledExpression | 22.7874 ns | 0.1509 ns | 0.0021 |

 

As we can see, the new version of the fast activator is two times faster than the old one, but still two times slower than the func-based factory. Let's explore why.

The reason lies in the implementation of generics in the CLR. A generic method that calls a method from a generic type will never be inlined, so we suffer the overhead of an additional method call. But the more important thing is subtler. If a generic is instantiated with a value type, the CLR has no option but to generate a separate type for it. This means that a List<int> and a List<double> are completely independent types from the CLR's perspective. However, this is not the case with reference types. Two generic instantiations like List<string> and List<object> share the same EEClass, which allows the CLR to reuse code between different instantiations and avoid code bloating. But this optimization trades speed for memory.

When one generic type (or method) calls another generic type (or method), the CLR needs to make sure that the actual types are compatible at runtime (**). To ensure this, the CLR performs a few look-ups that affect the performance in the previous example and make our FastActivator slower than a delegate like () => new Node().

(**) The CLR implementation of generics is a very complicated topic and it's definitely out of scope for this blog post. If you want to understand the design of generics and the complexity of the problem better, I recommend reading an amazing article written by the author of generics in .NET, Don Syme: Design and Implementation of Generics for the .NET Common Language Runtime. If you want to understand the current state of affairs, please see Pro .NET Performance or a very good article by Alexandr Nikitin, .NET Generics under the hood.

To test this assumption, let's run the same benchmarks with Node defined as a value type:

                      Method |       Mean |    StdDev |  Gen 0 | Allocated |
—————————- |———– |———- |——- |———- |
     ActivatorCreateInstance | 86.4298 ns | 2.5527 ns | 0.0005 |      12 B |
            FuncBasedFactory |  4.7406 ns | 0.0254 ns |      – |       0 B |
FastActivatorCreateInstance |  4.3134 ns | 0.0159 ns |      – |       0 B |
          CompiledExpression |  3.1534 ns | 0.0210 ns |      – |       0 B |

As we can see, the current solution incurs no performance penalty when structs are involved.

To solve the issue for reference types, we can avoid the additional level of indirection by moving the nested FastActivatorImpl<T> out of the façade FastActivator type and using it directly:

public static class FastActivator<T> where T : new()
{
    /// <summary>
    /// Extremely fast generic factory method that returns an instance
    /// of the type <typeparam name="T"/>.
    /// </summary>
    public static readonly Func<T> Create =
        DynamicModuleLambdaCompiler.GenerateFactory<T>();
}
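
Usage then becomes a direct invocation of the compiled delegate stored in a static field (again a minimal sketch, assuming Node is a class with a parameterless constructor):

// No façade method in between: the generated delegate is invoked directly.
var node = FastActivator<Node>.Create();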

And here are the last results when the FastActivator<T> is introduced and used directly:

                  Method |       Mean |    StdDev |  Gen 0 |
———————— |———– |———- |——- |
ActivatorCreateInstance | 95.0161 ns | 1.0861 ns | 0.0005 |
        FuncBasedFactory |  6.5741 ns | 0.0608 ns | 0.0034 |
  FastActivator_T_Create |  5.1715 ns | 0.0466 ns | 0.0034 |

 

As you can see, we've achieved the goal and created a generic factory method with the same performance characteristics as a plain delegate that instantiates a specific type!

Application-specific fix for the Activator.CreateInstance issue

The C# compiler uses “duck typing” for many language constructs. For example, LINQ syntax is pattern based: if the compiler is able to find Select, Where and other methods for a given variable (via extension methods or as instance methods) it will be able to compile queries using a query comprehension syntax.

The same is true for some other language features, like the collection initialization syntax, async/await, the foreach loop and others. But not everyone knows that there is a large list of "well-known members" that the user may provide to change the runtime behavior. And one such well-known member is Activator.CreateInstance<T>.

This means that if the C# compiler is able to find another System.Activator type with a generic CreateInstance method, that method will be used instead of the one from mscorlib. This behavior is undocumented and I would not recommend using it in a production environment without clear evidence from a profiler. And even if a profiler shows some benefit, I would prefer using FastActivator explicitly instead of relying on this hack.

namespace System
{
    /// <summary>
    /// Dirty hack that allows using a fast implementation
    /// of the activator.
    /// </summary>
    public static class Activator
    {
        public static T CreateInstance<T>() where T : new()
        {
#if DEBUG
            Console.WriteLine("Fast Activator was called");
#endif
            return ActivatorImpl<T>.Create();
        }

        private static class ActivatorImpl<T> where T : new()
        {
            public static readonly Func<T> Create =
                DynamicModuleLambdaCompiler.GenerateFactory<T>();
        }
    }
}

Now, all methods that call new T() to create an instance of a type will use our custom implementation instead of relying on the default one.
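
For example, a generic helper like the following (a hypothetical sketch, compiled in a project that can see the custom System.Activator type) will route through the fast implementation:

public static class Factory
{
    public static T Make<T>() where T : new()
    {
        // The C# 6+ compiler lowers this call into Activator.CreateInstance<T>(),
        // which now resolves to the custom System.Activator defined above.
        return new T();
    }
}

// Factory.Make<Node>() now prints "Fast Activator was called" in DEBUG builds.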

Conclusion

This is a fairly long post, but we managed to cover many interesting details.

  • The new() constraint in the C# language is extremely leaky: in order to use it correctly and efficiently the developer should understand the implementation details of the compiler and the BCL.
  • We’ve figured out that the C# compiler calls Activator.CreateInstance<T> for creating an instance of a generic argument with a new() constraint (but remember, this is true only for C# 6+ compilers and the older versions emit the call only for reference types).
  • We’ve discovered the implications of the Activator.CreateInstance from a developer’s point of view in terms of correctness and performance.
  • We’ve come up with a few alternatives, starting with a very simple one that “unwraps” TargetInvocationException, to a fairly sophisticated solution based on code generation.
  • We’ve discussed a few interesting aspects of the generics implementation in the CLR and their impact on the performance (very minor, and likely negligible in the vast majority of cases).
  • And finally, we've come up with a solution that can solve the aforementioned issues with the new() constraint by using a custom System.Activator.CreateInstance<T> implementation.

And as a final conclusion, I won't suggest that anyone remove all calls to new T() in their codebase or define their own System.Activator class. You need to profile your application and make such decisions based only on real evidence.

But to avoid shooting yourself in the foot, you need to know what the compiler and the runtime do for new T() and other widely used language constructs and what the implications are from correctness and performance perspectives.

DirectXTex and DirectXMesh now support Direct3D 12


As part of my multi-year personal project of providing open source replacements for the deprecated D3DX library once found in the legacy DirectX SDK, two libraries are focused on content creation tools and build pipelines. DirectXTex handles loading image files, texture processing including format conversion, mipmap generation, block-compression, and writing out ‘fully cooked’ textures into DDS files. DirectXMesh provides geometry support such as computing normals and tangent-frames, transparent vertex cache optimization, and provides utilities for extracting/inserting vertex data in vertex buffers.

These libraries were originally written for DirectX 11, and it seems likely that most tools should continue to use DirectX 11 for the simplicity and ease of developer productivity. There are, however, cases where you want to use some of this functionality ‘in-engine’, so the January 2017 releases include DirectX 12 API support as well.

DirectXTex January 2017 release on GitHub

DirectXMesh January 2017 release on GitHub

To simplify supporting all the various platforms and Windows SDK combinations, the library continues to default to using DirectX 11. If you want to use DirectX 12, you need to explicitly include the required headers before including the library header:

#include <d3d12.h>
#include "DirectXTex.h"

You also need to link with the DirectXTex_Desktop_2015_Win10.vcxproj, DirectXTex_Windows10.vcxproj, or DirectXTex_XboxOneXDK_2015.vcxproj projects which build with both DirectX 11 and DirectX 12 support.

If you want to use both DirectX 11 and DirectX 12 in the same compilation module, then you need to explicitly include both:

#include <d3d11_1.h>
#include <d3d12.h>
#include "DirectXTex.h"

The story is similar for DirectXMesh, although in this case it's just to let you use the D3D12_ input layout enums and structures instead of the D3D11_ ones; the data itself is identical.

In case you missed it, DirectXTex was updated to support the HDR (RGBE) file format as a source for floating-point HDR texture data, as well as having ‘opt-in’ support for OpenEXR. For more details on how to enable OpenEXR, see this page.

Related: DirectX Tool Kit for DirectX 12

Is never good for you?


There's a famous New Yorker cartoon with an executive arranging a time to meet with a colleague. He says, "Is never good for you?" You know what? Never is great for me. I'm good with not wasting an hour I could have spent delivering value to customers; I'm good with not subtracting an hour from my life and our business.

As I walk the halls of buildings around Redmond (and when I visit friends at their businesses), I witness conference rooms filled with highly paid professionals wasting their precious time. Many are staring at their laptops or phones instead of engaging—desperately trying to stay productive or, at least, awake. Some would say that we need to ban devices in meetings. I say, why not skip a step and ban the meetings!

Sure, meeting face to face in person or online is invaluable, even indispensable. However, it’s not inevitable or inescapable. Most meetings that folks attend are with people they already know and understand—do they really need to meet, as often as they meet, for the duration they meet? No, never is good for me.

Why do you shamelessly waste my time?

More than a decade ago, I wrote “The day we met” (chapter 3) about running efficient meetings. Today, people still run terrible meetings—they don’t share a focused agenda in advance, they invite too many people, they schedule too much time, and they don’t share an actionable recap with everyone impacted. I’ve concluded the only sure way to reduce bad meetings is to reduce meetings.

Meetings are often incredibly evil and inefficient. They suck away life and the will to live. They break up blocks of time that you could use to be in flow, delivering value. They are meant to get people aligned and excited, but often achieve the opposite.

Yet some meetings are invaluable and indispensable. Which meetings are worthwhile and which are waste? Let’s review, and (hint) we won’t need a meeting for that.

You aren’t gonna need it

In my experience, the most common meetings are standups, peer reviews (including decision meetings), status meetings, and staff meetings (including one-on-ones and morale events). Two of these shouldn’t exist, and the other two should last half as long.

  • Daily standup meetings are invaluable. They give your team a chance to adjust to changes, swarm to blocking issues, and reprioritize. They’re also typically twice as long as necessary. Update your board in advance, defer the design discussions until after the standup, and stick to issues and prioritization.
  • Peer review and decision meetings are an enormous waste of time and money. Instead, send the docs to reviewers; get clarity on feedback via email, IM, or drop-in; resolve the issues; and share the result. Unfortunately, people don’t read documents and don’t provide feedback in a timely fashion, so meetings are used as a forcing function. I get it, but desperation is a poor excuse for wasting everyone’s time. See the next section for a better solution.
  • Status meetings are worse than peer review meetings—there's no reason and no excuse for holding these. Put the necessary status online and/or in email, and move forward. And don't confuse status meetings with Shiproom (aka, war room, box triage, and triage). Shiproom is a standup meeting for leaders, and like standup, its length should be cut in half.
  • Staff meetings, one-on-ones, and morale events are invaluable. They encourage co-workers to understand each other, work out issues, and drive team culture and alignment. They too are often twice as long as necessary (aside from morale events, 30 minutes is sufficient). Have an agenda, enjoy the time, and then get back to work.

Eric Aside

Actually, there are two kinds of peer review and decision meetings. The first kind is about understanding each other's viewpoint—the decision makers and the context behind the situation. These meetings are valuable and, ideally, should be done in person to best learn from each other. The second kind is about working through the document or decision. These meetings should cease.

You want it when?

Peer review and decision meetings are often used as forcing functions. Attendees must draft, read, and review the documents in advance to avoid appearing unprepared at the meeting. It’s an effective strategy that I’ve used and seen used incessantly over the years. Unfortunately, it’s also a crutch that enables slackers to avoid prioritizing, ignore communication, and waste everyone else’s time. It’s unacceptable.

You don’t need a forcing function if your co-workers are responsible and responsive, treat timely feedback and communication as essential to their business, and think of their work as a business instead of an entitlement. You know—if they are professionals.

Wasting everyone’s time with meetings only enables slackers to get by without addressing the root issue. Instead, insist that people be responsive. Provide a deadline for feedback, and make it clear that no feedback means acquiescence and complaining later admits incompetence. Set the standard, and hold yourself and others to it.

Sure, there are exceptions. Sometimes approvers are above your paygrade and don’t acquiesce to your terms, and sometimes the urgent trumps the important. However, those are exceptions, not cowardly reasons to support slackers and punish professionals.

Eric Aside

I.M. Wright is being a bit harsh here, but that doesn’t mean he’s wrong. When working across groups, being flexible at first and setting clear expectations over time can effectively reduce your reliance on meetings.

What good are you?

Why are standups, Shiproom, and staff meetings useful, but other meetings wasteful? Because meetings are essentially social, interpersonal experiences. They create connection between people by literally bringing them together.

You don’t need a meeting to review a document or make a decision, but you do need a meeting to understand each other’s goals and concerns. You don’t need a meeting to share status, but you do need a meeting to build trust, establish team culture, and drive alignment.

Other good reasons to bring people together are to generate ideas, resolve conflicts, and learn from one another (like retrospectives). The farther people are apart in their ideas and mutual empathy, the closer you must bring them together. The closest you can get is an in-person one-on-one meeting (good for conflict resolution), and the furthest is a conference call (useful to gain shared understanding).

All other meetings are a waste of time, and banning devices won’t make them better.

Eric Aside

I don’t ban device use at my meetings. Instead, I use them to assess focus and engagement. If attendees who should be engaged are not, the problem is with the meeting, not the people.

Stop wasting my time

I hate unnecessary and inefficient meetings—stop wasting my time and Microsoft’s money. Cancel peer review, decision, and status meetings—hold people accountable instead. Shorten standup, Shiproom, and staff meetings—have an agenda, and if the meetings don’t need to be daily, make them biweekly.

As for other worthwhile meetings, like requirements gathering, brainstorming, and conflict resolution, have a focused agenda, invite only the people necessary, keep the meeting short, and share the actionable outcome with everyone impacted.

Consider blocking out time for meetings each day, like from 10 a.m. to noon, and then leave the rest of the day open for teamwork and flow. (This assumes your feature teams sit close together so impromptu communication is unencumbered.) Every minute you save is a minute that can be spent creating more value for customers and more satisfaction for co-workers. That is time well spent.

Eric Aside

If your team has blocked-out time for meetings, how do you handle meeting requests from other teams that fall outside your meeting hours? Redirect those requests to your management. It’s their job to keep you productive, and their calendars are already ruined with meetings anyway.

SharePoint WorkflowQuotaExceededException caused by ServiceBus deferred messages


The problem:

SharePoint 2013 with Workflow Manager 1.0 stopped working, with the following Microsoft.Workflow.Client.WorkflowQuotaExceededException failure in the ULS log:

w3wp.exe         0x4AFC SharePoint Server              Workflow Services               Exception              Microsoft.Workflow.Client.WorkflowQuotaExceededException: Cannot start more instances because the size of the topic has exceeded the quota limit. HTTP headers received from the server – ActivityId: 895a6d56-92c1-4ce7-bb17-33b40607c372. NodeId: . Scope: /SharePoint/default/a67eb351-30b5-43f8-9be4-1ed1faf6647a/87c9068b-6ccc-4033-9102-5d27c43a476a. Client ActivityId : d9deb79d-724d-10a7-6483-11ea093bdd49. —> System.Net.WebException: The remote server returned an error: (403) Forbidden.     at Microsoft.Workflow.Common.AsyncResult.End[TAsyncResult](IAsyncResult result)     at Microsoft.Workflow.Client.HttpGetResponseAsyncResult`1.End(IAsyncResult result)     at Microsoft.Workflow.Client.ClientHelpers.SendRequest[T](HttpWebRequest request, T content)     — End of inner exception stack trace —     at Microsoft.Workflow.Client.ClientHelpers.SendRequest[T](HttpWebRequest request, T content)     at Microsoft.Workflow.Client.WorkflowManager.StartInternal(String workflowName, WorkflowStartParameters startParameters)     at Microsoft.SharePoint.WorkflowServices.FabricWorkflowManagementClient.StartInstance(String serviceGroupName, String workflowName, String monitoringParam, String activationKey, IDictionary`2 payload)     at Microsoft.SharePoint.WorkflowServices.FabricWorkflowInstanceProvider.StartWorkflow(WorkflowSubscription subscription, IDictionary`2 payload) StackTrace:  at Microsoft.Office.Server.Native.dll: (sig=678c0f87-966f-4d99-9c94-b49e788d2672|2|microsoft.office.server.native.pdb, offset=131CE) at Microsoft.Office.Server.Native.dll: (offset=21BE5)            d9deb79d-724d-10a7-6483-11ea093bdd49

Analysis:

By running the following SQL query against the Service Bus message container database, we found more than 1 million records of deferred messages.

Select count(*) as totalDeferred from [MessageReferencesTable] where state = 2

This is the reason why WFM reports that the topic quota has been used up. Next, use the following query to find which workflow generates so many deferred messages:

SELECT T2.SessionId, T1.WorkflowName, T1.WorkflowStatus, T2.state, COUNT(*) AS total
FROM [WFInstanceManagementDB].[dbo].[Instances] T1
INNER JOIN [SBMessageContainer01].[dbo].[MessageReferencesTable] T2 ON T1.[SessionId] = T2.[SessionId]
GROUP BY T2.SessionId, T1.WorkflowName, T1.WorkflowStatus, T2.state
HAVING T2.state = 2
ORDER BY total DESC

Solution:

  1. Undeploy the problematic workflow.
  2. Contact Microsoft support to clean up these deferred messages with our internal CleanupDeferredMessages tool.
  3. Make sure both WFM and SharePoint have been updated with the latest patch. Otherwise, deferred messages cannot be 100% avoided.

Best regards,

WenJun Zhang

Simple Saving and Investing Plan
