
Zero to full source control of a model-driven PowerApp using spkl


Most people who build applications have used some sort of source control system.  Source control has become the de facto way to keep track of changes as an application evolves.  I often find that people think they have to give up the well understood benefits of source control when adopting a no-code/low-code rapid application development platform.  Well, if you are creating a model-driven app using PowerApps and the Common Data Service, you don’t.  You can achieve the same granular level of visibility into changes in your application over time that you get with traditional source code.  In fact, I wrote a blog post and video walkthrough about this very topic well over 6 years ago:
https://blogs.msdn.microsoft.com/devkeydet/2012/09/19/crm-2011-visual-studio-and-source-control-of-non-code-customizations/

Back then we were still calling it XRM.  Heck, just a few weeks ago it was still called XRM by those who associate with the term.  It will likely be called XRM, out of habit, for some time still.  Nonetheless, moving forward, it’s PowerApps and specifically the model-driven type which require the Common Data Service today.  If that hasn’t quite sunk in yet, head over to the announcement:
https://powerapps.microsoft.com/en-us/blog/powerapps-spring-announce/

While just about everything in my video from 6+ years ago can be done today, there are community tools out there which make the process much more efficient, and therefore make the individual executing the process more productive.  That, of course, makes the whole idea more approachable.  The two community tools that I am most fond of for this are spkl and D365 Developer Extensions.  However, I am finding that spkl and the D365 Developer Extensions aren’t as commonly known in the community as perhaps they should be.  Based on some feedback I’ve recently received on the general topic, I decided to create a new video demonstrating how to go from zero to full source control of a Common Data Service based app using spkl and the Visual Studio IDE.  If there is enough interest, I can record a similar video for the D365 Developer Extensions.  The general flow of the video would be the same, but the mechanics of using each extension within Visual Studio will be a bit different.

HTH

@devkeydet


LCS (April 2018 – release 1) release notes


The Microsoft Dynamics Lifecycle Services (LCS) team is happy to announce the immediate availability of the release notes for LCS (April 2018, release 1).

 

Download critical X++ updates

As of our earlier release (February 2018, release 2), critical X++ updates can only be downloaded from online production environments. We had planned to remove the Critical X++ updates tile from pre-production and on-premises environments, because the tile always shows zero (0) in those environments.

However, we've decided to keep the Critical X++ updates tile for online pre-production environments, and only remove it from on-premises environments.

Therefore, for all online environments of Microsoft Dynamics 365 for Finance and Operations, the critical X++ updates will be available in production environments, as well as pre-production environments.

The critical X++ updates are based on the telemetry data in your production environment. The count of the critical X++ updates in each of your environments is based on whether that environment has deployed the recommended critical X++ updates.

For example, if your production environment indicates 5 critical X++ updates, the Critical X++ updates tile in your production environment will show 5. Your pre-production environment, for example a sandbox environment, will also show 5 if none of the critical X++ updates has been deployed there. If your sandbox environment has deployed 1 of the critical X++ updates, the Critical X++ updates tile will show 4.

 

Updates to project creation flows in LCS

Starting with this release, any new customer who has purchased licenses for Dynamics 365 for Finance and Operations will not get an implementation project if they have not purchased the minimum licenses needed to use Finance and Operations. As per the licensing guide, a customer needs a minimum of 20 licenses (Enterprise or equivalent) to use Finance and Operations. Going forward, LCS will enforce this restriction during project creation. Only customers who have purchased 20 licenses (Enterprise or equivalent) are allowed to get an implementation project in LCS and deploy environments.

 

Updates to Environment monitoring view in LCS

In this release of LCS, we have updated the User activity section on the Environment monitoring page to improve the user experience as well as the performance of loading this page. The user will now need to select a user session in the User sessions grid to see a detailed list of interactions on the grid on the left. The following screenshot shows the new user interface.

[AI Edition] A Message from the Technical Track Owner


Hello, everyone.

This is 畠山 大有, the owner of the AI track.

 

AI Track Overview

AI (Artificial Intelligence) and machine learning have largely moved from the "learning" phase of "What is machine learning?" to the phase of applying AI in real business. Individual technologies keep improving at a remarkable pace, not only through the evolution of deep neural networks but also through hardware advances at the chipset level such as GPUs and FPGAs. Efforts to automate the machine learning workflow itself, typified by the term "AutoML", have also started to appear. The AI track covers the latest machine learning technology as of May 2018 and the many insights gained from real business case studies, so that you can find ways to put AI to work in your business starting tomorrow.

■ A message from the technical track owner!

The evolution of AI technology is truly fast, and its use in the field has spread considerably over the past year. Microsoft alone has more than several hundred case studies that can be shared publicly.

de:code is a festival for developers. We have prepared sessions that help developers who solve everyday problems incorporate AI and machine learning into their systems, and you can hear first-hand accounts from people who have carried out such projects in the field. How do the Azure Machine Learning Service that appeared last year, Azure Batch AI, and the evolved Cognitive Services help drive machine learning projects, which require a different approach from ordinary system development? We focus in particular on how the analysis of images, video and audio, which deep learning has made much easier to process, and natural language analysis have solved business problems.

 

--------------------------------------------------------------------------

  • Official website: here
  • Early-bird discount registration: here
    • Early-bird discount deadline: Tuesday, April 24, 2018
  • Session information: here
  • Social media

--------------------------------------------------------------------------

Collecting a SQL Server Process Dump




神谷 雅紀
Escalation Engineer


This article describes how to collect a process dump of the SQL Server database engine process (process name sqlservr.exe). By using sqldumper.exe, which ships with the SQL Server product, you can collect a dump without installing additional tools such as a debugger, and without including database data in the dump.


Step 1

In "How to use Powershell script to generate a dump file", click Code details and save the PowerShell source code as a file named SQLDumpHelper.ps1 on the server where the SQL Server instance you want to capture a dump of is running.

How to use the Sqldumper.exe utility to generate a dump file in SQL Server
https://support.microsoft.com/en-us/help/917825/how-to-use-the-sqldumper-exe-utility-to-generate-a-dump-file-in-sql-se


Step 2

Press Windows key + Q and type Windows Powershell. Right-click Windows PowerShell in the search results and click "Run as administrator".


Step 3

In the PowerShell console, use the cd command to change to the folder where you saved SQLDumpHelper.ps1.

Example: if you saved the file as C:\temp\SQLDumpHelper.ps1
cd C:\temp


Step 4

In the PowerShell console, type .\SQLDumpHelper.ps1 and press Enter.


Step 5

Enter the PID of the SQL Server process for which you want to capture a dump.




Step 6

Enter the type of dump to collect. The memory regions included in each dump type, and rough estimates of the collection time and file size, are listed below.

* The collection times and file sizes are rough estimates only. They vary greatly depending on factors such as the performance of the destination disk and the amount of data (memory size) to be written.

1) Mini-dump
  • Memory included in the dump file: Thread stacks only.
  • Approximate collection time: Depends on the number of threads; typically a few seconds to a few tens of seconds.
  • Approximate dump file size: Depends on the number of threads; typically a few hundred KB to a few MB.

2) Mini-dump with referenced memory
  • Memory included in the dump file: Thread stacks and only the memory they reference (the SQL Server default).
  • Approximate collection time: Depends on the number of threads; typically a few seconds to around ten seconds.
  • Approximate dump file size: Depends on the number of threads; typically a few hundred KB to a few tens of MB.

3) Filtered dump
  • Memory included in the dump file: Committed memory, excluding the memory that holds database data.
  • Approximate collection time: Depends on the amount of allocated memory; typically a few tens of seconds to a few minutes.
  • Approximate dump file size: Depends on the amount of allocated memory; roughly the Memory Manager\Total Server Memory performance counter minus Buffer Manager\Total pages.

4) Full dump
  • Memory included in the dump file: All committed memory.
  • Approximate collection time: Depends on the amount of allocated memory; typically a few minutes to a few tens of minutes.
  • Approximate dump file size: Depends on the amount of allocated memory; roughly the Memory Manager\Total Server Memory performance counter plus a few hundred MB.




Step 7

Specify the output folder.




Step 8

To capture dumps multiple times, enter Y and then specify the number of captures and the interval between them.



When the dump is generated successfully, the console displays the name of the generated dump file, and the dump file SQLDmprNNNN.mdmp (where NNNN is a number) and the log file SQLDUMPER_ERRORLOG.log are created in the folder specified in step 7.

How to Change the SQL Server Product Key


 

高原 伸城

Support Escalation Engineer

 

Hello, everyone. This is 高原 from the BI Data Platform support team.

This time I would like to introduce the procedure for changing the SQL Server product key.

 

When you install SQL Server, you must enter a product key on the Product Key page of setup.


However, after the SQL Server installation is complete, there may be situations where you want to change the product key, for example because you accidentally used a product key that was intended for another environment.

SQL Server allows an edition upgrade to the same edition (for example, from SQL Server 2017 Standard Edition to SQL Server 2017 Standard Edition).

Therefore, you can change the product key by performing an edition upgrade.

For the detailed edition upgrade procedure, see the following documentation.

Upgrade to a Different Edition of SQL Server (Setup)

 

[Additional Notes]

There is no way to check which product key was used to install SQL Server in an existing environment. Therefore, if you suspect that the wrong product key may have been used, consider performing an edition upgrade. In general, an edition upgrade only updates the edition information held internally by SQL Server, so the operation does not take a long time.
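If you simply want to confirm the edition and version that an instance currently reports, for example before and after the edition upgrade, a simple query such as the following can be used (a minimal example for illustration):

SELECT SERVERPROPERTY('Edition') AS Edition,
       SERVERPROPERTY('ProductLevel') AS ProductLevel,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;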

If you are a Volume Licensing administrator, you can sign in to the VLSC (Volume Licensing Service Center) site to view the product keys available to you.

 

[References]

Supported Version and Edition Upgrades

Editions and Supported Features of SQL Server 2016

sys.dm_db_persisted_sku_features (Transact-SQL)

 

 

* The content of this blog is current as of April 2018.

[FAQ] Frequently Asked Questions (Azure Database for MySQL)


 

高原 伸城

Support Escalation Engineer

 

Hello, everyone. This is 高原 from the BI Data Platform support team.

This time I would like to introduce some frequently asked questions about Azure Database for MySQL.

 

Q1) The service reached GA (General Availability) in April 2018. Is any update work required for databases created during the preview period?

A1) Databases created during the preview period can be used as they are, without any update work.

 

Q2) Is it possible to perform a failover manually?

A2) No. However, if a failure is detected on the node where your Azure Database for MySQL database is located, a failover is performed automatically.

High availability concepts in Azure Database for MySQL

+ High availability

 

Q3) Is it possible to connect from an Azure virtual machine through a virtual network service endpoint?

A3) At this time, it is not possible to connect to Azure Database for MySQL through a virtual network service endpoint.

Virtual network service endpoints

 

* The content of this blog is current as of April 2018.

[FAQ] Frequently Asked Questions (Azure Database for PostgreSQL)



高原 伸城

Support Escalation Engineer


Hello, everyone. This is 高原 from the BI Data Platform support team.

This time I would like to introduce some frequently asked questions about Azure Database for PostgreSQL.


Q1) The service reached GA (General Availability) in April 2018. Is any update work required for databases created during the preview period?

A1) Databases created during the preview period can be used as they are, without any update work.


Q2) Is it possible to perform a failover manually?

A2) No. However, if a failure is detected on the node where your Azure Database for PostgreSQL database is located, a failover is performed automatically.

High availability concepts in Azure Database for PostgreSQL

+ High availability


Q3) Is it possible to connect from an Azure virtual machine through a virtual network service endpoint?

A3) At this time, it is not possible to connect to Azure Database for PostgreSQL through a virtual network service endpoint.

Virtual network service endpoints



* The content of this blog is current as of April 2018.


CodeTalk: Making IDEs more accessible for developers with visual impairment


The graphically dense interface of integrated development environments (IDEs) makes them immensely powerful for developers across the world. Visual cues are used extensively in these environments to help developers get more information about their code with little more than a quick glance. However, these visual cues are inaccessible to many developers with visual impairments.

According to the World Health Organization, 285 million people cannot read all the content on a regular screen due to some form of visual impairment. Of these, 39 million are blind and cannot access any visual information on screens.

Bringing technology and cutting-edge tools to people with disabilities is part of our mission to empower everyone, everywhere. CodeTalk helps developers with visual impairment to get involved in creating and developing new technologies with the power of code. We are trying to make development platforms more accessible to everyone.

Improving the experience and productivity for developers with visual impairment

IDEs are designed to boost productivity. Developers on the platform have access to a range of tools that help them monitor the code, rectify errors, debug, and modify their programs. Bright colors indicate code syntax, squiggly red lines underscore errors in the code, and multiple windows are required to run a debugging process. Graphical models are used to represent the code structure, the performance of the programme and bottlenecks in the architecture.

Although developers with low vision and severe visual impairments can use IDEs with the aid of screen readers, the experience is incomplete since screen readers miss out on these essential visual cues. A screen reader, for example, cannot describe a bar chart or graphical model on the platform. This is a problem even for coders with visual impairment who use a refreshable braille display. Not only does this impede productivity, but it also makes the experience of creating something on the platform needlessly frustrating.

Project CodeTalk

Project CodeTalk is our ongoing effort to address these issues. By rethinking the design of our IDEs and defining accessibility guidelines for them, we can make powerful development platforms more accessible to developers.

For example, extensions developed for IDEs such as Visual Studio and VS Code remove three key barriers to usage:

i) Discoverability

Converting graphical user interfaces (GUIs) into audio user interfaces (AUIs) helps developers with visual impairment use platforms on par with everyone else. Audio cues and specific non-verbal sounds can alert the user about bugs, errors and syntax issues. Users can then access a list of errors with a single keystroke. Audio bridges the accessibility gap and allows users to discover more information about their code without the need for visual cues.

Talk Point

Debugging the code is one of the crucial aspects of IDEs that are made accessible with ‘Talk Points’. Talk Points can announce the result of an expression when a certain line of code is reached and help the developer choose whether to continue or stop the execution of the program. Talk Points can be based on speech, where specific statements are spoken when a Talk Point is hit. They can also be programmed to play a tone to let users know whether a certain block has been executed or not. Users with visual impairment can also modify the Talk Points to read out expressions in the execution context so that developers can get contextual information without changing the underlying code.

ii) Glanceability

Tools built into the CodeTalk extension can help users get quick information about the code structure without the need for visual representations. The file structure, code summaries, and function lists can be described through the screen reader to help developers get a holistic view of the project. Code Summary allows developers to directly navigate through the code construct with a handy summary of the construct in an accessible window. Function Lists provide a list of active functions that are easily navigable. These tools can also help users detect the current location of the cursor through a list of blocks, functions, classes and namespaces in the code structure.

Code Summary

Function Lists

iii) Navigability

CodeTalk allows users to move to context and place the cursor at specific points in the code structure with a simple keystroke. For example, a keystroke can help the user bring the cursor to the start or end point of a block of code. Similarly, keystrokes can help users locate and skip comments on their document.  

These enhanced features, such as audio cues and additional feedback on their code have received positive feedback from a group of developers with visual impairment.

Enabling the shared goal of wider accessibility

Accessibility is centred around end-user needs. At Microsoft, our commitment to accessibility extends to our entire product spectrum as we endeavour to deliver great experiences to people with disabilities – be they end-users or developers. By incorporating accessibility considerations in the development environment, CodeTalk aims to systematically address barriers for developers with visual impairment. We’re committed to investing more and more efforts in this evolving medium to enable the shared goal of wider accessibility – not just for the users of software but the creators themselves.

For more information on CodeTalk and its features, or to download the latest VSIX installer, please visit: https://microsoft.github.io/CodeTalk/

With contributions from Suresh Parthasarathy, Gopal Srinivasa and Priyan Vaithilingam.

Azure App Service: Using a Custom Domain with Azure Traffic Manager


Azure Traffic Manager provides a default hostname when a Traffic Manager profile is created, in the format <TrafficManagerName>.trafficmanager.net. When we browse this URL, the Domain Name System (DNS) directs the request to the most appropriate endpoint.

If you decide to provide an alias (custom URL) for the Azure Traffic Manager, which in turn routes to Azure App Services, then you have landed at the right place.

Overview of the steps required to complete this configuration:

  • Configure Azure Traffic Manager with Endpoints as Azure App Service.
  • Update the DNS records to point to the Azure Traffic Manager.
  • Map the custom domains to the Azure Traffic Manager.
  • Verification of Custom Domain configuration.

CONFIGURING TRAFFIC MANAGER

  1. Purchase an Azure Traffic manager profile.


     2. Decide the method of routing:

  • Priority based
  • Weighted
  • Performance
  • Geographic

For more details about the types of routing, please refer to the article -> Traffic Manager routing methods.

  3. Once you have decided the routing method, configure the endpoints for the traffic manager. (Azure App Service in this scenario)
  • In the Azure Traffic manager profile blade, in Settings section, click Endpoints.
  • In the Endpoint blade, click Add, insert the necessary details.
  • Select the Target resource type as App Service(slots), choose the Target resource and fill in the necessary details depending on the type of routing.

Note: Only WebApps with pricing tier Standard or above can use Traffic Manager


4. Repeat the above step for all the endpoints.

UPDATING DNS RECORDS

Update DNS registry with a CNAME record pointing to the Azure Traffic Manager.


Please use this as a reference to configure DNS record:

Host is the custom domain that you want to configure for your Traffic Manager. It points to your Traffic Manager hostname, <TrafficManagerName>.trafficmanager.net.

Note: Ensure that the TTL is set to the minimum value. One might wonder why everyone mentions this: the TTL determines when the DNS cache is cleared, and in the case of the above CNAME record the cache is cleared every 600 seconds. Our intention here is to ensure that the DNS changes propagate as soon as possible.

MAPPING CUSTOM DOMAIN

Once the DNS propagation is completed the next step will be to configure the custom domain to the Traffic Manager.

  • Go to the Azure App services blade, in the Settings section, click Custom domains.
  • Click Add hostname, enter the custom domain, and click Validate. You will see an option to configure the custom domain either to the Azure App Service itself or to the Azure Traffic Manager. Go ahead and select the Traffic Manager to which you have added this App Service as an endpoint.


Note: You will now be able to see the custom domain under the HOSTNAMES ASSIGNED TO SITE.

Repeat the above step for all the Endpoints configured to the Azure Traffic Manager and you will be good to go.

VERIFICATION OF DNS RECORDS

To verify whether the configuration was successful, set up test cases and verify that the Traffic Manager works for all scenarios. One way to check whether the custom domain is mapped to the Azure Web App endpoints is to use Digwebinterface.


Your WebApps can now be accessed over the Traffic Manager. Ensure that you use Traffic Manager to the best of its capabilities.

IoT: live crunching of data into Azure SQL DB, Cosmos DB and Power BI with Azure Stream Analytics


I am continuing to get acquainted with IoT in Azure, and today I want to continuously crunch the raw data coming from IoT Hub: filter it a bit, aggregate it a bit and, above all, send it to long-term storage in Azure SQL DB, Azure Cosmos DB and Azure Blob Storage, as well as to real-time visualization with Power BI. Sounds complicated? In reality it was easy.

Why Stream Analytics

You will usually need to do something with IoT data. But why not do it right at the point of ingestion? Or, conversely, why not leave it until after the data has been stored in a database?

Perhaps you have sensors that send temperature in different units (one in Celsius, another in Fahrenheit), on a different scale (one in millions, another in thousands), in a different format (one in Avro, another in JSON where the temperature field is called temperature, a third also in JSON but calling it temp) or at a different interval (one every 10 seconds, another every minute). You therefore need to transform the data, and that is real work with a non-negligible processing cost.

The second aspect may be the need to react in real time. It is always great to reflect thoroughly on stored data with twenty years of history and run sophisticated algorithms, but sometimes it is more important to have the information quickly, even at the cost of it naturally being less precise. For example, we may need to prevent an explosion in over-pressurized pipes or reject a suspicious payment transaction.

The third thing to consider is whether the raw data is needlessly detailed for our purposes. Maybe we are looking for aggregated views with hourly trends and analytics built on top of a relational structure, not a need to examine every single second of data separately. Or the device sends data from several sensors and, for now, we are only interested in some of them. It is fine to store the raw data as cheaply as possible, for example in Blob storage, in case we ever want to look into the details, but minute-level views are enough for the relational database.

All three situations, I think, show why Stream Analytics matters. We do not want to put data-processing functions directly into the ingestion system, because that would dramatically affect its scalability. IoT Hub (just like Event Hub) must be simple and incredibly scalable, able to accept anything and act as a buffer for further processing; if it ran more complex code during ingestion, it would not keep up with incoming data, and timeouts would cause data loss, broken communication with devices or long waits leading to higher power consumption on the devices. So we want to separate processing from ingestion. The strategy of simply getting the data into a database in some trivial way and only then transforming it would not be the most efficient either. We would lose the ability to react as close to real time as possible (instead of predicting a factory explosion as the data streams in, we would wait for it to be stored and then investigate it with SELECTs), and we could also needlessly burden the target system with data we do not actually need (imagine how much relational DB capacity we would waste if we stored the raw data there, only then built tables of minute-level aggregations on top of it, and then exported the raw data to Blob storage).
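To make this concrete, here is a rough sketch of what such a reduction could look like as a Stream Analytics query. The input alias iothub-input, the output alias sql-output and the temperature and humidity field names are illustrative assumptions; the query turns the raw messages into one-minute averages per device.

SELECT
    IoTHub.ConnectionDeviceId AS deviceId,
    System.Timestamp AS windowEnd,        -- end of the one-minute tumbling window
    AVG(temperature) AS avgTemperature,
    AVG(humidity) AS avgHumidity
INTO [sql-output]
FROM [iothub-input] TIMESTAMP BY EventEnqueuedUtcTime
GROUP BY IoTHub.ConnectionDeviceId, TumblingWindow(minute, 1)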

My first query and offloading to Blob storage

Let's start by creating an Azure Stream Analytics job.

The basis for scaling performance is the number of streaming units. A very interesting option, currently in preview, is the ability to use the same technology outside the cloud, on an IoT gateway. This relates to the Intelligent Edge strategy, where Microsoft offers Azure Machine Learning, Azure Stream Analytics or Azure Functions in a form that can run on such devices.

Next we add an input. As already mentioned, in my case it will be Azure IoT Hub; Azure Event Hub is also supported (interesting for one-way scenarios, such as collecting clickstream and other events from your web application in order to analyze user behavior in real time and keep the user on the page as long as possible), as is Blob Storage.

My IoT DevKit from the previous part of this series is connected and sending data. To be able to fine-tune the query, we can upload a sample of data in the GUI or, even better, capture one from the data that is arriving right now.

The Stream Analytics language is very similar to SQL, which from my point of view is a perfect scenario. Syntactically I can treat the data stream like a database, including things such as WHERE, GROUP BY or JOIN, so I do not have to learn much new terminology. I will start by taking everything from the input and sending it to the output. I click Test to see what it does.

Excellent. What if, to keep things simple, I wanted to work only with humidity in my query?

Now let's have this output saved to Blob storage, so we add a new output.

For me the most convenient option is to create a JSON file in the storage account, but you can also choose CSV or Avro.

I have my output, so I will adjust my query. I will send humidity and temperature, and now into the blob.
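A minimal version of that query could look like the following. The input alias iothub-input and the blob output alias blob-output are illustrative assumptions; humidity and temperature are the fields sent by the DevKit.

SELECT
    humidity,
    temperature,
    EventEnqueuedUtcTime
INTO [blob-output]
FROM [iothub-input]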

Everything is set up, so let's start the Stream Analytics job.

After a while I take a look at my Azure Blob Storage.

And I find my data there.

And that's it. Amazingly simple.

Continue reading

 

Custom bi-directional Microsoft Dynamics 365 Integration for Lookup Tables: a practical example.


Integrating a custom supplemental table (a Dynamics 365 for Sales lookup table) in Dynamics NAV is quite a typical request in integration projects. The scope of this blog post is to provide high-level guidance on the tasks you need to accomplish to achieve this goal.

Attached at the end of the post, you'll find a full object sample, based on Microsoft Dynamics NAV Cronus IT Cumulative Update 16.

Generic Inquiry:

  • Request the capability of categorizing each Dynamics NAV Customer and/or Dynamics 365 Account per a defined custom Group Segment
  • Dynamics NAV Customers and/or Dynamics 365 for Sales Accounts must be synchronized either manually or automagically
  • Group Segment could be created either with Dynamics NAV or Dynamics 365 for Sales
  • Group Segment must be synchronized either manually or automagically

Development tasks:

  1. Create a custom Microsoft Dynamics 365 for Sales Solution that contains:
    1. New custom entity: Group Segment

      With the following fields

      (see below a typical view)

    2. Lookup to Group Segment in the Account entity (the one selected below)
  2. Create a Dynamics NAV supplemental table: Group Segment
    1. Create a Group Segment Card and List bounded to it
    2. Change Customer Table to add the relevant Group Segment Code and Description
    3. Change Customer List and Card for display and data entries
  3. Use the New-NAVCrmTable PowerShell cmdlet to:
    1. Regenerate CRM Account table
    2. Generate CRM Group Segment table
      NOTE: Both tables must be added in the same PowerShell command to respect the dependencies between the two tables and extract the appropriate lookup fields
  4. Import the CRM Group Segment table.
    Below is how it should look, with the relevant fields highlighted.
  5. Export CRM Account table in TXT format, merge only the new fields needed for Group Segment display and handling. Import back and compile them.
    Below is a snippet of the lookup field just added, and its properties:
  6. Create a CRM Group Segment List in Dynamics NAV bounded to the CRM Group Segment table.
  7. Modify the relevant Microsoft Dynamics NAV objects to orchestrate the following
    1. Add Group Segment as valid Integration Record for Dynamics 365 for Sales Integration
      Change Codeunit 5150 “Integration Record”
    2. For a typical manual synchronization within a card page, change Codeunit 5330 “CRM Integration Management”
    3. Handle the lookup to the CRMTable by changing Codeunit 5332 “Lookup CRM Tables” function Lookup and creating a brand new one: LookupCRMGroupSegment
    4. Set up Defaults to automate the creation of Table/Field mappings and the automagic enablement of the Job Queue. Change the following functions in Codeunit 5334 “CRM Setup Defaults” and create a new one: RestGroupSegmentMapping. This step helps generate the relevant Integration Map and the corresponding Field mappings.
    5. And now, the last but most IMPORTANT step to understand.
      Since the lookup field is handled through a GUID relation within the Dynamics 365 for Sales entity and a Code field in Dynamics NAV, this bidirectional dichotomy has to be handled through a very specific event when transferring Customer / Account fields back and forth.
      More in depth, the relevant event is OnAfterTransferRecordFields in Codeunit 5341 “CRM Int. Table. Subscriber”, for which we need to add some code and 2 functions to handle lookup updates in both systems.

In the end, you will see that Dynamics NAV and Dynamics 365 for Sales group segments can be integrated bi-directionally, and the Lookup / FlowField added to the Account / Customer also flows back and forth beautifully.

Remember: a mantra for system integrators reads: “Integrations are never as simple as they seem.”

Download: GroupSegment.POC.txt

 

These postings are provided "AS IS" with no warranties and confer no rights. You assume all risk for your use.
Duilio Tacconi (dtacconi)
Microsoft Dynamics Italy
Microsoft Customer Service and Support (CSS) EMEA

Very Large Database Migration to Azure – Recommendations & Guidance to Partners


SAP systems moved to the Azure cloud now commonly include large multinational "single global instance" systems that are many times larger than the first customer systems deployed when the Azure platform was initially certified for SAP workloads some years ago.

Very Large Databases (VLDB) are now commonly moved to Azure. Database sizes over 20TB require some additional techniques and procedures to achieve a migration from on-premises to Azure within an acceptable downtime and a low risk.

The diagram below shows a VLDB migration with SQL Server as the target DBMS. It is assumed the source systems are either Oracle or DB2.

A future blog will cover migration to HANA (DMO) running on Azure. Many of the concepts explained in this blog are applicable to HANA migrations.

This blog does not replace the existing SAP System Copy guide and SAP Notes which should be reviewed and followed.

High Level Overview

A fully optimized VLDB migration should achieve around 2TB per hour of migration throughput, or possibly more.

This means the data transfer component of a 20TB migration can be done in approximately 10 hours. Various post-processing and validation steps would then need to be performed.

In general with adequate time for preparation and testing almost any customer system of any size can be moved to Azure.

VLDB migrations require considerable skill, attention to detail and analysis. For example, the net impact of Table Splitting must be measured and analyzed. Splitting a large table into more than 50 parallel exports may considerably decrease the time taken to export a table, but too many table splits may result in drastically increased import times. Therefore the net impact of table splitting must be calculated and tested. An expert licensed OS/DB migration consultant will be familiar with the concepts and tools. This blog is intended to be a supplement that highlights some Azure-specific content for VLDB migrations.

This blog deals with Heterogeneous OS/DB Migration to Azure with SQL Server as the target database using tools such as R3load and Migmon. The steps performed here are not intended for Homogenous System Copies (a copy where the DBMS and Processor Architecture (Endian Order) stays the same). In general Homogeneous System Copies should have very low downtime regardless of DBMS size because log shipping can be used to synchronize a copy of the database in Azure.

A block diagram of a typical VLDB OS/DB migration and move to Azure is illustrated below. The key points illustrated below:

1. The current source OS/DB is often AIX, HPUX, Solaris or Linux and DB2 or Oracle

2. The target OS is either Windows, Suse 12.3, Redhat 7.x or Oracle Linux 7.x

3. The target DB is usually either SQL Server or Oracle 12.2

4. IBM pSeries, Solaris SPARC hardware and HP Superdome thread performance is drastically lower than low cost modern Intel commodity servers, therefore R3load is run on separate Intel servers

5. VMWare requires special tuning and configuration to achieve good, stable and predictable network performance. Typically physical servers are used as R3load server and not VMs

6. Commonly four export R3load servers are used, though there is no limit on the number of export servers. A typical configuration would be:

-Export Server #1 – dedicated to the largest 1-4 tables (depending on how skewed the data distribution is on the source database)

-Export Server #2 – dedicated to tables with table splits

-Export Server #3 – dedicated to tables with table splits

-Export Server #4 – all remaining tables

7. Export dump files are transferred from the local disk in the Intel based R3load server into Azure using AzCopy via public internet (this is typically faster than via ExpressRoute though not in all cases)

8. Control and sequencing of the Import is via the Signal File (SGN) that is automatically generated when all Export packages are completed. This allows for a semi-parallel Export/Import

9. Import to SQL Server or Oracle is structured similarly to the Export, leveraging four Import servers. These servers would be separate dedicated R3load servers with Accelerated Networking. It is recommended not to use the SAP application servers for this task

10. VLDB databases would typically use E64v3, m64 or m128 VMs with Premium Storage and Write Accelerator. The Transaction Log can be placed on the local SSD disk to speed up Transaction Log writes and remove the Transaction Log IOPS and IO bandwidth from the VM quota.  After the migration the Transaction Log should be placed onto persisted disk

Source System Optimizations

The following guidance should be followed for the Source Export of VLDB systems:

1. Purge Technical Tables and Unnecessary Data – review SAP Note 2388483 - How-To: Data Management for Technical Tables

2. Separating the R3load processes from the DBMS server is an essential step to maximize export performance

3. R3load should run on fast new Intel CPU. Do not run R3load on UNIX servers as the performance is very poor. 2-socket commodity Intel servers with 128GB RAM cost little and will save days or weeks of tuning/optimization or consulting time

4. High Speed Network ideally 10Gb with minimal network hops between the source DB server and the Intel R3load servers

5. It is recommended to use physical servers for the R3load export servers – virtualized R3load servers at some customer sites did not demonstrate good performance or reliability at extremely high network throughput (Note: a very experienced VMware engineer can configure VMware to perform well)

6. Sequence larger tables to the start of the Orderby.txt

7. Configure Semi-parallel Export/Import using Signal Files

8. Large exports will benefit from Unsorted Export on larger tables. It is important to review the net impact of Unsorted Exports, as importing unsorted exports into databases that have a clustered index on the primary key will be slower

9. Configure Jumbo Frames between the source DB server and the Intel R3load servers. See the "Network Upload Optimizations" section later

10. Adjust memory settings on the source database server to optimize for sequential read/export tasks (see SAP Note 936441 - Oracle settings for R3load based system copy)

Advanced Source System Optimizations

1. Oracle Row ID Table Splitting

SAP has released SAP Note 1043380, which contains a script that converts the WHERE clause in a WHR file to a ROW ID value. Alternatively, the latest versions of SAPInst will automatically generate ROW ID split WHR files if SWPM is configured for an Oracle-to-Oracle R3load migration. The STR and WHR files generated by SWPM are independent of OS/DB (as are all aspects of the OS/DB migration process).

The OSS note contains the statement "ROWID table splitting CANNOT be used if the target database is a non-Oracle database". Technically the R3load dump files are completely independent of database and operating system. There is one restriction, however: restarting a package during import is not possible on SQL Server. In this scenario the entire table will need to be dropped and all packages for the table restarted. It is always recommended to kill the R3load tasks for a specific split table, TRUNCATE the table and restart the entire import process if one split R3load aborts. The reason for this is that the recovery process built into R3load involves doing single row-by-row DELETE statements to remove the records loaded by the R3load process that aborted. This is extremely slow and will often cause blocking/locking situations on the database. Experience has shown it is faster to start the import of this specific table from the beginning; therefore the limitation mentioned in Note 1043380 is not a limitation at all.

ROW ID has a disadvantage that calculation of the splits must be done during downtime – see SAP Note 1043380.

2. Create multiple "clones" of the source database and export in parallel

One method to increase export performance is to export from multiple copies of the same database. Provided the underlying infrastructure such as server, network and storage is scalable this approach is linearly scalable. Exporting from two copies of the same database will be twice as fast, 4 copies will be 4 times as fast. Migration Monitor is configured to export on a select number of tables from each "clone" of the database. In the case below the export workload is distributed approximately 25% on each of the 4 DB servers.

-DB Server1 & Export Server #1 – dedicated to the largest 1-4 tables (depending on how skewed the data distribution is on the source database)

-DB Server2 & Export Server #2 – dedicated to tables with table splits

-DB Server3 & Export Server #3 – dedicated to tables with table splits

-DB Server4 & Export Server #4 – all remaining tables

Great care must be taken to ensure that the databases are exactly and precisely synchronized, otherwise data loss or data inconsistencies could occur. Provided the steps below are precisely followed, data integrity is provided.

This technique is simple and cheap with standard commodity Intel hardware but is also possible for customers running proprietary UNIX hardware. Substantial hardware resources are free towards the middle of an OS/DB migration project when Sandbox, Development, QAS, Training and DR systems have already moved to Azure. There is no strict requirement that the "clone" servers have identical hardware resources. So long as there is adequate CPU, RAM, disk and network performance the addition of each clone increases performance

If additional export performance is still required open an SAP incident in BC-DB-MSS for additional steps to boost export performance (very advanced consultants only)

Steps to implement a multiple parallel export:

1. Backup the primary database and restore onto "n" number of servers (where n = number of clones). In the case illustrated 3 is chosen making a total of 4 DB servers

2. Restore backup onto 3 servers

3. Establish log shipping from the Primary source DB server to 3 target "clone" servers

4. Monitor log shipping for several days and ensure log shipping is working reliably

5. At the start of downtime shutdown all SAP application servers except the PAS. Ensure all batch processing is stopped and all RFC traffic is stopped

6. In transaction SM02 enter text "Checkpoint PAS Running". This updates table TEMSG

7. Stop the Primary Application Server. SAP is now completely shutdown. No more write activity can occur in the source DB. Ensure that no non-SAP application is connected to the source DB (there never should be, but check for any non-SAP sessions at the DB level)

8. Run this query on the Primary DB server: SELECT EMTEXT FROM <schema>.TEMSG;

9. Run a native DBMS-level INSERT statement against <schema>.TEMSG that writes the text "CHECKPOINT R3LOAD EXPORT STOP dd:mm:yy hh:mm:ss" (the exact syntax depends on the source DBMS; insert into the EMTEXT column)

10. Halt automatic transaction log backups. Manually run one final transaction log backup on the Primary DB server. Ensure the log backup is copied to the clone servers

11. Restore the final transaction log backup on all 3 nodes

12. Recover the database on the 3 "clone" nodes

13. Run the following SELECT statement on *all* 4 nodes: SELECT EMTEXT FROM <schema>.TEMSG;

14. With a phone or camera, photograph the screen results of the SELECT statement for each of the 4 DB servers (the Primary and the 3 clones). Be sure to carefully include each hostname in the photo – these photographs are proof that the clone DBs and the primary are identical and contain the same data from the same point in time. Retain these photos and get the customer to sign off on the DB replication status

15. Start export_monitor.bat on each Intel R3load export server

16. Start the dump file copy to Azure process (either AzCopy or Robocopy)

17. Start import_monitor.bat on the R3load Azure VMs

Diagram showing existing Production DB server log shipping to "clone" databases. Each DB server has one or more Intel R3load servers

Network Upload Optimizations

Jumbo Frames are ethernet frames larger than the default 1500 bytes. Typical Jumbo Frame sizes are 9000 bytes. Increasing the frame size on the source DB server, all intermediate network devices such as switches and the Intel R3load servers reduces CPU consumption and increases network throughput. The Frame Size must be identical on all devices otherwise very resource intensive conversion will occur.

Additional networking features such as Receive Side Scaling (RSS) can be switched on or configured to distribute network processing across multiple processors. Running R3load servers on VMware has proven to make network tuning for Jumbo Frames and RSS more complex and is not recommended unless a very expert skill level is available.

R3load exports data from DBMS tables and compresses this raw format independent data in dump files. These dump files need to be uploaded into Azure and imported to the Target SQL Server database.

The performance of the copy and upload to Azure of these dump files is a critical component in the overall migration process.

There are two basic approaches for upload of R3load dump files:

1. Copy from on-premises R3load export servers to Azure Blob storage via the public Internet with AzCopy

On each of the R3load servers run a copy of AzCopy with this command line:

AzCopy /source:C:\ExportServer_1\Dumpfiles /dest:https://<storage_account>/ExportServer_1/Dumpfiles /destkey:xxxxxx /S /NC:xx /blobtype:page

The value for /NC: determines how many parallel sessions are used to transfer files. In general AzCopy will perform best with a larger number of smaller files and /NC values between 24-48. If a customer has a powerful server and very fast internet, this value can be increased. If it is increased too far, the connection to the R3load export server will be lost due to network saturation. Monitor the network throughput in Windows Task Manager. Copy throughput of over 1 Gigabit per second per R3load export server can be easily achieved. Copy throughput can be scaled up by having more R3load servers (4 are depicted in the diagram above).

A similar script will need to be run on the R3load Import servers in Azure to copy the files from Blob onto a file system that R3load can access.

2. Copy from on-premises R3load export servers to an Azure VM or blob storage via a dedicated ExpressRoute connection using AzCopy, Robocopy or similar tool

Robocopy C:\Export1\Dump1 \\az_imp1\Dump1 /MIR /XF *.SGN /R:20 /V /S /Z /J /MT:8 /MON:1 /TEE /UNILOG+:C:\Export1\Robo1.Log

The block diagram below illustrates 4 Intel R3load servers running R3load. In the background Robocopy is started uploading dump files. When entire split tables and packages are completed the SGN file is copied either manually or via a script. When the SGN file for a package arrives on the import R3load server this will trigger import for this package automatically

Note: Copying files over NFS or Windows SMB protocols is not as fast or robust as mechanisms such as AzCopy. It is recommended to test performance of both file upload techniques. It is recommended to notify Microsoft Support for VLDB migration projects as very high throughput network operations might be mis-identified as Denial of Service attacks.

Target System Optimizations

1. Use latest possible OS with latest patches

2. Use latest possible DB with latest patches

3. Use latest possible SAP Kernel with latest patches (eg. Upgrade from 7.45 kernel to 7.49 or 7.53)

4. Consider using the largest available Azure VM. The VM type can be lowered to a smaller VM after the Import process

5. Create multiple Transaction Log files with the first transaction log file on the local non-persistent SSD. Additional Transaction Log files can be created on P50 disks.  VLDB migrations could require more than 5TB of Transaction Log space. It is strongly recommended to ensure there is always a large amount of Transaction Log space free at all times (20% is a safe figure). Extending Transaction Log files during an Import is not recommended and will impact performance

6. SQL Server Max Degree of Parallelism should usually be set to 1. Only certain index build operations will benefit from MAXDOP and then only for specific tables

7. Accelerated Networking is mandatory for DB and R3load servers

8. It is recommended to use m128 3.8TB as the DB server and E64v3 as the R3load servers (as at March 2018)

9. Limit the maximum memory a single SQL Server query can request with Resource Governor. This is required to prevent index build operations from requesting very large memory grants (see the T-SQL sketch after this list)

10. Secondary indexes for very large tables can be removed from the STR file and built ONLINE with scripts after the main portion of the import has finished and post-processing tasks such as configuring STMS are occurring (see the T-SQL sketch after this list)

11. Customers using SQL Server TDE are recommended to pre-create the database and Transaction Log files, then enable TDE prior to starting the import. TDE will run for a similar amount of time on a DB that is full of data or empty. Enabling TDE on a VLDB after the fact can lead to blocking/locking issues, so it is generally recommended to import into a database that already has TDE enabled. The overhead of importing into a TDE database is relatively low

12. Review the latest OS/DB Migration FAQ
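The two items above that mention Resource Governor and ONLINE index builds could be sketched in T-SQL roughly as follows. The 15% cap and the schema, table, index and column names are placeholders for illustration only; the real values have to come from the migration design and testing.

-- Cap the memory grant that any single query (for example an index build) can request,
-- here on the default workload group. RECONFIGURE activates the Resource Governor settings.
ALTER WORKLOAD GROUP [default]
    WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 15);
ALTER RESOURCE GOVERNOR RECONFIGURE;

-- Build a secondary index ONLINE after the main portion of the import has finished
-- (placeholder schema, table, index and columns).
CREATE NONCLUSTERED INDEX [VBAP~Z01]
    ON <schema>.VBAP (MATNR, WERKS)
    WITH (ONLINE = ON, SORT_IN_TEMPDB = ON, MAXDOP = 8);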

Recommended Migration Project Documents

VLDB OS/DB migrations require additional levels of technical skill and also additional documentation and procedures. The purpose of this documentation is to reduce downtime and eliminate the possibility of data loss. The minimum acceptable documentation would include the following topics:

1. Current SAP Application Name, version, patches, DB size, Top 100 tables by size, DB compression usage, current server hardware CPU, RAM and disk

2. Data Archiving/Purging activities completed and the space savings achieved

3. Details on any upgrade, Unicode conversion or support packs to be applied during the migration

4. Target SAP Application version, Support Pack Level, estimated target DB size (after compression), Top 100 tables by size, DB version and patch, OS version and patch, VM sku, VM configuration options such as disk cache, write accelerator, accelerated networking, type and quantity of disks, database file sizes and layout, DBMS configuration options such as memory, traceflags, resource governor

5. Security is typically a separate topic, but network security groups, firewall settings, Group Policy, DBMS encryption settings

6. HA/DR approach and technologies, in addition special steps to establish HA/DR after the initial import is finished

7. OS/DB migration design approach:

-How many Intel R3load export servers

-How many R3load import VMs

-How many R3load processes per VM

-Table splitting settings

-Package splitting settings

-export and import monitor settings

-list of secondary indexes to be removed from STR files and created manually

-list of pre-export tasks such as clearing updates

8. Analysis of the last export/import cycle. Which settings were changed? What was the impact on the "flight plan"? Is the configuration change accepted or rejected? Which tuning & configuration is planned for the next test cycle?

9. Recovery procedures and exception handling – procedures for rollback, how to handle exceptions/issues that have occurred during previous test cycles

It is typically the responsibility of the lead OS/DB migration consultant to prepare this documentation. Sometimes topics such as Security, HA/DR and networking are handled by other consultants. The quality of such documentation has proven to be a very good indicator of the skill level and capability of the project team and the risk level of the project to the customer.

Migration Monitoring

One of the most important components of a VLDB migration is the monitoring, logging and diagnostics that is configured during Development, Test and "dry run" migrations.

Customers are strongly advised to discuss with their OS/DB migration consultant implementation and usage of the steps in this section of the blog. Not to do so exposes a customer to a significant risk.

Deployment of the required monitoring and interpretation of the monitoring and diagnostic results after each test cycle is mandatory and essential for optimizing the migration and planning production cutover. The results gained in test migrations are also necessary to be able to judge whether the actual production migration is following the same patterns and time lines as the test migrations. Customers should request regular project review checkpoints with the SAP partner.  Contact Microsoft for a list of consultants that have demonstrated the technical and organizational skills required for a successful project.

Without comprehensive monitoring and logging it would be almost impossible to achieve safe, repeatable, consistent and low downtime migrations with a guarantee of no data loss. If problems such as long runtimes of some packages were to occur, it is almost impossible for Microsoft and/or SAP to assist with spot consulting without monitoring data and migration design documentation

During the runtime of an OS/DB migration:

OS level parameters on DB and R3load hosts: CPU per thread, Kernel time per thread, Free Memory (GB), Page in/sec, Page out/sec, Disk IO reads/sec, Disk IO write/sec, Disk read KB/sec, Disk write KB/sec

DB level parameters on SQL Server target: BCP rows/sec, BCP KB/sec, Transaction Log %, Memory Grants, Memory Grants pending, Locks, Lock memory, locking/blocking
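One lightweight way to sample several of these DB-level counters on the SQL Server target during the import is to query sys.dm_os_performance_counters at regular intervals, for example as sketched below (note that the per-second counters are cumulative values, so rates have to be calculated as the delta between two samples):

SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Bulk Copy Rows/sec', 'Bulk Copy Throughput/sec',
                       'Percent Log Used', 'Memory Grants Pending', 'Memory Grants Outstanding')
ORDER BY object_name, counter_name, instance_name;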

Network monitoring normally handled by network team. Exactly configuration of network monitoring depends on customer specific situation.

During the runtime of the DB import it is recommended to execute this SQL statement every few minutes and screenshot anything abnormal (such as high wait times)

select session_id, request_id, start_time, status, command, wait_type, wait_resource,
       wait_time, last_wait_type, blocking_session_id
from sys.dm_exec_requests
where session_id > 49
order by wait_time desc;

During all migration test cycles a "Flight Plan" showing the number of packages exported and imported (y-axis) should be plotted against time (x-axis). The purpose of this graph is to establish an expected rate of progress during the final production migration cutover. Deviation (either positive or negative) from the expected "Flight Plan" during test or the final production migration is easily detected using this method. Other parameters such as CPU, disk and R3load rows/sec can be overlaid on top of the "Flight Plan"

At the conclusion of the Export and Import the migration time reports must be collected (export_time.html and import_time.html) https://blogs.sap.com/2016/11/17/time-analyzer-reports-for-osdb-migrations/

VLDB Migration Do's and Don'ts

The guidelines contained in this blog are based on real customer projects and the learnings derived from these projects. This blog instructs customers to avoid certain scenarios because these have been unsuccessful in the past. An example is the recommendation not to use UNIX servers or virtualized servers as R3load export servers:

1. Very often the export performance is a gating factor on the overall downtime. Often the current hardware is more than 4-5 years old and is prohibitively expensive to upgrade

2. It is therefore important to get the maximum export performance that is practical to achieve

3. Previous projects have spent man-weeks or even man-months trying to tune R3load export performance on UNIX or virtualized platforms, before giving up and using Intel R3load servers

4. 2-socket commodity Intel servers are very inexpensive and immediately deliver substantial performance gains, in some cases many orders of magnitude greater than minor tuning improvements possible on UNIX or virtualized servers

5. Customers often have existing VM farms but most often these do not support modern offload or SR-IOV technologies. Often the VMware version is old, unpatched or not configured for very high network throughput and low latency. R3load export servers require very fast thread performance and extremely high network throughput. R3load export servers may run for 10-15 hours at nearly 100% CPU and network utilization. This is not the typical use case of most VMware farms, and most VMware deployments were never designed to handle a workload such as R3load.

RECOMMENDATION: Do not invest time into optimizing R3load export performance on UNIX or virtualized platforms. Doing so will waste not only time but will cost much more than buying low cost Intel servers at the start of the project. VLDB migration customers are therefore requested to ensure the project team has fast modern R3load export servers available at the start of the project. This will lower the total cost and risk of the project.

Do:

1. Survey and Inventory the current SAP landscape. Identify the SAP Support Pack levels and determine if patching is required to support the target DBMS. In general the Operating Systems Compatibility is determined by the SAP Kernel and the DBMS Compatibility is determined by the SAP_BASIS patch level.

Build a list of SAP OSS Notes that need to be applied in the source system such as updates for SMIGR_CREATE_DDL. Consider upgrading the SAP Kernels in the source systems to avoid a large change during the migration to Azure (eg. If a system is running an old 7.41 kernel, update to the latest 7.45 on the source system to avoid a large change during the migration)

2. Develop the High Availability and Disaster Recovery solution. Build a PowerPoint that details the HA/DR concept. The diagram should break up the solution into the DB layer, ASCS layer and SAP application server layer. Separate solutions might be required for standalone solutions such as TREX or Livecache

3. Develop a Sizing & Configuration document that details the Azure VM types and storage configuration. How many Premium Disks, how many datafiles, how are datafiles distributed across disks, usage of storage spaces, NTFS Format size = 64kb. Also document Backup/Restore and DBMS configuration such as memory settings, Max Degree of Parallelism and traceflags

4. Network design document including VNet, Subnet, NSG and UDR configuration

5. Security and Hardening concept. Remove Internet Explorer, create an Active Directory container for SAP service accounts and servers, and apply a Firewall Policy blocking all but a limited number of required ports

6. Create an OS/DB Migration Design document detailing the Package & Table splitting concept, number of R3loads, SQL Server traceflags, Sorted/Unsorted, Oracle RowID setting, SMIGR_CREATE_DDL settings, Perfmon counters (such as BCP Rows/sec & BCP throughput kb/sec, CPU, memory), RSS settings, Accelerated Networking settings, Log File configuration, BPE settings, TDE configuration

7. Create a "Flight Plan" graph showing progress of the R3load export/import on each test cycle. This allows the migration consultant to validate whether tunings and changes improve R3load export or import performance. X axis = hours, Y axis = number of packages complete, matching the flight plan described earlier. This flight plan is also critical during the production migration so that the planned progress can be compared against the actual progress and any problem identified early.

8. Create performance testing plan. Identify the top ~20 online reports, batch jobs and interfaces. Document the input parameters (such as date range, sales office, plant, company code etc) and runtimes on the original source system. Compare to the runtime on Azure. If there are performance differences run SAT, ST05 and other SAP tools to identify inefficient statements

9. SAP BW on SQL Server. Check this blogsite regularly for new features for BW systems including Column Store

10. Audit the deployment and configuration: ensure cluster timeouts, kernels, network settings and NTFS format size are all consistent with the design documents. Set Perfmon counters on important servers to record basic health parameters every 90 seconds. Audit that the SAP servers are in a separate AD container and that the container has a policy applied to it with the firewall configuration.

11. Do check that the lead OS/DB migration consultant is licensed! Request the consultant name, s-user and certification date. Open an OSS message to BC-INS-MIG and ask SAP to confirm the consultant is current and licensed.

12. If possible, have the entire project team associated with the VLDB migration project within one physical location and not geographically dispersed across several continents and time zones.

13. Make sure that a proper fallback plan is in place and that it is part of the overall schedule.

14. Do select Intel CPU models with fast per-thread performance for the R3load export servers. Do not use "energy saver" CPU models, as they have much lower performance, and do not use 4-socket servers. The Intel Xeon Platinum 8158 is a good example.

Do not:

1. VLDB OS/DB migration requires an advanced technical skill set and very strong process, change control and documentation. Do not do "on-the-job training" with VLDB migrations.

2. Do not subcontract one consulting organization to do the export and another consulting organization to do the import. Occasionally the source system is outsourced and managed by one consulting organization or partner, and the customer wishes to migrate to Azure and switch to another partner. Due to the tight coupling between export and import tuning and configuration, it is very unlikely that assigning these tasks to different organizations will produce a good result.

3. Do not economize on Azure hardware resources during the migration and go live. Azure VMs are charged per minute and can be reduced in size very easily. During a VLDB migration leverage the most powerful VM available. Customers have successfully gone live on 200-250% oversized systems, then stabilized while running significantly oversized systems. After monitoring utilization for 4-6 weeks, VMs are reduced in size or shutdown to lower costs

Required Reading, Documentation and Tips

Below are some recommendations for those setting up this solution based on test deployments:

Check the SAP on Microsoft Azure blog regularly https://blogs.msdn.microsoft.com/saponsqlserver/

Read the latest SAP OS/DB Migration FAQ https://blogs.msdn.microsoft.com/saponsqlserver/tag/migration/

A useful blog on DMO is here https://blogs.sap.com/2017/10/05/your-sap-on-azure-part-2-dmo-with-system-move/

Information on DMO https://blogs.sap.com/2013/11/29/database-migration-option-dmo-of-sum-introduction/

Content from third party websites, SAP and other sources reproduced in accordance with Fair Use criticism, comment, news reporting, teaching, scholarship, and research

How To Build A Chatbot FAQ With The Azure Bot Service


Editor's note: The following post was written by Visual Studio and Development Technologies MVP Joe Mayo as part of our Technical Tuesday series. Mia Chang of the Technical Committee served as the Technical Reviewer of this piece. 

On December 13th 2017, Microsoft announced the general availability of the Azure Bot Service. In a nutshell, the Azure Bot Service helps you build conversational applications. If you’re new to chatbots, think of them as something like apps, except they excel in situations where conversation is more natural. A great example of this is Frequently Asked Questions (FAQs).

Think of all the times you’ve visited a product FAQ and encountered many pages of information. With that volume of data, hunting for an answer can take a while. Wouldn’t it be nice to ask someone who knows the answer right away? This is a good scenario for a chatbot. You can ask a question in plain language and receive an answer.

This post shows how easy it is to build a chatbot FAQ with the Azure Bot Service:

The approach

There are different tools available for building a chatbot, and this section describes the choices. The Azure Bot Service supports two languages: C# and Node.js. We’ll use C#. We could use the Azure Bot Service tools built into Azure, or use the Bot Builder SDK with Visual Studio. I’ll be using the latter approach, with the Visual Studio Bot Application project template.

This chatbot will use the Microsoft QnA Maker service, which helps automate the FAQ. Later, I’ll show how to import an existing FAQ, streamlining the process of getting data into the QnA Maker. The particular FAQ will be for my open source project, LINQ to Twitter, a LINQ provider for the Twitter API.

Setting up the chatbot project

To get started, you’ll need to install the Bot Application project template, which you can find on Microsoft’s Bot Builder Quick Start page. We’ll also be using the Bot Dialog item template in this post. Once you have the project and item templates installed, open Visual Studio and select File | New | Project. Select the Bot Application project, name your project (I called mine LinqToTwitterFAQ), and click OK. Make sure your project builds. Next, update NuGet packages by right-clicking the project and selecting Update NuGet packages. Go to the Updates tab and update all the packages to get the latest version of the Bot Builder SDK and related dependencies.

The QnA Maker is part of the Microsoft Cognitive Services and is a REST API. This means you can access it via HTTP, using HttpClient or whatever your favorite library is for accessing a REST service. Rather than write all that code, this post takes advantage of a NuGet package that simplifies integrating QnA Maker with a Bot Builder chatbot. To load this package, open NuGet, navigate to the Browse tab and type Microsoft.Bot.Builder.CognitiveServices, which you can see in the following figure:

Click Install to add the package. Note that the code for this post works with v1.1.2 in Visual Studio 2017 – so, if something changes in the future, you might need to adjust your software versions to get this to work.

Like much of the other software that Microsoft produces these days, the Cognitive Services libraries are open source, which you can find on the Bot Builder Cognitive Services GitHub page. Now the chatbot project is ready for adding QnA Maker code. Before doing that, let’s set up the QnA Maker FAQ.

Setting up the QnA Maker FAQ

The QnA Maker is a Microsoft Cognitive Services product that you can set up via the website at https://qnamaker.ai/. To get started, sign in and click on the Create new service tab. The following figure shows the Creating a QnA service page after filling in a couple of fields:

In the Creating a QnA service page, SERVICE NAME is a unique name for the FAQ; I used LINQ to Twitter FAQ. QnA Maker gives you three ways to create the FAQ: via a web page URL, uploading a file, or manually entering questions and answers. Since LINQ to Twitter already has a FAQ on the GitHub wiki for the project, I added the URL: https://github.com/JoeMayo/LinqToTwitter/wiki/LINQ-to-Twitter-FAQ.

The FILES option lets you upload a file in various formats, like tab-separated values, PDF, MS Word, or Excel. Whether you use a URL or a file, QnA Maker infers which lines are questions and which lines are the answers. QnA Maker is very smart at this, as the LINQ to Twitter FAQ has questions formatted like this:

    <h3>1. I get a <em>401 Unauthorized</em> Exception. How do I figure out the problem?</h3>
    <p>A <em>401 Unauthorized</em> message means that the Twitter server is unable to authorize you. There are several causes for this problem and the following checklist should help you work through the problems:</p>

The actual HTML is more complex than that, but QnA Maker figures out that the <h3> content is the question and the <p> content is the answer. After specifying name and content options, scroll to the bottom of the page and click the Create button. QnA Maker then reads the data and opens an editor where you can modify questions and answers, shown in the figure below:

As good as QnA Maker is at figuring out questions and matching answers, it isn’t perfect – especially considering the myriad of ways people can write and format a FAQ. That means you should review the results of importing a FAQ to ensure it worked properly.

Things to check include making sure that each question has a matching answer, content doesn’t have strange characters, and that URLs are formatted properly. Sometimes it will catch a question, but only part of the answer if it interprets formatting that makes it believe an answer has ended. The service is continuously improving over time, but at a minimum, this saves you a ton of time by avoiding all the manual data entry. That said, remember that there is a manual data entry option, allowing you to copy and paste questions and answers into the editing grid, which you can do by clicking the Add new QnA pair button above the editing grid.

Tip: As you’re editing, be aware that QnA Maker supports Markdown format. Markdown is a quick way to write readable text that can also be translated to HTML. You can learn more about it with a quick Bing search. This is important to know because some normal characters will be interpreted as Markdown, causing the text to not appear the way you would expect.

Click the Save and retrain button to save the FAQ. The retrain part is because QnA Maker uses machine learning to match plain text user questions with answers. After Save and retrain, click on the Test tab (left side of screen) to see how the FAQ works. The following figure shows the Test console:

Testing and training the QnA Maker FAQ

The Test console isn’t a normal chat interface because it has some training options on the left and right. In the Figure, I asked the question “I’m getting a 429 exceptions. What do I do?”

First of all, notice that I misspelled the word “exception” by making it plural. Also, the text doesn’t match the question from the original FAQ, which is “I received an Exception with HTTP Status 429. What does that mean?”. Yet QnA Maker replied with the correct answer. This is the machine learning part, where the user doesn’t have to ask an exact question or spell every word correctly.

On the left of the Figure, notice that it also highlighted the chosen answer in blue. Had QnA Maker accidentally chosen the wrong answer, I could have clicked on one of the other answers to let it know which was correct. As you might have noticed, the 401 answer was the next in the list, which seems like a close (yet inaccurate) alternative.

On the right of the Figure is an edit box that lets you add alternative phrases for this question. When initially creating a FAQ, you’ll likely have only one question for a given answer. This is fine for a print FAQ, but people will ask questions in many different ways. You can anticipate this by adding different questions that a user might ask. Through entering questions, selecting appropriate answers, and adding alternative phrases, you can make a FAQ smarter. When you’re done, click the Save and retrain button to let QnA Maker incorporate your changes. Next, click the Publish button, review changes, and click Publish again, which shows a screen containing HTTP POST instructions, similar to the following:

POST /knowledgebases/<Knowledge Base ID>/generateAnswer 
Host: https://westus.api.cognitive.microsoft.com/qnamaker/v2.0 
Ocp-Apim-Subscription-Key: <Subscription Key> 
Content-Type: application/json 
{"question":"hi"}

While you can use that information to write code that makes a request to the service, we’ll be building a chatbot that makes the code simpler. Copy the Knowledge Base ID and the Subscription Key. You’ll need those for the next section when adding code to make a chatbot call QnA Maker.
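For reference, if you did want to call the service directly, a minimal HttpClient sketch (my own, not from the original post) based on the POST instructions above might look like the following. The Knowledge Base ID and Subscription Key placeholders are the values you copy from the Publish screen:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class QnAMakerClient
{
    const string KnowledgeBaseId = "<Knowledge Base ID>";
    const string SubscriptionKey = "<Subscription Key>";

    static async Task<string> AskAsync(string question)
    {
        using (var client = new HttpClient())
        {
            // Endpoint shape taken from the POST instructions shown above.
            var uri = $"https://westus.api.cognitive.microsoft.com/qnamaker/v2.0/knowledgebases/{KnowledgeBaseId}/generateAnswer";
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", SubscriptionKey);

            // Note: no JSON escaping of the question here - this is only a sketch.
            var body = new StringContent($"{{\"question\":\"{question}\"}}", Encoding.UTF8, "application/json");
            HttpResponseMessage response = await client.PostAsync(uri, body);
            response.EnsureSuccessStatusCode();

            // The response body is JSON containing the matched answers and confidence scores.
            return await response.Content.ReadAsStringAsync();
        }
    }
}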

Connecting the chatbot to the QnA Maker FAQ

Earlier, we created a project to set up the chatbot with the Bot Builder SDK. Now, we’re going to connect this chatbot with the LINQ to Twitter FAQ. Go back to the LINQ to Twitter FAQ chatbot, right-click on the Dialogs folder, select Add, select New Item, and you’ll see the Add New Item window. Click the Visual C# folder, click on the Bot Dialog template, name the dialog (I used LinqToTwitterDialog), and click Add. Note: If you didn’t install the Bot Dialog item template, just create a new class file. Modify the code in LinqToTwitterDialog.cs so it looks like the Listing below:

using System;
using Microsoft.Bot.Builder.CognitiveServices.QnAMaker;

namespace LinqToTwitterFAQ.Dialogs
{
    [Serializable]
    [QnAMaker(
        subscriptionKey: "<Your Subscription Key>",
        knowledgebaseId: "<Your Knowledge Base ID>")]
    public class LinqToTwitterDialog : QnAMakerDialog
    {
    }
}

In the code above, add your Subscription Key and Knowledge Base ID to the QnAMaker attribute. In addition to having the QnAMaker attribute, LinqToTwitterDialog derives from QnAMakerDialog, which comes from the Microsoft.Bot.Builder.CognitiveServices NuGet package you referenced earlier. The final coding task is to tell the chatbot to use this new dialog by modifying MessagesController, shown below:

public async Task<HttpResponseMessage> Post([FromBody]Activity activity)
{
    if (activity.Type == ActivityTypes.Message)
    {
        await Conversation.SendAsync(activity, () => new Dialogs.LinqToTwitterDialog());
    }
    else
    {
        HandleSystemMessage(activity);
    }

    var response = Request.CreateResponse(HttpStatusCode.OK);
    return response;
}

The only thing that changes in MessagesController is the second argument to SendAsync, instantiating LinqToTwitterDialog, rather than RootDialog. You now have working code and are ready to test. 

Testing the chatbot 

To test a chatbot, run the chatbot project and then open the Bot Framework Channel Emulator. Microsoft’s docs for Debugging bots with the Bot Framework Emulator explains how to download and configure the emulator if you haven’t already done so. Observe the URL in the browser from running the chatbot and add that URL to the emulator to communicate with the chatbot. Tip: don’t forget to append /api/messages to the end of the URL. The following image shows the Emulator communicating with the LINQ to Twitter FAQ chatbot: 

Again, I asked the chatbot a question that wasn’t exactly like the original question and QnA Maker still figured out what the answer was. You now have a working chatbot that understands natural language queries and answers based on a FAQ list. 

Where to from here 

This blog post showed how to build a chatbot and test it in the emulator, but you’ll want to deploy the chatbot to a channel, where users can find it. You can find more information on how to register a chatbot and configure a channel in the Microsoft Azure Bot Service documentation. Another resource is my recently released book, Programming the Microsoft Bot Framework: A Multiplatform Approach to Building Chatbots (Microsoft Press, aka.ms/botbook).

 QnA Maker has even more features, like Active Learning and message customizations that enhance the user experience. You can learn more about those features at the Bot Builder Cognitive Services GitHub site. 

 All the code for this post is in my Bot Demos project on GitHub. 

 You can find me on Twitter at @JoeMayo. 


Joe Mayo is an author and independent software consultant, specializing in Microsoft technologies. He has written several books, including Programming the Microsoft Bot Framework by Microsoft Press. A long-time MVP with several years of awards, he lives in Las Vegas, NV and tweets (as @JoeMayo) about #BotFramework and #AI on Twitter.

APIs are now available for managing Hardware Submissions


One of the consistent themes the Hardware Dev Center team hears from the partner community is the need for an easy way to automate driver submissions for signing by Microsoft. This is especially true for partners with large volumes of driver submissions, because they need a way to build, sign and package drivers in line with their existing build processes. To address this feedback, we have added APIs for driver submission in Hardware Dev Center. This is now available to all partners and allows you to submit drivers for signing by Microsoft.

How does it work?

The Microsoft Hardware APIs are now available for Hardware Dev Center. You can use these REST APIs to submit drivers, download signed drivers, create and upload derived submissions, and check the status of an existing submission. The APIs can be accessed using your existing Azure AD account by associating an Azure AD application with your Windows Dev Center account. If you are already using the Microsoft Store analytics API or the Microsoft Store submission API, you can reuse the same credentials to access the Microsoft Hardware API as well.

How to onboard/start using it?

Read through the documentation to understand the available methods, the request and response types for each of them, and how to call them. The documentation also contains sample code that shows how to use the API. Since these are REST APIs, you should be able to onboard easily without needing to change the technology you already use in-house. A rough sketch of the authentication and call pattern is shown below.
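To give a feel for that pattern, here is a hypothetical C# sketch of the Azure AD client-credentials flow described above. The resource URI and the products route below are assumptions for illustration only (as is the use of Newtonsoft.Json for parsing) - take the exact values, routes and request shapes from the Hardware Dev Center API documentation:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

class HardwareApiSample
{
    static async Task RunAsync()
    {
        string tenantId = "<your Azure AD tenant ID>";
        string clientId = "<your Azure AD application ID>";
        string clientSecret = "<your Azure AD application secret>";

        using (var http = new HttpClient())
        {
            // 1. Request an access token with the client credentials grant.
            var tokenRequest = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["grant_type"] = "client_credentials",
                ["client_id"] = clientId,
                ["client_secret"] = clientSecret,
                ["resource"] = "https://manage.devcenter.microsoft.com" // assumed Dev Center resource URI
            });
            var tokenResponse = await http.PostAsync(
                $"https://login.microsoftonline.com/{tenantId}/oauth2/token", tokenRequest);
            tokenResponse.EnsureSuccessStatusCode();
            string accessToken =
                JObject.Parse(await tokenResponse.Content.ReadAsStringAsync())["access_token"].ToString();

            // 2. Call the Hardware API with the bearer token (route is illustrative only).
            http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
            var response = await http.GetAsync("https://manage.devcenter.microsoft.com/v1.0/my/hardware/products");
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}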

What next?

We are looking forward to you onboarding and starting to use the APIs for driver submissions. However, this is only our first wave. In the coming months, we will be releasing APIs for publishing drivers, enhanced targeting, and advanced driver search. Please watch this space for more updates.

I hope you are as excited as we are to start using the APIs to automate your processes and start saving your cycles and increasing your productivity!! Happy automating!


March 2018 App Service update


New tiles on overview blade

The Overview blade has been revamped to add quick links for Diagnose and solve problems, Application Insights, and App Service Advisor.


Free and Shared apps now support HTTPS only configuration

Last year we enabled the ability to force HTTPS connections for your apps hosted on App Service, and we are now extending that support to also cover apps hosted in Free and Shared App Service plans.


Quality of life improvements for App Service Certificates and App Service Domains

Over the last two months we have fixed over a dozen issues, both in the UX and in the back end, to improve reliability and reduce the incidence of the most common user issues in these areas.

Azure Functions on National Clouds

Azure Functions is now available in national clouds.

 


 

If you have any questions about any of these features, or about App Service in general, be sure to check our forums on MSDN and Stack Overflow.

For any feature requests or ideas, check out our UserVoice.

Previous updates:

Experiments with HoloLens, Mixed Reality Toolkit and two-handed manipulations


Senior Consultant/ADM Davide Zordan recently posted this article on his HoloLens experiment.  In this post, he explains how to get started with HoloLens and Mixed Reality Toolkit.


I’ve always been a big fan of manipulations, as in the past I worked on some multi-touch XAML Behaviors implementing rotate, translate and scale on 2D objects.

As I progress with my learning about HoloLens and Windows Mixed Reality, I had on my to-do list the task of exploring how to recreate this scenario in the 3D Mixed Reality context. Finally, during the weekend, I started some research while preparing a demo for some speaking engagements I’ll have over the next weeks.

 

[Image: two-handed manipulations demo running on the HoloLens device]

 

Continue reading here.


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

Dashboard in a day kick off Demo (Short Version)


We are dramatically updating the Dashboard in a Day training. This blog post serves as a possible kick-off demo flow for the instructors teaching this course. The "theme" of the demo is "Take Action from Insight", and it should take 10-20 minutes.

Step 1.  Introduction to Power BI Desktop and Bring in the Data

Using the Dashboard in a Day walkthrough directions, create a Power BI report and import the dataset.

To speed this up we have created a version of the sales data, called Sales_Demo.xlsx, that starts at 6/1/2017 and contains only 100K rows. Which data shaping features you show, and how much time you spend on them, is up to you. Suggested duration is ~5 minutes.

Step 2.  Creating the Report

At this point, walk the audience through adding visuals to the Power BI design canvas, including slicers and different visual types. Note that because the next step is data discovery, you will need at the very least Revenue by Country, Revenue by Date, and a slicer for Manufacturer (note this visual now supports graphics!). Suggested duration is ~5 minutes.

Step 3.  Styling the Report

This is definitely a wow moment in the demo, as doing the following four things literally transforms the report:

  1. Set the background image from one of the templates supplied.
  2. Apply the theme.
  3. Add the company logo.
  4. Move the visuals around so they sit on the background template image.

Recommended Time ~3 minutes

Step 4.  Data Discovery - Finding Business Insight

The dataset was built to show that on Sept 29th Australia saw an unusual trend that started it on a lucrative sales year. Please use the Dashboard in a Day walkthrough to show this "story". Recommended time ~3 minutes.

Step 5. Take Action from Insight

Using your Partner DIAD credentials, log into the DIAD tenant, navigate to the "DIAD" workspace, and open the report named DIADDemo. (You can also do the data discovery with this version and the complete sales data.)

In the tab titled: PowerApps ToDo, add the PowerApps Custom Visual

Performance traps of ref locals and ref returns in C#


The C# language from the very first version supported passing arguments by value or by reference. But before C# 7 the C# compiler supported only one way of returning a value from a method (or a property) - returning by value. This has been changed in C# 7 with two new features: ref returns and ref locals.

But unlike other features that were recently added to the C# language I've found these two a bit more controversial than the others.

The motivation

There are many differences between arrays and other collections from the CLR's perspective. Arrays were added to the CLR from the very beginning, and you can think of them as built-in generics. The CLR and the JIT compiler are aware of arrays, but besides that, they're special in one more aspect: the indexer of an array returns the element by reference, not by value.

To demonstrate this behavior we have to go to the dark side -- use a mutable value type:

public struct Mutable
{
    private int _x;
    public Mutable(int x) => _x = x;

    public int X => _x;

    public void IncrementX() { _x++; }
}

[Test]
public void CheckMutability()
{
    var ma = new[] { new Mutable(1) };
    ma[0].IncrementX();
    // X has been changed!
    Assert.That(ma[0].X, Is.EqualTo(2));

    var ml = new List<Mutable> { new Mutable(1) };
    ml[0].IncrementX();
    // X hasn't been changed!
    Assert.That(ml[0].X, Is.EqualTo(1));
}

The test will pass because the indexer of the array is quite different from the indexer of the List<T>.

The C# compiler emits a special instruction for the array indexer - ldelema - that returns a managed reference to the given array element. Basically, the array indexer returns an element by reference. But List<T> can't have the same behavior because it wasn't possible (*) to return an alias to the internal state in C#. That's why the List<T> indexer returns the element by value, i.e. it returns a copy of the given element.

(*) As we'll see in a moment, it is still impossible for the List<T>'s indexer to return an element by reference.

This means that ma[0].IncrementX() calls a mutation method on the first element inside of the array, but ml[0].IncrementX() calls a mutation method on a copy, keeping the original list unchanged.
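To make the difference concrete, here is a small sketch (my own addition, reusing the Mutable struct above) of the copy-out/modify/write-back pattern that List<T> forces on callers, and that the array's by-reference indexer avoids:

[Test]
public void MutateListElementWithCopyWriteBack()
{
    var ml = new List<Mutable> { new Mutable(1) };

    // ml[0].IncrementX() would mutate a temporary copy, so instead:
    Mutable tmp = ml[0];   // copy the element out
    tmp.IncrementX();      // mutate the copy
    ml[0] = tmp;           // write the copy back

    // Now the stored element really changed, which is what ma[0].IncrementX()
    // achieves directly because the array indexer returns the element by reference.
    Assert.That(ml[0].X, Is.EqualTo(2));
}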

Ref locals and ref returns 101

The basic idea behind these features is very simple: ref return allows a method to return an alias to an existing variable, and ref local can store that alias in a local variable.

  1. Simple example
[Test]
public void RefLocalsAndRefReturnsBasics()
{
    int[] array = { 1, 2 };

    // Capture an alias to the first element into a local
    ref int first = ref array[0];
    first = 42;
    Assert.That(array[0], Is.EqualTo(42));

    // Local function that returns the first element by ref
    ref int GetByRef(int[] a) => ref a[0];
    // Weird syntax: the result of a function call is assignable
    GetByRef(array) = -1;
    Assert.That(array[0], Is.EqualTo(-1));
}
  2. Ref returns and readonly ref returns

Ref returns can return an alias to instance fields and starting from C# 7.2 you can return a readonly alias using ref readonly:

class EncapsulationWentWrong
{
    private readonly Guid _guid;
    private int _x;

    public EncapsulationWentWrong(int x) => _x = x;

    // Return an alias to the private field. No encapsulation any more.
    public ref int X => ref _x;

    // Return a readonly alias to the private field.
    public ref readonly Guid Guid => ref _guid;
}

[Test]
public void NoEncapsulation()
{
    var instance = new EncapsulationWentWrong(42);
    instance.X++;

    Assert.That(instance.X, Is.EqualTo(43));

    // Cannot assign to property 'EncapsulationWentWrong.Guid' because it is a readonly variable
    // instance.Guid = Guid.Empty;
}
  • Methods and properties could return an "alias" to an internal state. The property, in this case, could not have a setter.
  • Return by reference breaks the encapsulation because the client obtains the full control over the object's internal state.
  • Returning by readonly reference avoids a redundant copy for value types but prevents the client from mutating the internal state.
  • You may use ref readonly for reference types even though it makes no sense for non-generic cases.
  3. Existing restrictions. Returning an alias can be dangerous: using an alias to a stack-allocated variable after the method has finished will crash the app. To make the feature safe, the C# compiler enforces various restrictions (a short sketch follows this list):
  • You cannot return a reference to a local variable.
  • You cannot return a reference to this in structs.
  • You can return a reference to heap-allocated variables (like class members).
  • You can return a reference to ref/out parameters.
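A quick illustration of these rules (my own sketch, not code from the original post):

using System;

class RefReturnRules
{
    private int _field;

    // OK: _field lives on the heap as part of this instance.
    public ref int GetFieldRef() => ref _field;

    // OK: ref parameters refer to storage owned by the caller.
    public ref int PassThrough(ref int value) => ref value;

    public ref int Broken()
    {
        int local = 42;
        // Does not compile: a reference to a local would outlive the stack frame.
        // return ref local;
        throw new NotImplementedException();
    }
}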

For more information see an amazing post Safe to return rules for ref returns by Vladimir Sadov, the author of this feature in the C# compiler.

Now, once we know what these features are, let's see when they can be useful.

Using ref returns for indexers

To test the performance impact of these features we're going to create a custom immutable collection called NaiveImmutableList<T> and will compare it with the T[] and the List<T> for structs of different sizes (4, 16, 32 and 48).

public class NaiveImmutableList<T>
{
    private readonly int _length;
    private readonly T[] _data;

    public NaiveImmutableList(params T[] data)
        => (_data, _length) = (data, data.Length);

    public ref readonly T this[int idx]
        // R# 2017.3.2 is completely confused with this syntax!
        // => ref (idx >= _length ? ref Throw() : ref _data[idx]);
    {
        get
        {
            // Extracting the 'throw' statement into a different
            // method helps the jitter to inline the property access.
            if ((uint)idx >= (uint)_length)
                ThrowIndexOutOfRangeException();

            return ref _data[idx];
        }
    }

    private static void ThrowIndexOutOfRangeException() =>
        throw new IndexOutOfRangeException();
}

struct LargeStruct_48
{
    public int N { get; }
    private readonly long l1, l2, l3, l4, l5;

    public LargeStruct_48(int n) : this()
        => N = n;
}

// Other structs like LargeStruct_16, LargeStruct_32 etc.

The benchmarks iterate over the collections and sum the N property values of all the elements:

private const int elementsCount = 100_000;

private static LargeStruct_48[] CreateArray_48() =>
    Enumerable.Range(1, elementsCount).Select(v => new LargeStruct_48(v)).ToArray();

private readonly LargeStruct_48[] _array48 = CreateArray_48();

[BenchmarkCategory("BigStruct_48")]
[Benchmark(Baseline = true)]
public int TestArray_48()
{
    int result = 0;
    // Using elementsCount but not array.Length to force the bounds check
    // on each iteration.
    for (int i = 0; i < elementsCount; i++)
    {
        result = _array48[i].N;
    }

    return result;
}

And here are the results:

Method                     | Mean     | Scaled |
-------------------------- |---------:|-------:|
TestArray_48               | 258.3 us |   1.00 |
TestListOfT_48             | 488.9 us |   1.89 |
TestNaiveImmutableList_48  | 444.8 us |   1.72 |
TestArray_32               | 174.4 us |   1.00 |
TestListOfT_32             | 233.8 us |   1.34 |
TestNaiveImmutableList_32  | 219.2 us |   1.26 |
TestArray_16               | 143.7 us |   1.00 |
TestListOfT16              | 192.5 us |   1.34 |
TestNaiveImmutableList16   | 167.8 us |   1.17 |
TestArray_4                | 121.7 us |   1.00 |
TestListOfT_4              | 174.7 us |   1.44 |
TestNaiveImmutableList_4   | 133.1 us |   1.09 |

Apparently, something is wrong! Our NaiveImmutableList<T> has effectively the same performance characteristics as List<T>. What happened?

Readonly ref returns under the hood

As you may have noticed, the indexer of NaiveImmutableList<T> returns a readonly reference via ref readonly. This makes perfect sense because we want to restrict our clients from mutating the underlying state of the immutable collection. But the structs we've been using in our benchmarks are regular non-readonly structs.

The following test will help us understand the underlying behavior:

[Test]
public void CheckMutabilityForNaiveImmutableList()
{
    var ml = new NaiveImmutableList<Mutable>(new Mutable(1));
    ml[0].IncrementX();
    // X has been changed, right?
    Assert.That(ml[0].X, Is.EqualTo(2));
}

The test fails! Why? Because "readonly references" are similar to in-modifiers and readonly fields with respect to structs: the compiler emits a defensive copy every time a struct member is used. It means that ml[0].IncrementX() still mutates a copy of the first element, but the copy is not made by the indexer: it is created at the call site.

In fact, the behavior is very reasonable. The C# compiler supports passing arguments by value, by reference, and by "readonly reference" using in-modifier (for more details see my post The in-modifier and the readonly structs in C#). And now the compiler supports 3 different ways of returning a value from a method: by value, by reference and by readonly reference.

"Readonly references" are so similar, that the compiler reuses the same InAttribute to distinguish readonly and non-readonly return values:

private int _n;
public ref readonly int ByReadonlyRef() => ref _n;

In this case the method ByReadonlyRef is effectively compiled to:

[InAttribute]
[return: IsReadOnly]
public int* ByReadonlyRef()
{
    return ref this._n;
}

The similarity between in-modifier and readonly references means that these features are not friendly to regular structs and could cause performance issues. Here is an example:

public struct BigStruct
{
    // Other fields
    public int X { get; }
    public int Y { get; }
}

private BigStruct _bigStruct;
public ref readonly BigStruct GetBigStructByRef() => ref _bigStruct;

ref readonly var bigStruct = ref GetBigStructByRef();
int result = bigStruct.X + bigStruct.Y;

Apart from the somewhat weird variable declaration syntax for bigStruct, the code looks good. The intent is clear: BigStruct is returned by reference for performance reasons. Unfortunately, because BigStruct is a non-readonly struct, each time a member is accessed a defensive copy is created.
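One way to avoid those copies - a sketch of my own, in line with the conclusions below - is to declare the struct as a readonly struct (C# 7.2), so the compiler knows member access cannot mutate the instance and skips the defensive copy:

// A readonly struct cannot be mutated through any member, so accessing X and Y
// through a readonly reference no longer triggers a defensive copy.
public readonly struct BigStruct
{
    public BigStruct(int x, int y)
    {
        X = x;
        Y = y;
    }

    // Any other fields would also have to be readonly.
    public int X { get; }
    public int Y { get; }
}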

Using ref returns for indexers. Attempt #2

Let's try the same set of benchmarks with readonly structs of different sizes:

Method                     | Mean     | Scaled |
-------------------------- |---------:|-------:|
TestArray_48               | 265.1 us |   1.00 |
TestListOfT_48             | 490.6 us |   1.85 |
TestNaiveImmutableList_48  | 300.6 us |   1.13 |
TestArray_32               | 177.8 us |   1.00 |
TestListOfT_32             | 233.4 us |   1.31 |
TestNaiveImmutableList_32  | 218.0 us |   1.23 |
TestArray_16               | 144.7 us |   1.00 |
TestListOfT16              | 191.8 us |   1.33 |
TestNaiveImmutableList16   | 168.8 us |   1.17 |
TestArray_4                | 121.3 us |   1.00 |
TestListOfT_4              | 178.9 us |   1.48 |
TestNaiveImmutableList_4   | 145.3 us |   1.20 |

Now the results make much more sense. The time still grows for bigger structs, but that is expected because iterating over 100K structs of a bigger size takes longer. But now the timings for NaiveImmutableList<T> are very close to T[] and reasonably faster than List<T>.

Conclusion

  • Be cautious with ref returns because they can break encapsulation.
  • Be cautious with readonly ref returns because they're more performant only for readonly structs and could cause performance issues for regular structs.
  • Be cautious with readonly ref locals because they too can cause performance issues for non-readonly structs, producing a defensive copy each time the variable is used.

Ref locals and ref returns are useful features for library authors and developers working on infrastructure code. But in the case of library code, these features are quite dangerous: in order to use a collection that returns elements by readonly reference efficiently, every library user should know the implications: a readonly reference to a non-readonly struct causes a defensive copy "at the call site". This can negate all performance gains at best, or cause severe perf degradation when a readonly ref local variable is accessed multiple times.

P.S. Readonly references are coming to the BCL. The following PR for the corefx repo (Implementing ItemRef API Proposal) introduced readonly ref methods to access the elements of immutable collections. So it is quite important for everyone to understand the implications of these features, and how and when to use them.

Build and deployment automation of a model-driven PowerApp using VSTS


This is the second part of a two part video.  The first part is here:
https://blogs.msdn.microsoft.com/devkeydet/2018/04/10/zero-to-full-source-control-of-a-model-driven-powerapp-using-spkl/

The second video won’t make as much sense if you don’t watch the first video.  In this video, I build on the work I did to get a model-driven PowerApp (the artist formerly known as XRM…as I like to say) into source control by showing how to enable deployment automation using Package Deployer, including setting up initial data using the Configuration Migration tool.  Then, I show you how to use Visual Studio Team Services (VSTS) to build all the assets in source control into their deployable form.  Finally, I show you how to then use VSTS to automate the deployment of those assets to one or many environments.  One of the things I highlight in the video is the Dynamics 365 Build Tools on the Visual Studio Team Services Marketplace.  These tasks greatly improve the productivity of using VSTS with Dynamics 365 & the Common Data Service.

All of the things I do from scratch in these two videos are the foundation of some of the more advanced things I highlight in the Dynamics 365 model-driven PowerApp DevOps work I mention here:
https://blogs.msdn.microsoft.com/devkeydet/2017/10/27/announcing-dynamics-365-devops-on-github/

@devkeydet
