Channel: MSDN Blogs

02/20 – Errata added for [MS-SFMWA]: Server and File Management Web APIs


Three (3) issues corrected at https://msdn.microsoft.com/en-us/library/mt779135.aspx

  • In Section 7, Appendix B: Full Xml Schema, changed the namespace prefix for http://schemas.microsoft.com/Message from “q6, q7” to “tns10”.
  • Throughout Section 2.2.4, Complex Types, changed the type from DateTime to xs:dateTime in 11 complex type definitions.
  • In Section 2.2.4.50, MSOUser, added the missing FirstName and LastName attributes to the definition of the MSOUser complex type.

02/20 – Errata added for [MS-DHCPM]: Microsoft Dynamic Host Configuration Protocol (DHCP) Server Management Protocol


Four (4) issues corrected at https://msdn.microsoft.com/en-us/library/mt779067.aspx

  • In Section 2.2.1.2.112, DHCP_STATELESS_PARAMS, updated the constant from DHCP_STATELESS_PARAMS to DHCPV6_STATELESS_PARAMS.
  • In Section 2.2.1.1.25, DHCP_MAX_FREE_ADDRESSES_REQUESTED, updated the constant from DHCP_MAX_FREE_ADDRESSES_REQUIRED to DHCP_MAX_FREE_ADDRESSES_REQUESTED.
  • In Section 6, Appendix A: Full IDL, added a line in the full IDL to match the definition in section 2.2.1.1.15.
  • In Section 3.1.4.20, R_DhcpDeleteClientInfo (Opnum 19), the field name was updated from ServerInfo to ClientInfo to match the description.

How to delete a specific instance from a cloud service – PAAS V1


Special Thanks to my colleague Kevin Williamson for the idea and suggestion.

Please check this REST API operation – Delete Role Instances: https://msdn.microsoft.com/en-us/library/azure/dn469418.aspx
The API takes the specific role instance(s) in the request body, so we can tell it to delete just a single instance.

<RoleInstances xmlns="http://schemas.microsoft.com/windowsazure" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <Name>Role Instance name</Name>
</RoleInstances>

PaaS deployment with 3 instances:

[Screenshot: the cloud service deployment with three role instances]

POST – https://management.core.windows.net/<subscription-id>/services/hostedservices/<cloudservice-name>/deployments/<deployment-name>/roleinstances/
URI Parameter – comp=delete
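As a hedged sketch (not part of the original post), the call could be made from C# with HttpClient and the subscription’s management certificate. The certificate thumbprint, the instance name, and the x-ms-version value are assumptions, and HttpClientHandler.ClientCertificates requires a reasonably recent .NET Framework or .NET Core:

using System;
using System.Net.Http;
using System.Security.Cryptography.X509Certificates;
using System.Text;

class DeleteRoleInstanceSample
{
    static void Main()
    {
        // Placeholder thumbprint of the management certificate uploaded to the subscription.
        var cert = FindManagementCertificate("YOUR-MANAGEMENT-CERT-THUMBPRINT");

        var handler = new HttpClientHandler();
        handler.ClientCertificates.Add(cert);

        using (var client = new HttpClient(handler))
        {
            // The Service Management API requires the x-ms-version header.
            client.DefaultRequestHeaders.Add("x-ms-version", "2013-08-01");

            // Fill in the placeholders from the POST URL above.
            var uri = "https://management.core.windows.net/<subscription-id>" +
                      "/services/hostedservices/<cloudservice-name>" +
                      "/deployments/<deployment-name>/roleinstances/?comp=delete";

            // "WebRole1_IN_1" is a made-up instance name; use the instance you want to remove.
            var body =
                "<RoleInstances xmlns=\"http://schemas.microsoft.com/windowsazure\" " +
                "xmlns:i=\"http://www.w3.org/2001/XMLSchema-instance\">" +
                "<Name>WebRole1_IN_1</Name></RoleInstances>";

            var content = new StringContent(body, Encoding.UTF8, "application/xml");
            var response = client.PostAsync(uri, content).GetAwaiter().GetResult();

            // 202 Accepted means the asynchronous operation was queued.
            Console.WriteLine(response.StatusCode);
        }
    }

    static X509Certificate2 FindManagementCertificate(string thumbprint)
    {
        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        var matches = store.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false);
        store.Close();
        return matches[0];
    }
}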

Note: You can use AzureTools to test the REST API calls – https://blogs.msdn.microsoft.com/kwill/2013/08/26/azuretools-the-diagnostic-utility-used-by-the-windows-azure-developer-support-team/
After downloading the tool, you can navigate to the section shown in the screenshot below by clicking the “Misc tools” link.

[Screenshot: the AzureTools “Misc tools” section]

After the Delete API call, the deployment has 2 instances:

[Screenshot: the deployment with two remaining role instances]

Lesson Learned #22: How to identify blocking issues?


Today we were working on a service request where our customer reported that some TRUNCATE executions were taking more time than expected. Normally these TRUNCATE commands take only 3 to 5 seconds to complete; this time, however, they never finished.

We used the following T-SQL query:

select conn.session_id as BlockerSession,
       conn2.session_id as BlockedSession,
       req.wait_time as Waiting_Time_ms,
       cast((req.wait_time/1000.) as decimal(18,2)) as Waiting_Time_secs,
       cast((req.wait_time/1000./60.) as decimal(18,2)) as Waiting_Time_mins,
       t.text as BlockerQuery,
       t2.text as BlockedQuery,
       req.wait_type
from sys.dm_exec_requests as req
inner join sys.dm_exec_connections as conn on req.blocking_session_id = conn.session_id
inner join sys.dm_exec_connections as conn2 on req.session_id = conn2.session_id
cross apply sys.dm_exec_sql_text(conn.most_recent_sql_handle) as t
cross apply sys.dm_exec_sql_text(conn2.most_recent_sql_handle) as t2

We identified that these TRUNCATE commands were waiting on another session that was reading the data and hanging. After killing that session with the KILL T-SQL command, all of the TRUNCATE commands completed successfully.
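As a hedged addition (not part of the original post), the same kind of check could be automated from a small C# console application with System.Data.SqlClient; the connection string is a placeholder and the query below is a trimmed version of the one above:

using System;
using System.Data.SqlClient;

class BlockingCheck
{
    static void Main()
    {
        // Placeholder connection string; point it at the affected database.
        const string connectionString =
            "Server=tcp:yourserver.database.windows.net;Database=yourdb;User ID=user;Password=***;Encrypt=True;";

        // Trimmed version of the diagnostic query shown above.
        const string query = @"
select conn.session_id as BlockerSession, conn2.session_id as BlockedSession,
       req.wait_time as Waiting_Time_ms, req.wait_type
from sys.dm_exec_requests as req
inner join sys.dm_exec_connections as conn on req.blocking_session_id = conn.session_id
inner join sys.dm_exec_connections as conn2 on req.session_id = conn2.session_id";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(query, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(
                        "Session {0} is blocking session {1} ({2} ms, {3})",
                        reader["BlockerSession"], reader["BlockedSession"],
                        reader["Waiting_Time_ms"], reader["wait_type"]);
                }
            }
        }
    }
}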

A common execution path optimization


Today I want to talk about one interesting optimization pattern that you may face in framework code or in high-performance libraries.

The idea is simple: suppose you have a commonly used method with two execution paths: one is very common and simple, and the other has more steps, takes longer to execute, but happens much less often.

As an example, let’s consider the List<T>.Add method. On the happy path, Add is very simple and efficient: if there is enough space, it adds the item to an internal array. But if the internal array is full, Add resizes the underlying array (by allocating a new array of double the size) and then adds the element to it.

Here is a simple implementation of this method:

public void Add(T item)
{
    if (_size < _items.Length)
    {
        // Common path: the array has enough space
        _items[_size++] = item;
        return;
    }

    // Corner case: need to resize the array
    int newCapacity = _items.Length == 0 ? _defaultCapacity : _items.Length * 2;

    T[] newItems = new T[newCapacity];
    if (_size > 0)
    {
        Array.Copy(_items, 0, newItems, 0, _size);
    }

    _items = newItems;
    _items[_size++] = item;
}

Unfortunately, this implementation is not eligible for method inlining by the JIT compiler, because it is too large. Even if we try to “force” inlining by using MethodImplOptions.AggressiveInlining, nothing good will happen. As we’ll see at the end of the post, inlining big methods doesn’t improve performance.

There is no official documentation regarding JIT compiler optimizations, but there are enough unofficial sources (like blog posts) that cover the current behavior. The JIT compiler won’t inline a method in the following cases:

  • The method is marked with MethodImplOptions.NoInlining
  • The method body is larger than 32 bytes of IL code
  • The call is virtual, including interface method invocations
  • The method has complex flow control, like switch or while
  • The method has exception handling logic

The Add method shown above falls under the second rule and won’t be inlined because of its size.

We know that the method has two cases: the lightweight common path, when the list has enough space, and the rare case, when the list has to be resized. Based on this knowledge, we can extract the logic for the rare case and leave the happy-path code as is:

public void Add(T item)
{
    if (_size < _items.Length)
    {
        // Common path: the array has enough space
        _items[_size++] = item;
        return;
    }

    // Rare case: need to resize the array
    AddItemSlow(item);
}

private void AddItemSlow(T item)
{
    int newCapacity = _items.Length == 0 ? _defaultCapacity : _items.Length * 2;
    T[] newItems = new T[newCapacity];
    if (_size > 0)
    {
        Array.Copy(_items, 0, newItems, 0, _size);
    }

    _items = newItems;
    _items[_size++] = item;
}

The change is very simple, but it makes the method small enough for method inlining by the JIT compiler. We can check the difference using the amazing benchmarking tool – BenchmarkDotNet:

const int DefaultCapacity = 100;
const int NumberOfElements = 500000;

[Benchmark]
public void InlinableList()
{
    var list = new OptimizedList<int>(DefaultCapacity);

    for (int i = 0; i < NumberOfElements; i++)
    {
        list.Add(i);
    }
}

[Benchmark]
public void NonInlinableList()
{
    var list = new NonOptimizedList<int>(DefaultCapacity);

    for (int i = 0; i < NumberOfElements; i++)
    {
        list.Add(i);
    }
}
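For completeness, here is a minimal sketch of how these benchmark methods might be hosted and run with BenchmarkDotNet; the ListBenchmarks class name is assumed, and OptimizedList<T>/NonOptimizedList<T> are the types from the snippets above:

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class ListBenchmarks
{
    // ... the InlinableList and NonInlinableList benchmark methods shown above go here ...
}

public static class Program
{
    public static void Main()
    {
        // Runs every [Benchmark] method in ListBenchmarks and prints a summary table.
        BenchmarkRunner.Run<ListBenchmarks>();
    }
}

Running it produces a summary like the following: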
           Method |      Mean |    StdErr |
----------------- |---------- |---------- |
    InlinableList | 3.3074 ms | 0.0373 ms |
NonInlinableList | 3.9557 ms | 0.0097 ms |

As you can see, the difference is pretty significant (about 20% for a method that just adds 500,000 elements). You may wonder why the difference is so big. The worst-case complexity of the Add method is O(N), but the amortized complexity is O(1): the underlying array grows exponentially, so in the vast majority of cases adding an element to the list is so cheap that the method call overhead starts to play a noticeable role.

In Visual Studio, you can check that the JIT compiler actually inlines the method. To do that, add a breakpoint to both benchmark methods, run the app in Release mode, and then switch to the Disassembly window (Debug => Windows => Disassembly). Be aware that you first need to disable two options in the Tools => Options => Debugging => General menu: ‘Suppress JIT optimization on module load’ and ‘Enable Just My Code’.

When to use the pattern?

This is not a common pattern that would be helpful in every application. It may be useful for extremely hot paths in end-user production code. However, it is more likely to be helpful in high-performance libraries and frameworks, where even a slight overhead is critical because they may be called on the hot path.

For instance, this pattern is used in the BCL and in several places in TPL Dataflow (StartTaskSafe, OfferAsyncIfNecessary, ProcessAsyncIfNecessary, etc.):

internal static Exception StartTaskSafe(Task task, TaskScheduler scheduler)
{
    if (scheduler == TaskScheduler.Default)
    {
        task.Start(scheduler);
        // We don't need to worry about scheduler exceptions
        // from the default scheduler.
        return null;
    }
    // Slow path with try/catch separated out so that
    // StartTaskSafe may be inlined in the common case.
    else return StartTaskSafeCore(task, scheduler);
}

You may consider using this pattern if you have a highly used member with two distinct scenarios: fast and common versus heavyweight and rare. If the heavyweight part is not inline-friendly due to its size or its language constructs (like try/finally), then splitting the method in two will help the CLR JIT inline the fast path.

Beware of too much inlining

You may think that method inlining is such a good thing that frameworks and libraries should force the runtime to inline as much as possible using MethodImplOptions.AggressiveInlining, which was added in .NET 4.5. First of all, the attribute only removes the method-size restriction on inlining; it won’t force the runtime to inline virtual methods (even ‘sealed’ ones) or methods with complicated control flow or exception handling.
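For reference, a minimal sketch of what applying the attribute to the original, single-method Add would look like (the OriginalList<T> class name is made up for illustration):

using System.Runtime.CompilerServices;

public class OriginalList<T>
{
    // Lifts only the 32-byte IL size limit; the JIT may still refuse to inline,
    // and as the table below shows it does not help in this case.
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public void Add(T item)
    {
        // ... the original implementation with the resize logic inline ...
    }
}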

But the main reason not to apply the attribute without careful measurement is speed itself. Method inlining is a double-edged sword: it reduces the number of instructions executed at the call site, but it can make the resulting code bigger.

For instance, we can force method inlining for our original implementation of the Add method, but it won’t yield any performance benefits. Actually, the overall performance would be slightly worse (for more information see “To Inline or not to Inline: That is the question” by Vance Morrison):

                             Method |      Mean |    StdErr |    StdDev |    Median |
----------------------------------- |---------- |---------- |---------- |---------- |
                   NonInlinableList | 3.7030 ms | 0.0116 ms | 0.0402 ms | 3.6858 ms |
OriginalListWithAggressiveInlining | 3.8335 ms | 0.0381 ms | 0.2347 ms | 3.7086 ms |
                      InlinableList | 3.3077 ms | 0.0457 ms | 0.2238 ms | 3.1836 ms |

Additional links and resources

[DevSumi] I Spoke at Developer Summit 2017 #devsumi #devsumiA


I spoke at Developer Summit 2017 (known as “DevSumi”)! DevSumi is one of the largest engineering events in Japan, attracting more than 3,000 attendees. I’m honored to have been given the chance to speak on such a big stage. Thank you!
My session was built around live coding, and I talked about C#, Visual Studio 2017, Xamarin, and Microsoft Azure. It was a lot of fun!

Developer Summit 2017 official site

Before the session: the biggest room fills up with standing attendees

My session was assigned to “Room A,” the largest room at the venue. What an honor!
From 14:10, in Room A! My session is titled “Let’s build mobile apps easily with C#!”

And the room became completely packed! People were standing along three of the walls.

↓ Rows and rows of senior engineers (the front seats are empty because they were reserved for press and event staff)

This photo was taken ten minutes before the session started; at the actual start time it was even more crowded.
When I asked Nabeshima-san of Shoeisha, who ran the event, I was told that about 400 people had come, and that:

“Amazing! Room A overflowing with standing attendees… I can hardly believe it.
The attendance at Chiyoda-san’s session was (probably) the highest in DevSumi history!”

So I was told! What an honor! My heart was pounding…!

The line for my session

I was in the speakers’ lounge, so I didn’t know about it at the time, but apparently the line in front of the doors stretched a looooong way!

Reactions to my session

Here are some reactions from the people who stayed to the end of my session. (Chronologically this should come last, but I wanted to put it first.)

Five minutes before showtime

↓ A photo of me checking the monitor connection just before the session

↑ And here I am being photographed while taking that photo of the venue ↓

The session begins!

At 14:10 the lights in the hall went down and my session started!

About this session: the target audience is people who

  1. Are not familiar with the Microsoft world
  2. Don’t know C#
  3. Are wondering “Xamarin? What’s that?”

And the goals are to

  1. Get you interested in C#
  2. Put Xamarin + Azure on your list of options for mobile app development
  3. Get you to install Visual Studio 2017 (free)
  4. Get you to read the web manga 『はしれ!コード学園』 (“Run, Code Academy!”)

That’s the plan!

I explained this verbally too: the reason I said “tweets are very welcome!” is that after the session I want to read everyone’s impressions!

Also, thanks to everyone live-tweeting, I can write retrospective posts like this one afterwards. Thank you, as always!

About photos, I also explained this out loud:

“When I speak at an event for work, I have to write a report to headquarters saying ‘I spoke at event so-and-so.’ To prove I actually spoke, it helps to attach a photo of me on stage to the report (not required, but nice to have). I can’t take pictures of myself while I’m speaking, so I always pick those photos up from Twitter. So if you take a photo of me on stage and share it on social media, I’d be really happy!”

That’s why! It really does help every time. Thank you!

Agenda

The table of contents for the talk!
Today I’ll cover these four topics:

  1. A modern language: C#
  2. Mobile app development: Xamarin
  3. The latest IDE: Visual Studio 2017
  4. Cloud integration: Xamarin + Azure

First, a self-introduction

First up, the speaker introduction. About me.

Since my self-introduction says “programmer and manga artist,” I also showed one slide introducing the manga I draw.

About C#

Looking at the session lineup for this DevSumi, I figured there would be a lot of non-C# people there, from DevOps, project management, web, infrastructure, AI and so on, so I started from the basics of C#.

C# is a programming language developed by Microsoft.

As an icebreaker, I also talked about where the name “C#” comes from.

C# is great

C# is great. So here is a concrete introduction to C# from a programmer’s point of view!

Features C# offers

I love C# so much that it’s the reason I joined Microsoft, so I suddenly become very fluent whenever the conversation turns to C#.

  • Tuples, which enable multiple return values
  • LINQ, for collection operations
  • async/await, for asynchronous processing
  • Generics, which take type parameters
  • Lambda expressions, i.e. anonymous functions … and more

↑ By the way, TypeScript (a language that compiles down to JavaScript source code) is also developed by Microsoft (as open source).

Sample code

C# has many features, but I introduced just two code samples.

( *゚▽゚* っ)З “The audience was full of excellent senior engineers, you know! I figured they would understand better by seeing actual code than by hearing me explain it in words.”

Collection operations: LINQ

When working with collections, C# has a super-convenient feature called LINQ (pronounced “link”). You can write SQL-like operations such as Where and Select as a method chain (a small sketch follows below).
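As a small hedged illustration (this is not the code from the slide), a LINQ method chain might look like this:

using System;
using System.Linq;

class LinqSample
{
    static void Main()
    {
        int[] scores = { 90, 35, 72, 88, 51 };

        // SQL-like Where/Select operations written as a method chain
        var passing = scores
            .Where(s => s >= 60)
            .Select(s => $"Passed with {s} points");

        foreach (var message in passing)
        {
            Console.WriteLine(message);
        }
    }
}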

Events can be written elegantly (a small sketch of this follows below as well).
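Again as a hedged sketch rather than the actual slide content, declaring an event and subscribing to it with a lambda could look like this:

using System;

class Downloader
{
    // A plain .NET event using the generic EventHandler delegate
    public event EventHandler<string> Completed;

    public void Download(string url)
    {
        // ... download work would go here ...
        Completed?.Invoke(this, url);   // raise the event (C# 6 null-conditional)
    }
}

class EventSample
{
    static void Main()
    {
        var downloader = new Downloader();

        // Subscribing with a lambda keeps the handler short and readable
        downloader.Completed += (sender, url) =>
            Console.WriteLine($"Finished downloading {url}");

        downloader.Download("https://example.com/data.json");
    }
}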

C# + VS2017 demo

“Now, let me actually show you.”

And I launched Visual Studio 2017 RC!
(Visual Studio is the super-powerful IDE developed by Microsoft. There is also a free edition, the Community edition.)

The room got excited about the live coding.

Visual Studio 2017 starts up and opens .sln files quickly

I demonstrated VS 2017 while explaining that startup and opening .sln files are fast.

The VS feature that converts raw JSON on the clipboard into classes

Visual Studio has a feature called “Paste JSON as Classes” that turns raw JSON you have copied (raw JSON on the clipboard) into classes! (It is not a new VS2017 feature; it has been there for a while.)

[Screenshot: Paste JSON as Classes, image from the blog of shibayan, a Microsoft MVP]

Details: a tips article on pasting clipboard JSON as classes in Visual Studio

This seemed to make quite an impression!

Live coding!

So far, so good. About 80% of the demos were going well.

The venue’s network was terrible

But then.

The network at this venue was unbelievably, painfully weak…!

Before the session, Ushio-san, a fellow Microsoft technical evangelist who had the slot right before mine, gave me a warning in the speakers’ lounge:

Ushio-san: “Watch out for the network.”

The venue network was so weak that all of his internet-based demos had failed… (even opening a web browser showed only a blank white page).

↓ A tweet from someone who watched Ushio-san’s session (Room B)

So, just in case, I had recorded videos of my demos beforehand as a backup, hoping I wouldn’t need them.

And, as expected, when I opened the NuGet Package Manager in VS (NuGet is C#’s package management system, something like Ruby’s gem), it just sat there saying “loading,” so:

Me: “Because of the network problem the NuGet Package Manager can’t load, so from here on I’ll switch to the backup videos.”

Plan B, activated.

That said, about 80% of the demos did succeed live. I’m still frustrated that I couldn’t do the whole thing live to the end. Dear organizers, next year please improve at least the speakers’ network!

↑ “The muscle guy” refers to Ushio-san.

Demo recap

After a demo I always do a recap of what we just did.

All the code I wrote during the live coding is here:
https://github.com/chomado/ConsoleApp1

C# is open source, too!

Please use Visual Studio!

Visual Studio is the super-powerful IDE developed by Microsoft. There is also a free edition (the Community edition).

There is also a Mac version of Visual Studio, but it was only announced in the fall of 2016 and is currently still a preview. (Smaller conveniences such as the “Paste JSON as Classes” feature I showed today are not implemented yet.)
The Windows version has the full feature set.

Demo of new VS2017 features

I ran out of time and couldn’t demo Live Unit Testing, one of the new features in VS2017!

Anticipating that this might happen, I had written up the detailed steps in an article beforehand. Please take a look.

C# runs everywhere!

If there is a .NET runtime, C# runs everywhere, even on Android!

Xamarin

Before joining Microsoft I was a developer building smartphone apps with Xamarin, so I speak from experience!

The great thing about Xamarin is the code sharing!
Though the biggest benefit of all, I think, is being able to develop using only C#! (says this C# lover)

Azure

About Microsoft Azure!

↓ The entire content of the session is written up in this article ↓

Summary

Everyone’s reactions

Finishing with afternoon tea

The venue was a lovely place, so after my talk I treated myself to my favorite tea and sweets!

Blogs from people who attended my session

How to Apply Transaction Logs to Secondary When it is Far Behind


Problem: Large Log Send Queue

You discover that the log send queue for a given availability database has grown very large, which threatens your Recovery Point Objective (RPO), and that the transaction log has grown very large on the primary replica, possibly threatening to fill the drive. This can happen for various reasons; for example, you discover that synchronization of the availability database was inadvertently suspended by an administrator, or even by the system, and the problem was not identified until now.

Scenario 1: The database is not large

If it is reasonable to re-initialize the secondary replica copy of the database (whose log file has grown very large) using a backup and restore, you have two options: allow synchronization to proceed and drain the log send queue, or, if the transaction log is much larger than the database itself, remove the database from the availability group, shrink the log file, and add it back into the availability group, re-seeding your secondary replicas with the newly added database.

  1. In SQL Server Management Studio (SSMS), remove the database from the availability group on the primary replica.
  2. Set the database to the simple recovery model on the primary replica. This will allow you to shrink the log file.
  3. Shrink the log file down to a reasonable size. To shrink the log file, in SSMS Object Explorer right-click the database, choose Tasks, then Shrink, then Files. For File type choose Log and click OK.
  4. Once the log file has been shrunk, configure the database for the full recovery model, then right-click the availability group, choose Add Database, and use the wizard to add the database back and re-initialize the secondary replica with it.

Scenario 2: The database is very large, making re-initialization of the secondary replicas prohibitive

If the data portion of the database is very large, combined with the remoteness of one or more secondaries, it may not be reasonable to use Scenario 1. Another option is to apply the log backups to the database at the secondary in order to ‘catch up’ the database there. The steps follow.

   1. Determine the transaction logs that must be applied. Query the last_hardened_lsn for the database on the secondary. Connect to the secondary and run the following, supplying the database name as the only predicate:

select distinct dcs.database_name, ds.last_hardened_lsn
from sys.dm_hadr_database_replica_states ds
join sys.dm_hadr_database_replica_cluster_states dcs
  on ds.group_database_id = dcs.group_database_id
where dcs.database_name = 'AutoHa-sample'

You will get results like this.

[Screenshot: query results showing the database name and last_hardened_lsn]

   2. Query the log backups in msdb to find which log backup the secondary LSN falls within.

select name, backup_set_id, backup_start_date, backup_finish_date, first_lsn, last_lsn
from msdb..backupset
where first_lsn < '93000012832800001' and last_lsn > '93000012832800001'

We see one row returned; we need to apply all logs on the secondary replica starting with this one:

[Screenshot: the matching log backup row]

An alternative way to find the first transaction log backup: query sys.dm_hadr_database_replica_states.last_hardened_time for the database (same query as above). Use File Explorer to view the transaction log backup files and order them by modified date. Compare the last_hardened_time with the backup modified dates to determine which transaction log backup you should start with, selecting the backup whose modified date is just beyond the last_hardened_time. Query that transaction log backup to verify that the last_hardened_time falls within it:

restore headeronly from disk = N'f:\backups\AutoHa-sample_backup_2017_01_20_194401_3869314.bak'

   3. Take the database out of the availability group on the secondary. Connect to the secondary replica and execute the following to remove the database from the secondary replica.

alter database [AutoHa-sample] set hadr off

On the secondary, the database will be in the Restoring state and ready to have logs applied.

[Screenshot: the database in Restoring state on the secondary]

   4. Apply the transaction log backups to the database on the secondary replica. Here is a view of the log backup files. We identified the one called ‘AutoHa-sample_backup_2017_01_20_194401_3869314’ as containing our secondary database’s last_hardened_lsn.

[Screenshot: the transaction log backup files]

Begin restoring with that log, being sure to restore all of the transaction log backups WITH NORECOVERY.

restore log [AutoHa-sample] from disk = '\\sqlserver-0\Backups\AutoHa-sample_backup_2017_01_20_194401_3869314.trn' with norecovery
go
restore log [AutoHa-sample] from disk = '\\sqlserver-0\Backups\AutoHa-sample_backup_2017_01_20_194500_8646900.trn' with norecovery
go
restore log [AutoHa-sample] from disk = '\\sqlserver-0\Backups\AutoHa-sample_backup_2017_01_20_194600_9809513.trn' with norecovery
go
restore log [AutoHa-sample] from disk = '\\sqlserver-0\Backups\AutoHa-sample_backup_2017_01_20_200000_6780254.trn' with norecovery
go

   5. Add the database back into the availability group on the secondary and resume synchronization. Connect to the secondary replica and execute the following to add the database back into the availability group and resume sync.

alter database [AutoHa-sample] set hadr availability group = [contoso-ag]
go
alter database [AutoHa-sample] set hadr resume
go

You can view the availability group database on the secondary to confirm it is added back and synchronized.

[Screenshot: the availability group database synchronized on the secondary]

Accelerate MXNet R training (deep learning) with GPUs and multiple machines


Scale your machine learning workloads on R (series)

In my previous post, I showed how to reduce the time of your deep learning scoring workloads using MXNetR. Here I look at the training side of deep learning.

Get the power of devices – GPUs

From the training perspective, GPUs are a very important factor in reducing workload time. With MXNet you can easily take advantage of GPU-accelerated deep learning, so let’s take a look at these capabilities.

First, you can easily get a GPU-enabled environment using Azure N-series (NC, NV) virtual machines. You must set up (install) all the components (drivers, packages, etc.) with the following installation script, and then you have a GPU-powered MXNet. (You must compile MXNet with the USE_CUDA=1 switch.) Because the script also sets up RStudio Server, you can connect to this Linux machine using the familiar RStudio client in your browser.
For details about compiling, see “Machine Learning team blog: Building Deep Neural Networks in the Cloud with Azure GPU VMs, MXNet and Microsoft R Server”.

Note that you should download the following software (drivers, tools) before running this script.

#!/usr/bin/env bash

#
# install R (MRAN)
#
wget https://mran.microsoft.com/install/mro/3.3.2/microsoft-r-open-3.3.2.tar.gz
tar -zxvpf microsoft-r-open-3.3.2.tar.gz
cd microsoft-r-open
sudo ./install.sh -a -u
cd ..
sudo rm -rf microsoft-r-open
sudo rm microsoft-r-open-3.3.2.tar.gz

#
# install gcc, python, etc
#
sudo apt-get install -y libatlas-base-dev libopencv-dev libprotoc-dev python-numpy python-scipy make unzip git gcc g++ libcurl4-openssl-dev libssl-dev
sudo update-alternatives --install "/usr/bin/cc" "cc" "/usr/bin/gcc" 50

#
# install CUDA (you can download cuda_8.0.44_linux.run)
#
chmod 755 cuda_8.0.44_linux.run
sudo ./cuda_8.0.44_linux.run -override
sudo update-alternatives --install /usr/bin/nvcc nvcc /usr/bin/gcc 50
export LIBRARY_PATH=/usr/local/cudnn/lib64/
echo -e "nexport LIBRARY_PATH=/usr/local/cudnn/lib64/" >> .bashrc

#
# install cuDNN (you can download cudnn-8.0-linux-x64-v5.1.tgz)
#
tar xvzf cudnn-8.0-linux-x64-v5.1.tgz
sudo mv cuda /usr/local/cudnn
sudo ln -s /usr/local/cudnn/include/cudnn.h /usr/local/cuda/include/cudnn.h
export LD_LIBRARY_PATH=/usr/local/cuda/lib64/:/usr/local/cudnn/lib64/:$LD_LIBRARY_PATH
echo -e "nexport LD_LIBRARY_PATH=/usr/local/cuda/lib64/:/usr/local/cudnn/lib64/:$LD_LIBRARY_PATH" >> ~/.bashrc

#
# install MKL (you can download l_mkl_2017.0.098.tgz)
#
tar xvzf l_mkl_2017.0.098.tgz
sudo ./l_mkl_2017.0.098/install.sh

# Additional setup for MRAN and CUDA
sudo touch /etc/ld.so.conf
echo "/usr/local/cuda/lib64/" | sudo tee --append /etc/ld.so.conf
echo "/usr/local/cudnn/lib64/" | sudo tee --append /etc/ld.so.conf
sudo ldconfig

#
# download MXNet source
#
MXNET_HOME="$HOME/mxnet/"
git clone https://github.com/dmlc/mxnet.git "$HOME/mxnet/" --recursive
cd "$MXNET_HOME"

#
# configure MXNet
#
cp make/config.mk .
# if use dist_sync or dist_async in kv_store (see later)
#echo "USE_DIST_KVSTORE = 1" >>config.mk
# if use Azure BLOB Storage
#echo "USE_AZURE = 1" >>config.mk
# For GPU
echo "USE_CUDA = 1" >>config.mk
echo "USE_CUDA_PATH = /usr/local/cuda" >>config.mk
echo "USE_CUDNN = 1" >>config.mk
# For MKL
#source /opt/intel/bin/compilervars.sh intel64 -platform linux
#echo "USE_BLAS = mkl" >>config.mk
#echo "USE_INTEL_PATH = /opt/intel/" >>config.mk

#
# compile and install MXNet
#
make -j$(nproc)
sudo apt-get install libxml2-dev
sudo Rscript -e "install.packages('devtools', repo = 'https://cran.rstudio.com')"
cd R-package
sudo Rscript -e "library(devtools); library(methods); options(repos=c(CRAN='https://cran.rstudio.com')); install_deps(dependencies = TRUE)"
sudo Rscript -e "install.packages(c('curl', 'httr'))"
sudo Rscript -e "install.packages(c('Rcpp', 'DiagrammeR', 'data.table', 'jsonlite', 'magrittr', 'stringr', 'roxygen2'), repos = 'https://cran.rstudio.com')"
cd ..
sudo make rpkg
sudo R CMD INSTALL mxnet_current_r.tar.gz
cd ..

#
# install RStudio server
#
sudo apt-get -y install gdebi-core
wget -O rstudio.deb https://download2.rstudio.org/rstudio-server-0.99.902-amd64.deb
sudo gdebi -n rstudio.deb

Note: I explain the USE_DIST_KVSTORE switch later.

“It’s still so complicated, and it takes so much time to compile!”

Don’t worry. If you think it’s hard, you can use pre-configured virtual machines called the “Deep Learning Toolkit for the DSVM (Data Science Virtual Machine)” (currently Windows only) in Microsoft Azure (see below). Using this VM template, you get almost all of the components required for deep neural network computing, including an NC-series VM with GPUs (Tesla), the drivers (software and toolkit), Microsoft R, R Server, and GPU-accelerated MXNet (and other DNN libraries). No setup is needed!
Note that this deployment requires access to Azure NC instances, which depends on the choice of region and HDD/SSD. In this post I selected the South Central US region.

Note: Below is the matrix of available instances and services by Azure region.
https://azure.microsoft.com/en-us/regions/services/

Here I’m using two GPUs, labeled 0 and 1 (see below).

nvidia-smi -pm 1
nvidia-smi

If you want to use GPUs for deep neural networks, you can easily switch the device mode to GPU with MXNet as follows. In this code, each data batch is partitioned across the two GPUs and the results are aggregated.
Here I’m using the LeNet network (a CNN) on MNIST (the famous handwritten-digit recognition example; you can see the details here), and, sorry, GPU utilization is not very high (less than 10%) in this example. Please use CIFAR-10 or some other heavy workload for a real scenario…

#####
#
# train.csv (training data) is:
# (label, pixel0, pixel1, ..., pixel783)
# 1, 0, 0, ..., 0
# 4, 0, 0, ..., 0
# ...
#
#####
#
# test.csv (scoring data) is:
# (pixel0, pixel1, ..., pixel783)
# 0, 0, ..., 0
# 0, 0, ..., 0
# ...
#
#####

require(mxnet)

# read training data
train <- read.csv(
  "C:\Users\tsmatsuz\Desktop\training\train.csv",
  header=TRUE)
train <- data.matrix(train)

# separate label and pixel
train.x <- train[,-1]
train.x <- t(train.x/255)
train.array <- train.x
dim(train.array) <- c(28, 28, 1, ncol(train.x))
train.y <- train[,1]

#
# configure network
#

# input
data <- mx.symbol.Variable('data')
# first conv
conv1 <- mx.symbol.Convolution(
  data=data,
  kernel=c(5,5),
  num_filter=20)
tanh1 <- mx.symbol.Activation(
  data=conv1,
  act_type="tanh")
pool1 <- mx.symbol.Pooling(
  data=tanh1,
  pool_type="max",
  kernel=c(2,2),
  stride=c(2,2))
# second conv
conv2 <- mx.symbol.Convolution(
  data=pool1,
  kernel=c(5,5),
  num_filter=50)
tanh2 <- mx.symbol.Activation(
  data=conv2,
  act_type="tanh")
pool2 <- mx.symbol.Pooling(
  data=tanh2,
  pool_type="max",
  kernel=c(2,2),
  stride=c(2,2))
# first fullc
flatten <- mx.symbol.Flatten(data=pool2)
fc1 <- mx.symbol.FullyConnected(
  data=flatten,
  num_hidden=500)
tanh3 <- mx.symbol.Activation(
  data=fc1,
  act_type="tanh")
# second fullc
fc2 <- mx.symbol.FullyConnected(data=tanh3, num_hidden=10)
# loss
lenet <- mx.symbol.SoftmaxOutput(data=fc2)

# train !
kv <- mx.kv.create(type = "local")
mx.set.seed(0)
tic <- proc.time()
model <- mx.model.FeedForward.create(
  lenet,
  X=train.array,
  y=train.y,
  ctx=list(mx.gpu(0),mx.gpu(1)),
  kvstore = kv,
  num.round=5,
  array.batch.size=100,
  learning.rate=0.05,
  momentum=0.9,
  wd=0.00001,
  eval.metric=mx.metric.accuracy,
  epoch.end.callback=mx.callback.log.train.metric(100))

# score (1st time)
test <- read.csv(
  "C:\Users\tsmatsuz\Desktop\training\test.csv",
  header=TRUE)
test <- data.matrix(test)
test <- t(test/255)
test.array <- test
dim(test.array) <- c(28, 28, 1, ncol(test))
preds <- predict(model, test.array)
pred.label <- max.col(t(preds)) - 1
print(table(pred.label))

Distributed Training – Scale across multiple machines

You can distribute your MXNet training workloads not only across devices, but also across multiple machines.

Here we assume there are three machines (hosts), named “server01”, “server02”, and “server03”. We launch the parallel job on server02 and server03 from the server01 console.

Before starting, you must compile MXNet with USE_DIST_KVSTORE=1 on all hosts. (See the bash script example above.)

Note: Currently the distributed kvstore setting (USE_DIST_KVSTORE=1) is not enabled by default in the existing Data Science Virtual Machines (DSVM) or in the Deep Learning Toolkit for the DSVM, so you must set it up (compile) yourself.

For the distribution mechanism, ssh, mpirun, and yarn can be used for remote execution and cluster management. Here we use ssh as an example.

First we set up trust between the host machines.
We create a key pair on server01 using the following command; the generated key pair (id_rsa and id_rsa.pub) is placed in the .ssh folder. During creation, leave the passphrase blank.

ssh-keygen -t rsa

ls -al .ssh

drwx------ 2 tsmatsuz tsmatsuz 4096 Feb 21 05:01 .
drwxr-xr-x 7 tsmatsuz tsmatsuz 4096 Feb 21 04:52 ..
-rw------- 1 tsmatsuz tsmatsuz 1766 Feb 21 05:01 id_rsa
-rw-r--r-- 1 tsmatsuz tsmatsuz  403 Feb 21 05:01 id_rsa.pub

Next, copy the generated public key (id_rsa.pub) into the {home of the same user id}/.ssh directory on server02 and server03. The file name must be “authorized_keys”.

Now let’s confirm that you can run a command (pwd) on the remote hosts (server02, server03) from server01 as follows. If it succeeds, the current working directory on the remote host is returned. (We assume that 10.0.0.5 is the IP address of server02 or server03.)

ssh -o StrictHostKeyChecking=no 10.0.0.5 -p 22 pwd

/home/tsmatsuz

Next, create a file named “hosts” in your working directory on server01 and write the IP address of each remote host (server02 and server03) on its own row.

10.0.0.5
10.0.0.6

In my example, I simply use the following training script, test01.R, which is executed on the remote hosts. As you can see, we set kvstore to dist_sync, which means synchronous parallel execution. (The weights and biases on the remote hosts are updated synchronously.)

test01.R

#####
#
# train.csv (training data) is:
# (label, pixel0, pixel1, ..., pixel783)
# 1, 0, 0, ..., 0
# 4, 0, 0, ..., 0
# ...
#
#####
#
# test.csv (scoring data) is:
# (pixel0, pixel1, ..., pixel783)
# 0, 0, ..., 0
# 0, 0, ..., 0
# ...
#
#####

require(mxnet)

# read training data
train_d <- read.csv(
  "train.csv",
  header=TRUE)
train_m <- data.matrix(train_d)

# separate label and pixel
train.x <- train_m[,-1]
train.y <- train_m[,1]

# transform image pixel [0, 255] into [0,1]
train.x <- t(train.x/255)

# configure network
data <- mx.symbol.Variable("data")
fc1 <- mx.symbol.FullyConnected(data, name="fc1", num_hidden=128)
act1 <- mx.symbol.Activation(fc1, name="relu1", act_type="relu")
fc2 <- mx.symbol.FullyConnected(act1, name="fc2", num_hidden=64)
act2 <- mx.symbol.Activation(fc2, name="relu2", act_type="relu")
fc3 <- mx.symbol.FullyConnected(act2, name="fc3", num_hidden=10)
softmax <- mx.symbol.SoftmaxOutput(fc3, name="sm")

# train !
model <- mx.model.FeedForward.create(
  softmax,
  X=train.x,
  y=train.y,
  ctx=mx.cpu(),
  num.round=10,
  kvstore = "dist_sync",
  array.batch.size=100,
  learning.rate=0.07,
  momentum=0.9,
  eval.metric=mx.metric.accuracy,
  initializer=mx.init.uniform(0.07),
  epoch.end.callback=mx.callback.log.train.metric(100))

Now you can run the parallel execution.
On the server01 console, call launch.py as follows with the executable command “Rscript test01.R”. The command is then executed on server02 and server03, and the processes are traced by the job monitor. (Here we assume that /home/tsmatsuz/mxnet is the MXNet installation directory.)

Be sure to put this R script (test01.R) and the training data (train.csv) in the same directory on server02 and server03. If you want to see that the weights and biases are properly updated, it’s better to use different training data on server02 and server03 (or to download the appropriate data files from a remote site on the fly).

/home/tsmatsuz/mxnet/tools/launch.py -n 1 -H hosts
  --launcher ssh Rscript test01.R

The output is the following. The upper part is the result from a single node, and the lower part is the synchronized result from server02 and server03.

Active learning (online learning) with MXNet

In the previous section we looked at parallel, synchronous training workloads. Lastly, let’s think about training over time.

As you know, re-training from the beginning wastes a lot of time. With MXNet you can save the trained model and refine it with new data, as follows.
As you can see, we pass the previously trained symbol and parameters into the second training run (see the symbol, arg.params, and aux.params arguments below).

#####
#
# train.csv (training data) is:
# (label, pixel0, pixel1, ..., pixel783)
# 1, 0, 0, ..., 0
# 4, 0, 0, ..., 0
# ...
#
#####
#
# test.csv (scoring data) is:
# (pixel0, pixel1, ..., pixel783)
# 0, 0, ..., 0
# 0, 0, ..., 0
# ...
#
#####

require(mxnet)

# read first 500 training data
train_d <- read.csv(
  "C:\Users\tsmatsuz\Desktop\training\train.csv",
  header=TRUE)
train_m <- data.matrix(train_d[1:500,])

# separate label and pixel
train.x <- train_m[,-1]
train.y <- train_m[,1]

# transform image pixel [0, 255] into [0,1]
train.x <- t(train.x/255)

# configure network
data <- mx.symbol.Variable("data")
fc1 <- mx.symbol.FullyConnected(data, name="fc1", num_hidden=128)
act1 <- mx.symbol.Activation(fc1, name="relu1", act_type="relu")
fc2 <- mx.symbol.FullyConnected(act1, name="fc2", num_hidden=64)
act2 <- mx.symbol.Activation(fc2, name="relu2", act_type="relu")
fc3 <- mx.symbol.FullyConnected(act2, name="fc3", num_hidden=10)
softmax <- mx.symbol.SoftmaxOutput(fc3, name="sm")

# train !
model <- mx.model.FeedForward.create(
  softmax,
  X=train.x,
  y=train.y,
  ctx=mx.cpu(),
  num.round=10,
  array.batch.size=100,
  learning.rate=0.07,
  momentum=0.9,
  eval.metric=mx.metric.accuracy,
  initializer=mx.init.uniform(0.07),
  epoch.end.callback=mx.callback.log.train.metric(100))

# score (1st time)
test <- read.csv("C:\Users\tsmatsuz\Desktop\training\test.csv", header=TRUE)
test <- data.matrix(test)
test <- t(test/255)
preds <- predict(model, test)
pred.label <- max.col(t(preds)) - 1
table(pred.label)

#
# save the current model if needed
#

# save model to file
mx.model.save(
  model = model,
  prefix = "mymodel",
  iteration = 500
)

# load model from file
model_loaded <- mx.model.load(
  prefix = "mymodel",
  iteration = 500
)

# re-train for the next 500 data !
train_d <- read.csv(
  "C:\Users\tsmatsuz\Desktop\training\train.csv",
  header=TRUE)
train_m <- data.matrix(train_d[501:1000,])
train.x <- train_m[,-1]
train.y <- train_m[,1]
train.x <- t(train.x/255)
model = mx.model.FeedForward.create(
  model_loaded$symbol,
  arg.params = model_loaded$arg.params,
  aux.params = model_loaded$aux.params,
  X=train.x,
  y=train.y,
  ctx=mx.cpu(),
  num.round=10,
  #kvstore = kv,
  array.batch.size=100,
  learning.rate=0.07,
  momentum=0.9,
  eval.metric=mx.metric.accuracy,
  initializer=mx.init.uniform(0.07),
  epoch.end.callback=mx.callback.log.train.metric(100))

# score (2nd time)
test <- read.csv("C:\Users\tsmatsuz\Desktop\training\test.csv", header=TRUE)
test <- data.matrix(test)
test <- t(test/255)
preds <- predict(model, test)
pred.label <- max.col(t(preds)) - 1
table(pred.label)

Note: “active learning” here is machine learning terminology, not education terminology; see Wikipedia, “Active learning (machine learning)”.

[Reference] MXNet Docs : Run MXNet on Multiple CPU/GPUs with Data Parallel
http://mxnet.io/how_to/multi_devices.html

 


Welcome to Premier Support for Developers


Our main mission is to help companies raise the quality of their business applications and increase the productivity of their development teams, as well as to reduce the costs and risks inherent in the enterprise software development cycle.

That is why we are opening this blog: to share our knowledge about software development and to help you in adopting technologies such as ALM (Application Lifecycle Management), DevOps, application development on Azure, and other cloud-related topics.

Every organization is developing applications… every company is a software company.

Premier Support for Developers helps you adopt Azure, guiding you through modern application development by way of a long-term relationship, putting at your disposal experts with strong technical skills, innovation, and world-class service.

This service will help you maximize your technology investment, reduce inherent risks, increase the reliability of your systems, and considerably improve the productivity of your development teams.

Common areas covered by our technical support.

Some of the topics we plan to cover in this blog include:

  • Developing in the cloud
  • Technical adoption of Azure
  • Support for applications on Azure
  • Continuous integration
  • Release management
  • Agile delivery
  • Cross-platform applications
  • Mobile development (Xamarin)
  • Business insights (Big Data/BI)
  • Architecting for the future
  • SDLC (Fuzzing as a Service)
  • Telemetry

Do not hesitate to write to us with any questions about our services.

de:code 2017 Dates Announced!


The dates for de:code 2017, Microsoft Japan’s event for developers, architects, and IT engineers, have been set: May 23-24, 2017, at The Prince Park Tower Tokyo.

Details and registration will open shortly!

The latest information will be posted on Twitter: @msdevjp (hashtag #decode17), so don’t miss it!

————————

Overview

de:code 2017 is an event that presents Microsoft’s technology vision and the latest technologies for getting the most out of the cloud and mobile to all IT engineers. In addition to the latest news on Microsoft Azure, Windows Holographic, and more, it will also cover the announcements made at Build 2017, held in Seattle, USA, from May 10 to May 12, delivering useful information to every engineer working in IT.

Date and Time

Tuesday, May 23: 9:30 – 20:30 [doors open 8:45 (planned)]
Wednesday, May 24: 9:30 – 18:30 [doors open 9:00 (planned)]

Venue

The Prince Park Tower Tokyo [access]

Admission

Paid. Details will be published on the official site.

————————

※ Subject to change without notice. Please check the official site for the latest information.

————————————————————————–

————————————————————————–

※ The content of this information (including attachments and links) is current as of the date of writing and is subject to change without notice.

SharePoint 2016 February 2017 CU and MIM to import user profiles


MIM and User Profile import into SharePoint Server 2016, that is, not using “Active Directory” synchronization.

You may read in https://support.microsoft.com/en-us/help/3141517/february-21-2017-update-for-sharepoint-server-2016-kb3141517 the following:

This update fixes the following issue:

When you set up an External Identity Manager, the group membership isn’t synchronized as usual. This update now includes a new timer job “Updates Profile Memberships and Relationships Job” that runs by default every five minutes to update the changes after an import.

You installed this cumulative update package because you had concerns that the “Manager” attribute would not be synced into the profile store.

After installing all the binaries (explained here: https://blogs.msdn.microsoft.com/joerg_sinemus/2017/02/22/sharepoint-2016-february-2017-cu/) and running psconfigui, the Health Analyzer reported the following, provided the scheduled job “Health Analysis Job (Hourly, User Profile Service, Any Server)” had already finished once after psconfig:

[Screenshot: Health Analyzer warning]

Please click that link on the Health Report page and you will reach the following page:

[Screenshot: Health Analyzer rule details page]

After the repair, the issue should be solved, and on the “Review job definitions” page you should now find the new job:

[Screenshot: Review job definitions page]

UPA is the name we gave the service application in our farm; it may differ in your configuration.

And a positive result, after the job has run at least once, may look like this:

[Screenshot: successful job run]

The other way to find this problem with the new timer job is to look in the ULS logs:

Process: OWSTIMER.EXE | Area: SharePoint Foundation | Category: Monitoring | EventID: nasq | Level: High
Message: Entering Monitored Scope (Health Rule Execution: Microsoft.Office.Server.Administration.UserProfileInstalledJobsHealthRule, Microsoft.Office.Server.UserProfiles, Version=16.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c). Parent=Executing Hourly Any User Profile Service rules.

Process: OWSTIMER.EXE | Area: SharePoint Portal Server | Category: User Profiles | EventID: f2no | Level: Medium
Message: UserProfileInstalledJobsHealthRule.Check – missing job ‘Microsoft.Office.Server.UserProfiles.OM.ExternalIdentityManagerMembershipsAndRelationshipsJob’ for UserProfileApp ‘UPA’

Process: OWSTIMER.EXE | Area: SharePoint Foundation | Category: Health | EventID: 2137 | Level: Critical
Message: The SharePoint Health Analyzer detected an error. Verify that the critical User Profile Application and User Profile Proxy Application timer jobs are available and have not been mistakenly deleted. A required timer job for a User Profile Application or User Profile Application Proxy is missing. The repair action will recreate missing timer jobs required for the User Profile Application or User Profile Application Proxy. For more information about this rule, see “http://go.microsoft.com/fwlink/?LinkID=159660”.

The best way to solve this is through the Health Analyzer report shown above.

References:

Install Microsoft Identity Manager for User Profiles in SharePoint Server 2016
https://technet.microsoft.com/en-us/library/mt627723(v=office.16).aspx

SharePoint 2016 February 2017 CU


The next cumulative update for SharePoint Server 2016 is available. We may call it a public update (PU for short) in the future.

In addition, this month our SharePoint product group decided to also include “Feature Pack 1” in the cumulative update. Be sure to read the blog post from Bill Baer to learn more: https://blogs.office.com/2016/11/08/feature-pack-1-for-sharepoint-server-2016-now-available/

KB 3141515 Language independent version
https://support.microsoft.com/en-us/KB/3141515

Download it from: https://www.microsoft.com/en-us/download/details.aspx?id=54779

KB 3141517 Language dependent fixes
https://support.microsoft.com/en-us/kb/3141517

Download it from: https://www.microsoft.com/en-us/download/details.aspx?id=54781

You may download both SharePoint packages; the language shown on the download page is not important.

There is no update for Office Online Server 2016 (formerly known as Office Web Apps Server 2013, or OWAS; let us call it OOS 2016 from now on).

After installing the fixes, you need to run the SharePoint 2016 Products Configuration Wizard on each machine in the farm. If you prefer to run the command-line version, psconfig.exe, be sure to have a look here for the correct options.

SharePoint 2016 CU build number: 16.0.4498.1002 (the language-independent fix is also 16.0.4498.1002).
The language-independent fix updates the configuration database schema, so the configuration database schema version will show the version number above.

You can use the SharePoint Server 2016 Patch Build Numbers PowerShell Module to identify the patch level of your SharePoint components.

Related Links:

#TheFeed – How good might our leadership be? By Merlin John


The following post features in the Jan/Feb 2017 issue of #TheFeed, our online magazine bringing you the best stories from Microsoft Showcase Schools and #MIEExperts, thought leadership, and news from the Microsoft in Education team. This piece is written by Merlin John – a widely respected journalist known for his expertise in education and ICT – and explores the vital role that leadership plays in enhancing children’s learning, and ways to transform your school through the use of technology.

Head over to SlideShare to browse all the latest stories from this edition of #TheFeed:


#TheFeed – How good might our leadership be?

by Merlin John

“How good might our children really be?” It’s a simple question, one you are likely to hear from leading UK academic Professor Stephen Heppell, a figure synonymous with technology for learning. Less is more: its profundity lies as much in what it doesn’t ask as in what it actually does.

How good might they be, if we didn’t:

  • Narrow their horizons with prescriptive curricula?
  • Limit their contributions with rigid, outdated assessment regimes?
  • Hamper them from pursuing learning across subject confines, as they would in adult life?
  • Stultify their creativity and engagement by placing them in unsuitable learning environments?
  • Fail to exploit the power of technology to support, extend and improve learning and teaching?

Education reform expert Professor Michael Fullan has brought technology into his work in recent years (see “A Rich Seam – How New Pedagogies find Deep Learning”). His view is that while technology is not one of the key levers for education reform, it can accelerate all the ones he has identified in his writings.

Professor Fullan is committed to change at scale, and has already demonstrated how it can be done. So what’s the problem with education’s encounter with technology?

Pundits have been fond of blaming teachers for failing to engage with technology, but here in the UK what has become clear is that technology for learning has virtually disappeared from the political agenda for schools in England. That’s not the case in Wales, Scotland and Northern Ireland but the acme of schools strategy for England, where all were once encouraged to become academies, is seen by many as a backwards move, the reintroduction of grammar schools.

“The staff’s support in welcoming more than half the new intake with stimulating learning activities, was inspirational…”



DRAGONS SHOW WAY TO TACKLE ‘CROSS PHASE’ AT SHIRELAND

If only the former promise to trust teachers and schools had been held on to. Because there’s plenty of evidence to show that they are perfectly capable of ensuring that we discover just how good our children might be, and that successful policy works best when it comes from proven practice. A visit to Shireland Collegiate Academy this year as a dragon for the ‘Digital Dragons Den’ culmination of their annual Summer School showed duty of care taken to new levels.

Given that research shows that most children’s progress stalls in the move from primary to secondary (known as ‘cross phase’), the staff’s support in welcoming more than half the new intake with stimulating learning activities, was inspirational for this particular visitor. Just as it clearly was for the new students who even got a taste of the ‘flipped learning’ that Shireland is pioneering (and with its local primary schools for a major national research study with the Education Endowment Foundation – see also European Schoolnet’s “Enhancing learning through the Flipped Classroom: Shireland Collegiate Academy”).


BROADCLYST TAKES ENTERPRISE GLOBAL – AND SECONDARIES JOIN IN

Then there is Broadclyst Community Primary School in Devon, another of those schools that defies the boxes that people try to place them in. Yes it’s a Microsoft showcase school (like Shireland), but schools like this are always going to achieve the best for their students with or without technology partners. Of course the partnership makes it work better. And Microsoft and Broadclyst learn from each other.

Take Broadclyst’s Global Education Challenge which, in its second year, reached 700 students aged 9-12 in 200 teams across 20 countries. The 2016-17 event has been extended to involve secondary students (aged 12-15). Schools from the Dominican Republic, Spain, Jordan, the Netherlands, USA and Albania have already signed up (see “Devon primary’s global enterprise reels in secondaries”).

This also has huge implications for cross-phase work as both primary and secondary will collaborate on similar enterprise projects that entail real-life tasks, international collaboration, sharing with external audiences.

It’s a mistake to see the great learning and teaching in schools like Shireland and Broadclyst as down to the technology, although it does play an important part. At the heart of both schools lies a tremendous duty of care, to do the very best for the learners and to show just how good they can be. Every tool, including edtech, is enlisted for that purpose. Change is spreading. Back in 2003, US academic Professor Larry Cuban authored the iconic Oversold and Underused: Computers in the Classroom.

It was a timely warning about ineffective investments in schools ICT. Now he is beginning to see benefits appear in California classrooms and is working on a new book to be published in 2017 (see “Can learning fly like a butterfly or a bullet?”). Professor Cuban, someone inured to technology proselytisers and lobbyists, has been finding successful integration of technology for learning at teacher, school and district (local authority) level. You can already see the work in progress on his blog.

There is so much to celebrate in schools when it comes to their successful use of technology, but right now the UK does need a strategic political touch to ensure that this is not just something happening in a minority of schools. It’s time for the reluctant policymakers to recognise that, join in the celebrations, and help embed in policy what has been proven by great schools and their teachers and learners. “How good might our children really be?” is a really good question to work with.


Follow Merlin John on Twitter @MerlinJohn

de:code: Please Make Use of the Internal Approval Request Template


For those who need internal approval to attend de:code 2017, we have prepared a free approval request document template.
You can also download the event leaflet!

Please make use of them.

————-

de:code 2017 approval request template

de:code 2017 leaflet

————-

Registration opens soon!

————————————————————————–

————————————————————————–

※ The content of this information (including attachments and links) is current as of the date of writing and is subject to change without notice.

Announcing TypeScript 2.2

Today our team is happy to present our latest release with TypeScript 2.2!

For those who haven’t yet heard of it, TypeScript is a simple extension to JavaScript to add optional types along with all the new ECMAScript features. TypeScript builds on the ECMAScript standard and adds type-checking to make you way more productive through cleaner code and stronger tooling. Your TypeScript code then gets transformed into clean, runnable JavaScript that even older browsers can run.

While there are a variety of ways to get TypeScript set up locally in your project, the easiest way to get started is to try it out on our site or just install it from npm:

npm install -g typescript

If you’re a Visual Studio 2015 user with update 3, you can install TypeScript 2.2 from here. You can also grab this release through NuGet. Support in Visual Studio 2017 will come in a future update.

If you’d rather not wait for TypeScript 2.2 support by default, you can configure Visual Studio Code and our Sublime Text plugin to pick up whatever version you need.

As usual, we’ve written up about new features on our what’s new page, but we’d like to highlight a couple of them.

More quick fixes

One of the areas we focus on in TypeScript is its tooling – tooling can be leveraged in any editor with a plugin system. This is one of the things that makes the TypeScript experience so powerful.

With TypeScript 2.2, we’re bringing even more goodness to your editor. This release introduces some more useful quick fixes (also called code actions) which can guide you in fixing up pesky errors. This includes

  • Adding missing imports
  • Adding missing properties
  • Adding forgotten this. to variables
  • Removing unused declarations
  • Implementing abstract members

With just a few of these, TypeScript practically writes your code for you.

As you write up your code, TypeScript can give suggestions each step of the way to help out with your errors.

Expect similar features in the future. The TypeScript team is committed to ensuring that the JavaScript and TypeScript community gets the best tooling we can deliver.

With that in mind, we also want to invite the community to take part in this process. We’ve seen that code actions can really delight users, and we’re very open to suggestions, feedback, and contributions in this area.

The object type

The object type is a new type in 2.2 that matches any type except primitive types. In other words, you can assign anything to the object type except for string, boolean, number, symbol, and, when using strictNullChecks, null and undefined.

object is distinct from the {} and Object types in this respect because of structural compatibility. Since the empty object type ({}) also matches primitives, it couldn’t model APIs like Object.create, which truly only expect objects, not primitives. object, on the other hand, does well here in that it properly rejects being assigned a number.

We’d like to extend our thanks to members of our community who proposed and implemented the feature, including François de Campredon and Herrington Darkholme.

Easier string indexing behavior

TypeScript has a concept called index signatures. Index signatures are part of a type, and tell the type system what the result of an element access should be. For instance, in the following:

interface Foo {
    // Here is a string index signature:
    [prop: string]: boolean;
}

declare const x: Foo;

const y = x["hello"];

Foo has a string index signature that says “whenever indexing with a string, the output type is a boolean.” The core idea is that index signatures here are meant to model the way that objects often serve as maps/dictionaries in JavaScript.

Before TypeScript 2.2, writing something like x["propName"] was the only way you could make use of a string index signature to grab a property. A little surprisingly, writing a property access like x.propName wasn’t allowed. This is slightly at odds with the way JavaScript actually works since x.propName is semantically the same as x["propName"]. There’s a reasonable argument to allow both forms when an index signature is present.

In TypeScript 2.2, we’re doing just that and relaxing the old restriction. What this means is that things like testing properties on a JSON object has become dramatically more ergonomic.

interface Config {
    [prop: string]: boolean;
}

declare const options: Config;

// Used to be an error, now allowed!
if (options.debugMode) {
    // ...
}

Better class support for mixins

We’ve always meant for TypeScript to support the JavaScript patterns you use no matter what style, library, or framework you prefer. Part of meeting that goal involves having TypeScript more deeply understand code as it’s written today. With TypeScript 2.2, we’ve worked to make the language understand the mixin pattern.

We made a few changes that involved loosening some restrictions on classes, as well as adjusting the behavior of how intersection types operate. Together, these adjustments actually allow users to express mixin-style classes in ES2015, where a class can extend anything that constructs some object type. This can be used to bridge ES2015 classes with APIs like Ember.Object.extend.

As an example of such a class, we can write the following:

type Constructable = new (...args: any[]) => object;

function Timestamped<BC extends Constructable>(Base: BC) {
    return class extends Base {
        private _timestamp = new Date();
        get timestamp() {
            return this._timestamp;
        }
    };
}

and dynamically create classes

class Point {
    x: number;
    y: number;
    constructor(x: number, y: number) {
        this.x = x;
        this.y = y;
    }
}

const TimestampedPoint = Timestamped(Point);

and even extend from those classes

class SpecialPoint extends Timestamped(Point) {
    z: number;
    constructor(x: number, y: number, z: number) {
        super(x, y);
        this.z = z;
    }
}

let p = new SpecialPoint(1, 2, 3);

// 'x', 'y', 'z', and 'timestamp' are all valid properties.
let v = p.x + p.y + p.z;
p.timestamp.getMilliseconds()

The react-native JSX emit mode

In addition to the preserve and react options for JSX, TypeScript now introduces the react-native emit mode. This mode is like a combination of the two, in that it emits to .js files (like --jsx react), but leaves JSX syntax alone (like --jsx preserve).

This new mode reflects React Native’s behavior, which expects all input files to be .js files. It’s also useful for cases where you want to just leave your JSX syntax alone but get .js files out from TypeScript.
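You can opt in on the command line with --jsx react-native, or in your tsconfig.json; a minimal sketch (any other compiler options you already use would sit alongside it):

{
    "compilerOptions": {
        "jsx": "react-native"
    }
}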

Support for new.target

With TypeScript 2.2, we’ve implemented ECMAScript’s new.target meta-property. new.target is an ES2015 feature: when a class is instantiated with new, new.target refers to the constructor that was directly invoked, which lets a base class detect that a subclass is being constructed. This can be handy since ES2015 doesn’t allow constructors to access this before calling super(), while new.target is available even before that call.
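As a quick illustration (a minimal sketch, not taken from this release’s examples), a base class can use new.target to tell when it’s being constructed through a subclass:

class Base {
    constructor() {
        // new.target refers to the constructor that was directly invoked with 'new',
        // so a base class can tell whether it's being constructed through a subclass.
        if (new.target !== Base) {
            console.log("Constructed through a subclass of Base");
        }
    }
}

class Derived extends Base {
    constructor() {
        super(); // logs "Constructed through a subclass of Base"
    }
}

new Base();    // logs nothing
new Derived();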

What’s next?

Our team is always looking forward, and is now hard at work on TypeScript 2.3. While our team’s roadmap should give you an idea of what’s to come, we’re excited for our next release, where we’re looking to deliver

  • default types for generics
  • async iterator support
  • downlevel generator support

Of course, that’s only a preview for now.

We hope TypeScript 2.2 makes you even more productive, and allows you to be even more expressive in your code. Thanks for taking the time to read through, and as always, happy hacking!


Recap: TrashLAN at TU Braunschweig


Hello dear Student Partners,

trashlan1

At the end of January, TU Braunschweig hosted a “TrashLAN”. Besides the fun of gaming, a special focus was placed on sustainability. For exactly this reason, only donated computers and components that had fallen out of fashion were used: no “high-end” boxes, but rather the kind of machines you know from the 90s and 2000s.

Right on time for the event, the team at TU Braunschweig was able to provide all guests with a total of 45 “old” machines for playing the good old game classics of the 90s and 2000s. This retro feeling could be felt and experienced throughout the entire evening of the LAN party.

In today’s age of almost limitless resources and possibilities, we no longer think about every bit & byte.

What about you?

Do you think about resources? Do you develop sustainably and in a resource-efficient way?

#NachhaltigEntwickeln #BytesSparen

AX for Retail: Instant Email Receipts in Enterprise POS (EPOS)


Dynamics AX For Retail Enterprise POS (EPOS) has functionality to email receipts but the implementation of the feature has a few tradeoffs:

  • Email does not get sent immediately at the conclusion of the transaction.  Instead, the process goes like this:  1) a transaction is uploaded to HQ (via P-job); 2) it gets calculated on a statement; 3) it then gets posted via statement posting; 4) finally, the Send Email Receipts batch process runs and sends the email.  Statement posting is usually done nightly, so this means emails usually won’t get sent out until the next day.  While a simple customization will make Send Email Receipts send email for unposted transactions, there is still a lag until the P-job uploads the transactions.  Randy Higgins has a good blog post on how this process works.
  • Generation of the receipt is done via an SSRS report.  This adds quite a bit of overhead to the process.
  • Receipts are sent via a PDF attachment instead of the body of the email.
  • Changing the look and feel of the receipt means customizing an SSRS report instead of using the receipt designer.  This means that the emailed receipts will look very different than the printed receipt.

These limitations are largely overcome in the Modern Point of Sale (MPOS) for AX 2012 R3 and Microsoft Dynamics 365 for Operations:

  • Email is sent via SMTP immediately at the conclusion of the transaction.  This is done with a Realtime Service call to Headquarters.
  • The receipt is sent in the body of the email message.
  • Generation of the email receipt is done by the MPOS application.  This is done by using the same layout as the printed receipt.

Randy’s follow-up blog post for MPOS goes through the details of how this process works.

Ideas for Improvement

Because upgrading to MPOS is not an option for some customers, I looked into some options for improving email receipts in EPOS.  Here are a few ideas I came up with:

  • Change the Send Email Receipts batch process to forego SSRS and generate a message body (HTML or simple text) and send directly.  This would get past the SSRS performance issue and PDF attachment, but would be a lot of X++ development and would still not necessarily match the printed receipt.  It would also not get past the lag of needing to upload the transactions via the P-job first.
  • Have EPOS generate and send an email directly via SMTP without calling Realtime Service.  This would eliminate the Realtime Service call at the end of each transaction, but it would also mean managing SMTP communication directly from each EPOS instance.  This would also mean database changes (EPOS would have to have information about its SMTP server) and making sure that SMTP communication from dozens or hundreds of different IP addresses is healthy.  I like having AX manage the SMTP traffic and besides, EPOS can make a Realtime Service call just as easily as MPOS can.
  • Mirror the MPOS functionality as much as possible as an EPOS customization.  I settled on this one so as to not introduce a third way of emailing receipts:  there is still just the “MPOS way” and the “EPOS way.”

Customization Approach

There are really only a few things that the customization has to do:

  • Generate the email body of the receipt
  • Determine the email address to send it to
  • Call the same Realtime Service method that MPOS does to send the message

Realtime Service call from PostEndTransaction Trigger

As a refresher, I talk about customizing Realtime Service (previously Transaction Service) in this blog post.  In this case, the X++ work is already done for us since we’ll be using the same method that MPOS calls:  SendEmail.  A good place to add this call is in the PostEndTransaction trigger (in the TransactionTriggers project).

The RetailTransactionService::SendEmail X++ method takes the following parameters:

  • SysEmailId emailId:  This is the email template as defined in Organization Administration > Setup > E-mail Templates.  MPOS hard-codes this to “EmailRecpt” so we’ll do the same.
  • CustLanguageId languageId:  This is the language of the EPOS client.  We have this available in the System.Threading.Thread.CurrentThread.CurrentUICulture.Name variable.
  • Email email:  The recipient’s email address.  EPOS already has logic to ask the user to confirm an email address when needed.  This gets stored to the RECEIPTEMAIL column of the RETAILTRANSACTIONTABLE and is used by the Send Email Receipts batch job.  We can use the same value in our trigger:  retailTransaction.ReceiptEmailAddress
  • str serializedMappings:  This is a series of key/value pairs for substitution in your email template.  For instance, if you have a %customername% in the body of your email template, you would send in the pair “key: customername, value: Contoso Entertainment” as part of this XML.  As noted in Randy’s blog post, MPOS only sends one key/value pair:  %message%.  This means that we will send the entire receipt as this one key/value:  “key: message, value: a big string for the entire receipt”.
  • str xmlData:  Not used.
  • boolean isTraceable:  Hard-coded to false.
  • boolean isWithRetries:  Hard-coded to true.

Note that I pulled this from MPOS code in the SDK:  Retail SDK CU9\Commerce Run-time\Workflow\Orders\GetReceiptRequestHandler.cs (the parameters are in a different order).

In our EPOS code, our Realtime Service call looks like this:

01a - Email RTS Call

Getting the Receipt Text for the Email Body

The text of a printed receipt is generated in the Printing service.  We could repeat that logic in our PostEndTransaction trigger but that seems like an unnecessary task when the work has already been done for us.

Keep in mind that receipt printing is pretty simple:  using a form layout as defined in the Receipt Format Designer, data from the Retail Transaction object is converted to a simple string of formatted text.  This is then sent as-is (with some simple formatting) to the printer.  For our customization, we will intercept that string and save it to a PartnerData variable on the transaction (see this blog post for information about PartnerData).  This value will then persist to our trigger and we can use it as the body of the email.

This can be done with a small customization to the PrintFormTransaction() method in Services\Printing\PrintingActions.cs:

02 - Save Email Body PartnerData

Pulling it Together:

The text that comes back from GetTransformedTransaction has some special formatting characters which need to be cleaned up.  This can be done in the trigger (more code taken from the MPOS SDK):

03 - Cleanup Characters

Note that this is essentially the same code from MPOS (GetReceiptRequestHandler.cs).

Also, even though we are only sending in a single key/value pair, to make it consistent with MPOS, it still needs to be sent in as a serialized collection.  I’ve added a helper class and a couple helper methods for this usage:

04 - Serialized Collection

Finally there is some simple logic of whether to send the message and some basic error handling:

05 - Should we send

06 - Error Handling

 

Wrap-up

If things work correctly, you can test it with a customer that has an email address associated.  EPOS will prompt the user to confirm or change the recipient’s address:

07 Email Address

And if you are using smtp4dev for your testing, you’ll see the message come across:

08 Email Received

09 Receipt Body

 

You’ll find the full source code for this customization attached below and it can be used as-is.  I’ve included the baseline R3 CU12 versions of the files from the SDK which can be used for code diffs.

There are a few areas that you might want to improve upon:

  • The receipt email depends on a Realtime Service call.  If this call fails the user will get a message but there is no retry logic in place (which is the same limitation as MPOS).  You may want to add some error handling or a mechanism to track failed messages.  Also, if the Realtime Service connection is lost, it may add processing time to the transactions before a timeout occurs.
  • For retry logic, you could look into adding a button on the Show Journal page to re-send the email message.  Alternatively, it might be useful to have the ability to send an email directly from AX.
  • By default, EPOS will only prompt to send messages for named customers that have an email address associated with their account.  You may want to add UI and logic to prompt for an email receipt for one-off customers as well.
  • If you look closely at the email body above you’ll notice that the horizontal spacing is not implemented correctly.  I noticed this as I was writing up this post and it is also an issue in MPOS.  Keep an eye out for an update to this article when I find a solution for that problem.

Code Sample

Learn C++ Concepts with Visual Studio and the WSL


Concepts promise to fundamentally change how we write templated C++ code. They’re in a Technical Specification (TS) right now, but, like Coroutines, Modules, and Ranges, it’s good to get a head start on learning these important features before they make it into the C++ Standard. You can already use Visual Studio 2017 for Coroutines, Modules, and Ranges through a fork of Range-v3. Now you can also learn Concepts in Visual Studio 2017 by targeting the Windows Subsystem for Linux (WSL). Read on to find out how!

About concepts

Concepts enable adding requirements to a set of template parameters, essentially creating a kind of interface. The C++ community has been waiting years for this feature to make it into the standard. If you’re interested in the history, Bjarne Stroustrup has written a bit of background about concepts in a recent paper about designing good concepts. If you’re just interested in knowing how to use the feature, see Constraints and concepts on cppreference.com. If you want all the details about concepts you can read the Concepts Technical Specification (TS).

Concepts are currently only available in GCC 6+. Concepts are not yet supported by the Microsoft C++ Compiler (MSVC) or Clang. We plan to implement the Concepts TS in MSVC but our focus is on finishing our existing standards conformance work and implementing features that have already been voted into the C++17 draft standard.

We can use concepts in Visual Studio 2017 by targeting the Linux shell running under WSL. There’s no IDE support for concepts–thus, no IntelliSense or other productivity features that require the compiler–but it’s nice to be able to learn Concepts in the same familiar environment you use day to day.

First we have to update the GCC compiler. The version included in WSL is currently 4.8.4–that’s too old to support concepts. There are two ways to accomplish that: installing a Personal Package Archive (PPA) or building GCC-6 from source.

But before you install GCC-6 you should configure your Visual Studio 2017 install to target WSL. See this recent VCBlog post for details: Targeting the Windows Subsystem for Linux from Visual Studio. You’ll need a working setup of VS targeting Linux for the following steps. Plus, it’s always good to conquer problems in smaller pieces so you have an easier time figuring out what happened if things go wrong.

Installing GCC-6

You have two options for installing GCC-6: installing from a PPA or building GCC from source.

Using a PPA to install GCC

A PPA allows developers to distribute programs directly to users of apt. Installing a PPA tells your copy of apt that there’s another place it can find software. To get the newest version of GCC, install the Toolchain Test PPA, update your apt to find the new install locations, then install g++-6.

sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt update
sudo apt install g++-6

The PPA installs GCC as a non-default compiler. Running g++ --version shows version 4.8.4. You can invoke GCC by calling g++-6 instead of g++. If GCC 6 isn’t your default compiler you’ll need to change the remote compiler that VS calls in your Linux project (see below.)

g++ --version
g++-6 --version

Building GCC from source

Another option is to build GCC 6.3 from source. There are a few steps, but it’s a straightforward process.

  1. First you need to get a copy of the GCC 6.3 sources. Before you can download this to your bash shell, you need to get a link to the source archive. Find a nearby mirror and copy the archive’s URL. I’ll use the tar.gz in this example:
    wget http://[path to archive]/gcc-6.3.0.tar.gz
    
  2. The command to unpack the GCC sources is as follows (change /mnt/c/tmp to the directory where your copy of gcc-6.3.0.tar.gz is located):
    tar -xvf /mnt/c/tmp/gcc-6.3.0.tar.gz
    
  3. Now that we’ve got the GCC sources, we need to install the GCC prerequisites. These are libraries required to build GCC. (See Installing GCC, Support libraries for more information.) There are three libraries, and we can install them with apt:
    sudo apt install libgmp-dev
    sudo apt install libmpfr-dev
    sudo apt install libmpc-dev
    
  4. Now let’s make a build directory and configure GCC’s build to provide C++ compilers:
    cd gcc-6.3.0/
    mkdir build
    cd build
    ../configure --enable-languages=c,c++ --disable-multilib
    
  5. Once that finishes, we can compile GCC. It can take a while to build GCC, so you should use the -j option to speed things up.
    make -j
    

    Now go have a nice cup of coffee (and maybe watch a movie) while the compiler compiles.

  6. If make completes without errors, you’re ready to install GCC on your system. Note that this command installs GCC 6.3.0 as the default version of GCC.
    sudo make install
    

    You can check that GCC is now defaulting to version 6.3 with this command:

    $ gcc --version
    gcc (GCC) 6.3.0
    Copyright (C) 2016 Free Software Foundation, Inc.
    This is free software; see the source for copying conditions.  There is NO
    warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    

Trying out Concepts in VS

Now that you’ve updated GCC you’re ready to try out concepts! Let’s restart the SSH service again (in case you exited all your bash instances while working through this walkthrough) and we’re ready to learn concepts!

sudo service ssh start

Create a new Linux project in VS:

newlinuxproject

Add a C++ source file, and add some code that uses concepts. Here’s a simple concept that compiles and executes properly. This example is trivial, as the compile would fail for any argument i that doesn’t define operator==, but it demonstrates that concepts are working.

#include <iostream>

template <class T>
concept bool EqualityComparable() {
	return requires(T a, T b) {
		{a == b}->bool;
		{a != b}->bool;
	};
}

bool is_the_answer(const EqualityComparable& i) {
	return (i == 42) ? true : false;
}

int main() {
	if (is_the_answer(42)) {
		std::cout << "42 is the answer to the ultimate question of life, the universe, and everything." << std::endl;
	}
	return 0;
}

You’ll also need to enable concepts on the GCC command line. Go to the project properties, and in the C++ > Command Line box add the compiler option -fconcepts.

fconcepts

If GCC 6 isn’t the default compiler in your environment you’ll want to tell VS where to find your compiler. You can do that in the project properties under C++ > General > C++ compiler by typing in the compiler name or even a full path:

gplusplus6

Now compile the program and set a breakpoint at the end of main. Open the Linux Console so you can see the output (Debug > Linux Console). Hit F5 and watch concepts working inside of VS!

concepts

Now we can use Concepts, Coroutines, Modules, and Ranges all from inside the same Visual Studio IDE!

In closing

As always, we welcome your feedback. Feel free to send any comments through e-mail at visualcpp@microsoft.com, through Twitter @visualc, or Facebook at Microsoft Visual Cpp.

If you encounter other problems with Visual C++ in VS 2017 please let us know via the Report a Problem option, either from the installer or the Visual Studio IDE itself. For suggestions, let us know through UserVoice. Thank you!

Coloring an entire state in Power BI


I was asked during today’s Power BI User Group leaders call how to fill an entire state with a color based on a measure.

It turns out the shape map makes this pretty easy, and since a picture is worth a thousand words, I decided to save myself some time and show it with a screenshot.

 

 

image

Microsoft Dynamics 365 New Features: New Applications: Creating a Custom App


Hello everyone.

In the previous articles, we introduced the new Dynamics 365 applications, from the overview through each of the designers.
If you haven’t read them yet, please take a look.

Microsoft Dynamics 365 New Features: Overview
Microsoft Dynamics 365 New Features: New Applications: Overview
Microsoft Dynamics 365 New Features: New Applications: Application Designer
Microsoft Dynamics 365 New Features: New Applications: Site Map Designer

In this article, we walk through the steps for creating your own application.
This time, we will build a simple time and attendance management app.

Preparation

First, create one custom entity for time entry and one timesheet dashboard in advance.

Creating the Entity

1. Log in to Dynamics 365.

2. Click [Settings] > [Customizations].

image

3. Click [Customize the System].

image

4. Click [Entities], and then click New.

image

5. Create a custom entity.

image

6. Add Start Date, End Date, and Working Hours fields.

image

7. The Working Hours field is set up as a calculated field.

image

Creating the Chart

Next, create the chart that will be displayed on the dashboard.

1. Click Charts, and then click [New].

image

2. From New, create a chart of working hours per owner.

image

Creating the Dashboard

Next, create the dashboard.

1. Click [Dashboards], and then click the [New] button.

image

2. Enter a name and click the chart icon.

image

3. Select the chart you created and click the [Add] button.

image

4. You can see that the chart has been inserted into the dashboard.

image

This completes the preparation.

Creating the Application

Now let’s create the time and attendance management application.

1. Log in to Dynamics 365.

2. Click [Settings] > [Customizations].

3. Click [Customize the System].

4. Click [Apps], and then click the [New] button.

image

5. Enter the name, description, and URL prefix.

image

6. Click [Done]. The app designer opens.

image

7. Click the site map.

image

8. The site map designer opens. This time, we will display the timesheet dashboard and the timesheet entity in the site map.

image

9. Click the new area and change its title.

image

10. Next, click the new group and change its title.

image

11. Next, click the new subarea and select the dashboard.

image

12. Click the Time and Attendance Management area, and then click Add.

image

13. A new group has been added. Change its name.

image

14. Change the title.

image

15. Select the group and add a subarea.

image

16. A new subarea has been added.

image

17. Set the entity and the title.

image

18. Click [Save And Close].

image

19. You can see that the entity specified in the site map is now displayed in the app designer.

image

20. Click Validate.

image

21. Warnings are displayed.

image

22. Since there are no errors, click Publish.

image

This completes the creation of the app.

Verification

1. Go to the Dynamics 365 home page.

https://home.dynamics.com/

2. You can see that the time and attendance management app is displayed.

image

3. Click the app. The app opens.

image

4. You can also access the app directly using its URL prefix.

image

5. When you expand the site map, you can see the area and subareas you configured.

image

6. Click [Time Entry]. Enter the start date/time and the end date/time. The working hours are calculated automatically.

image

7. Register work information for several different dates.

8. Click the timesheet.

image

9. The dashboard opens and shows the working hours for each month.

image

Summary

What do you think? By using the new application capabilities, we were able to easily build an application
from existing entities and dashboards. In the next article, we will introduce each of the other new Dynamics 365 features.

– Premier Field Engineering, 河野 高也
