
ASP.NET Core 2.2.0-preview1: HTTP/2 in Kestrel


As part of the 2.2.0-preview1 release, we’ve added support for HTTP/2 in Kestrel.

What is HTTP/2?

HTTP/2 is a major revision of the HTTP protocol. Some of the notable features of HTTP/2 are support for header compression and fully multiplexed streams over the same connection. While HTTP/2 preserves HTTP’s semantics (HTTP headers, methods, etc.), it is a breaking change from HTTP/1.x in how this data is framed and sent over the wire.

As a consequence of this change in framing, servers and clients need to negotiate the protocol version used. While it is possible to have prior knowledge between the server and the client on the protocol, all major browsers support ALPN as the only way to establish an HTTP/2 connection.

Application-Layer Protocol Negotiation (ALPN)

Application-Layer Protocol Negotiation (ALPN) is a TLS extension that allows the server and client to negotiate the protocol version used as part of their TLS handshake.

How do I use it?

In 2.2.0-preview1 of Kestrel, HTTP/2 is enabled by default (we may change this in subsequent releases). Since most browsers already support HTTP/2, any request you make will already happen over HTTP/2 provided certain conditions are met:

  • The request is made over an HTTPS connection.
  • The native crypto library used by .NET Core on your platform supports ALPN.

In the event that either of these conditions is unmet, the server and client will transparently fall back to using HTTP/1.1.

The default binding in Kestrel advertises support for both HTTP/1.x and HTTP/2 via ALPN. You can always configure additional bindings via KestrelServerOptions. For example,

WebHost.CreateDefaultBuilder()
    .ConfigureKestrel(options =>
    {
        options.Listen(IPAddress.Any, 8080, listenOptions =>
        {
            listenOptions.Protocols = HttpProtocols.Http1AndHttp2;
            listenOptions.UseHttps("testcert.pfx", "testPassword");
        }); 
    })
    .UseStartup<Startup>();

If you do not enable HTTPS/TLS then Kestrel will be unable to use ALPN to negotiate HTTP/2 connections.

It is possible to establish an HTTP/2 connection in Kestrel using prior knowledge on all platforms (since we don’t rely on ALPN). However, no major browser supports prior-knowledge HTTP/2 connections, and this approach does not allow for graceful fallback to HTTP/1.x.

WebHost.CreateDefaultBuilder()
    .ConfigureKestrel(options =>
    {
        options.Listen(IPAddress.Any, 8080, listenOptions =>
        {
            listenOptions.Protocols = HttpProtocols.Http2;
        }); 
    })
    .UseStartup<Startup>();

Caveats

As mentioned earlier, it is only possible to negotiate an HTTP/2 connection if the native crypto library on your server supports ALPN.

ALPN is supported on:

  • .NET Core on Windows 8.1/Windows Server 2012 R2 or higher
  • .NET Core on Linux with OpenSSL 1.0.2 or higher (e.g., Ubuntu 16.04)

ALPN is not supported on:

  • .NET Framework 4.x on Windows
  • .NET Core on Linux with OpenSSL older than 1.0.2
  • .NET Core on OS X

What’s missing in Kestrel’s HTTP/2?

  • Server Push: An HTTP/2-compliant server is allowed to send resources to a client before they have been requested by the client. This is a feature we’re currently evaluating, but haven’t planned to add support for yet.
  • Stream Prioritization: The HTTP/2 standard allows for clients to send a hint to the server to express preference for the priority of processing streams. Kestrel currently does not act upon hints sent by the client.
  • HTTP Trailers: Trailers are HTTP headers that can be sent after the message body in both HTTP requests and responses.

What’s coming next?

In ASP.NET Core 2.2,

  • Hardening work on HTTP/2 in Kestrel. As HTTP/2 allows multiplexed streams over the same TCP connection, we need to introduce HTTP/2 specific limits as part of the hardening.
  • Performance work on HTTP/2.

Feedback

The best place to provide feedback is by opening issues at https://github.com/aspnet/KestrelHttpServer/issues.


LCS (August 2018) release notes


The Microsoft Dynamics Lifecycle Services (LCS) team is happy to announce the availability of the release notes for LCS (August 2018).

Export updates to CSV

When you download updates from the tiles in environment details, you can now export the KB information to CSV format on the Review and download updates page using the Export updates to CSV button. This export includes the KB number, title, release date, problem description, and solution.

Issue Search sorting improvement

When using LCS Issue Search to search for KBs, results were not sorted correctly when ordering by Date descending or Date ascending. This fix corrects the sort order when a release date is available on the KB.

Handling transient IoT Hub errors


When communicating with IoT Hub, service-side errors such as timeouts or Internal Server Error occasionally occur. For example, when an application on an IoT device sends a message to IoT Hub over HTTPS, these errors may be returned; in most cases they are transient events caused by temporary load on the service or by network issues. Microsoft works to improve service quality and reduce such errors as much as possible, but because of network conditions and other unpredictable factors it is not realistic to prevent them entirely.

 

In addition, the connection between devices and IoT Hub usually traverses the public internet, so transient errors can also occur depending on the state of the internet or of your on-premises network. We therefore recommend implementing some form of retry logic on the IoT device side.
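As a minimal illustration of such retry logic, a generic exponential backoff helper in C# might look like the following (this is a sketch, not part of any Azure IoT SDK; the operation delegate stands in for whatever send call your device application uses, and the attempt/delay values are only examples):

using System;
using System.Threading.Tasks;

public static class TransientRetry
{
    // Retries an operation that may fail transiently, waiting longer after each failure.
    public static async Task RunAsync(Func<Task> operation, int maxAttempts = 5, int baseDelayMs = 1000)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                await operation();
                return;
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                // Back off exponentially (1s, 2s, 4s, ...) before the next attempt.
                await Task.Delay(baseDelayMs * (1 << (attempt - 1)));
            }
        }
    }
}

// Usage (hypothetical send call):
// await TransientRetry.RunAsync(() => deviceClient.SendEventAsync(message));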

 

The following documents cover this topic in more detail and may be helpful:

 

    • Transient fault handling

<https://docs.microsoft.com/ja-jp/azure/architecture/best-practices/transient-faults>

 

    • Retry pattern

<https://docs.microsoft.com/ja-jp/azure/architecture/patterns/retry>

 

If the errors persist even after retrying several times with a delay, the problem may not be transient, so please contact support as needed.

 

We hope the information above is helpful.

 

Azure IoT Developer Support Team, 津田 (Tsuda)

 

Monitoring Dynamics 365 Customer Engagement services with the Microsoft Office 365 Service Communications API: Overview


Hello everyone.

This article introduces monitoring of Dynamics 365 Customer Engagement (CE) services using the Microsoft Office 365 Service Communications API. This first post covers the overview, and it is a translation of an article from our Dynamics 365 Customer Engagement in the Field blog. In the following posts we will try it out in an actual evaluation environment.

Source: Monitoring Dynamics 365 CE service health and messages using the Microsoft Office 365 Service Communications API

====================================================

I recently had a conversation with a customer about options for monitoring the health of the Microsoft Dynamics 365 CE service. The specific request was for more visibility than the service health view provided in the Office 365 admin center. That view lets Dynamics 365 service administrators check service health conveniently, but in some cases users do not have the appropriate role in Office 365 to access these messages, so in practice they end up depending on their administrators.

That is how I found the Office 365 Service Communications API. Looking into it, there are two separate endpoints, each with its own authentication mechanism and service contract: the Office 365 Service Communications API and the Office 365 Service Communications API (preview). Both APIs require either an Office 365 service administrator role or a partner role acting on behalf of the customer (AOBO) for authentication. One uses a username and password in the same way you sign in to Office 365, while the other uses OAuth with a Client ID and Client Secret. Discuss this with your application security team to decide which option is appropriate; that said, I recommend the preview endpoint with OAuth, as suggested in the original API reference.

The following article details the steps required to register an application in Azure Active Directory.

https://msdn.microsoft.com/en-us/office-365/get-started-with-office-365-management-apis

As for the available data and events, there is no significant difference between the two versions of the API, so from here on I will describe the preview API. If you are interested in the original API, see the sample MVC application.

Once you have obtained an authentication token, you can attach it to any request to the Office 365 Service Communications API (preview). The API is formatted to take a tenant identifier (a tenant GUID or a tenant name such as contoso.onmicrosoft.com) and the desired operation.

https://manage.office.com/api/v1.0/{tenant_identifier}/ServiceComms/{operation}

Here is a sample request against the evaluation tenant I created:

Request:

GET https://manage.office.com/api/v1.0/contoso.onmicrosoft.com/ServiceComms/Services

Authorization: Bearer <authorization bearer token>

Host: manage.office.com

As shown above, the request is a simple GET with an authorization header and the correct URL. It returns the current service status of every Office 365 application in the tenant. Since this article focuses on Dynamics 365 CE, let's add a filter to the CurrentStatus API call so that only the Dynamics 365 status is returned:

Request:

GET https://manage.office.com/api/v1.0/contoso.onmicrosoft.com/ServiceComms/CurrentStatus?$filter=Workload%20eq%20'DynamicsCRM'

Authorization: Bearer <authorization bearer token>

Host: manage.office.com

Response Body:

{
  "@odata.context":"https://office365servicecomms-prod.cloudapp.net/api/v1.0/contoso.onmicrosoft.com/$metadata#CurrentStatus",
  "value":[
    {
      "FeatureStatus":[
        {
          "FeatureDisplayName":"Sign In","FeatureName":"signin","FeatureServiceStatus":"ServiceOperational","FeatureServiceStatusDisplayName":"Normal service"
        },
        {
          "FeatureDisplayName":"Sign up and administration","FeatureName":"admin","FeatureServiceStatus":"ServiceOperational","FeatureServiceStatusDisplayName":"Normal service"
        },
        {
          "FeatureDisplayName":"Organization access","FeatureName":"orgaccess","FeatureServiceStatus":"ServiceOperational","FeatureServiceStatusDisplayName":"Normal service"
        },
        {
          "FeatureDisplayName":"Organization performance","FeatureName":"orgperf","FeatureServiceStatus":"ServiceOperational","FeatureServiceStatusDisplayName":"Normal service"
        },
        {
          "FeatureDisplayName":"Components/Features","FeatureName":"crmcomponents","FeatureServiceStatus":"ServiceRestored","FeatureServiceStatusDisplayName":"Service restored"
        }
      ],
      "Id":"DynamicsCRM",
      "IncidentIds":[
        "CR134863"
      ],
      "Status":"ServiceRestored",
      "StatusDisplayName":"Service restored",
      "StatusTime":"2018-04-26T19:09:29.3038421Z",
      "Workload":"DynamicsCRM",
      "WorkloadDisplayName":"Dynamics 365"
    }
  ]
}

Let's examine this response object, and note the FeatureServiceStatusDisplayName property. You can see the current status (Status) of Dynamics 365 as well as the status of individual features such as Sign In, Organization access, and Components/Features. In this response, the current status of Dynamics 365 is "Service restored", and the specific feature that was affected is Components/Features.
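As a minimal sketch of issuing this filtered request from C# (it assumes you have already acquired an access token for https://manage.office.com, for example through the application registration described above; the tenant name and token values are placeholders):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class CurrentStatusSample
{
    static async Task Main()
    {
        // Placeholders: replace with your tenant and a valid access token.
        string tenant = "contoso.onmicrosoft.com";
        string accessToken = "<authorization bearer token>";

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            // CurrentStatus filtered down to the Dynamics 365 workload.
            string url = "https://manage.office.com/api/v1.0/" + tenant +
                "/ServiceComms/CurrentStatus?$filter=Workload%20eq%20'DynamicsCRM'";

            HttpResponseMessage response = await client.GetAsync(url);
            response.EnsureSuccessStatusCode();

            // Inspect Status / FeatureStatus in the returned JSON payload.
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}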

Next, to find out how long a feature was affected in the past, you can use HistoricalStatus as shown below.

Request:

GET https://manage.office.com/api/v1.0/contoso.onmicrosoft.com/ServiceComms/HistoricalStatus?$filter=Workload%20eq%20'DynamicsCRM'

Authorization: Bearer <authorization bearer token>

The full response is too long to include here, but it returns the status of the Dynamics 365 features over a period of time, including message center identifiers.

Finally, let me introduce GetMessages, which lets you check messages about current service outages and planned maintenance. GetMessages returns the title, description, message text, impact dates, affected tenants, message ID, and more. This method is useful in many ways: you can check messages for current outages as well as for planned maintenance, and messages can be filtered (by message identifier, feature, area of interest, time frame, and so on).

Here are some sample requests for filtering by specific criteria, for reference:

Requests:

Filter by Id:

https://manage.office.com/api/v1.0/contoso.onmicrosoft.com/ServiceComms/Messages?$filter=Id%20eq%20'CR133521'

Filter by Message Center:

https://manage.office.com/api/v1.0/contoso.onmicrosoft.com/ServiceComms/Messages?$filter=MessageType%20eq%20Microsoft.Office365ServiceComms.ExposedContracts.MessageType'MessageCenter'

Incidents:

https://manage.office.com/api/v1.0/contoso.onmicrosoft.com/ServiceComms/Messages?$filter=MessageType%20eq%20Microsoft.Office365ServiceComms.ExposedContracts.MessageType'Incident'

Planned Maintenance:

https://manage.office.com/api/v1.0/contoso.onmicrosoft.com/ServiceComms/Messages?$filter=MessageType%20eq%20Microsoft.Office365ServiceComms.ExposedContracts.MessageType'PlannedMaintenance'

Filter By Start Time and End Time:

https://manage.office.com/api/v1.0/contoso.onmicrosoft.com/ServiceComms/Messages?$filter=StartTime%20ge%202018-04-23T00:00:00Z&EndTime%20le%202018-04-28T00:00:00Z

Filter by Workload:

https://manage.office.com/api/v1.0/contoso.onmicrosoft.com/ServiceComms/Messages?$filter=Workload%20eq%20'DynamicsCRM'

At this point we have an API that provides the current status of Dynamics 365 CE and its features, information about the impact a potential degradation has on your tenant, and information about planned updates and maintenance performed on the application. Look forward to the next blog post, which will describe in detail how to schedule a workflow that uses the API, and how to use Microsoft Flow to send an email reporting the current status and messages to a single user or a distribution list!

References:

Get started with Office 365 Management APIs

About Office 365 Admin Roles

Office 365 Service Communications API Overview

Office 365 Service Communications API Sample Code

Office 365 Service Communications API Overview (preview)

Thanks and happy coding!

Ali Youssefi

====================================================

Summary

With the Office 365 Service Communications API, you can retrieve the health of the services in your Office 365 tenant.
In the next post, we will use an actual evaluation environment to retrieve Dynamics 365 CE service health. Stay tuned.

- Premier Field Engineering, 河野 高也

* The information in this article (including attachments and linked content) is current as of the date of writing and is subject to change without notice.

ASP.NET Core 2.2.0-preview1: Healthchecks


What is it?

We're adding a health checks service and middleware in 2.2.0 to make it easy to use ASP.NET Core in environments that require health checks - such as Kubernetes. The new features are a set of libraries defining an IHealthCheck abstraction and service, as well as a middleware for use in ASP.NET Core.

Health checks are used by a container orchestrator or load balancer to quickly determine if a system is responding to requests normally. A container orchestrator might respond to a failing health check by halting a rolling deployment, or restarting a container. A load balancer might respond to a health check by routing traffic away from the failing instance of the service.

Typically health checks are exposed by an application as a simple HTTP endpoint used by monitoring systems. Creating a dedicated health endpoint allows you to specialize the behavior of that endpoint for the needs of monitoring systems.

How to use it?

Like many ASP.NET Core features, health checks come with a set of services and a middleware.

public void ConfigureServices(IServiceCollection services)
{
...

    services.AddHealthChecks(); // Registers health checks services
}

public void Configure(IApplicationBuilder app)
{
...

    app.UseHealthChecks("/healthz");

...
}

This basic configuration will register the health checks services and add a middleware that responds to the URL path "/healthz" with a health response. By default no health checks are registered, so the app is always considered healthy if it is capable of responding to HTTP.

You can find a few more samples in the aspnet/Diagnostics repo.

Understanding liveness and readiness probes

To understand how to make the most out of health checks, it's important to understand the difference between a liveness probe and a readiness probe.

A failed liveness probe says: The application has crashed. You should shut it down and restart.

A failed readiness probe says: The application is OK but not yet ready to serve traffic.

The set of health checks you want for your application will depend on both what resources your application uses and what kind of monitoring systems you interface with. An application can use multiple health checks middleware to handle requests from different systems.

What health checks should I add?

For many applications the most basic configuration will be sufficient. For instance, if you are using a liveness probe-based system like Docker's built-in HEALTHCHECK directive, then this might be all you want.

// Startup.cs
public void ConfigureServices(IServiceCollection services)
{
...

    services.AddHealthChecks(); // Registers health checks services
}

public void Configure(IApplicationBuilder app)
{
...

    app.UseHealthChecks("/healthz");

...
}


# Dockerfile
...

HEALTHCHECK CMD curl --fail http://localhost:5000/healthz || exit

If your application is running in Kubernetes, you may want to support a readiness probe that health checks your database. This will allow the orchestrator to know when a newly-created pod should start receiving traffic.

public void ConfigureServices(IServiceCollection services)
{
...
    services
        .AddHealthChecks()
        .AddCheck(new SqlConnectionHealthCheck("MyDatabase", Configuration["ConnectionStrings:DefaultConnection"]));
...
}

public void Configure(IApplicationBuilder app)
{
    app.UseHealthChecks("/healthz");
}

...
spec:
  template:
    spec:
      containers:
        - name: webapp            # container name is illustrative
          ports:
            - containerPort: 80
          readinessProbe:
            # an http probe
            httpGet:
              path: /healthz
              port: 80
            # length of time to wait for a pod to initialize
            # after pod startup, before applying health checking
            initialDelaySeconds: 30
            timeoutSeconds: 1
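For reference, SqlConnectionHealthCheck above comes from the samples rather than from the framework itself. A minimal custom check might look roughly like the sketch below, which follows the IHealthCheck shape that shipped in ASP.NET Core 2.2 (CheckHealthAsync returning a HealthCheckResult); the exact interface members and the constructor shown here are illustrative and may differ from the preview bits and from the samples version:

using System;
using System.Data.SqlClient;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Diagnostics.HealthChecks;

// Illustrative custom check: reports healthy if a SQL connection can be opened.
public class SqlConnectionHealthCheck : IHealthCheck
{
    private readonly string _connectionString;

    public SqlConnectionHealthCheck(string connectionString)
    {
        _connectionString = connectionString;
    }

    public async Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context, CancellationToken cancellationToken = default)
    {
        try
        {
            using (var connection = new SqlConnection(_connectionString))
            {
                await connection.OpenAsync(cancellationToken);
            }
            return HealthCheckResult.Healthy("Database connection succeeded.");
        }
        catch (Exception ex)
        {
            return HealthCheckResult.Unhealthy("Database connection failed.", ex);
        }
    }
}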

Customization

The health checks middleware supports customization along a few axes. All of these features can be accessed by passing in an instance of HealthCheckOptions.

  • Filter the set of health checks run
  • Customize the HTTP response
  • Customize the mappings of health status -> HTTP status codes
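As a rough illustration of those customization points, the sketch below uses the HealthCheckOptions shape that shipped in ASP.NET Core 2.2; the Predicate, ResultStatusCodes, and ResponseWriter property names and the "ready" tag are assumptions that may not match this preview exactly:

app.UseHealthChecks("/healthz", new HealthCheckOptions
{
    // Run only the checks registered with a hypothetical "ready" tag (filtering the set of checks).
    Predicate = registration => registration.Tags.Contains("ready"),

    // Map health statuses to HTTP status codes.
    ResultStatusCodes =
    {
        [HealthStatus.Healthy] = 200,
        [HealthStatus.Degraded] = 200,
        [HealthStatus.Unhealthy] = 503
    },

    // Customize the HTTP response body.
    ResponseWriter = async (httpContext, report) =>
    {
        httpContext.Response.ContentType = "text/plain";
        await httpContext.Response.WriteAsync(report.Status.ToString());
    }
});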

What is coming next?

In a future preview we plan to add official support for health checks based on an ADO.NET DbConnection or Entity Framework Core DbContext.

We expect that the way that IHealthCheck instances interact with Dependency Injection will be improved. The current implementation doesn't provide good support for interacting with services of varying lifetimes.

We're working with the authors of Polly to try and integrate health checks for Polly's circuit breakers.

We plan to also provide guidance and examples for using the health check service with push-based health systems.

How can you help?

There are a few areas where you can provide useful feedback during this preview. We're interested in any thoughts you have, of course, but these are a few specific things we'd like opinions on.

Is the IHealthCheck interface general and useful enough to be used broadly?

  • Including other health check systems
  • Including health checks written by other libraries and frameworks
  • Including health checks written by application authors

The best place to provide feedback is by logging issues on https://github.com/aspnet/Diagnostics

Caveats and notes

The health check middleware doesn't disable caching for responses to the health endpoint. We plan to add this in the future, but it didn't make it into the preview build.

Orica’s S/4HANA Foundational Architecture Design on Azure


This blog is a customer success story detailing how Cognizant and Orica have successfully deployed and gone live with a global S/4HANA transformation project on Azure. This blog contains many details and analysis of key decision points taken by Cognizant and Orica over the last two years leading to their successful go live in August 2018.

The blog below was written by Sivakumar Varadananjayan. Siva is the Global Head of Cognizant's SAP Cloud Practice and has been personally involved in the Orica 4S program from day one, first as presales head and now as Chief Architect for Orica's S/4HANA on Azure adoption.

Over the last 2 years, Cognizant has partnered and engaged as a trusted technology advisor and managed cloud platform provider to build Highly Available, Scalable, Disaster Proof IT platforms for SAP S/4HANA and other SAP applications in Microsoft Azure. Our customer Orica is the world's largest provider of commercial explosives and innovative blasting systems to the mining, quarrying, oil and gas and construction markets, a leading supplier of sodium cyanide for gold extraction, and a specialist provider of ground support services in mining and tunneling. As a part of this program, Cognizant has built Orica's new SAP S/4HANA Platform on Microsoft Azure and provides a Managed Public Cloud Platform as a Service (PaaS) offering.

Cognizant started the actual cloud foundation work during December 2016. In this blog article, we will cover some of the best practices that Cognizant adopted and share key learnings which may be essential for any customer planning to deploy their SAP workloads on Azure.

The following topics will be covered:

  • Target Infrastructure Architecture Design
    • Choosing the right Azure Region
    • Write Accelerator
    • Accelerated Networking
  • SAP Application Architecture Design
    • Sizing Your SAP Landscape for the Dynamic Cloud
    • Increasing/decreasing capacity
  • HA / DR Design (SUSE HA Cluster)
    • SUSE cluster
    • Azure Site Recovery (ASR)
  • Security on Cloud
    • Network Security Groups
    • Encryption – Disk, Storage account, HANA Data Volume, Backup
    • Role-Based Access Control
    • Locking resources to prevent deletion
  • Operations & Management
    • Reporting
    • Costing
    • Creation of clone environments
    • Backup & restore

Target Infrastructure Architecture Design

The design of a fail-proof infrastructure architecture involves visualizing the end-state with great detail. Capturing key business requirements and establishing a set of design principles will clarify objectives and help in proper prioritization while making design choices. Such design principles include but are not limited to choosing a preferred Azure Region for hosting the SAP Applications, as well as determining preferences of Operating System, database, end user access methodology, application integration strategy, high availability, disaster recovery strategy, definition of system criticality and business impacts of disruption, definition of environments, etc. During the Design phase, Cognizant involved Microsoft and SUSE along with other key program stakeholders to finalize the target architecture based on the customer's business & security requirements. As part of the infrastructure design, critical foundational aspects such as Azure Region, ExpressRoute connectivity with Orica's MPLS WAN, and integration of DNS and Active Directory domain controllers were finalized.

At the time of discussing the infrastructure preparation, various topics including VNet design (subnet IP ranges), host naming convention, storage requirements, and initial VM types based on compute requirements were derived. In the case of Orica's 4S implementation, Cognizant implemented a three-tier subnet architecture – Web Tier, Application Tier and Database Tier. The three-tier subnet design was applied for each of Sandpit, Development, Project Test, Quality and Production so that it provides the flexibility for Orica to deploy fine-grained NSGs at subnet level as per security requirements. Having a clearly defined tier-based subnet architecture also helps avoid complex NSGs being defined for individual VM hosts.

The Web Tier subnet is intended to host the SAP Web Dispatcher VMs; the Application Tier is intended to host the Central Services Instance VMs, Primary Application Server VMs and any additional application server VMs, the Database Tier is intended to host the database VMs. This is supplemented by additional subnets for infrastructure and management components, such as jump servers, domain controllers, etc.

Choosing the Right Azure Region

Although Azure operates over several regions, it is essential to choose a primary region into which main workloads will be deployed. Choosing the right Azure region for hosting the SAP Application is a vital decision to be made. The following factors must be considered for choosing the Right Azure Region for Hosting: (1) Legal and regulatory requirements dictating physical residence, (2) Proximity to the company's WAN points of presence and end users to minimize latency, (3) Availability of VMs and other Azure Services, and (4) Cost. For more information on availability of VMs, refer to the section "Sizing Your SAP Landscape for the Dynamic Cloud" under SAP Application Architecture Design.

Accelerated Networking

Accelerated Networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host from the data path, reducing latency, jitter, and CPU utilization, for use with the most demanding network workloads on supported VM types. Without accelerated networking, all network traffic in and out of the VM must traverse the host machine and the virtual switch.
With accelerated networking, network traffic arrives at the VM's network interface (NIC), and is then forwarded directly to the guest VM. All network policies that the virtual switch applies are now offloaded and applied in hardware. While essential for good and predictable HANA performance, not all VM types and operating system versions support Accelerated Networking, and this must be taken into account for the infrastructure design. Also, it is important to note that Accelerated Networking helps to minimize latency for network communication within the same Azure Virtual Network (VNet). This technology has minimal impact to overall latency during network communication over multiple Azure VNets.

Storage

Azure provides several storage options including Azure Disk Storage – Standard, Premium and Managed (attached to VMs), Azure Blob Storage, etc. At the time of writing this article, Azure is the only public cloud service provider that offers Single VM SLA of 99.9% under the condition of operating the VM with Premium Disks attached to it. The cost value proposition of choosing Premium Disk over Standard Disk for the purpose of getting an SLA for Single VM SLA is significantly beneficial and hence Cognizant recommends provisioning all VMs with Premium Disks for application and database storage. Standard Disks are appropriate to store Backups of Databases, and Azure Blob is used for Snapshots of VMs and transferring Backups and storing them as per the Retention Policy. For achieving an SLA of > 99.9%, High Availability techniques can be used. Refer to the section 'High Availability and Disaster Recovery' in this article for more information.

Write Accelerator

Write Accelerator is a disk capability for M-Series Virtual Machines (VMs) on Azure running on Premium Storage with Azure Managed Disks exclusively. As the name states, the purpose of the functionality is to improve the I/O latency of writes against Azure Premium Storage. Write Accelerator is ideally suited to be enabled for disks to which database redo logs are written to meet the performance requirement of modern databases such as HANA. For production usage, it is essential that the final VM infrastructure thus setup should be verified using SAP HANA H/W Configuration Check Tool (HWCCT). These results should be validated with relevant subject matter experts to ensure the VM is capable of operating production workloads and is thus certified by SAP as well.

SAP Application Architecture Design

The SAP Application Architecture Design must be based on the guiding principles that must be adopted for building the SAP applications, systems and components. To have a well laid out SAP Application Architecture Design, you must determine the list of SAP Applications that are in scope for the implementation.

It is also essential to review the following SAP Notes that provide important information on deploying and operating SAP systems on public cloud infrastructure:

  • SAP Note 1380654 - SAP support in public cloud environments
  • SAP Note 1928533 - SAP Applications on Azure: Supported Products and Azure VM types
  • SAP Note 2316233 - SAP HANA on Microsoft Azure (Large Instances)
  • SAP Note 2235581 - SAP HANA: Supported Operating Systems
  • SAP Note 2369910 - SAP Software on Linux: General information

    Choosing the OS/DB Mix of your SAP Landscape

    Using this list, the SAP Product Availability Matrix can be leveraged to determine whether the preferred Operating System and Database is supported for each of the SAP applications in scope. From an ease of maintenance and management perspective, you may want to consider not having more than two variants of databases for your SAP application databases. SAP has started providing support for the SAP HANA database for most of its applications, and since SAP HANA supports multi-tenant databases, you may well want to have most of your SAP applications run on the SAP HANA database platform. For some applications that do not support the HANA database, other databases might be required in the mix. SAP's S/4HANA application runs only on the HANA database. Orica chose to run HANA for every SAP application where supported and SQL Server otherwise – as this was in line with the design rationale and simplified database maintenance, backups, HA/DR configuration, etc.

    With SAP HANA 2.0 becoming mainstream (it is also mandatory for S/4HANA 1709 and higher), fewer operating systems are supported than with SAP HANA 1.0. For example SUSE Enterprise for SAP Applications is now the only flavor of SUSE supported, while "normal" SUSE Enterprise was sufficient for HANA 1.0. This may have a licensing impact for the customers, as Azure only provides BYO Subscription images. Hence customers must supply their own operating system licenses.

    Type of Architecture

    SAP offers deploying its NetWeaver Platform based applications either in a Central System Architecture (Primary Application Server and Database in the same host) or in a Distributed System Architecture (Primary Application Server, Additional Application Servers and Database in separate hosts). You need to choose the type of architecture based on a thorough cost value proposition, business criticality and application availability requirements. You also need to determine the number of environments that each SAP application will require, such as Sandbox, Development, Quality, Production, Training, etc. This is predominantly determined based on the change governance that you plan to set up for the project. Systems that are business critical and have requirements for high availability, such as the Production environment, must always be considered for deployment in a Distributed System Architecture scenario with a High Availability Cluster. In the case of public cloud infrastructure, this is even more critical as VMs tend to fail much more frequently than traditional "expensive" on-premises kit (e.g. IBM p-Series). In the past one could afford to be lax about HA, because individual servers tended to fail only rarely. However, we're seeing a relatively higher rate of server failure in public cloud, so if uptime is important, then HA must be set up for business critical systems. For both critical and non-critical systems, parameters should be enabled to ensure the application and database start automatically in the event of an inadvertent server restart. Disaster Recovery is often recommended for most of the SAP Applications that are business critical, based on Recovery Point Objective (RPO) and Recovery Time Objective (RTO).

    Cognizant designed a 4-system landscape and a distributed SAP architecture for Orica. We separated the SAP application and DB servers, because when taken in the context of HANA MDC and running everything on HANA by default, a Central System Architecture no longer makes sense. We have also named HANA database SIDs without any correlation to the tenants that each HANA database holds. This is done with the intention of future-proofing and allowing the tenants to change HANA hosts in the future if needed. In the case of Orica, we have also implemented custom scripting for automated start of SAP applications, which can further be controlled (disabled or enabled) by a centrally located parameter file. High availability is designed for the production and quality environments. Disaster recovery is designed as per Orica's Recovery Point Objective (RPO) and Recovery Time Objective (RTO) defined by business requirements.

    Sizing Your SAP Landscape for the Dynamic Cloud

    Once you have determined the type of SAP architecture, you will have a fair idea about the number of individual Virtual Machines that will be required to deploy each of these components. From an infrastructure perspective, the next step that you will need to perform is to size the Virtual Machines. You can leverage standard SAP methodologies such as Quick Sizer using Concurrent User or Throughput Based sizing. Best practice is to do the sizing using Throughput based sizing. This will provide the SAPS and memory requirements for the application and database components, and the memory requirement in the case of a HANA database. Tabulate the critical pieces of sizing information in a spreadsheet and refer to the standard SAP notes to determine the equivalent VM types in Azure Cloud infrastructure. Microsoft is getting SAP certification for new VMs on a regular basis, so it is always advisable to check the recent SAP notes for the latest information. For HANA databases, you will most often require VMs from the E-Series (Memory Optimized) and M-Series (Large Memory Optimized) based on the size of the database. At the time of writing this article, the maximum capacity supported with E-Series and M-Series is 432 GB and 3.8 TB respectively. E-Series offers a better cost value proposition compared to the earlier GS-series VMs offered by Azure. At this point you need to evaluate that the resulting VMs are available in the Azure region that you have preferred to host your SAP landscape. In some cases, depending upon the geography, there is a possibility that some of these VM types may not be available, and it is essential to be careful and choose the right Geography and Azure Region where all the required VM types are available. However, remember that public cloud offers great scalability and elasticity. You do not need an accurate peak sizing to provision your environments. You always have the room to scale up or scale down your SAP systems based on actual usage by monitoring utilization metrics such as CPU, memory and disk utilization. Within the same Virtual Machine series, this can be done just by powering off the VM, changing the VM size and powering on the VM. Typically, the whole VM resizing procedure does not take more than a few minutes. Ensure that your system will fit into what's available in Azure at any point of time. For instance, spinning up a 1 TB M-Series and then finding that a 1.7 TB instance is needed instead does not cause much of a hassle as it can be easily re-sized. However, if you are not sure whether your system will grow beyond 3.8 TB (the maximum capacity of M-Series), then you are at bigger risk as complications will start to creep up (Azure Large Instances may be needed for rescue in such cases). Reserved Instances are also available in Azure, and can be leveraged for further cost optimization if accurate sizing of actual hardware requirements is performed before purchasing (to avoid over-committing).

High Availability and Disaster Recovery

Making business-critical systems such as SAP S/4HANA highly available with > 99.9% availability requires a well-defined High Availability architecture design. As per Azure, VM clusters deployed in an availability set within a region offer 99.95% availability. Azure offers an SLA of 99.99% when the compute VMs are deployed within a region across multiple Availability Zones. For achieving this, it is recommended to look for the availability of Availability Zones in the region that is chosen for hosting the SAP applications. Note that Azure Availability Zones are still being rolled out by Microsoft and they will eventually arrive in all regions over a period of time. Also, components that are a Single Point of Failure (SPOF) in SAP must be deployed in a cluster such as a SUSE cluster. Such a cluster must reside within an availability set to attain 99.95% availability. To achieve high availability at the Azure infrastructure level, all the VMs are added to an availability set and exposed through an Azure Internal Load Balancer (ILB). These components include the (A)SCS cluster, DB cluster and NFS. It is also recommended to provision at least two application servers within an availability set (Primary Application Server and Additional Application Server), so as to ensure the application servers are redundant. Cognizant, Microsoft and SUSE worked together to build a collaborative solution based on a Multi-Node iSCSI server configuration. The Multi-Node iSCSI server HA configuration for SAP applications at Orica was the first to be deployed with this configuration on the Azure platform.

As discussed earlier, in cases where SAP components are not prevented from failure using High Availability setup, it is recommended to provision such VMs with Premium Storage Disks attached to it to take advantage of the Single VM SLA. All VMs at Orica use Premium Disks for their application and database volumes because this is the only way they would be covered by the SLA, and we also found performance to be better and more consistent.

Details about the SUSE cluster are described below.

SUSE Cluster Setup for HA:

(A)SCS layer high availability is achieved using a SUSE HA Extension cluster. DRBD technology is not used for replication of application files such as SAP kernel files. This design is based on the recommendation from Microsoft and it is supported by SAP as well. The reason for not enabling DRBD replication is the potential performance issues that can pop up when synchronous replication is configured, and the fact that recovery cannot be guaranteed with such a configuration when ASR is enabled for Disaster Recovery replication at the application layer. NFS layer high availability is achieved using a SUSE HA Extension cluster, with DRBD technology used for data replication. It is also recommended by Microsoft to use a single NFS cluster to cater for multiple SAP systems to reduce the complexity of the overall design.

HA testing needs to be performed thoroughly, and must be simulated for many different failure situations beyond a simple clean shutdown of the VM. E.g. Usage of halt command to simulate a VM power off, adding firewall rules in the NSG to simulate problems with the VM's network stack, etc.

We are excited to announce that Orica is the first customer on the Multi-SID SUSE HA cluster configuration.

More details on the technical configuration of setting up HA for SAP are described here. Pacemaker on SLES in Azure is recommended to be set up with an SBD device; the configuration details are described here. Alternatively, if you do not want to invest in one additional virtual machine, you can also use the Azure Fence Agent. The downside of the Azure Fence Agent is that a failover can take between 10 to 15 minutes if a resource stop fails or the cluster nodes cannot communicate with each other anymore.

Another important aspect of ensuring application availability during a disaster is a well-architected DR solution that can be invoked through a well-orchestrated Disaster Recovery Plan.

Azure Site Recovery (ASR):

Azure Site Recovery assists in business continuity by keeping business apps and workloads running during outages. Site Recovery replicates workloads running on physical and virtual machines (VMs) from a primary site to a secondary location. At the time of failover, apps are started in the secondary location and accessed from there after making relevant changes in the cluster configuration and DNS. After the primary location is running again, you can fail back to it. ASR was not tested for Orica at the time of current Go-Live as the support for SLES 12.3 was made GA by Microsoft too close to Cut-Over. However, we are currently evaluating this feature and we will be using this for DR at the time of Go-Live of the next phase.

Security on Cloud

Most of the traditional security concepts such as security at physical, server, hypervisor, network, compute and storage layers are applicable for overall security of the cloud. These are provided by the public cloud platform inherently and are audited as well by 3rd party IT security certification providers. Security on the Cloud will help you to protect the hosted applications on the cloud by leveraging features and customization aspects that are available through the cloud provider and those security features provided within the applications hosted on cloud.

Network Security Groups

Network Security Groups (NSGs) are rules applied at the networking layer that control traffic and communication with VMs hosted in Azure. In Azure, separate NSGs can be associated with the Prod, Non-Prod, Infra, management and DMZ environments. It is important to arrive at a strategy for defining the NSG rules in such a way that it is modularized and easy to comprehend and implement. Strict procedures need to be implemented to control these rules. Otherwise, you may often end up with unnecessary redundant rules which will make it harder to troubleshoot any network communication related issues.

In the case of Orica, an initiative was implemented to optimize the number of NSG rules by adding multiple ports for the same source and destination ranges in the same rule. A change approval process was introduced once the NSGs were associated. All the NSG rules are maintained in a custom formatted template (CSVs) which is utilized by a script for the actual configuration in Azure. We expect it would be too difficult to do this manually for multiple VNets across multiple regions (e.g., primary, DR, etc.).

Encryption of Storage Account and Azure Disk

Azure Storage Service Encryption (SSE) is recommended to be enabled for all the Azure Storage Accounts. Through this, Azure Blobs will be encrypted in the Azure Storage. Any data that is written to the storage after enabling the SSE will be encrypted. SSE for Managed Disks is enabled by default.

Azure Disk Encryption leverages the industry standard BitLocker feature of Windows and the DM-Crypt feature of Linux to provide volume encryption for the OS and the data disks. The solution is integrated with Azure Key Vault to help you control and manage the disk-encryption keys and secrets in your key vault subscription. Encryption of the OS volume will help protect the boot volume's data at rest in your storage. Encryption of data volumes will help protect the data volumes in your storage. Azure Storage automatically encrypts your data before persisting it to Azure Storage, and decrypts the data before retrieval.

SAP Data at Rest Encryption

Data at rest is encrypted for SAP Applications by encrypting the database. SAP HANA 2.0 and SQL Server natively support data at rest encryption and they provide the additional security that is needed in case of a data theft. In addition to that the backups of both these databases are encrypted and secured by a Pass Phrase to ensure these backups are only readable and can be leveraged by authentic users.

In the case of Orica, both Azure Storage Service Encryption and Azure Disk Encryption were enabled. In addition to this, SAP Data at Rest Encryption was enabled in SAP HANA 2.0 and TDE encryption was enabled in SQL Server database.

Role-Based Access Control (RBAC)

Azure Resource Manager provides a granular Role-Based Access Control (RBAC) model for assigning administrative privileges at the resource level (VMs, storage, etc.). Using an RBAC model (e.g., service development team, app development team) can help in segregation and control of duties and grant users/groups only the amount of access they need to perform their jobs in selected resources. This enforces the principle of least privilege.

Resource Lock

An administrator may need to lock a subscription, resource group, or resource to prevent other users in organization from accidentally deleting or modifying critical resources. We can set the lock level to CanNotDelete or ReadOnly. In the portal, the locks are called Delete and Read-only respectively. Unlike RBAC, Locking Resources would prevent intentional and accidental deletion of resources for all the users including the users who have owner access as well. CanNotDelete means authorized users can still read and modify a resource, but they can't delete the resource. ReadOnly means authorized users can read a resource, but they can't delete or update the resource. Applying this lock is similar to restricting all authorized users to the permissions granted by the Reader role. For Orica, we have configured this for critical pieces of Azure infrastructure, to provide an additional layer of safety.

Operations & Management

Cognizant provides Managed Platform as a Service (mPaaS) for Orica through Microsoft Azure Cloud. Cognizant has leveraged several advantages of operating SAP systems in public cloud including scheduled automated Startup and Shutdown, automated backup management, monitoring and alerting, automated technical monitoring for optimizing the overall cost of technical operations and management. Some of the recommendations are described below.

Azure Costing and Reporting

Azure Cost Management licensed by Cloudyn, a Microsoft subsidiary, allows you to track cloud usage and expenditures for your Azure resources and other cloud providers including AWS and Google. Monitoring your usage and spending is critically important for cloud infrastructures because organizations pay for the resources they consume over time. When usage exceeds agreement thresholds, unexpected cost overages can quickly occur.

Reports help you monitor spending to analyze and track cloud usage, costs, and trends. Using Over Time reports, you can detect anomalies that differ from normal trends. More detailed, line-item level data may also be available in the EA Portal (https://ea.azure.com), which is more flexible compared to the Cloudyn reports and could be more useful.

Backup & Restore

One of the primary requirements of system availability management as part of technical operations is to protect the systems from accidental data loss due to factors such as infrastructure failure, data corruption or even complete loss of the systems in the event of a disaster. While concepts such as High Availability and Disaster Recovery will help to mitigate infrastructure failures, for handling events such as data corruption, loss of data, etc. a robust backup and restore strategy is essential. Availability of backups allow us to technically restore an application back to working state in case of a system corruption and present the "last line of defense" in case of a disaster recovery scenario. The main goal of backup/restore procedure is to restore the system to a known-working state.

Some of the key requirements for Backup and Restore Strategy include:

  • Backup should be restorable
  • Prefer to use native database backup and restore tools
  • Backup should be secure and encrypted
  • Clearly defined retention requirements

    VM Snapshot Backups

    Azure Infrastructure offers native backup for VMs (inclusive of disks attached) using VM Snapshots. VM Snapshot backups are stored within Azure Vaults which are part of the Azure Storage architecture and are geo-redundant by default. It is to be noted that Microsoft Azure does not support traditional data retention medium such as tapes. Data retention in cloud environment is achieved using technologies such as Azure Vault and Azure Blob which are part of Azure Storage Account architecture. In general, all VMs provisioned in Microsoft Azure (including databases) should be included as part of the VM Snapshot backup plan although the frequency can vary based on the criticality of the environment and criticality of the application. Encryption should be enabled at Azure Storage Account level so that the backups when stored in the Azure Vault are also encrypted when accessed outside the Azure Subscription.

    Database Backups

    While the restorability of the file system and database software can be achieved using the VM Snapshot process described above, VMs containing databases may not be able to restore the database to a consistent state. Hence, backups of databases are highly essential to guarantee the restorability of databases. It is advisable to include all databases in the landscape as part of the Full Database Backups; the schedules for these must be defined based on the business criticality and requirements of the application. The consistency of the database backup file should be checked after the database backup is taken, to ensure the restorability of the database backup.

    In addition to Full Database backups, it is recommended to perform transaction log backups at regular intervals. This frequency must be higher for a production environment to support point in time recovery requests and the frequency can be relatively lower for non-production environments.

    Both Full Database Backups and Transaction Log Backups must be transferred to an offline device (such as Azure Blob) and retained as per data retention requirement. It is recommended to have all database backups to be encrypted using Native Database Backup Data Encryption methodology if the database supports it. SAP HANA 2.0 supports Native DB Backup Encryption.

    Database Backup Monitoring and Restorability Tests

    Backup Monitoring is essential to ensure the backups are occurring as per frequency and schedule. This can be automated through scripts. Restorability Test of backups will assist in guaranteeing the restorability of an application in the event of a disaster or data loss or data corruption.

    Conclusion

    Cognizant SAP Cloud Practice in collaboration with SAP, Microsoft and SUSE leveraged and built some of the best practices for deploying SAP landscape in Azure for Orica's 4S Program. Through this article, some of the key topics that are very relevant for architecting an SAP landscape on Azure are exhibited. Hope you found this blog article useful. Feel free to add your comments.

ASP.NET Core 2.2.0-preview1: SignalR Java Client


This post was authored by Mikael Mengistu.

In ASP.NET Core 2.2 we are introducing a Java Client for SignalR. The first preview of this new client is available now. This client supports connecting to an ASP.NET Core SignalR Server from Java code, including Android apps.

The API for the Java client is very similar to that of the already existing .NET and JavaScript clients but there are some important differences to note.

The HubConnection is initialized the same way, with the HubConnectionBuilder type.

HubConnection hubConnection = new HubConnectionBuilder()
        .withUrl("www.example.com/myHub")
        .configureLogging(LogLevel.Information)
        .build();

Just like in the .NET client, we can send an invocation using the send method.

hubConnection.send("Send", input);

Handlers for server invocations can be registered using the on method. One major difference here is that the argument types must be specified as parameters, due to differences in how Java handles generics.

hubConnection.on("Send", (message) -> {
      // Your logic here
}, String.class);

Installing the Java Client

If you’re using Gradle you can add the following line to your build.gradle file:

implementation 'com.microsoft.aspnet:signalr:0.1.0-preview1-35029'

If you’re using Maven you can add the following lines to the dependencies section of your pom.xml file:

<dependency>
  <groupId>com.microsoft.aspnet</groupId>
  <artifactId>signalr</artifactId>
  <version>0.1.0-preview1-35029</version>
</dependency>

For a complete sample, see https://github.com/aspnet/SignalR-samples/tree/master/AndroidJavaClient

This is an early preview release of the Java client so there are many features that are not yet supported. We plan to close all these gaps before the RTM release:

  • Only primitive types can be accepted as parameters and return types.
  • The APIs are synchronous.
  • Only the “Send” call type is supported at this time; “Invoke” and streaming return values are not supported.
  • The client does not currently support the Azure SignalR Service.
  • Only the JSON protocol is supported.
  • Only the WebSockets transport is supported.

//DevTalk : App Service – SSL Settings Revamp


App Service SSL settings experience is one of the most used features in App Service. Based on customer feedback we are making the following changes to the UX to address and improve the overall experience of managing certificates in Azure App Service.

Tabs

The new SSL settings experience divides the features into 3 tabs: SSL Bindings, Private certificates (.pfx), and Public certificates (.cer). The Bindings tab allows the user to configure the protocol settings and add/edit/delete SSL bindings, the private certificates tab allows the user to upload and manage private certificates (.pfx) used in SSL bindings, and the public certificates tab allows the user to upload and manage public certificates (.cer). We also call out what type of certificate the customer needs to use for each feature.

 

 

Editing SSL Bindings

SSL Settings didn't have a way to update an existing SSL binding; the feature to edit was present but unfortunately hidden under the Add Binding flow. We enabled the ability to edit a few sprints ago, and we have now polished the feature further so the customer is free to edit any binding by clicking on its row. When you change the thumbprint for an IP Based SSL binding the IP will not be lost, but if you change from IP Based SSL to SNI and back you will lose the IP. The inability to change the certificate without removing and re-adding the binding was an issue in the past that we are addressing with this release. For more details on adding SSL Bindings click here.


Private Certificate Details

Private certificates used in App Service required a facelift to show the information we already gather when a certificate is uploaded, imported from App Service Certificate, or imported from KeyVault. The driving reason was seeing customers with hundreds of private certificates configured on their app who had a tough time browsing through them; the revamp now allows the customer to get details of the certificates they have uploaded and imported. We added a new status column to the grid showing three possible states: Healthy, Warning and Expired, where Warning means a certificate is about to expire in the next 60 days. We also explicitly mention that you will need to upload a .pfx file to add a private certificate.

Showing a test certificate that is valid.

We also show the KeyVault details and the sync status of certificates pulled from KeyVault, such as certificates imported from App Service Certificate.

 

Uploading certificates

The Upload Certificate experience is now more consistent overall in showing that private certificates only accept a .pfx file, and that you need a valid .pfx to add a private certificate to your App Service. Addressing another piece of feedback, we stopped showing both upload paths. Now, opening Upload Certificate from the private certificates tab shows a UX where you can only upload private certificates, avoiding confusion about whether to upload a .cer or .pfx file. When the Upload Certificate flow is opened from the public certificates tab, we show only the public certificate option.

We updated the upload certificate UX (and the underlying way it is implemented) to show errors while trying to upload a certificate without leaving the upload blade.

 

 

Public Certificates

Public certificates can only be used by your app; they cannot be used for SSL bindings. We are working out a way to move them to a place where they will make more sense, but for now SSL Settings is the place where public certificates will reside. Private certificates require an app setting to enable runtime access (it's covered in this very old but reliable blog here), and public certificates now add another dimension to that feature by allowing you to upload .cer files and get the certificates at runtime.
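As a rough sketch of that runtime access pattern (it assumes the WEBSITE_LOAD_CERTIFICATES app setting described in the linked blog is set to the certificate's thumbprint, or to *, which makes uploaded certificates available in the app's personal certificate store; the thumbprint value is a placeholder):

using System.Security.Cryptography.X509Certificates;

// Looks up an uploaded certificate from the app's certificate store at runtime.
public static X509Certificate2 FindCertificate(string thumbprint)
{
    var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
    store.Open(OpenFlags.ReadOnly);
    try
    {
        var matches = store.Certificates.Find(
            X509FindType.FindByThumbprint, thumbprint, validOnly: false);
        return matches.Count > 0 ? matches[0] : null;
    }
    finally
    {
        store.Close();
    }
}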

 

Thanks for reading! I am writing this blog to showcase the changes we made to improve the overall certificate management experience for customers using App Service day in and day out. We are always open to feedback and looking forward to your comments.

Feel free to reach out to us for any feature request or issues.

App Service MSDN forum

Feature Requests


Visual Studio Code C/C++ extension August 2018 Update


Late last week we shipped the August 2018 update to the C/C++ extension for Visual Studio Code. This update included support for “Just My Code” symbol search, a gcc-x64 option in the intelliSenseMode setting, and many bug fixes. You can find the full list of changes in the release notes.

“Just My Code” symbol search

The keyboard shortcut Ctrl+T in Visual Studio Code lets you jump to any symbol in the entire workspace.

We have heard feedback that it is sometimes desirable to exclude system header symbols from this search. In this update, we enabled “Just My Code” symbol search to filter out system symbols; it offers a cleaner result list and significantly speeds up symbol search in large codebases, so we've made this behavior the default.

If you need symbol search to also include system headers, simply toggle the C_Cpp.workspaceSymbols setting in the VS Code Settings file (File > Preferences > Settings).
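For example, in your settings.json this would be a single entry along the lines of the following (the value names here are taken from the extension's setting description and may vary by version; "All" includes system header symbols, while "Just My Code" is the new default):

"C_Cpp.workspaceSymbols": "All"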

Tell us what you think

Download the C/C++ extension for Visual Studio Code, try it out and let us know what you think. File issues and suggestions on GitHub. If you haven’t already provided us feedback, please take this quick survey to help shape this extension for your needs.

Add a Staff Notebook to Microsoft Teams


If you are using the EDU SKU for Office 365, you have been able to enjoy creating Staff Notebooks when creating a new Microsoft Teams team using the classroom template. If you're not using the EDU SKU, you can still create a Staff Notebook and include it as a tab within your team channel.

Create a new Staff Notebook

Even though a Staff Notebook uses a lot of education-oriented language, you can still adapt it for a corporate manager/employee type of relationship.

  1.  Navigate to https://www.onenote.com/staffnotebookedu and log in to your tenant.
  2.  Select the "Create a staff notebook" tile.

3.  Give it a name.

4.  Review the contents of what will be inside the staff notebook.  Keep in mind that the content will be more focused on education but you can modify this to your needs.

 

5.  Add staff members.  These will come from the users in your tenant.  Be aware that you should only add members who will be part of your team in Microsoft Teams.

 

6.  You can choose from a standard set of sections that will be in your notebook, or create new ones to meet your needs.

 

7.  You can preview what the notebook will look like for the staff leader and for staff members.  Remember that the leader sees everything; a member sees only the common team sections and their own section.

8.  Member view:

 

9.  View your staff notebook in the browser

Add the Staff Notebook to Microsoft Teams

Now that you have the staff notebook created, let's add it to Teams.

  1. Create a new team in Microsoft Teams or use an existing one; it doesn't matter.
  2. Get the URL of the Staff Notebook you created above.
  3. In Microsoft Teams, create a new tab in the team where you want to add the Staff Notebook (click the + sign in the respective team channel).
  4. Choose the "Website" tile.

 

5.  The tile properties will ask you to name the tab and provide the URL, which is the staff notebook you created.

 

6.  Save the new tab and the Staff Notebook will appear in Microsoft Teams.  When staff members log in to the team, the security model will display only their section of the notebook. Again, there is some education-focused content, but you can modify this to your needs.

MIM 4.5.26.0 – MPR Creation – The Required Field Cannot Be Empty


I recently ran into an issue after updating MIM 2016 to version 4.5.26.0 where I was unable to select workflows when creating an MPR.  The error displayed was "The Required Field Cannot Be Empty", and the selected workflow would be cleared when clicking the Next or Submit button.


Further testing showed that you could successfully add workflows from the first page of the paginated list of workflows, but not from subsequent pages.

MIM 4.5.26.0 Release documentation:

https://support.microsoft.com/en-us/help/4073679/hotfix-rollup-package-build-4-5-26-0-is-available-for-microsoft

ENVIRONMENT:

Windows Server 2012 R2

SharePoint Foundation 2013

SQL Client 2012

SQL Server 2016

.Net 4.6 (KB3045563)

RESOLUTION:

The resolution was to uninstall .NET 4.6 (KB3045563) after applying the MIM 4.5.26.0 patch, which allowed workflows to be selected and successfully saved to the MPR from all pages of the paginated workflow list.

Visual Studio Toolbox: Creating Games with Unity and Visual Studio


In this episode, I am joined by Arturo Nunez, who shows us the seamless integration of Visual Studio and Unity and how it makes you a much more productive game developer. You get the benefits of things like IntelliSense and full debugging support for your scripts, as well as Unity-specific features like directly implementing Unity API messages in MonoBehaviour scripts and the MonoBehaviour wizard for adding method definitions [09:00].

(And for a limited time, you can take advantage of the Unity Pro and Visual Studio Professional Bundle, which includes Visual Studio Pro, Unity Pro, $50 in monthly Azure credits and more.)

Resources:

Visual Studio Toolbox: Managing User Secrets


In this episode, I am joined by Andrew Cheung and Alicia Chan, who show how Visual Studio can help you stop storing sensitive data like connection strings and other user secrets in your code. They show how to store secrets locally in JSON or XML files and how to store them in Azure Key Vault.
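For context, the local JSON option shown in the episode maps to ASP.NET Core's Secret Manager (secrets.json stored outside the project tree). A minimal sketch of the flow, with illustrative names: you set a secret from the project directory with `dotnet user-secrets set "ConnectionStrings:Default" "<your connection string>"`, and then read it through the normal configuration APIs.

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;

public class HomeController : Controller
{
    private readonly IConfiguration _config;

    public HomeController(IConfiguration config)
    {
        _config = config;
    }

    public IActionResult Index()
    {
        // In the Development environment, CreateDefaultBuilder wires up user secrets,
        // so this reads the locally stored value instead of anything checked into code.
        var connectionString = _config.GetConnectionString("Default");
        return Content(connectionString != null ? "Connection string found" : "Not configured");
    }
}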

[Important] Skype for Business readiness for the retirement of TLS 1.0/1.1 in Office 365


Hello, this is the Japan Lync/Skype for Business support team.

 

As announced in the Message Center and in the public documentation below, support for TLS 1.0/1.1 will be retired across all of Office 365, including Skype for Business Online, on and after October 31, 2018.

Title : Preparing for the mandatory use of TLS 1.2 in Office 365
URL : https://support.microsoft.com/ja-jp/help/4057306

 

Information about the Skype for Business preparation related to this change has been published in the following blog post.

Title : Preparing for TLS 1.0/1.1 Deprecation - O365 Skype for Business
URL : https://techcommunity.microsoft.com/t5/Skype-for-Business-Blog/Preparing-for-TLS-1-0-1-1-Deprecation-O365-Skype-for-Business/ba-p/222247

 

For Skype for Business, some older clients and devices such as desk phones will no longer work after TLS 1.0/1.1 are retired, and in some cases updates or upgrades are required. Because this is important, this post explains the details; please review it and prepare accordingly.

 

  1. Client connectivity to Office 365
    • Cases that require preparation
    • Lync/Skype for Business client readiness  *Important*
    • Windows OS readiness (Windows 7 only)  *Important*
    • Windows OS readiness (all versions) *Please double-check*
  2. Integration between on-premises servers and Office 365
    • Cases that require preparation
    • On-premises Lync/Skype for Business Server readiness *Important*
  3. Third-party products that integrate with Skype for Business Online
  4. Other considerations

*Note* This article does not describe how to disable TLS 1.0/1.1 on clients or on on-premises Lync/Skype for Business Server. It describes how to prepare so that clients and on-premises Lync/Skype for Business Server can continue to communicate with Office 365 over TLS 1.2 on and after October 31, 2018, once TLS 1.0/1.1 are retired and TLS 1.2 is enforced in Office 365.

 

1. Client connectivity to Office 365

Cases that require preparation

Preparation is required not only when you use Skype for Business Online but also when you use Exchange Online, because the Lync/Skype for Business clients also connect to Exchange.

Accordingly, as shown in the table below, if you use at least one of Skype for Business Online or Exchange Online, you need to prepare for the TLS 1.0/1.1 retirement.

 

Even if you use on-premises Lync/Skype for Business Server and Exchange Server (the "No*" case in the table above), federation with external organizations still needs to be considered: if you federate with an organization that uses Skype for Business Online, preparation for the TLS 1.0/1.1 retirement is also required.

 

Lync/Skype for Business client readiness  *Important*

To connect to Office 365 once TLS 1.0/1.1 are retired and TLS 1.2 becomes mandatory, use clients at or above the minimum versions listed below. Connections from clients that do not meet this requirement are not supported, so be sure to update.

<TLS 1.2-capable clients/devices and minimum versions>

  • Lync 2013/Skype for Business 2015 desktop client (MSI/Click-to-Run, including Basic): 15.0.5023.1000 or later
  • Skype for Business 2016 desktop client (MSI, including Basic): 16.0.4678.1000 or later
  • Skype for Business 2016 desktop client (Click-to-Run): April 2018 update or later
    • Monthly Channel / Semi-Annual Channel (Targeted) – 16.0.9126.2152 or later
    • Semi-Annual Channel – 16.0.8431.2242 or later
  • Skype for Business on Mac 16.15 or later
  • Skype for Business for iOS and Android 6.19 or later

 

The following client applications do not fully support TLS 1.2. Please consider moving to one of the supported client applications listed above.

<Clients/devices that do not support TLS 1.2>

  • Lync for Mac 2011
  • Lync 2013 for Mobile (iOS, iPad, Android, Windows Phone)
  • Lync "MX" Windows Store client
  • All Lync 2010 clients
  • Lync Phone Edition   *For guidance on LPE, see the link here.
  • Lync Room System (SRSv1)
    • LRS option – upgrade from SRSv1 (LRS) to SRSv2. This is still being investigated by our product group, and additional guidance will be published.

 

The following devices do not support TLS 1.2 at this time. Our product group is continuing to investigate them, and additional guidance will be published. For the latest information, please see the TechCommunity blog post linked above; we will also update this post as new information becomes available.

<Devices still under investigation>

  • Skype Room System (also known as 'SRSv2' / Rigel)
  • Surface Hub

 

Windows OS readiness (Windows 7 only)  *Important*

As shown in the table below, TLS 1.2 is enabled by default on Windows 8 and later, so no additional OS-level work is required there. On Windows 7, TLS 1.2 is disabled by default and action is required; please follow the steps in this section.

 

Two changes are required on Windows 7:

  • Enable TLS 1.2 in SCHANNEL
  • Enable TLS 1.2 by default in WinHTTP

 

<Enabling TLS 1.2 in SCHANNEL>

Add the following two registry values to enable TLS 1.2.

Key : HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client

Value name : Enabled (REG_DWORD)
Value data : 1

Value name : DisabledByDefault (REG_DWORD)
Value data : 0

 

<Enabling TLS 1.2 by default in WinHTTP>

Two steps are required here.

1) Install update KB3140245

Install the following update so that the registry setting described below can enable TLS 1.2 by default. Note that Windows 7 SP1 is a prerequisite.

Title : Update to enable TLS 1.1 and TLS 1.2 as default secure protocols in WinHTTP in Windows
URL :
https://support.microsoft.com/ja-jp/help/3140245 (Japanese, machine translated)
https://support.microsoft.com/en-us/help/3140245 (English, original)

2) Set the DefaultSecureProtocols registry value

Add the following DefaultSecureProtocols registry value to enable TLS 1.2 by default on Windows 7 SP1.

Key :
• x86 OS: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp
• x64 OS **add to both paths** :
- HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp
- HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Internet Settings\WinHttp
Value name : DefaultSecureProtocols (REG_DWORD)
Value data :
• To enable only TLS 1.2 by default: 0x00000800
• To enable TLS 1.2/1.1 by default: 0x00000A00
• To enable TLS 1.2/1.1/1.0 by default: 0x00000A80

For details about DefaultSecureProtocols, see KB3140245 above.
You can also configure DefaultSecureProtocols with the Easy fix; to do so, click [Download] in KB3140245. A sample .reg file combining both registry changes is shown below.

 

Windows OS readiness (all versions) *Please double-check*

<Enabling TLS 1.2 in WinInet>

When Internet Explorer 11 is installed, TLS 1.2 is enabled by default in WinInet. However, because this setting can be changed from the [Advanced] tab of Internet Options, please confirm that TLS 1.2 is enabled in your deployment.

Internet Options > [Advanced] tab > [Use TLS 1.2]

 

 

2. Integration between on-premises servers and Office 365

Cases that require preparation

Even when users are homed on an on-premises Lync/Skype for Business Server, there are scenarios that involve Office 365, such as hybrid deployments and federation. The table below lists the Skype for Business-related integration scenarios and whether preparation for the TLS 1.0/1.1 retirement is required.

*1 See Exchange Server TLS guidance, part 1: Getting Ready for TLS 1.2

 

On-premises Lync/Skype for Business Server readiness

To continue integrating with Office 365 once TLS 1.0/1.1 are retired and TLS 1.2 becomes mandatory, update to the following versions.

  • Skype for Business Server 2015
    • Update: CU6 HF2 (6.0.9319.516 / March 2018 update) or later
    • OS: Windows Server 2012*, Windows Server 2012 R2, Windows Server 2016
  • Skype for Business Server 2015 (in-place upgrade)
    • Update: CU6 HF2 (6.0.9319.516 / March 2018 update) or later
    • OS: Windows Server 2008 R2*, Windows Server 2012*, Windows Server 2012 R2
  • Lync Server 2013
    • Update: CU10 (5.0.8308.1001 / April 2018 update) or later
    • OS: Windows Server 2008 R2, Windows Server 2012, Windows Server 2012 R2

*  KB3140245 or a superseding update must already be applied

Hybrid deployment scenarios with on-premises Lync Server 2010 are not supported. If you are using on-premises Lync Server 2010, we recommend upgrading to Skype for Business Server 2015 CU6 HF2 or later as described above.
Hybrid deployments with Office Communications Server 2007 R2 or earlier are also not supported.

 

3. Third-party products that integrate with Skype for Business Online

Skype for Business Online provides SDKs/APIs for developing applications that integrate with Skype for Business Online. Whether an application implemented with these SDKs/APIs fully supports TLS 1.2 depends on how the application was implemented.

If you use a third-party application built with these SDKs/APIs, please ask the application vendor whether the application fully supports TLS 1.2. If you develop applications using these SDKs/APIs yourself, refer to the whitepaper below and evaluate and update your application so that it supports TLS 1.2 (a simple .NET example is sketched after the link).

Title : Whitepaper: Solving the TLS 1.0 Problem
URL : https://www.microsoft.com/download/details.aspx?id=55266
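As one concrete illustration of the kind of change discussed in that whitepaper (a sketch only, not guidance for any specific SDK): older .NET Framework applications frequently have to opt in to TLS 1.2 explicitly before their HTTPS calls can negotiate it.

using System;
using System.Net;

class TlsCheck
{
    static void Main()
    {
        // On .NET Framework 4.5-4.6.x, TLS 1.2 is supported but is not always
        // offered by default; adding it to the enabled protocols is a common fix.
        ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12;

        using (var client = new WebClient())
        {
            // Any HTTPS request made after this point can negotiate TLS 1.2.
            Console.WriteLine(client.DownloadString("https://www.microsoft.com"));
        }
    }
}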

 

4. Other considerations

If you have configured federated authentication, please also review the AD FS/WAP (AD FS Proxy) guidance in the blog post below.

Title : AD FS / WAP (AD FS Proxy) readiness for the retirement of TLS 1.0/1.1 in Office 365
URL : https://blogs.technet.microsoft.com/jpazureid/2018/01/10/adfs-tls12/

From a network perspective, please also confirm that any proxy servers and other network or security devices on the path to Office 365 support TLS 1.2.

 

 

References

Title : Preparing for the mandatory use of TLS 1.2 in Office 365
URL : https://support.microsoft.com/ja-jp/help/4057306

Title : Preparing for TLS 1.0/1.1 Deprecation - O365 Skype for Business
URL : https://techcommunity.microsoft.com/t5/Skype-for-Business-Blog/Preparing-for-TLS-1-0-1-1-Deprecation-O365-Skype-for-Business/ba-p/222247

Title : Whitepaper: Solving the TLS 1.0 Problem
URL : https://www.microsoft.com/download/details.aspx?id=55266

Title : Update to enable TLS 1.1 and TLS 1.2 as default secure protocols in WinHTTP in Windows
URL :
https://support.microsoft.com/ja-jp/help/3140245 (Japanese, machine translated)
https://support.microsoft.com/en-us/help/3140245 (English, original)

Title : Certified Skype for Business Online Phones and what this means for Microsoft Teams
URL : https://techcommunity.microsoft.com/t5/Skype-for-Business-Blog/Certified-Skype-for-Business-Online-Phones-and-what-this-means/ba-p/120035

Title : Exchange Server TLS guidance, part 1: Getting Ready for TLS 1.2
URL : https://blogs.technet.microsoft.com/exchange/2018/01/26/exchange-server-tls-guidance-part-1-getting-ready-for-tls-1-2/

Title : AD FS / WAP (AD FS Proxy) readiness for the retirement of TLS 1.0/1.1 in Office 365
URL : https://blogs.technet.microsoft.com/jpazureid/2018/01/10/adfs-tls12/

Title : How to make sure Outlook 2016/2013/2010 uses TLS 1.2 when connecting to Exchange Online (action required on Windows 7)
URL : https://blogs.technet.microsoft.com/outlooksupportjp/2018/01/05/tls/

Title : Frequently asked questions about TLS 1.2 support in the Outlook client
URL : https://blogs.technet.microsoft.com/outlooksupportjp/2018/02/14/tlsfaq/

Title : How to verify that Outlook is actually communicating over TLS 1.2
URL : https://blogs.technet.microsoft.com/outlooksupportjp/2018/01/23/outlook_tls_check/

 

 

Disclaimer:
The content of this post (including attachments and links) is current as of the date it was written and is subject to change without notice.

Introducing yet another approach for IoT compiler toolchains – iotz


iotz is an extension-based, containerized wrapper for other IoT compiler toolchains.
There are many toolchains, each with its own requirements and way of working. We developed this experimental tool to make compiling things easier, because:

- cross-compiling tools are mostly platform specific (and sometimes hard to set up properly)
- the tools may not be available on the user's favorite OS (or may have platform-specific bugs or inconsistencies)
- toolchains or their dependencies sometimes don't play well with each other on the same host system
- there are many platforms for IoT toolchain developers to target
- reproducing build reliability across systems is not easy
- the entry bar for a device framework and its tooling is high
- advanced users might still need a transparent way to reach the actual underlying framework
- some platforms already benefit from pre-built Docker containers as a build environment

iotz:
- tries to address the problems above
- provides a seamless interface for IoT and cross-platform toolchains
- provides external extension support, so anyone (big or small) can plug in their platform freely
- doesn't provide any toolchain by itself (an extension can add commands or define the behavior of pre-existing commands)

(It is at an early stage, so both feedback and contributions are appreciated.)

The source code repository is hosted on GitHub.

How does iotz work?

iotz ships an Ubuntu base image with the most common dependencies pre-installed.
That image is named/tagged `azureiot/iotz`.

To make things easier to explain, let's assume we want to develop an application for an `Arduino Uno` board using the Arduino toolchain.
First, we need to set up the environment. iotz expects us to execute `iotz init arduino uno` in the project folder.

Once we do that, iotz calls the `createExtension` function of the iotz Arduino extension.
That function returns a set of Docker-specific commands to create a specialized Arduino base container.
Eventually, we end up with a container image named `azureiot/iotz_local_arduino`.
This specialized image (a fork of the base image) has all the tools necessary to compile an Arduino project.

The initial creation of an extension image may take some time, but it happens only once (unless you force an update).

Let's step back for a second and talk more about the `iotz init arduino uno` call we made.
Assume we executed that command in a project folder located at the path `pre_folder/app_folder/`.
On the file system, that path has a unique id / inode number (locally unique).
iotz uses that unique id to create an image specific to that folder only (based on the Arduino base image).
If the unique id for the `pre_folder/app_folder/` path was `8595881942`, the final image name would be `aiot_iotz_8595881942`.

Unless you list the Docker containers manually, none of the details mentioned above will be visible to you.

Later calls to iotz (under the same path) will always resolve to the same container.
For example, `iotz connect` under the same path will simply mount `pre_folder` on the same container named `aiot_iotz_8595881942`,
but it will drop you into a shell under `pre_folder/app_folder` instead.
The mount location is predefined to `../`, while the actual work path and unique id are set based on the current path.

Please note: the path approach described above is still immature and will be made configurable to suit more needs.

During the `init` step, iotz gathered everything it needs to set up the environment.
We now have a configuration file filled in by iotz (iotz.json).
So, next time, we can just call `iotz init` and it will grab the rest from that file.
Finally, iotz created a specialized Docker image that is bound to our project folder (`aiot_iotz_8595881942`).

As a last step, we execute `iotz compile`.
iotz will gather the `compile`-related set of commands from the Arduino extension and execute them on `aiot_iotz_8595881942`.

That's it!
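To recap, a typical session for the walkthrough above looks like this (the folder and board are just the example used in this post, and the image names follow the scheme described earlier):

cd pre_folder/app_folder     # the example project folder
iotz init arduino uno        # builds azureiot/iotz_local_arduino and aiot_iotz_<unique id>, writes iotz.json
iotz compile                 # runs the Arduino extension's compile commands inside aiot_iotz_<unique id>
iotz connect                 # optional: opens a shell inside the same container, mounted at ../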

Details of the extension template are given below.

Reminder: by default, iotz looks for extensions under the official extensions folder and then tries to require them blindly. So, if you have published your iotz extension to npm, the user should install that extension globally to make it available to iotz.

// meaning for some of the arguments below
// runCmd -> args after the command. i.e. -> iotz init mbed mxchip. -> runCmd == 'mbed mxchip'
// command -> the command itself. i.e. -> iotz init mbed mxchip -> command is 'init'
// project_path -> pwd (current path)

const colors = require('colors'); // used by the error path in buildCommands below

exports.detectProject = function(project_path, runCmd, command) {
  // return the configuration JSON, or null, depending on whether the project at
  // `project_path` matches this extension. If multiple extensions return a
  // configuration, the user has to set the correct extension manually.
}

exports.selfCall = function(config, runCmd, command, compile_path) {
  // define the behavior for a named call
  // i.e. your extension name is `abc`
  // user might call `iotz abc`
  // what bash command you want to execute on the container?
  // check what we do with arduino, mbed, and raspberry-pi
}

exports.createExtension = function() {
  // what bash commands do you want to run on the container to prepare the
  // environment for your extension?
  return {
    run: "",        // commands to run in the Dockerfile
    callback: null  // optional callback to run after the container is created
  };
}

exports.buildCommands = function(config, runCmd, command, compile_path) {
  var callback = null;
  var runString = "";

    // define things to do for `init`, `localFolderContainerConstructer`, `clean`,
    // `compile`, and `export` commands
    // set bash stuff into `runString`
    // if you want to run any additional post init logic, set the callback = function(config)
    // `config` corresponds to `iotz.json` contents

  if (command == 'init') {
    // set init stuff here
  } else if (command == 'localFolderContainerConstructer') {
    // set localFolderContainerConstructer things here.
    // difference between `localFolderContainerConstructer` and `init` is.. `init` is a user command
    // `localFolderContainerConstructer` will be called no matter what and will be called
    // prior to `init`
  } else if (command == 'clean') {
    // set what to do for `clean`
  } else if (command == 'compile') {
    // things for `compile`
  } else if (command == 'export') {
    // things for `export`
  } else {
    console.error(" -", colors.red("error :"),
              "Unknown command", command);
    process.exit(1);
  }

  return {
    run: runString,
    callback: callback
  };
}

exports.createProject = function createProject(compile_path, runCmd) {
  // create an empty project based on the information provided by user
}

Monitoring Dynamics 365 Customer Engagement with the Microsoft Office 365 Service Communications API: Preparation


Hello, everyone.

Following on from the previous post, let's try monitoring the Dynamics 365 CE service with the Office 365 Service Communications API.
If you have not read the previous post yet, please take a look at it first.

Monitoring Dynamics 365 Customer Engagement with the Microsoft Office 365 Service Communications API: Overview

Service health in the Office 365 admin center

You can check the health of the services included in your tenant in the Office 365 admin center.

image

There are two menus under Health, which mainly provide the following information:

- Service health (current and historical status)
- Message center (planned maintenance information)

Let's try retrieving this information with the Office 365 Service Communications API.

Preparation

First, decide which client application will call the Office 365 Service Communications API.
You could write your own program, but to verify things quickly we will use PostMan, a tool that supports OAuth 2.0.
Once the application is decided, register it in the Azure AD of the Office 365 tenant you want to query.

The following steps use PostMan as the example.

1. Sign in to the Office 365 admin center.

2. Click [Admin centers] > [Azure Active Directory].

image

3. The Azure Active Directory admin center opens. Click [Azure Active Directory] > [App registrations].

image

4. Click [New application registration].

5. Enter a name, the application type, and the sign-on URL, and click [Create].

Because we are using PostMan, the values are as follows.

image

Sign-on URL : https://www.getpostman.com/oauth2/callback

6. The application page opens. Copy the Application ID value and paste it into Notepad.

image

This is the Client ID that the client will use.

7. Click [Settings].

image

8. Click [Required permissions].

9. Click [Add] > [Select an API], choose “Office 365 Management APIs”, and click [Select].

image

10. Next, select the two permissions and click [Save].

image

11. Go back to the [Settings] menu and click [Keys].

image

12. Enter a description and an expiration, and click [Save].

image

13. The key value is displayed only immediately after saving. Copy the value and keep it in Notepad.

image

Note that the value is hidden once you navigate away from this page.

This is the Client Secret that the client will use.

That completes the preparation.
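For reference, these are the values you would typically plug into PostMan's OAuth 2.0 "Get New Access Token" dialog in the next step. The endpoint URLs below are the standard Azure AD (v1) endpoints and the manage.office.com resource used by the Office 365 Management APIs; they are listed here as assumptions, and the actual calls are covered in the next post.

Grant Type       : Authorization Code
Auth URL         : https://login.microsoftonline.com/{tenant}/oauth2/authorize?resource=https%3A%2F%2Fmanage.office.com
Access Token URL : https://login.microsoftonline.com/{tenant}/oauth2/token
Client ID        : <the Application ID copied in step 6>
Client Secret    : <the key value copied in step 13>
Callback URL     : https://www.getpostman.com/oauth2/callback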

Summary

This time we registered an application in the Office 365 tenant so that we can use the API. Next time, let's actually retrieve the service health information.

- 河野 高也, Premier Field Engineering

* The content of this post (including attachments and links) is current as of the date it was written and is subject to change without notice.

Registration is now open for Building Apps with Dynamics 365 Business Central


The Building Apps with Dynamics 365 Business Central course is designed to help solution architects and developers design and develop extensions for Dynamics 365 Business Central. Participants will get an overview of, and in-depth information about, the technical aspects involved in designing a great app or extension.

REGISTER NOW

Date: September 17-20

Location: Cliftons Sydney Level 13, 60 Margaret Street Sydney

Price: $999 AUD

Target Audience: This training is intended for developers, solution architects, and technical consultants with proven experience with Dynamics 365 Business Central or Microsoft Dynamics NAV. Students must have Dynamics 365 Business Central or Microsoft Dynamics NAV implementation and architecture experience and be familiar with Dynamics 365 Business Central or Microsoft Dynamics NAV development topics.

At Course Completion: After completing this training, the participants will be able to

  • Create your Development Environment
  • Understand the architecture of the application
  • Develop Extensions
  • Extend Tables, Pages and Reports
  • Add New Objects
  • Use Events and its architecture
  • Use the In-Client Designer
  • Implement Architectural Design Patterns
  • Extend XMLPorts and Queries
  • Implement Azure functions
  • Use Microsoft Flow and Microsoft PowerApps
  • Implement Assisted Setup using Page Wizards
  • Create Notifications
  • Use Application Areas
  • Understand PowerShell Functions

Note: This course is not appropriate for those who are already advanced in NAV 2018. The differences between NAV 2018 and Business Central (user interface and cloud offering) would not warrant attendance for those who are advanced in NAV 2018. This course is intended for those who are at an intermediate level in NAV 2018 and wish to become more advanced in Business Central.

Azure Blockchain Workbench, Object Detection, and More on The Friday Five!


Docker and Azure Kubernetes Service for .NET Developers

Daniel Krzyczkowski is a Senior Software Developer and Microsoft MVP. He is passionate about Microsoft technologies and loves learning new things about cloud and mobile development. He enjoys sharing his knowledge about C# programming language, Microsoft Azure, Xamarin and Universal Windows Platform. Follow him on Twitter @DKrzyczkowski.

                                                                                                  

Object detection with Microsoft Custom Vision

Henk Boelman works as a Cloud Solutions Architect in the Netherlands. He started out as a software developer in the late '90s and later moved on to the role of architect. He now guides organizations in their cloud adventure, with a strong focus on cloud native software development. During these years, Henk has built and designed numerous web-based platforms for small and large companies. He loves to share his knowledge on topics such as DevOps, Azure and Cognitive Services by providing training courses and he is a regular speaker at user groups and conferences. In June 2018 he received a Microsoft MVP award in the AI category. Follow him on Twitter @hboelman

Deploy a new application in Azure Blockchain Workbench

Rebaï Hamida is a Tunisian Microsoft MVP now based in Canada. She is a software architect and developer who enjoys building sample source code, writing articles, and blogging. Rebaï always strives to find the right way to explain tips, learnings, and best teaching practices. She has been using Microsoft technologies in her work for international companies since 2009 and is on a mission to make a change in her country by leaving her footprint. Follow her on Twitter @RebaiHamida

 

Demystifying Project Service Resource Utilization: The Sequel

Scott LeFante has been involved in the Dynamics CRM community for over 4 years and the broader CRM community for almost 20 years. A Field Service and Project Service expert, and recently named a Microsoft MVP, Scott has successfully implemented CRM in various industries including Media and Entertainment, Manufacturing, Consumer Goods, Retail, Health Care, and Sports/Entertainment.  Scott also co-hosts the popular CRM Audio podcast, At Your Service, with fellow MVP Shawn Tabor.  You can follow Scott at the At Your Service blog or subscribe to the CRM.Audio podcast. Follow him on Twitter @FldService_Guru

Google Chrome Installation One-Liner

Mick Pletcher is a Cloud and Datacenter Management MVP and senior system administrator at Waller Lansden Dortch & Davis, LLP in Nashville, Tennessee. He is a nationally respected technology expert specializing in System Center Configuration Manager, Microsoft Deployment Toolkit, Active Directory, and PowerShell. Follow him on Twitter @mick_pletcher

RSVP and join us for our Aug. 29 Meetup: Future of Gov Security – Automated ATOs, Revamped TIC & Beyond


The President’s Management Agenda (PMA) is calling on agencies to accelerate their IT modernization efforts with a continued focus on security. At this month’s meetup, we will discuss how agencies can navigate ATOs and TIC compliance, so they can realize the benefits of the cloud and achieve greater agility while strengthening their security posture.

To take a closer look and gain insight on how government and industry are working together to drive innovation, we invite you to RSVP and join us for the Microsoft Azure Government Meetup, “Future of Gov Security – Automated ATOs, Revamped TIC & Beyond,” on Wednesday, Aug. 29 from 6 – 8:15 p.m. at 1776 Crystal City, Virginia.*

Featured speakers include:

  • Susie Adams, CTO, Microsoft Federal
  • Mark Cohn, CTO, Unisys Federal
  • Greg Elin, CEO, GovReady and former Chief Data Officer, Federal Communications Commission (FCC)
  • Nate Johnson, Cloud Security & Compliance Director, Microsoft

 

Be sure to reserve your spot today for this can’t-miss Meetup, along with excellent networking opportunities and refreshments. As always, this event is free and open to the public – please invite your colleagues and connections to join as well!

*IMPORTANT: Due to construction at our usual 1776 DC location, the August Meetup will be in 1776’s Crystal City location.

The Microsoft Azure Government DC User Community, a growing community of nearly 1,900 members, hosts monthly Meetups that include industry and government professionals sharing best practices, lessons learned, and insights on government cloud innovation. Please join us!

Kanban Board Tools Extension Now Available


Introduction

The Kanban Tools extension for VSTS is now available on the marketplace.  This is our first public release of the extension, and it includes the ability to copy board settings to and from a team's Kanban board.

Here is one scenario where you may find this extension to be useful.  One team at an organization has spent a lot of time and trial-and-error to get their Kanban board just so.  Maybe that team is part of a pilot project, or maybe they are a center of excellence for the organization, or maybe they are just a very passionate team amongst many.  Regardless, the board configuration is of value.  Perhaps it provides greater productivity or suits the culture particularly well.  Perhaps it meets certain company or regulatory requirements.  Or, maybe, it just looks like it might be beneficial and worth taking for a spin.

Regardless of the reasons, other teams at that organization may want to use the board configuration themselves.  Or perhaps the organization prefers that one team's board be adopted as a standard across the enterprise.  Either way, the Kanban Board Tools extension lets one team easily copy its board configuration to other teams.  It works in both directions, too: a team can apply its board to another team, or it can copy another team's board onto its own.

Let's get the extension installed so that you can try it out!

 

Installation

To install the extension, first sign in to your VSTS tenant, then select Browse Marketplace:

 

Search for Kanban Board Tools:

  

And install the extension:

 

That's it!  At this point, you should see the new tool icon when you look at a board.  To get there, select 1) Work, 2) Backlogs, 3) Board, and 4) the Kanban Tools extension icon as shown:

  

Using the Kanban Tools

Once you select the extension icon, a dialog will appear that lets you choose your options:

 

You have the option to either copy your team's board settings to another team or to copy another team's board settings to your team.

NOTE: This extension makes a couple of important assumptions to keep in mind.  First, in order for the copy to work, you must have administrator permissions on the target team project (i.e. the one that will have a new board when you are done).  Keep in mind that the target team's board will be completely replaced with the new configuration, so be really sure you want to proceed.  As a precaution, you can make a 'backup' of the target team's board by creating a temporary team project and copying the original settings there first.

If you wish to copy To another team, you then have the option to choose the destination team:

 

Next, you can select the work item levels whose boards you would like to copy:

 

By default, the extension will attempt to map columns between the old and new boards automatically.  The mappings are based on work item states; in other words, columns that share the same underlying state, as configured when customizing the Kanban board:

 

Those states will be matched automatically.  Sometimes, however, there may be discrepancies, and you can set the mappings manually by toggling the Customize Column Mappings switch.  Then pick the board item level you wish to change and set the mappings appropriately in the drop-down:

 

When everything looks good to you, click OK, and the tool will copy the board settings:

 

The copy operation will apply columns, swim lanes, WIP limits, Doing/Done split columns, and state mappings.

If you need to copy settings From another team instead, everything works in exactly the same way as above but with the source and destination boards reversed:

 

Now It's Your Turn

Try out the Kanban Board tools for yourself.  We hope that it simplifies an otherwise tedious task and brings value to your development efforts.  In the true agile sense, we value your candid feedback.  Please take a moment to rate the extension and share your thoughts below.

 
