
How to disable/enable HTTP/2, Azure App Service


The official announcement can be found here “Announcing HTTP/2 support in Azure App Service”.

Currently, to disable or enable HTTP/2 on an Azure App Service you need to use Resource Explorer.  I have written numerous articles about Resource Explorer so if you need more information about it and what it is, take a quick look at these articles.

To perform this action using Postman, check this article here “Make changes to Azure App Service setting using Postman”.

To access Resource Explorer from the Azure Portal, click on the App Service for which you want to change the HTTP/2 value, then click on the Resource explorer link and Go, as seen in Figure 1.

image

Figure 1, how to disable enable HTTP 2, HTTP2, HTTP/2 on an Azure App Service

Then navigate to the App Service you want to modify.  Because I selected the Go link from the portal, Resource Explorer opened directly on the App Service I had in focus, which is convenient.

Click on + CONFIG and scroll down to the bottom and you will see an attribute named “http20Enabled”, as seen in Figure 2.

image

Figure 2, how to disable enable HTTP 2, HTTP2, HTTP/2 on an Azure App Service

*NOTE: although the value is shown at the +CONFIG level, you cannot modify it from there.  You need to expand +CONFIG and then click on web, as seen in Figure 3.

image

Figure 3, how to disable enable HTTP 2, HTTP2, HTTP/2 on an Azure App Service

Also make sure that you are in Read/Write mode, then click the Edit button, shown in Figure 3.

After pressing the Edit button, change the “http20Enabled” to the desired value and then click the PUT button, as seen in Figure 4.

image

Figure 4, how to disable enable HTTP 2, HTTP2, HTTP/2 on an Azure App Service

After pressing the PUT button, HTTP/2 is enabled (or disabled, depending on the value you set).
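If you prefer to script the change rather than click through Resource Explorer, the same http20Enabled property can be set through the Azure Resource Manager REST API that Resource Explorer and Postman drive. Below is a minimal Python sketch, assuming you already have a bearer token for https://management.azure.com and that the config/web endpoint with api-version 2016-08-01 is appropriate for your app; all names are placeholders.

import requests

subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
site_name = "<app-service-name>"
token = "<bearer-token>"  # an Azure AD access token for the ARM endpoint

url = ("https://management.azure.com/subscriptions/" + subscription_id +
       "/resourceGroups/" + resource_group +
       "/providers/Microsoft.Web/sites/" + site_name +
       "/config/web?api-version=2016-08-01")

# PATCH only the http20Enabled property of the site's web configuration.
response = requests.patch(
    url,
    headers={"Authorization": "Bearer " + token},
    json={"properties": {"http20Enabled": True}},
)
response.raise_for_status()
print(response.json()["properties"]["http20Enabled"])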


Getting SQLSTATE[28000] [1045] Access denied for user ‘user’@’IPaddress’ (using password: YES) when trying to restore mySQL dump to Azure MySQL DB PaaS


I was debugging a Magento deployment on Microsoft Azure for one of my customers. They were trying to restore a MySQL dump to Azure MySQL DB PaaS and kept getting the following error:

Getting SQLSTATE[28000] [1045] Access denied for user ‘user’@'IPaddress' (using password: YES)

I was able to restore the DB dump from my machine to the same database with the same user and password, so I suspected something was wrong with the dump itself. I asked them to share the dump and found two issues: the dump contained the "DEFINER" attribute, which requires the SUPER privilege that is not allowed on Azure MySQL PaaS, and Azure MySQL PaaS is built on the InnoDB engine, so you need to make sure the script uses that engine.

Actions to solve the issues:

Use the following command to remove the DEFINER clauses, as they require the SUPER privilege:

sed 's/\sDEFINER=`[^`]*`@`[^`]*`//g' -i backupfile_name.sql

Then use the following command (a vim substitution) to replace the MyISAM engine with InnoDB, which is the engine Azure MySQL PaaS is built on:

:%s/MyISAM/InnoDB/gc
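If you would rather make both fixes in one pass from a script, the following Python sketch mirrors the two commands above; backupfile_name.sql is a placeholder for your dump file.

import re

with open("backupfile_name.sql", "r", encoding="utf-8") as f:
    dump = f.read()

# Remove DEFINER=`user`@`host` clauses, which need the SUPER privilege that Azure MySQL PaaS does not grant.
dump = re.sub(r"\sDEFINER=`[^`]*`@`[^`]*`", "", dump)

# Azure MySQL PaaS is built on InnoDB, so switch any MyISAM table definitions over.
dump = dump.replace("ENGINE=MyISAM", "ENGINE=InnoDB")

with open("backupfile_name.sql", "w", encoding="utf-8") as f:
    f.write(dump)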

Azure SQL Database Read Scale-Out


Microsoft Japan Data Platform Tech Sales Team

大林裕明

Read Scale-Out functionality has been added to Azure SQL Database. (As of April 17 it is in preview.)

■ Use read-only replicas to load balance read-only query workloads (preview)

https://docs.microsoft.com/en-us/azure/sql-database/sql-database-read-scale-out

This feature is available in the Premium tier when using the DTU purchasing model, and in the Business Critical tier under the newly added vCore model.

In these service tiers, AlwaysOn replicas are automatically provisioned to guarantee the availability SLA.

These replicas are provisioned at the same performance level as the database you normally use.

The Read Scale-Out feature lets you use such a replica for read-only processing, isolating that load from the read/write database.

image

For example, insert/update workloads such as sales-order entry can connect to the regular database, while reporting and analytics queries connect to the read-only replica. That way, even heavy queries do not affect sales-order entry.

There may be a slight delay between the replicas, but the data is always in a transactionally consistent state.

Even better, this can be used at no additional cost.

Let's try it out right away.

Read Scale-Out is enabled using PowerShell or the REST API.

In this post we will use PowerShell.

* Please use a version of Azure PowerShell released in December 2016 or later.

Log in with your Azure account and issue the following command:

Set-AzureRmSqlDatabase -ResourceGroupName <resourcegroup> -ServerName <server> -DatabaseName <database> -ReadScale Enabled

<resourcegroup>: the resource group of the target Azure SQL Database

<server>: the server name of the target Azure SQL Database, specified without the domain (database.windows.net)

<database>: the database name of the target Azure SQL Database

-ReadScale: Enabled turns the feature on, Disabled turns it off

■ Example run

image

Read Scale-Out is now available.

Now let's actually connect to the database and use it.

To connect to the read-only database, add ApplicationIntent=ReadOnly to the connection string.

Connections go to the read/write database when you specify ApplicationIntent=ReadWrite in the connection string, or when you omit ApplicationIntent altogether.

■ Connection string for the read-only database

Server=tcp:<server>.database.windows.net;Database=<mydatabase>;ApplicationIntent=ReadOnly;User ID=<myLogin>;Password=<myPassword>;Trusted_Connection=False; Encrypt=True;

■ Connection strings for the read/write database

Server=tcp:<server>.database.windows.net;Database=<mydatabase>;ApplicationIntent=ReadWrite;User ID=<myLogin>;Password=<myPassword>;Trusted_Connection=False; Encrypt=True;

Server=tcp:<server>.database.windows.net;Database=<mydatabase>;User ID=<myLogin>;Password=<myPassword>;Trusted_Connection=False; Encrypt=True;
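The same ApplicationIntent keyword also works from application code. Here is a minimal Python sketch using pyodbc and the ODBC Driver for SQL Server; server, database, and credentials are placeholders, and the Updateability check is simply one way to confirm which database the connection landed on.

import pyodbc

conn_str = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:<server>.database.windows.net,1433;"
    "Database=<mydatabase>;"
    "Uid=<myLogin>;Pwd=<myPassword>;"
    "Encrypt=yes;"
    "ApplicationIntent=ReadOnly;"   # omit or use ReadWrite to reach the read/write database
)

with pyodbc.connect(conn_str) as conn:
    row = conn.cursor().execute(
        "SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability')").fetchone()
    print(row[0])  # READ_ONLY when connected to the replica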

How to connect to the read-only database from SQL Server Management Studio

Click Options in the Connect to Server dialog.

image

Click the "Additional Connection Parameters" tab in the Options screen and add "ApplicationIntent=ReadOnly" as an additional connection string parameter.

image

Connecting with this setting takes you to the read-only database.

To check that it really is read-only, let's try issuing an INSERT statement.

image

Because this is the read-only database, an error is returned.

I then tried running a heavy query over this read-only connection.

image

The DTU usage monitored in the Azure portal metrics stays at 0%, which shows that the read/write database is not affected.

* These metrics show only the resources of the read/write database.

image

Running the same query over a read/write connection pushes DTU to 100%.

image

* Currently, metrics for the read-only replica cannot be viewed.

If you have been choosing Standard because Premium seemed expensive, the ability to offload read processing at no additional cost may make Premium worth a fresh look.

Web Server Logging, IIS logs, deployment slots and swaps


I have written a number of articles related to this topic; have a look at them to get better context for this one.

The following is a scenario I worked on, which was a little confusing, but understandable, nonetheless.

The misunderstanding was about what happens when a production slot is swapped with a staging slot: the worker process is not restarted, and therefore the production slot, after a swap, is running in a different W3WP process.  When the swap happens, the processes do not change; only the routing of requests changes.  Because of that, you might see different information in your IIS / Web Server logs after a swap.  Let me try to explain why.

Production – Pre swap

This is how the configuration looked before I swapped my slots.  As you can see in Figure 1, the Azure Blob Storage account is iislogs and the container is blackforest-iislogs.  I also made this a sticky slot setting, which means the setting will not move when I swap.

image

Figure 1, swapping slots and IIS Web Server logs Azure App Service

I look in KUDU/SCM and see the process user_name is BLACKFOREST, see Figure 2.

image

Figure 2, swapping slots and IIS Web Server logs Azure App Service, KUDU/SCM

I make some requests to the PRODUCTION slot and see that a folder named BLACKFOREST is created, Figure 3, which contains my IIS / Web Server logs.

image

Figure 3, swapping slots and IIS Web Server logs Azure App Service

Then, looking into the logs I see that the s-sitename is also BLACKFOREST.  *NOTE – the s-sitename with the tilde “~” is a request to the KUDU/SCM site.

#Fields: date time s-sitename
2018-04-18 10:45:29 BLACKFOREST GET
2018-04-18 10:42:36 ~1BLACKFOREST

Now let’s look at the staging slot.

Staging – pre swap…

This is how the configuration looked before I swapped my slots.  As you can see in Figure 4, the Azure Blob Storage account is iislogs and the container is blackforest-iislogs-staging.  I also made this a sticky slot setting, which means the setting will not move when I swap.

image

Figure 4, swapping slots and IIS Web Server logs Azure App Service

I look in KUDU/SCM for the staging slot and see the process user_name is BLACKFOREST_2DC8, see Figure 5.

image

Figure 5, swapping slots and IIS Web Server logs Azure App Service, KUDU/SCM

I make some requests to the STAGING slot and see that a folder named BLACKFOREST_2DC8 is created, Figure 6, which contains my IIS / Web Server logs.

image

Figure 6, swapping slots and IIS Web Server logs Azure App Service

Then, looking into the logs I see that the s-sitename is also BLACKFOREST_2DC8.

#Fields: date time s-sitename
2018-04-18 10:49:53 BLACKFOREST__2DC8
2018-04-18 10:50:15 ~1BLACKFOREST__2DC8

Now let’s do a swap.

After the slot swap

The Application Setting is sticky to the slot and therefore when I swap I would expect the Application Setting “WEBSITE_HTTPLOGGING_CONTAINER_URL” not to change.  Let’s confirm.

image

It did not change, it remained the same as is seen in Figure 1 and Figure 4.

However, my production slot is now running with the user_name value of the previous staging slot.  I saw this by logging into the PRODUCTION KUDU/SCM site, where Process Explorer shows the following, Figure 7.

Production after swap with staging

image

Figure 7, swapping slots and IIS Web Server logs Azure App Service, KUDU/SCM

Additionally, when I look at the storage container, I see a new folder which matches the user_name, Figure 8.

image

Figure 8, swapping slots and IIS Web Server logs Azure App Service

This means that Web Server / IIS logs are being written into that folder and that the s-sitename in the log file contains the same name too, i.e. BLACKFOREST_2DC8.

Staging slot after the swap with production

You see the same when accessing the staging slot, it is now staging, but has a user_name of BLACKFOREST, Figure 9.

image

Figure 9, swapping slots and IIS Web Server logs Azure App Service, KUDU/SCM

And, as expected, a directory named BLACKFOREST now exists in the storage container, Figure 10.

image

Figure 10, swapping slots and IIS Web Server logs Azure App Service

If you are doing analysis of your IIS / Web Application logs, then you need to keep this in mind if you use s-sitename as a filter.  You might instead consider using cs-host.
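For example, here is a small Python sketch that tallies requests by cs-host instead of s-sitename when post-processing the downloaded W3C log files; the file name is a placeholder and it assumes cs-host is included in your configured log fields.

from collections import Counter

def count_by_host(log_path):
    counts, fields = Counter(), []
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            if line.startswith("#Fields:"):
                fields = line.split()[1:]          # e.g. date, time, s-sitename, cs-host, ...
            elif line.strip() and not line.startswith("#") and fields:
                values = dict(zip(fields, line.split()))
                counts[values.get("cs-host", "-")] += 1
    return counts

print(count_by_host("u_ex180418.log"))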

The MIPS R4000, part 13: Function prologues and epilogues



We saw last time how functions are called. Today we'll look at the receiving end of a function call.



As noted earlier, all functions (except for lightweight leaf functions) must declare unwind codes in the module metadata so that the kernel can figure out what to do if an exception occurs.



The stack for a typical function looks like this:

param 6 (if function accepts more than 4 parameters)
param 5 (if function accepts more than 4 parameters)
param 4 home space
param 3 home space
param 2 home space
param 1 home space ← stack pointer at function entry
local variables
outbound parameters beyond 4 (if any)
param 4 home space
param 3 home space
param 2 home space
param 1 home space ← stack pointer after prologue complete


On entry to the function, the first four parameters are in registers, but they have reserved home space on the stack. Even if a function has fewer than four parameters, there is home space for all four registers. If there are more than four parameters, then those beyond the fourth are on the stack.

The function prologue needs to move the stack pointer down to make room for the local stack frame. The local variables include the return address and any saved registers. After the local variables come the outbound parameters (either directly on the stack for parameters beyond 4, or home space for the four register-based parameters). Again, even if a function accepts fewer than four parameters, it gets a full four words of home space.¹



The 1992 compiler organized the local variables with the declared function local variables at higher addresses, followed by saved registers, and the return address closest to the outbound parameters. By 1995, the compiler started exploring other ways of organizing its local variables.



A typical function prologue looks like this:



ADDIU sp, sp, -n1 ; carve out a stack frame
SW ra, n2(sp) ; save return address
SW s1, n3(sp) ; save nonvolatile register
SW s0, n4(sp) ; save nonvolatile register


The prologue must start by updating the stack pointer, and then it can store its registers in any order. You are allowed to interleave instructions from the function body proper into the prologue, provided they are purely computational instructions (no branches or memory access), and provided they do not mutate sp, ra, or any nonvolatile registers.² In practice, the Microsoft compiler does not take advantage of this.



To return from a function, the function places the return value, if any, in the v0 register and possibly the v1 register. It then executes the formal function epilogue:



MOVE v0, return_value
LW s0, n4(sp) ; restore nonvolatile register
LW s1, n3(sp) ; restore nonvolatile register
LW ra, n2(sp) ; restore return address
JR ra ; return to caller
ADDIU sp, sp, n1 ; restore stack pointer (in branch delay slot)


Notice that the adjustment of the stack pointer happens as the very last thing, even after the return instruction! That's because it sits in the branch delay slot, so it executes even though the branch is taken.



¹ If a function uses alloca, then the memory is carved out between the existing local variables and the outbound parameters.



² This rule exists so that when the exception unwinder needs to reverse-execute a function prologue, it can just ignore the instructions it doesn't understand.

Microsoft brings AI-powered translation to end users and developers, whether you’re online or offline


Microsoft Translator has added new capabilities that allow users and developers to get artificial intelligence-powered translations whether or not they have access to the Internet.

The new capabilities allow both end-users and third-party app developers to have the benefit of neural translation technology regardless of whether the device is connected to the cloud or offline. ​

When using the Microsoft Translator app, end users can now download free AI-powered offline packs. In addition, through the new Translator app local feature preview, Android developers will be able to quickly and easily integrate online and offline AI text translations into their apps.

 

New AI-powered offline language packs for the Translator apps for Android, iOS, and Amazon Fire

The development comes after two years of work, and it complements Microsoft’s overall effort to make sure developers and users can access AI-powered tools where their data is, whether that’s in the cloud or on a device. That ability, which experts refer to as edge computing, comes as experts are figuring out ways to run powerful AI algorithms without the massive computing power of the cloud.

Microsoft Translator released AI-powered online neural machine translation (NMT) in 2016. Because of the computing power needed to run these high-quality translation models, this capability was only available online. In the latter part of 2017, this capability was made available on specific Android phones equipped with a dedicated AI chip. It allowed their users to get offline translation quality that was on par with the quality of online neural translation.

Building on this initial work, the Translator team was able to further optimize these algorithms, allowing them to run directly on any modern device’s CPU without the need for a dedicated AI chip. The new Translator apps now bring NMT to the edge of the cloud for all Android, iOS*, and Amazon Fire devices. Support for Windows devices is coming soon.

These new NMT packs produce higher quality translations, which are up to 23 percent better, and about 50 percent smaller than the previous non-neural offline language packs. These NMT packs are available in Translator’s most popular languages and new NMT languages will be added regularly. For the complete up to date list please check out https://translator.microsoft.com/help/articles/languages.

 

New Translator local feature preview for Android

For Android developers, the Translator app also now offers a preview of the new local feature, which enables developers to quickly and easily add text translation to any Android app that would benefit from translation capabilities.

In addition, thanks to these new NMT offline packs, Android developers can for the first time add offline NMT to their apps, allowing their users to get access to NMT translated content without an Internet connection.

To integrate translation in their app, developers will just need to add some simple code that will use Android’s bound service technology with an AIDL interface to silently call the Translator app. The Translator app will do the rest. If the device is connected to the Internet, the Translator app will retrieve the translation from the Microsoft Translator service on Azure. If Internet connectivity isn’t available, the Microsoft Translator app will use the local NMT offline language packs to deliver this translation back to their app.

The feature is expected to graduate from preview to general availability within 90 days of the preview release.

When the device is online, translations can also leverage customized translation models that match the app and company’s unique terminology.

Whether the app gets its translations online or offline, the local feature bills the developer’s existing Microsoft Cognitive Services Translator Text API subscription. There is no need to create a new one, and, just as when the cloud API is called directly, requests are not logged for either online or offline translations.

Learn more about how the local feature preview works in our GitHub documentation and sample app.

 

 

* The Microsoft Translator app for iOS is currently in the review process in the App Store and should be available by the end of the week (April 21, 2018). The newest update, which includes support for the new AI-powered offline language packs, will be version 3.2.0

 

Learn more

Release Management performance degradation in West Europe – 04/18 – Mitigated


Final Update: Wednesday, April 18th 2018 13:52 UTC

We’ve confirmed that all systems are back to normal as of 12:30 UTC. While the issue has self-healed, during the incident we were able to collect key diagnostic information from our web front-ends that the team is actively reviewing in order to understand the root cause of the incident. Sorry for any inconvenience this may have caused.

Sincerely,
Ladislau


Initial Update: Wednesday, April 18th 2018 12:40 UTC

We're investigating performance degradation of release management in West Europe.

  • Next Update: Before Wednesday, April 18th 2018 13:15 UTC

Sincerely,
Ladislau

Announcing the launch of Azure M-series VMs with up to 4TB RAM in USGov Virginia region


We are excited to announce the launch of Azure M-series Virtual Machines in the USGov Virginia region. Azure M-series VMs offer memory up to 4 TB on a single VM with configurations of 64 or 128 hyper-threaded vCPUs, powered by Intel® Xeon® 2.5 GHz E7-8890 v3 processors. We’re also excited to announce that Azure is the first hyperscale cloud provider to offer VMs with up to 4 TB of memory optimized for large in-memory database workloads in a sovereign cloud.

The Azure M-series is perfectly suited for your large in-memory workloads like SAP HANA and SQL Hekaton. With the M-series, these databases can load large datasets into memory and use fast memory access together with huge amounts of vCPU parallel processing to speed up queries and enable real-time analytics. You can deploy these large workloads in minutes and on demand, scaling elastically as your usage demands. With availability SLAs of 99.95% for an Availability Set and 99.9% for a single node, you can provide application-level SLA guarantees to your customers. Like all Azure VMs, you will be billed per second (rounded down to the nearest minute), and you can even set up automation on the platform to shut down and scale these VMs automatically, saving even more cost.

Learn more about M-Series.

Size     vCPUs   Memory (GiB)   Local SSD (GiB)   Max data disks
M64s     64      1024           2048              32
M64ms    64      1792           2048              32
M128s    128     2048           4096              64
M128ms   128     3800           4096              64

 

For more information, please visit the Virtual Machines page and the Virtual Machines pricing page.

To request access to the M-series, submit a quota request; after your quota has been approved, you can use the Azure portal or APIs to deploy.

You can learn more about running SAP on Azure here: https://azure.com/sap/.

 

We welcome your comments and suggestions to help us improve your Azure Government experience. To stay up to date on all things Azure Government, be sure to subscribe to our RSS feed and to receive emails by clicking “Subscribe by Email!” on the Azure Government Blog.

 


Python in Visual Studio 15.7 Preview 4


Today we have released the first preview of our next update to Visual Studio 2017. You will see a notification in Visual Studio within the next few days, or you can download the new installer from visualstudio.com.

In this post, we're going to look at some of the new features we have added for Python developers: faster debugging, Conda environments, type hints and MyPy support. As always, the preview is a way for us to get features into your hands early, so you can provide feedback and we can identify issues with a smaller audience. If you encounter any trouble, please use the Report a Problem tool to let us know.

Faster Debugging

This release includes a new version of our ptvsd debug engine based on PyDevD, which we expect to be much faster than the previous version.

Most of the features you used in the previous version of the debugger are still available, but the following features are not yet supported:

  • Set Next Statement
  • Just My Code

If you want to use these features, you can revert back to the previous debugger version by unchecking “Use experimental debugger” in Tools > Options > Python > Experimental.

In the following cases debugging still works, but we fall back to the previous version of the debugger:

  • IronPython debugging
  • Attach to process
  • Debug unit tests
  • Mixed-mode debugging

For remote debugging you will need to start ptvsd on the remote server using the command:

py -3 -m ptvsd --server --port <port_num> --file main.py

We have also made a preview of the new ptvsd available in the Python extension for Visual Studio Code, which continues to keep our debugging capabilities consistent across Visual Studio and Visual Studio Code.

IntelliSense for Type Hints

As type hints in Python continue to gain popularity, we want to make sure you have easy access to the best tools to take advantage of them. These will help you improve your code quality and help make refactoring safer.

In this release we have added support for type hints in our IntelliSense. When you add type hints to parameters or variables they’ll be shown in hover tooltips.

For example, in the example below a Vector type is declared as a list of floats, and the scale() method is decorated with types to indicate the parameters and return types. Hovering over the scale() method when calling it shows the expected parameters and return type:
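The screenshot is not reproduced here, but the code being described is likely along these lines (a sketch, not the exact sample from the post):

from typing import List

Vector = List[float]

def scale(scalar: float, vector: Vector) -> Vector:
    """Return the vector with every element multiplied by scalar."""
    return [scalar * num for num in vector]

new_vector = scale(2.0, [1.0, -4.2, 5.4])  # hovering over scale() shows the annotated signature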

Using type hints you can also declare class attributes and their types, which is handy if those attributes are dynamically added later. Below we declare that the Employee class has a name and an id, and that information is then available in IntelliSense when using variables of type Employee:
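Again as a sketch of what the described example probably looks like:

class Employee:
    name: str        # declared attribute; may be assigned dynamically later
    id: int = 3      # declared attribute with a default value

e = Employee()
e.name = "Guido"     # IntelliSense now knows e.name is a str and e.id is an int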

In the next section, we’ll see how you can run MyPy to use these type hints to validate your code and detect errors.

Using MyPy with Type Hints

To fully validate your code against type hints, we recommend using MyPy. MyPy is the industry standard tool for validating type hints throughout your entire project. As a separate tool, you can easily configure it to run in your build system as well as the development environment, but it is also useful to have it be easily accessible while developing.

To run MyPy against your project, right-click on the project in Solution Explorer and find it under the Python menu.

This will install it if necessary and run it against every file included in your project. Warnings will be displayed in the Error List window and selecting an item will take you directly to the location in your sources.

By default, you may see many more warnings than you are prepared to fix straight away. To configure the files and warnings you see, use the Add New Item command and select the MyPy Configuration file. This file is automatically detected by MyPy and contains various filters and settings. See the MyPy documentation for full information about configuration options.

Conda Environments

You can now create and use Conda environments as well as manage packages for your Conda environments using pip or Conda.

To manage or use Conda environments from Visual Studio, you'll need Anaconda or Miniconda. You can install Anaconda directly from the Visual Studio installer or get it separately if you'd rather manage the installation yourself.

Note: There are known issues when using older versions of the conda package (4.4.8 or later is recommended). The latest distributions of Anaconda/Miniconda have the necessary version of the conda package.

You can create a new Conda environment using the Python Environments window, using a base Python from version 2.6 to 3.6.

Any environments created using Visual Studio or the Conda tool will be detected and listed in the Python Environments window automatically. You can open interactive windows for these environments, assign them in projects or make them your default environment. You can also delete them using Visual Studio.

To manage packages, you have the option of using either Conda or Pip package manager from the Python Environments window.

The user interface for both package managers is the same. It displays the list of installed packages and lets you update or uninstall them.

You can also search for available Conda packages.

Solution Explorer also displays packages for each environment referenced in your project. It lists either Conda or pip packages, picking the most appropriate for the type of environment.

Give Feedback

Be sure to download the latest preview of Visual Studio and try out the above improvements. If you encounter any issues, please use the Report a Problem tool to let us know (this can be found under Help, Send Feedback) or continue to use our GitHub page. Follow our Python blog to make sure you hear about our updates first, and thank you for using Visual Studio!

 

What Happened to Bower?


Bower is a popular package management system for managing static content used by client-side web applications. Visual Studio provides rich support for Bower, including templates and package management tools.

In October 2017, there were announcements on Twitter hinting that the Bower platform was being deprecated. While Bower hasn’t gone away, the official website is encouraging people to use different frameworks, even going so far as to provide detailed instructions on “How to migrate away from Bower” and “How to drop Bower support”.

In their own words:

Message on Bower website: 'While Bower is maintained, we recommend using Yarn and Webpack for front-end projects'

Though it doesn’t say it explicitly, it implies that Bower is deprecated. Existing projects that depend on package management via Bower will continue to work for the time being; but it’s recommended that new projects should not take a dependency on Bower.

Introducing Library Manager

While there are other useful package managers, as Bower points out (e.g. npm), most are designed to handle a variety of tasks, which adds unnecessary complexity when you only need them for a single task (acquiring client-side libraries). So, here at Visual Studio, we decided to create a new tool that would be as simple as possible for specifically addressing the need to acquire client-side content for web applications. Hence, the introduction of “Library Manager”.

Library Manager ("LibMan" for short) is Visual Studio’s new client-side static content management system. Designed as a replacement for Bower and npm, LibMan helps users find and fetch library files from an external source (like CDNJS) or from any file system library catalog.

You can specify the library files required for your project by adding entries to the LibMan configuration file - libman.json. See the image below; it shows an example libman.json file in which some jQuery files are added to the wwwroot/lib directory.

Example libman.json
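The image itself is not reproduced here, but a libman.json of the kind described would look roughly like this; the provider, version, and destination values are illustrative:

{
  "version": "1.0",
  "defaultProvider": "cdnjs",
  "libraries": [
    {
      "library": "jquery@3.3.1",
      "destination": "wwwroot/lib/jquery"
    }
  ]
}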

To learn more about LibMan, see the article "Library Manager: Client-side content management for web apps".

Publish Improvements in Visual Studio 2017 15.7


Today we released Visual Studio 2017 15.7 Preview 4. Our 15.7 update brings some exciting improvements for publishing applications from Visual Studio that we want to tell you about, including:

  • Ability to configure publish settings before you publish or create a publish profile
  • Create Azure Storage Accounts and automatically store the connection string for App Service
  • Automatic enablement of Managed Service Identity in App Service

If you haven’t installed a Visual Studio Preview yet, it’s worth noting that they can be installed side by side with your existing stable installations of Visual Studio 2017, so you can try the previews out, and then go back to the stable channel for your regular work. We’d be very appreciative if you’d try Visual Studio 2017 15.7 Preview 4 and give us any feedback you might have while we still have time to change or fix things before we ship the final version (download now). As always, if you run into any issues, please report them to us using Visual Studio’s built in “Report a Problem” feature.

Configure settings before publishing

When publishing your ASP.NET Core applications to either a folder or Azure App Service you can configure the following settings prior to creating your publish profile:

To configure this prior to creating your profile, click the “Advanced…” link on the publish target page to open the Advanced Settings dialog.

clip_image002

Create Azure Storage Accounts and automatically store the connection string in App Settings

When creating a new Azure App Service, we've always offered the ability to create a new SQL Azure database and automatically store its connection string in your app’s App Service Settings. With 15.7, we now offer the ability to create a new Azure Storage Account while you are creating your App Service, and automatically place the connection string in the App Service settings as well. To create a new storage account:

  • Click the “Create a storage account” link in the top right of the “Create App Service” dialog
  • Provide the connection string key name your app uses to access the storage account in the “(Optional) Connection String Name” field at the bottom of the Storage Account dialog
  • Your application will now be able to talk to the storage account once your application is published

clip_image006

Managed Service Identity enabled for new App Services

A common challenge when building cloud applications is how to manage the credentials that need to be in your code for authenticating to other services. Ideally, credentials never appear on developer workstations or get checked into source control. Azure Key Vault provides a way to securely store credentials and other keys and secrets, but your code needs to authenticate to Key Vault to retrieve them. Managed Service Identity (MSI) makes solving this problem simpler by giving Azure services an automatically managed identity in Azure Active Directory (Azure AD). You can use this identity to authenticate to any service that supports Azure AD authentication, including Key Vault, without having any credentials in your code.

Starting in Visual Studio 2017 15.7 Preview 4, when you publish an application to Azure App Service (not Linux) Visual Studio automatically enables MSI for your application. You can then give your app permission to communicate with any service that supports MSI authentication by logging into that service's page in the Azure Portal and granting access to your App Service. For example, to create a Key Vault and give your App Service access:

  1. In the Azure Portal, select Create a resource > Security + Identity > Key Vault.
  2. Provide a Name for the new Key Vault.
  3. Locate the Key Vault in the same subscription and resource group as the App Service you created from Visual Studio.
  4. Select Access policies and click Add new.
  5. In Configure from template, select Secret Management.
  6. Choose Select Principal, and in the search field enter the name of the App Service.
  7. Select the App Service’s name in the result list and click Select.
  8. Click OK to finish adding the new access policy, and OK to finish access policy selection.
  9. Click Create to finish creating the Key Vault.

clip_image008

Once you publish your application, it will have access to the Key Vault without the need for you to take any additional steps.
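To make the "no credentials in code" flow concrete, here is a hedged Python sketch of what an App Service app with MSI enabled can do at runtime: it asks the local MSI endpoint that App Service injects (the MSI_ENDPOINT and MSI_SECRET environment variables, 2017-09-01 protocol) for a Key Vault token and then reads a secret over the Key Vault REST API. The vault and secret names are placeholders; a .NET app would more typically use the AppAuthentication library for the same purpose.

import os
import requests

# App Service injects MSI_ENDPOINT and MSI_SECRET when Managed Service Identity is enabled.
token_response = requests.get(
    os.environ["MSI_ENDPOINT"],
    params={"resource": "https://vault.azure.net", "api-version": "2017-09-01"},
    headers={"Secret": os.environ["MSI_SECRET"]},
)
token_response.raise_for_status()
access_token = token_response.json()["access_token"]

# Read a secret from the vault that the identity was granted access to.
secret_response = requests.get(
    "https://<your-key-vault>.vault.azure.net/secrets/<secret-name>?api-version=2016-10-01",
    headers={"Authorization": "Bearer " + access_token},
)
secret_response.raise_for_status()
print(secret_response.json()["value"])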

Conclusion

If you’re interested in the many other great things that Visual Studio 2017 15.7 brings for .NET development, check out our .NET tool updates in Visual Studio 15.7 post on the .NET blog.

We hope that you’ll give 15.7 a try and let us know how it works for you. If you run into any issues, or have any feedback, please report them to us using Visual Studio’s features for sending feedback, or let us know what you think below or via Twitter.

Announcing Visual Studio 2017 15.7 Preview 4


As you know we continue to incrementally improve Visual Studio 2017 (version 15), and our 7th significant update is currently well under way with the 4th preview shipping today. As we’re winding down the preview, we’d like to stop and take the time to tell you about all of the great things that are coming in 15.7 and ask you to try it and give us any feedback you might have while we still have time to correct things before we ship the final version.

From a .NET tools perspective, 15.7 brings a lot of great enhancements including:

  • Support for .NET Core 2.1 projects
  • Improvements to Unit Testing
  • Improvements to .NET productivity tools
  • C# 7.3
  • Updates to F# tools
  • Azure Key Vault support in Connected Services
  • Library Manager for working with client-side libraries in web projects
  • More capabilities when publishing projects

In this post we’ll take a brief tour of all these features and talk about how you can try them out (download 15.7 Preview). As always, if you run into any issues, please report them to us using Visual Studio’s built in “Report a Problem” feature.

.NET Core 2.1 Support

.NET Core 2.1 and ASP.NET Core 2.1 bring a list of great new features including performance improvements, global tools, a Windows compatibility pack, minor version roll-forward, and security improvements, to name a few. For full details see the .NET Core 2.1 Roadmap and the ASP.NET Core 2.1 Roadmap respectively.

Visual Studio 15.7 is the recommended version of Visual Studio for working with .NET Core 2.1 projects. To get started building .NET Core 2.1 projects in Visual Studio:

You’ll now see ASP.NET Core 2.1 as an option in the One ASP.NET dialog

clip_image002[4]

If you are working with a Console Application or Class Library, you’ll need to create the project and then open the project’s property page and change the Target framework to “.NET Core 2.1”

clip_image004[4]

Unit Testing Improvements

  • The Test Explorer has undergone more performance improvements, which results in smoother scrolling and faster updating of the test list for large solutions.
  • We’ve also improved the ability to understand what is happening during test runs. When a test run is in progress, a progress ring appears next to tests that are currently executing, and a clock icon appears for tests that are pending execution.

clip_image006[4]

Productivity Improvements

Each release we’ve been working to add more and more refactorings and code fixes to make you productive. In 15.7 Preview 4, invoke Quick Actions and Refactorings (Ctrl+. or Alt+Enter) to use:

  • Convert for-loop-to-foreach (and vice versa)
  • Make private field readonly
  • Toggle between var and the explicit type (without code style enforcement)

clip_image008[4]

To learn more about productivity features see our Visual Studio 2017 Productivity Guide for .NET Developers.

C# 7.3

15.7 also brings the newest incremental update to C#, 7.3. C# 7.3 features are:

To use C# 7.3 features in your project:

  • Open your project's property page (Project -> [Project Name] Properties...)
  • Choose the "Build" tab
  • Click the "Advanced..." button on the bottom right
  • Change the "Language version" dropdown to "C# latest minor version (latest)".  This setting will enable your project to use the latest C# features available to the version of Visual Studio you are in without needing to change it again in the future.  If you prefer, you can pick a specific version from the list.

F# improvements

15.7 also includes several improvements to F# and F# tooling in Visual Studio.

  • Type Providers are now enabled for .NET Core 2.1. To try it out, we recommend using FSharp.Data version 3.0.0-beta, which has been updated to use the new Type Provider infrastructure.
  • .NET SDK projects can now generate an F# AssemblyInfo file from project properties.
  • Various smaller bugs in file ordering for .NET SDK projects have been fixed, including initial ordering when pasting a file into a folder.
  • Toggles for outlining and Structured Guidelines are now available in the Text Editor > F# > Advanced options page.
  • Improvements in editor responsiveness have been made, including ensuring that error diagnostics always appear before other diagnostic information (e.g., unused value analysis)
  • Efforts to reduce memory usage of the F# tools have been made in partnership with the open source community, with many of the improvements available in this release.

Finally, templates for ASP.NET Core projects in F# are coming soon, targeted for the RTW release of VS 2017 15.7.

Azure Key Vault support in Connected Services

We have simplified the process to manage your project’s secrets with the ability to create and add a Key Vault to your project as a connected service. The Azure Key Vault provides a secure location to safeguard keys and other secrets used by applications so that they do not get shared unintentionally. Adding a Key Vault through Connected Services will:

  • Provide Key Vault support for ASP.NET and ASP.NET Core applications
  • Automatically add configuration to access your Key Vault through your project
  • Add the required NuGet packages to your project
  • Allow you to access, add, edit, and remove your secrets and permissions through the Azure portal

To get started:

  • Double click on the “Connected Services” node in Solution Explorer in your ASP.NET or ASP.NET Core application.
  • Click on “Secure Secrets with Azure Key Vault”.
  • When the Key Vault tab opens, select the Subscription that you would like your Key Vault to be associated with and click the “Add” button on the bottom left. By default Visual Studio will create a Key Vault with a unique name.
    Tip: If you would like to use an existing Key Vault, or change the location, resource group, or pricing tier from the preselected values, you can click the ‘Edit’ link next to Key Vault
  • Once the Key Vault has been added, you will be able to manage secrets and permissions with the links on the right.

clip_image010[4]

Library Manager

Library Manager ("LibMan" for short) is Microsoft's new client-side static content management system for web projects. Designed as a replacement for Bower and npm, LibMan helps users find and fetch library files from an external source (like CDNJS) or from any file system library catalogue.

To get started, right-click a web project from Solution Explorer and choose "Manage Client-side Libraries...". This creates and opens the LibMan configuration file (libman.json) with some default content. Update the "libraries" section to add library files to your project. This example adds some jQuery files to the wwwroot/lib directory.

clip_image012[4]

For more details, see Library Manager: Client-side content management for web apps.

Azure Publishing Improvements

We also made several improvements for publishing applications from Visual Studio, including:

For more details, see our Publish improvements in Visual Studio 2017 15.7 post on the Web Developer blog.

Conclusion

If you haven’t installed a Visual Studio preview yet, it’s worth noting that they can be installed side by side with your existing stable installations of Visual Studio 2017, so you can try the previews out, and then go back to the stable channel for your regular work. So, we hope that you’ll take the time to install the Visual Studio 2017 15.7 Preview 4 update and let us know what you think. You can either use the built-in feedback tools in Visual Studio 2017 or let us know what you think below in the comments section.

Azure API Management Release notes – April 18, 2018


On April 18, 2018, we started a regular service update. We upgrade service instances in batches, and it takes about a week for the update to reach every active service instance.

New functionality

  • Now it is possible to disable MSI using the management API by either omitting the identity property or setting its value to null.
  • Management API now includes endpoints for managing issues.
  • We added the following capabilities that were missing in the Azure portal UI:
    • Configure caching policy using a form-based editor.
    • Add/edit user note.
    • Specify a description for the request body.
    • Send email when a user is subscribed to a product.

Changes and fixes

  • We changed default TLS settings for newly created service instances, per this announcement.
  • JWT validation policy now works properly for claims with values set to null.
  • Based on customer feedback we made a couple of changes to the OpenAPI import. See this blog post for information about the import process.
    • We now preserve the case of operationId properties.
    • Instead of failing the import we now treat operationId and summary with empty values as if they were not specified.
  • APIs and operations in the rate limit and quota policies can now be referenced by their ids. In the past, the name was the only way to reference APIs and operations. If both name and id are provided we will use the id and ignore the name.

Automated Testing in Dynamics 365 with EasyRepro


EasyRepro is a library hosted on GitHub, built on Selenium and developed internally at Microsoft by some incredible people. The aim of this library is to facilitate automated UI testing for any Dynamics 365 Customer Engagement project. The functionality provided covers most of the CRM actions that users would normally perform within the application, so you can emulate that behavior and see if everything works.

The first thing you are going to need is to download or clone the repository in this link: https://github.com/Microsoft/EasyRepro

Once you have it locally on your machine, you will notice this is a Visual Studio solution. Open it by double-clicking the .sln file and you will see there are 3 projects in it:

  • UIAutomation.API
  • UIAutomation.Browser
  • UIAutomation.Sample

The first two projects are the ones the tests we write are going to use to perform any sort of action in the browser that we want to automate, and the last one is basically a set of pre-built tests we can build on.

After this, you need to decide which browser you are going to use. Depending on the browser, you will need a driver that the library uses to automate all of the actions required to access the Dynamics 365 CE org and then perform whatever you code in the test. I'm going to go with Chrome in this case, so I need to get the Selenium.Chrome.WebDriver NuGet package in the Sample project. At the time of these tests, I'm using version 2.37.0 of the package.

Once you have the driver, the fastest way to understand how this all fits together is to look directly at a pre-built test, the create-contact one. For this, go into the Sample project and look for the file CreateContact.cs. Open that file and you will see the following code:

As you can see in the first three lines (14-16), we declare three properties that get their values from the ConfigurationManager, that is, from our App.config file. This is where the URL and credentials we are going to test against come from, so let's define them now by opening and editing the app.config file of the Sample project:

<appSettings>
<add key="OnlineUsername" value="[EmailAddress]" />
<add key="OnlinePassword" value="[Password]" />
<add key="OnlineCrmUrl" value="https://[ORGNAME].crm[X].dynamics.com" />
<add key="AzureKey" value="[APP INSIGHTS INSTRUMENTATION KEY]" />
</appSettings>

Our code will use these settings to automatically open the browser at that URL and type in the user credentials.

Then let's look at line 21. This is where we create the instance of the XrmBrowser object. As you can observe, we pass in some settings, in this case using the public static class you can find in the TestSettings.cs file in the Sample project. In it we establish the options of the browser instance we are opening, which in this case specify Chrome as the browser, opening it in private mode, and allowing the firing of events. This is where you can change the browser you want to use when running the test, but remember you will need the corresponding browser driver; more information about the drivers can be found at the end of the repo readme file.

After this is where we programmatically specify what we want to do once the browser is open. Line 23 does the login for us, and line 24 closes the guided help you might get from Learning Path prompts. In this case I commented it out since I have already disabled it in the system settings and I'm not interested in spending time on that during the test. If you want to disable it, go to Settings > Administration > System Settings > General (tab) > Enable Learning Path and set it to No. You also want to disable the option to display the welcome screen to users when they sign in.

After that we navigate our way to the Active Contacts view and then press the New button in the command bar. This is absolutely customizable; we can specify which area and entity to navigate to, which view to open, and which button to press. You will also notice the ThinkTime method calls with a number in them. This emulates the time it takes for a user to get to that area and actually interact with that object in the app; the number indicates the amount of time in milliseconds.

After that we specify which fields we are populating in the form. In the case of the full name, which is a composite control, we create a list of the fields that are part of the composite control and then populate it. To finish, we click the Save button.

What we can do next is right-click the method name TestCreateNewContact and click Run Tests. This builds the project and runs the test for us; you should then see a Chrome window open, and this is when the magic starts 🙂 You will see everything start to auto-populate and get done automatically. The browser will close automatically once the tests are completed.

Like this test, there are several others you can use to build your own set of tests. Now imagine you are working on a project and your customer has a key set of processes or walkthroughs that users need to go through frequently. Every time you make a customization that could impact those, instead of testing them manually, write a script with those walkthroughs so that you can run all of them automatically, for example after every solution publish.

In Visual Studio you also have the Test Explorer.

From here you can run all tests, check the outcome of the last execution, run only those that failed or haven't been executed, and so on. You can also create playlists to run a custom series of tests and exclude others.

In the next post we will see how to combine this with the CRM performance center telemetry, upload it to a tool such as Application Insights in Azure, and then also analyze the performance of the tests we have run to get more accurate numbers about the time it takes to load scripts, subgrids, the full form, etc. Stay tuned.

Can't wait to see what you do with this. Please share any interesting findings or hints you come across while testing with the library.

Updated documentation for Visual Studio Build Tools container


I've updated the documentation for building a Docker container image for Visual Studio Build tools based on recent feedback that managed code may fail to run. In the case of MSBuild, you might see an error like,

C:\BuildTools\MSBuild\15.0\bin\Roslyn\Microsoft.CSharp.Core.targets(84,5): error MSB6003: The specified task executable "csc.exe" could not be run. Could not load file or assembly 'System.IO.FileSystem, Version=4.0.1.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified.

To resolve this, base your image on microsoft/dotnet-framework:4.7.1 so that .NET Framework 4.7.1 does not need to be installed. The image documentation has more details about how its various tags relate to microsoft/windowsservercore if you intend to target a specific version of Windows Server Core.

The examples have also been updated to better collect and copy setup logs from a container should an error occur, using a separate script file to handle install failures.


V2 App Registration is missing an “Add Owner” button


Problem:

Customer registers an application in the app registration portal (https://app.dev.microsoft.com). He is not able to share the application with other users since the “Add Owner” button is missing.

Root cause:

This problem can happen if the user registers the application in the app registration portal (V2 portal) under his personal MSA account. The behavior is documented in the following link:

https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-v2-limitations

clip_image001

Resolution:

Re-register the application under an Azure AD account (*.onmicrosoft.com account).

Query String is not allowed in redirect_uri for Azure AD


Problem:

Customer configures the following redirect URLs for his registered application in Azure AD

clip_image001[6]

and issues the following request to authenticate to Azure AD:

GET https://login.microsoftonline.com/<tenant id>/oauth2/authorize?client_id=<app id>&redirect_uri=https%3a%2f%2flocalhost%3a44396%2fbac%2faad%3freqId%3dA123&response_mode=form_post&….

After logging in he is redirected to https://localhost:44396/bac/aad instead of https://localhost:44396/bac/aad?reqId=A123.

The URL he is redirected to does not include the query string.

Root Cause:

The behavior is by design.  It is an Azure AD security feature to prevent a Covert Redirect attack.

Resolution:

We recommend customers make use of the ‘state’ parameter, instead of a query string on the redirect URI, to preserve the state of the request.
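As an illustration, here is a small Python sketch of building the authorize request this way; the values are placeholders and the response_type/nonce combination shown is just one common option. The request identifier travels in state instead of being appended to redirect_uri, and Azure AD returns it to the app unchanged.

from urllib.parse import urlencode

tenant = "<tenant id>"
params = {
    "client_id": "<app id>",
    "response_type": "id_token",
    "response_mode": "form_post",
    "redirect_uri": "https://localhost:44396/bac/aad",  # registered exactly like this, no query string
    "state": "reqId=A123",                               # echoed back, so the app can recover it
    "nonce": "<random value>",
}
authorize_url = ("https://login.microsoftonline.com/" + tenant +
                 "/oauth2/authorize?" + urlencode(params))
print(authorize_url)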

Docker Blog Series Part 6 – How to use Service Fabric Reverse Proxy for container services


Learn about the container orchestrator, Service Fabric, and how to use Service Fabric Reverse Proxy for container services in Monu’s latest post. Monu Bambroo is a Consultant on the Premier Developer team.


Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices and containers. Service Fabric also addresses the significant challenges in developing and managing cloud native applications. It is also an orchestrator of services across a cluster of machines and it is continuing to invest heavily in container orchestration and management. In this blog post, we will check out how to use Service Fabric Reverse Proxy for container services.

Container services are encapsulated, individually deployable components that run as isolated instances on the same kernel to take advantage of virtualization that an operating system provides. Thus, each application and its runtime, dependencies, and system libraries run inside a container with full, private access to the container's own isolated view of operating system constructs. Microservices running inside containers in Service Fabric run on a subset of nodes. Service Fabric orchestration is responsible for service discovery, resolution and routing. As a result, the endpoint for the services running inside the container can change dynamically.

Read more of Monu’s post here.

An example procedure for creating a transparent gateway device with Azure IoT Edge


IoT Edge is currently in preview, but we have received questions about the concrete steps in the document "Create an IoT Edge device that acts as a transparent gateway - preview", so here is the procedure I verified on my side, as one example configuration.

Microsoft has already published the following example configurations. The procedure described here draws on both of them: as in the latter, the IoT Edge gateway runs on Linux (a virtual machine on a local PC), while the IoT device is a Raspberry Pi 3.

Example with a Windows IoT Edge gateway and a Raspberry Pi 3 IoT device

IOT EDGE PYTHON RASPBERRYPI CONNECT TRANSPARENT GATEWAY

< https://github.com/Azure-Samples/iot-edge-python-raspberrypi-connect-transparent-gateway >

Example with both the IoT Edge gateway and the IoT device inside a single Linux Azure VM

Azure IoT Edge Hands On Labs

< https://github.com/AzureIoTGBB/azure-iot-edge-hol-linux >

See Labs 1 and 2.

 

 

    1. Configuration assumed in this post

This post assumes a minimal configuration like the figure below: a single IoT Edge gateway is connected to an IoT Hub, and a single device (leaf device) communicates transparently with the IoT Hub through that gateway.

clip_image002

For the IoT Edge gateway, I installed Ubuntu 16.04 LTS as a Hyper-V virtual machine (maximum memory 4 GB, connected to an external network) on Windows 10 Enterprise x64 1709 (the work PC).

For the leaf device, I prepared a Raspberry Pi 3 and set it up with the following steps. (These are the same as the "Install the Raspbian operating system onto the Pi" steps in "Connect Raspberry Pi to Azure IoT Hub (Python)".)

Perform the following on Windows 10 Enterprise x64 1709:

(a)         Download the Raspbian Stretch with desktop zip file from < https://www.raspberrypi.org/downloads/raspbian/ > and extract it.

(b)         Download and install the Etcher SD card burning utility.

(c)          Connect a micro SD card (I used a 64 GB card) to the Windows PC.

(d)         Run Etcher, click [Select image], and select the Raspbian image extracted in (a) (2018-03-13-raspbian-stretch.img in this case).

(e)         Select the micro SD card drive. The correct drive may already be selected.

(f)          Click [Flash] to install Raspbian onto the micro SD card.

(g)         When the installation completes, remove the micro SD card from the computer. Etcher automatically ejects or unmounts the micro SD card on completion, so it is safe to remove it directly.

(h)         Insert the micro SD card into the Raspberry Pi 3.

 

 

    2. SSH setup and connection check

(a)         Enable SSH on the leaf device (Raspberry Pi 3).

    1. Connect a monitor, keyboard, and mouse to the Raspberry Pi 3, start it, and log in to Raspbian using pi as the user name and raspberry as the password. (It may log you in automatically on startup.)

    2. Click the Raspberry icon > [Preferences] > [Raspberry Pi Configuration].

    3. On the [Interfaces] tab, set [SSH] to [Enable] and click [OK].

(b)         Connect the Raspberry Pi 3 to a wired network using an Ethernet cable.

(c)          Start Terminal, run ifconfig, and note the IP address shown for inet under eth0.

(d)         Change the hostname.

    1. Open /etc/hostname in nano:

$ sudo nano /etc/hostname

    2. Change raspberrypi to a name of your choice.

    3. Save with Ctrl-O, Enter.

    4. Exit nano with Ctrl-X.

    5. Reboot:

$ sudo reboot

(e)         Also change the password with passwd.

(f)          Update Raspbian:

$ sudo apt update && sudo apt full-upgrade -y

(g)         Enable SSH on the IoT Edge gateway (Ubuntu).

    1. Start Terminal.

    2. Install aptitude for package management:

$ sudo apt-get install aptitude

    3. Install ssh using aptitude:

$ sudo aptitude install ssh

    4. Run ifconfig and note the IP address shown for inet under eth0.

(h)         On Windows 10 (the work PC), download and install PuTTY for Windows.

(i)           Start PuTTY, connect over SSH to the IP address from (c), and confirm that you can log in.

(j)          Start another instance of PuTTY, connect over SSH to the IP address from (g), and confirm that you can log in.

 

    1. IoT Hub の作成

 

(a)         Following the "Create an IoT hub" steps in "Deploy Azure IoT Edge on a simulated device in Linux or MacOS - preview", create an IoT Hub in the [F1 - Free] tier.

(b)         In the Azure portal, click the IoT Hub you created, click [Shared access policies] in the left pane, click [iothubowner] on the right, and note the value of the displayed "Connection string – primary key".

 

clip_image004

 

    4. Registering the IoT Edge device

 

(a)         Following the "Register an IoT Edge device" steps in "Deploy Azure IoT Edge on a simulated device in Linux or MacOS - preview", register the name of the IoT Edge gateway. In this walkthrough, the [Device ID] below is the same as the IoT Edge gateway's hostname.

 

clip_image006

 

(b)         Click the registered device ID and note the value of the displayed "Connection string – primary key".

 

clip_image008

clip_image010

 

 

    5. Preparing the IoT Edge gateway (Ubuntu)

 

Run the following in the PuTTY session connected via SSH to the IoT Edge gateway (Ubuntu).

 

(a)         Update the package lists:

$ sudo apt-get update

(b)         Install pip:

$ sudo apt-get install python-pip

(c)         Install curl if it is not already installed:

$ sudo apt install curl

(d)         Install Docker for Linux. (Reference: Get Docker CE for Ubuntu)

$ curl -fsSL get.docker.com -o get-docker.sh

$ sudo sh get-docker.sh

(e)         If the user account currently logged in to Ubuntu does not have root privileges, you can run containers with root privileges by adding that account to the docker admin group. After running the following, log out and log back in for the change to take effect.

$ sudo usermod -aG docker <user account currently logged in to Ubuntu>

(f)          Install the Azure IoT Edge control script:

$ sudo pip install -U azure-iot-edge-runtime-ctl

(g)         Install the iothub-explorer tool:

$ sudo apt-get install nodejs

$ sudo apt-get install npm

$ sudo npm install -g iothub-explorer

Check the installed version to confirm that it works:

$ iothub-explorer version

If you get the error "/usr/bin/env: node: No such file or directory", run the following first:

$ sudo ln -s /usr/bin/nodejs /usr/bin/node

Running iothub-explorer version displayed 1.2.1 in my case.

(h)         Install .NET Core. (Reference: Prerequisites for .NET Core on Linux)

$ curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg

$ sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg

$ sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-xenial-prod xenial main" > /etc/apt/sources.list.d/dotnetdev.list'

$ sudo apt-get update

$ sudo apt-get install dotnet-sdk-2.1.4

Check the installed version to confirm that it works:

$ dotnet --version

It displays 2.1.4, matching the version installed.

(i)          Clone the "modules-preview" branch of the Azure IoT C SDK.

If git is not installed, install it first:

$ sudo apt install git

$ git clone -b modules-preview https://github.com/Azure/azure-iot-sdk-c

(j)          The leaf device communicates with the IoT Hub through the IoT Edge gateway over TLS. Create an edge directory as a working directory for creating the self-signed certificates, and move into it:

$ mkdir edge

$ cd edge

(k)         Copy the certificate-generation scripts into that directory:

$ cp ~/azure-iot-sdk-c/tools/CACertificates/*.cnf .

$ cp ~/azure-iot-sdk-c/tools/CACertificates/*.sh .

(l)          Make certGen.sh executable:

$ chmod 700 certGen.sh

(m)        Create the root and intermediate certificates that make up the certificate chain:

$ ./certGen.sh create_root_and_intermediate

Confirm that the following files are created in the certs directory:

$ ls certs

azure-iot-test-only.chain.ca.cert.pem      azure-iot-test-only.root.ca.cert.pem

azure-iot-test-only.intermediate.cert.pem

(n)         Create the device-specific certificate for the IoT Edge gateway itself:

$ ./certGen.sh create_edge_device_certificate myGateway

* Note: do not change the myGateway part here; use it as-is.

new-edge-device.cert.pem and new-edge-device.cert.pfx are added to the certs directory.

(o)         This creates the public and private keys. Move into the certs directory and build the full chain of the public certificates there:

$ cd certs

$ cat ./new-edge-device.cert.pem ./azure-iot-test-only.intermediate.cert.pem ./azure-iot-test-only.root.ca.cert.pem > ./new-edge-device-full-chain.cert.pem

 

 

    6. Preparing the leaf device (Raspbian)

 

Run the following in the PuTTY session connected via SSH to the leaf device (Raspbian).

 

(a)         Make the IoT Edge gateway's IP address (noted in 2-(g)) resolvable from the leaf device:

$ sudo nano /etc/hosts

Add the following at the very bottom of the file:

<IoT Edge gateway IP address noted in 2-(g)> mygateway.local

* Note: do not change mygateway.local; use it as-is.

Ping from the leaf device to confirm that it can reach the IoT Edge gateway:

$ ping mygateway.local

(b)         Install the root certificate created in 5-(m) on the leaf device.

    1. Create an edge directory, with a certs directory under it, here as well:

$ mkdir edge

$ cd edge

$ mkdir certs

$ cd certs

    2. Copy azure-iot-test-only.root.ca.cert.pem into this directory.

In this walkthrough, I ran sudo nano azure-iot-test-only.root.ca.cert.pem on the IoT Edge gateway, copied the entire contents (everything from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE-----), then ran sudo nano azure-iot-test-only.root.ca.cert.pem on the leaf device as well, pasted it in, and saved with Ctrl-O, Enter.

    3. Install it with the following steps:

$ openssl x509 -in azure-iot-test-only.root.ca.cert.pem -inform PEM -out azure-iot-test-only.root.ca.cert.crt

$ sudo mkdir /usr/share/ca-certificates/extra

$ sudo cp azure-iot-test-only.root.ca.cert.crt /usr/share/ca-certificates/extra/azure-iot-test-only.root.ca.cert.crt

$ sudo dpkg-reconfigure ca-certificates

    4. Running the last command shows a warning about installing the certificate into the root store; highlight yes and press Enter on <Ok>.

 

clip_image012

 

    5. On the next screen, the checkbox for extra/azure-iot-test-only.root.ca.cert.crt is blank ([ ]), as in the red-highlighted area below.

 

clip_image014

 

Press the space bar here so that it reads extra/azure-iot-test-only.root.ca.cert.crt [*], then press Enter on <Ok>.

 

clip_image016

 

    6. To confirm, run the following and check that "extra/azure-iot-test-only.root.ca.cert.crt" appears at the end:

$ sudo cat /etc/ca-certificates.conf

 

 

    7. Starting the IoT Edge gateway

 

Run the following in the PuTTY session connected via SSH to the IoT Edge gateway (Ubuntu).

 

    1. Run the following to set the device up as an IoT Edge gateway:

$ cd ~

$ sudo iotedgectl setup --connection-string "<IoT Edge gateway connection string from 4-(b)>" --edge-hostname mygateway.local --device-ca-cert-file /home/<Ubuntu login user name on the IoT Edge gateway>/edge/certs/new-edge-device.cert.pem --device-ca-chain-cert-file /home/<Ubuntu login user name on the IoT Edge gateway>/edge/certs/new-edge-device-full-chain.cert.pem --device-ca-private-key-file /home/<Ubuntu login user name on the IoT Edge gateway>/edge/private/new-edge-device.key.pem --owner-ca-cert-file /home/<Ubuntu login user name on the IoT Edge gateway>/edge/certs/azure-iot-test-only.root.ca.cert.pem

      • Here too, do not change --edge-hostname mygateway.local; use it as-is.

    2. When "Please enter the Edge Agent private key passphrase. Length should be >=4 and <= 1023:" is displayed, type 12345 and press Enter.

    3. You will be asked to enter it again; do the same.

    4. Start the IoT Edge runtime:

$ sudo iotedgectl start

    5. Confirm that edgeAgent, one of the containers/modules that make up the IoT Edge runtime, is running:

$ sudo docker ps

The output looks like the following:

CONTAINER ID        IMAGE                                      COMMAND                   CREATED             STATUS              PORTS               NAMES

8d8178dc82f6        microsoft/azureiotedge-agent:1.0-preview   "/usr/bin/dotnet Mic"   2 minutes ago       Up 2 minutes                            edgeAgent

    6. You can also check the edgeAgent logs by running:

$ sudo docker logs -f edgeAgent

 

 

    8. Communicating from the leaf device

 

(a)         As an example, let's run the sample from "Azure IoT Edge Hands On Labs - Module 2".

    1. Clone this Lab's sample:

$ cd ~

$ git clone http://github.com/azureiotgbb/azure-iot-edge-hol-linux

    2. Set up and build the preview version of the Python SDK libraries:

$ git clone --recursive -b modules-preview http://github.com/azure/azure-iot-sdk-python

$ cd ~/azure-iot-sdk-python/build_all/linux

$ sudo ./setup.sh

$ sudo ./build.sh

    3. When the build finishes, copy the library into the Lab sample's solution directory:

$ cp ../../device/samples/iothub_client.so ~/azure-iot-edge-hol-linux/module2

(b)         Register the leaf device in IoT Hub.

    1. In the Azure portal, in the left pane of the IoT Hub, click [Device Explorer].

    2. Click [Add], enter the leaf device's name in [Device ID] on the [Add Device] blade, and click [Save]. In this walkthrough I used the same name as the hostname set in 2-(d).

 

clip_image018

 

(c)         Click the added device ID and note the "Connection string – primary key" on the [Device Details] screen.

 

clip_image020

 

(d)         Append ";GatewayHostName=mygateway.local" to the end of this connection string (again, use mygateway.local as-is without changing it) and put the result into the Python script:

$ cd ~/azure-iot-edge-hol-linux/module2

$ nano iotdevice.py

Find the following line and replace the connection string:

connection_string = "<IoT Device connection string here>"

Save with Ctrl-O, Enter, and exit with Ctrl-X.

With this in place, the leaf device (the Python script above) connects to the IoT Hub through the specified IoT Edge gateway, as sketched below.
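
For reference, here is a minimal sketch of what such a device-to-cloud send can look like with the preview Python SDK (iothub_client) built in 8-(a). This is my own illustration rather than the Lab's actual iotdevice.py; it assumes the device connection string from 8-(c) already has ";GatewayHostName=mygateway.local" appended, so the MQTT connection goes to the gateway instead of directly to the IoT Hub.

import time
from iothub_client import IoTHubClient, IoTHubMessage, IoTHubTransportProvider

# Device connection string from 8-(c), with ";GatewayHostName=mygateway.local" appended.
connection_string = "<IoT Device connection string here>;GatewayHostName=mygateway.local"

def send_confirmation_callback(message, result, user_context):
    # Called by the SDK once the gateway (edgeHub) acknowledges the message.
    print("Send confirmation received, result = %s" % result)

# MQTT over TLS on port 8883; the gateway is trusted via the root CA installed in 6-(b).
client = IoTHubClient(connection_string, IoTHubTransportProvider.MQTT)

message = IoTHubMessage("hello through the transparent gateway")
client.send_event_async(message, send_confirmation_callback, None)

time.sleep(10)  # give the asynchronous send time to complete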

(e)         In the Azure portal, in the left pane of the IoT Hub, click [IoT Edge (preview)], then click the IoT Edge gateway's device ID.

 

clip_image022

 

(f)          Click [Set Modules] at the top of the screen.

 

clip_image024

 

(g)         In [1 Add Modules (optional)], do nothing and click [Next] at the bottom of the screen.

(h)         In [2 Specify Routes (optional)], confirm that the settings look like the following, then click [Next] at the bottom of the screen.

 

clip_image026

 

Here $upstream means "send to the IoT Hub in the cloud": the route receives all messages (/*) and forwards them to the cloud (typically written as FROM /* INTO $upstream).

 

(i)          In [3 Review Template (optional)], click [Submit] as-is.

(j)          This starts edgeHub on the IoT Edge gateway. You can confirm it by running the following in the PuTTY session connected to the IoT Edge gateway:

$ sudo docker ps

CONTAINER ID        IMAGE                                      COMMAND                  CREATED             STATUS              PORTS                                          NAMES

670d08c9c3c7        microsoft/azureiotedge-hub:1.0-preview     "scripts/linux/start"   3 minutes ago       Up 3 minutes        0.0.0.0:443->443/tcp, 0.0.0.0:8883->8883/tcp   edgeHub

8d8178dc82f6        microsoft/azureiotedge-agent:1.0-preview   "/usr/bin/dotnet Mic"   2 hours ago         Up 2 hours                                                          edgeAgent

(k)         You can check the edgeHub logs as follows:

$ sudo docker logs -f edgeHub

A line like the following shows that port 8883 (MQTT over SSL, i.e., MQTT with TLS) has been initialized:

[INF] - Initializing TLS endpoint on port 8883 for MQTT head.

The leaf device can now connect to the IoT Edge gateway.

(l)          At this point, run the following in the PuTTY session connected to the leaf device to confirm that there are no problems with the certificate chain:

$ openssl s_client -connect mygateway.local:8883 -showcerts

 

Below is example output. If every line reads "verify return:1", there is no problem.

 

=====Example output: start=====

CONNECTED(00000003)

depth=4 CN = Azure IoT Hub CA Cert Test Only

verify return:1

depth=3 CN = Azure IoT Hub Intermediate Cert Test Only

verify return:1

depth=2 CN = myGateway

verify return:1

depth=1 CN = Edge Agent CA

verify return:1

depth=0 CN = mygateway.local

verify return:1

---

Certificate chain

0 s:/CN=mygateway.local

   i:/CN=Edge Agent CA

-----BEGIN CERTIFICATE-----

(omitted)

-----END CERTIFICATE-----

1 s:/CN=Azure IoT Hub Intermediate Cert Test Only

   i:/CN=Azure IoT Hub CA Cert Test Only

-----BEGIN CERTIFICATE-----

(omitted)

-----END CERTIFICATE-----

2 s:/CN=myGateway

   i:/CN=Azure IoT Hub Intermediate Cert Test Only

-----BEGIN CERTIFICATE-----

(omitted)

-----END CERTIFICATE-----

3 s:/CN=Edge Agent CA

   i:/CN=myGateway

-----BEGIN CERTIFICATE-----

(omitted)

-----END CERTIFICATE-----

---

Server certificate

subject=/CN=mygateway.local

issuer=/CN=Edge Agent CA

---

No client certificate CA names sent

Peer signing digest: SHA512

Server Temp Key: ECDH, P-256, 256 bits

---

SSL handshake has read 5506 bytes and written 302 bytes

Verification: OK

---

New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384

Server public key is 2048 bit

Secure Renegotiation IS supported

Compression: NONE

Expansion: NONE

No ALPN negotiated

SSL-Session:

    Protocol  : TLSv1.2

    Cipher    : ECDHE-RSA-AES256-GCM-SHA384

    Session-ID: BD0F16D2519A492477F6F3F142398EA9535102843A9136E98FBE3D832D4F4697

    Session-ID-ctx:

    Master-Key: DB3BA5E6A2B391CB27D34C8F58E8ABEBD5375AF78372874E81C32219ADBCE6D40F71E15035D5CF341BA1AD1803B95C52

    PSK identity: None

    PSK identity hint: None

    SRP username: None

    TLS session ticket lifetime hint: 300 (seconds)

    TLS session ticket:

    0000 - 98 95 db cb 68 1b f0 34-ee fb 76 4b e8 fc b5 e5   ....h..4..vK....

    0010 - 23 09 ad 14 06 eb 4c ef-50 3c 5d d2 09 c5 84 62   #.....L.P<]....b

    0020 - 32 d0 c0 28 db a6 68 b5-df d0 93 83 1b ea f8 a6   2..(..h.........

    0030 - b6 ee b9 97 f2 eb 91 08-c4 43 ec ce 37 0f b8 68   .........C..7..h

    0040 - f6 61 67 8e 58 41 b0 bd-10 0c 91 1e 9b ff 6c 18   .ag.XA........l.

    0050 - e3 20 75 f4 23 4e 56 59-52 72 08 24 d6 f0 f4 03   . u.#NVYRr.$....

    0060 - 45 1a 8d 0f b5 ef 05 55-5d 7d 2e 6a 43 3b 2b 7c   E......U]}.jC;+|

    0070 - f0 5b fa 38 31 e6 21 ec-0a b2 fb 17 e5 cb 8b 9d   .[.81.!.........

    0080 - 8d c0 2a 6e 86 08 48 29-35 6d 87 79 3f 68 b3 f1   ..*n..H)5m.y?h..

    0090 - 2d 85 e7 22 83 75 19 0e-d5 3c 42 8a c6 00 6c 59   -..".u...<B...lY

 

    Start Time: 1523165748

    Timeout   : 7200 (sec)

    Verify return code: 0 (ok)

    Extended master secret: no

---

read:errno=0

=====Example output: end=====

 

A few notes on the "Certificate chain" section.

Entries that start with "s:", such as "s:/CN=mygateway.local", show the subject (the name being certified), and entries that start with "i:", such as "i:/CN=Edge Agent CA", show the issuer (the party that issued the certificate, normally a CA).

As illustrated in the figure below, verifying this certificate chain means checking, from mygateway.local up to the root certificate (Azure IoT Hub CA Cert Test Only), that each "i:" has a matching "s:".

 

clip_image028

 

Note that because the verification only needs to reach the trusted root certificate (Azure IoT Hub CA Cert Test Only) installed on the Raspberry Pi 3, there is no need to register it under Certificates of the IoT Hub in the Azure portal as the public documentation describes.

 

(m)        To watch the IoT Hub side, start one more PuTTY, connect via SSH to the IoT Edge gateway's IP address, and run the following:

$ iothub-explorer monitor-events <leaf device name registered in IoT Hub in 8-(b)-2> -r --login "<iothubowner connection string of the IoT Hub from 3-(b)>"

Nothing is displayed until the leaf device starts sending, so move on to the next step.

(n)         Start communication from the leaf device.

Run the following from the PuTTY session connected to the leaf device:

$ cd ~/azure-iot-edge-hol-linux/module2

$ python -u iotdevice.py

The output looks like the following:

 

clip_image030

 

(o)         The window from (m) shows output like the following:

 

clip_image032

 

 

 

I hope the above is helpful.

 

Azure IoT Development Support Team, Tsuda

 

Behavior and workaround when specifying Japanese ("Ja") in the Computer Vision API OCR function


Hello, this is Nakayama from the Cognitive Services support team.

This post describes what happens when you specify Japanese as "Ja" in the Computer Vision API OCR function, and how to work around it.

 

Symptom:

When the Computer Vision API OCR function is called with "Ja" specified in the language parameter in order to read a Japanese document, a Response 400 error is returned.

 

Example call with curl (from a batch file):

@ECHO OFF
curl -v -X POST "https://westus.api.cognitive.microsoft.com/vision/v1.0/ocr?language=Ja&detectOrientation=true" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{body}"

 

Response 400 error:

{

"code": "NotSupportedLanguage",

"requestId": "B8D802CF-DD8F-4E61-B15C-9E6C5844CCBC",

"message": "Specified language is not supported."

}

 

Workaround:

If you want to specify Japanese in the language parameter, change "Ja" to "ja", as in the example below.
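
For example, a call like the following, a minimal sketch using the Python requests library, succeeds once the parameter is lowercase. The subscription key and image URL are placeholders to replace with your own values; the region (westus) is taken from the endpoint above.

import requests

subscription_key = "{subscription key}"  # your Computer Vision API key
endpoint = "https://westus.api.cognitive.microsoft.com/vision/v1.0/ocr"

params = {"language": "ja", "detectOrientation": "true"}  # "ja", not "Ja"
headers = {
    "Content-Type": "application/json",
    "Ocp-Apim-Subscription-Key": subscription_key,
}
body = {"url": "https://example.com/japanese-document.png"}  # hypothetical image URL

response = requests.post(endpoint, params=params, headers=headers, json=body)
response.raise_for_status()
print(response.json())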

 

Note that the supported-language list in the Computer Vision API OCR documentation does show "Ja (Japanese)", but this is an error for "ja (Japanese)".

A documentation fix has been requested; please wait a while for it to be published.

 

Computer Vision API - v1.0

https://westus.dev.cognitive.microsoft.com/docs/services/56f91f2d778daf23d8ec6739/operations/56f91f2e778daf14a499e1fc

 

I hope the above is helpful.

 

Cognitive Services Development Support Team, Nakayama
