
Use Microsoft Graph API to reach on-premises, cloud users of hybrid Exchange 2016


Le Café Central de DeVa - Microsoft Graph API

As you are aware, Office 365 and Exchange Online provide a new way to work with email, calendars, and contacts. The Mail, Calendar, and Contact REST APIs provide a powerful, easy-to-use way to access and manipulate Exchange data.

In this video, Venkat, Principal Program Manager Lead, walks you through how to use Microsoft Graph to reach on-premises and cloud users of hybrid Exchange 2016 deployments, in addition to Office 365 and Outlook.com. He also discusses how your application can handle different server versions on-premises and in the cloud, and how on-premises Exchange 2016 is set up to support Microsoft Graph and OAuth.

For related documentation, you can get started at http://graph.microsoft.io/en-us/docs/overview/hybrid_rest_support

Hope this helps.


Diagnostic Improvements in Visual Studio 2017 15.3.0


This post, as well as the diagnostics it describes, benefited significantly from feedback by Mark, Xiang, Stephan, Marian, Gabriel, Ulzii, Steve, and Andrew.

The Visual Studio 2017 15.3.0 release comes with a number of improvements to the Microsoft Visual C++ compiler's diagnostics. Most of these improvements are in response to the diagnostics improvements survey we shared with you at the beginning of the 15.3 development cycle. Below you will find some of the new or improved diagnostic messages in the areas of member initialization, enumerations, dealing with precompiled headers, conditionals, and more. We will continue this work throughout VS 2017.

Order of Members Initialization

A constructor initializes members not in the order their initializers are listed in code, but in the order the members are declared in the class. Multiple problems can stem from assuming that the actual initialization order matches the code. A typical scenario is when a member that appears earlier in the initializer list is used to initialize a later one, while the actual initialization happens in the reverse order because of the declaration order of those members.

// Compile with /w15038 to enable the warning
struct B : A
{
    B(int n) : b(n), A(b) {} // warning C5038: data member 'B::b' will be initialized after base class 'A'
    int b;
};

The above warning is off by default in the current release due to the amount of code it breaks in numerous projects that treat warnings as errors. We plan to enable the warning by default in a subsequent release, so we recommend enabling it early.

Constant Conditionals

There were a few suggestions to adopt the practice, popularized by Clang, of suppressing certain warnings when extra parentheses are used. We looked at it in the context of one bug report suggesting that we should suppress "warning C4127: conditional expression is constant" when the user puts extra () (note that Clang itself doesn't apply the practice to this case). While we discussed the possibility, we decided this would be a disservice to good programming practices in the context of this warning, as the language and our implementation now support the 'if constexpr' statement. Instead, we now recommend using 'if constexpr'.

    if ((sizeof(T) < sizeof(U))) …
        // warning C4127 : conditional expression is constant
        // note : consider using 'if constexpr' statement instead
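
For instance, the check above could be rewritten along these lines (a minimal sketch, assuming C++17 and /std:c++17):

template <typename T, typename U>
constexpr auto smaller_size()
{
    if constexpr (sizeof(T) < sizeof(U))   // evaluated at compile time; no C4127
        return sizeof(T);
    else
        return sizeof(U);
}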

Scoped Enumerations

One reason scoped enumerations (aka enum classes) are preferred is that they have stricter type-checking rules than unscoped enumerations and thus provide better type safety. We were breaking that type safety in switch statements by allowing developers to accidentally mix enumeration types. This often resulted in unexpected runtime behavior:

enum class A { a1, a2 };
enum class B { baz, foo, a2 };
int f(A a) {
    switch (a)
    {
    case B::baz: return 1;
    case B::a2:  return 2;
    }
    return 0;
}

In /permissive- mode (again, due to the amount of code this broke) we now emit errors:

error C2440: 'type cast': cannot convert from 'int' to 'A'
note: Conversion to enumeration type requires an explicit cast (static_cast, C-style cast or function-style cast)
error C2046: illegal case

The error will also be emitted on pointer conversions in a switch statement.

Empty Declarations

We used to ignore empty declarations without any diagnostics, assuming they were pretty harmless. Then we came across a couple of examples where users applied empty declarations to templates in complicated template-metaprogramming code, assuming those would lead to instantiations of the type named by the empty declaration. This was never the case and thus was worth a diagnostic. In this update we reused the warning that was already emitted in similar contexts, but in the next update we'll change it to its own warning.

struct A { … };
A; // warning C4091 : '' : ignored on left of 'A' when no variable is declared

Precompiled Headers

We had a number of issues arising from the use of precompiled headers in very large projects. The issues weren't compiler-specific per se, but rather depended on processes happening in the operating system. Unfortunately, our one-size-fits-all error for this scenario was inadequate for users to troubleshoot the problem and come up with a suitable workaround. We expanded the information the errors contain in these cases in order to better identify the specific scenario that could have led to the error and advise users on ways to address the issue.

error C3859: virtual memory range for PCH exceeded; please recompile with a command line option of '-Zm13' or greater
note: PCH: Unable to get the requested block of memory
note: System returned code 1455: The paging file is too small for this operation to complete
note: please visit https://aka.ms/pch-help for more details
fatal error C1076: compiler limit: internal heap limit reached; use /Zm to specify a higher limit

The broader issue is discussed in greater detail in our earlier blog post: Precompiled Header (PCH) issues and recommendations

Conditional Operator

The last group of new diagnostic messages are all related to our improvements to the conformance of the conditional operator ?:. These changes are also opt-in and are guarded by the switch /Zc:ternary (implied by /permissive-) due to the amount of code they broke. In particular, the compiler used to accept arguments in the conditional operator ?: that are considered ambiguous by the standard (see section [expr.cond]). We no longer accept them under /Zc:ternary or /permissive- and you might see new errors appearing in source code that compiles clean without these flags.

The typical code pattern this change breaks is when some class U both provides a constructor from another type T and a conversion operator to type T (both non-explicit). In this case both the conversion of the 2nd argument to the type of the 3rd and the conversion of the 3rd argument to the type of the 2nd are valid conversions, which is ambiguous according to the standard.

struct A
{
	A(int);
	operator int() const;
};

A a(42);
auto x = cond ? 7 : a; // A: old permissive behavior prefers A(7) over (int)a.
                       // The non-permissive behavior issues:
                       //     error C2445: result type of conditional expression is ambiguous: types 'int' and 'A' can be converted to multiple common types
                       //     note: could be 'int'
                       //     note: or       'A'

To fix the code, simply cast one of the arguments explicitly to the type of the other.
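
Continuing the example above, either explicit cast resolves the ambiguity, depending on which result type is intended ('cond' is assumed to be a bool):

auto x = cond ? 7 : static_cast<int>(a);  // result type is int
auto y = cond ? A(7) : a;                 // result type is A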

There is one important exception to this common pattern: when T represents one of the null-terminated string types (e.g. const char*, const char16_t*, etc., though you can also reproduce this with array types and the pointer types they decay to) and the actual argument to ?: is a string literal of the corresponding type. C++17 has changed the wording, which led to a change in semantics from C++14 (see CWG defect 1805). As a result, the code in the following example is accepted under /std:c++14 and rejected under /std:c++17:

struct MyString
{
	MyString(const char* s = "") noexcept; // from const char*
	operator const char*() const noexcept; //   to const char*
};
MyString s;
auto x = cond ? "A" : s; // MyString: permissive behavior prefers MyString("A") over (const char*)s

The fix is again to cast one of the arguments explicitly.
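
For instance, continuing the MyString example (again with 'cond' assumed to be a bool):

auto p = cond ? "A" : static_cast<const char*>(s);  // result type is const char*
auto q = cond ? MyString("A") : s;                  // result type is MyString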

In the original example that triggered our conditional-operator conformance work, we emitted an error where the user was not expecting one, without explaining why:

auto p1 = [](int a, int b) { return a > b; };
auto p2 = [](int a, int b) { return a > b; };
auto p3 = x ? p1 : p2; // This line used to emit an obscure error:
error C2446: ':': no conversion from 'foo::<lambda_f6cd18702c42f6cd636bfee362b37033>' to 'foo::<lambda_717fca3fc65510deea10bc47e2b06be4>'
note: No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called

With /Zc:ternary the reason for failure becomes clear even though some people might still not like that we chose not to give preference to any particular (implementation-defined) calling convention on architectures where we support multiple:

error C2593: 'operator ?' is ambiguous
note: could be 'built-in C++ operator?(bool (__cdecl *)(int,int), bool (__cdecl *)(int,int))'
note: or       'built-in C++ operator?(bool (__stdcall *)(int,int), bool (__stdcall *)(int,int))'
note: or       'built-in C++ operator?(bool (__fastcall *)(int,int), bool (__fastcall *)(int,int))'
note: or       'built-in C++ operator?(bool (__vectorcall *)(int,int), bool (__vectorcall *)(int,int))'
note: while trying to match the argument list '(foo::<lambda_717fca3fc65510deea10bc47e2b06be4>, foo::<lambda_f6cd18702c42f6cd636bfee362b37033>)'

Another scenario where one would encounter errors under /Zc:ternary is conditional operators with only one of the arguments being of type void (while the other is not a throw expression). A common use of these, in our experience of fixing the source code this change broke, was in ASSERT-like macros:

void myassert(const char* text, const char* file, int line);
#define ASSERT(ex) (void)((ex) ? 0 : myassert(#ex, __FILE__, __LINE__))

error C3447: third operand to the conditional operator ?: is of type 'void', but the second operand is neither a throw-expression nor of type 'void'

The typical solution is to simply replace the non-void argument with void().
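
Applied to the macro above, the fix might look like this (a sketch only):

#define ASSERT(ex) (void)((ex) ? void() : myassert(#ex, __FILE__, __LINE__))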

A bigger source of problems related to /Zc:ternary might be coming from the use of the conditional operator in template meta-programming as some of the result types would change under this switch. The following example demonstrates change of conditional expression's result type in a non-meta-programming context:

      char  a = 'A';
const char  b = 'B';
decltype(auto) x = cond ? a : b; // char without, const char& with /Zc:ternary
const char(&z)[2] = argc > 3 ? "A" : "B"; // const char* without /Zc:ternary

The typical resolution in such cases would be to apply a std::remove_reference trait on top of the result type where needed in order to preserve the old behavior.
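
As a minimal sketch of that resolution, applied to the non-meta-programming example above:

#include <type_traits>

char  a = 'A';
const char  b = 'B';
bool cond = true;

// Strip the reference introduced under /Zc:ternary so x is a plain value
// (const char here) rather than a const char&, as before the switch.
std::remove_reference_t<decltype(cond ? a : b)> x = cond ? a : b;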

In Closing

You can try these improvements today by downloading Visual Studio 2017 15.3.0 Preview. As always, we welcome your feedback – it helps us prioritize our work and helps the rest of the community resolve similar issues. Feel free to send any comments through e-mail at visualcpp@microsoft.com, Twitter @visualc, or Facebook at Microsoft Visual Cpp. If you haven't done so yet, please also check our previous post in the series documenting our progress on improving compiler diagnostics.

If you encounter other problems with MSVC in VS 2017 please let us know via the Report a Problem option, either from the installer or the Visual Studio IDE itself. For suggestions, let us know through UserVoice.

Thank you!
Yuriy

Application Insights Planned Maintenance - 07/13 - Final Update

Final Update: Friday, 21 July 2017 23:52 UTC

Maintenance has been completed on the infrastructure for the Application Insights Availability Web Test feature.
The necessary updates were installed successfully on all nodes supporting the Availability Web Test feature.

-Deepesh

Planned Maintenance: 17:00 UTC, 18 July 2017 – 00:00 UTC, 22 July 2017

The Application Insights team will be performing planned maintenance on Availability Web Test feature. During the maintenance window we will be installing necessary updates on underlying infrastructure.

During this timeframe some customers may experience very short availability data gaps in one test location at a time. We will make every effort to limit the amount of impact to customer availability tests, but customers should ensure their availability tests are running from at least three locations to ensure redundant coverage through maintenance. Please refer to the following article on how to configure availability web tests: https://azure.microsoft.com/en-us/documentation/articles/app-insights-monitor-web-app-availability/

We apologize for any inconvenience.


-Deepesh

Top stories from the VSTS community - 2017.07.21


Here are the top stories we found in our streams this week related to DevOps, VSTS, TFS and other interesting topics.

TOP STORIES

VIDEOS

  • VS Team Services - Test Case Explorer v2 - ALM Rangers
    Anthony Borton provides an overview of the Test Case Explorer v2 extension, and walks through how you can use it. Created by Mathias Olausson and Mattias Skold, the extension enables users to manage their test cases and clone test plans and suites.
  • Advanced MSBuild Extensibility with Nate McMaster - Nate McMaster
    Nate McMaster shows how to write your own MSBuild tasks in C#. Since you're developing your tasks in C#, you can use standard .NET libraries, debug your code, etc.
  • Sharing MSBuild Tasks as NuGet Packages with Nate McMaster - Nate McMaster
    Nate McMaster shows how to publish an MSBuild task as a NuGet package, allowing you to share it and reuse it in your own projects.

TIP: If you want to get your VSTS news in audio form then be sure to subscribe to RadioTFS.

FEEDBACK

What do you think? How could we do this series better?
Here are some ways to connect with us:

  • Add a comment below
  • Use the #VSTS hashtag if you have articles you would like to see included

Installation of SSDT for BI on SQL 2014


A few days back, I was working with one of our partners who had a requirement of installing SQL Server Data Tools (SSDT) for Business Intelligence on their SQL Server 2014 instance. In the same scenario on SQL Server 2012, installing SSDT is a straightforward procedure: you simply select it on the feature selection page.

Rendering a large report in SharePoint mode fails with maximum message size quota exceeded error message


Recently, I was working on a scenario where Reporting Services 2012 was configured in SharePoint 2013 integrated mode. We were exporting a report that was ~240 MB in size. When we did that, the report failed with the following exception:

07/19/2017 13:37:57.46  w3wp.exe (0x2C80)                        0x0834 SQL Server Reporting Services  Service Application Proxy      00000 Monitorable Notified the load balancer and raising RecoverableException for exception: System.ServiceModel.CommunicationException: The maximum message size quota for incoming messages (115343360) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element. ---> System.ServiceModel.QuotaExceededException: The maximum message size quota for incoming messages (115343360) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element.     --- End of inner exception stack trace ---    Server stack trace:      at System.ServiceModel.Channels.MaxMessageSizeStream.PrepareRead(Int32 bytesToRead)     at System.ServiceModel.Channels.MaxMessageSizeStream.Read(Byte[] buffer, Int32 offset, Int32 count)    ... 00c1069e-df14-5005-0000-0ec04d857de3

To resolve the issue, there are a few config files in which you need to modify the value defined as 115343360 (~110 MB). This is a two-step process.

1. First, you need to go to the SharePoint app servers that host the Reporting Services service application. These can be identified as follows:

SharePoint central administration -> System Settings -> Manage Servers in Farm and make a note of the servers that are hosting "SQL Server Reporting Services Service"

2. In each of the machines that are identified in #1, go to the web.config file located under:

"C:Program FilesCommon Filesmicrosoft sharedWeb Server Extensions15WebServicesReporting"

3. Take a backup of the file.

4. Open it in a text editor and locate the <bindings> section.

5. Replace all the 115343360 values with 2147483647 (~2 GB).

6. Save and close the file. Here is how the modified section should look; adjust any values in this section that differ from what is shown below.

<customBinding>
<binding name="http" sendTimeout="01:00:00" receiveTimeout="01:00:00">
<security authenticationMode="IssuedTokenOverTransport" allowInsecureTransport="true" />
<binaryMessageEncoding>
<readerQuotas maxStringContentLength="2147483647" maxArrayLength="2147483647" />
</binaryMessageEncoding>
<httpTransport maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" useDefaultWebProxy="false" transferMode="Streamed" authenticationScheme="Anonymous" />
</binding>
<binding name="https" sendTimeout="01:00:00" receiveTimeout="01:00:00">
<security authenticationMode="IssuedTokenOverTransport" />
<binaryMessageEncoding>
<readerQuotas maxStringContentLength="2147483647" maxArrayLength="2147483647" />
</binaryMessageEncoding>
<httpsTransport maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" useDefaultWebProxy="false" transferMode="Streamed" authenticationScheme="Anonymous" />
</binding>
</customBinding>

7. Perform an IISRESET.

8. Remember, steps #2 through #7 have to be done on all the machines identified in step #1.

9. Secondly, you need to go to each SharePoint WFE server, including the app servers that host Reporting Services (identified in step #1), and locate the client.config file at the following location:

"C:Program FilesCommon Filesmicrosoft sharedWeb Server Extensions15WebClientsReporting"

10. Take a backup of the file.

11. Open it in a text editor and locate the <bindings> section.

12. Replace all the 115343360 values with 2147483647 (~2 GB).

13. Save and close the file. Here is how the modified section should look; adjust any values in this section that differ from what is shown below.

<customBinding>
<!-- These are the HTTP and HTTPS bindings used by all endpoints except the streaming endpoints -->
<binding name="http" sendTimeout="01:00:00" receiveTimeout="01:00:00">
<security authenticationMode="IssuedTokenOverTransport" allowInsecureTransport="true" />
<binaryMessageEncoding>
<readerQuotas maxStringContentLength="2147483647" maxArrayLength="2147483647" />
</binaryMessageEncoding>
<httpTransport maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" useDefaultWebProxy="false" transferMode="Streamed" authenticationScheme="Anonymous" />
</binding>
<binding name="https" sendTimeout="01:00:00" receiveTimeout="01:00:00">
<security authenticationMode="IssuedTokenOverTransport" />
<binaryMessageEncoding>
<readerQuotas maxStringContentLength="2147483647" maxArrayLength="2147483647" />
</binaryMessageEncoding>
<httpsTransport maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" useDefaultWebProxy="false" transferMode="Streamed" authenticationScheme="Anonymous" />
</binding>
<!--
These are the HTTP and HTTPS bindings used ONLY by the streaming endpoints.

Details:
1) The only difference between these bindings and the ones above is that these include long
running operations causing the security timestamp in the header to become stale.
In order to avoid staleness errors, the maxClockSkew is set to 1 hour.
2) Any changes made to the above bindings should probably be reflected below too.
-->
<binding name="httpStreaming" sendTimeout="01:00:00" receiveTimeout="01:00:00">
<security authenticationMode="IssuedTokenOverTransport" allowInsecureTransport="true">
<localClientSettings maxClockSkew="01:00:00" />
</security>
<binaryMessageEncoding>
<readerQuotas maxStringContentLength="2147483647" maxArrayLength="2147483647" />
</binaryMessageEncoding>
<httpTransport maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" useDefaultWebProxy="false" transferMode="Streamed" authenticationScheme="Anonymous" />
</binding>
<binding name="httpsStreaming" sendTimeout="01:00:00" receiveTimeout="01:00:00">
<security authenticationMode="IssuedTokenOverTransport">
<localClientSettings maxClockSkew="01:00:00" />
</security>
<binaryMessageEncoding>
<readerQuotas maxStringContentLength="2147483647" maxArrayLength="2147483647" />
</binaryMessageEncoding>
<httpsTransport maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" useDefaultWebProxy="false" transferMode="Streamed" authenticationScheme="Anonymous" />
</binding>
</customBinding>

14. Perform an IISRESET.

15. Remember, steps #9 through #14 have to be done on all the WFE machines as well as the app servers that host Reporting Services (identified in step #1).
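
If you have many servers to touch, a small PowerShell sketch like the following could perform the backup and replacement on one machine. It is only an illustration of the manual steps above, not an official script; the paths assume the SharePoint 2013 ("15") hive and should be adjusted per the version note at the end of this post.

$files = @(
  "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\WebServices\Reporting\web.config",
  "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\WebClients\Reporting\client.config"
)
foreach ($f in $files) {
    Copy-Item $f "$f.bak"                                  # back up the file first
    (Get-Content $f) -replace '115343360', '2147483647' |  # raise the ~110 MB quota to ~2 GB
        Set-Content $f
}
iisreset                                                   # recycle IIS afterwards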

Now the large report that you are trying to export should come up fine as expected, whether from within SharePoint or from any custom application.

This article applies to all Reporting Services versions from SSRS 2012 through SSRS 2016 and to SharePoint 2010 through SharePoint 2016. The path referenced in steps #2 and #9 varies between 14 / 15 / 16 according to the SharePoint versions 2010 / 2013 / 2016 respectively.

Hope this helps!

Selva.

[All posts are AS-IS with no warranty and support]

Troubleshooting SQL Server Upgrade Issues


Recently, one of my partners was facing issues upgrading SQL Server 2008 from Service Pack 2 to Service Pack 3.

On checking the summary.txt in the setup bootstrap logs, I found the following error information:

------------------

Final result: The patch installer has failed to update the shared features. To determine the reason for failure, review the log files.
Exit code (Decimal): -2068578304
Exit facility code: 1204
Exit error code: 0
Exit message: The INSTALLSHAREDWOWDIR command line value is not valid. Please ensure the specified path is valid and different than the INSTALLSHAREDDIR path.

------------------

And in the detail.txt log I could see the following error information:

Slp: Validation for setting 'InstallSharedWowDir' failed. Error message: The INSTALLSHAREDWOWDIR command line value is not valid. Please ensure the specified path is valid and different than the INSTALLSHAREDDIR path.
Slp: Error: Action "Microsoft.SqlServer.Configuration.SetupExtension.ValidateFeatureSettingsAction" threw an exception during execution.
Slp: Microsoft.SqlServer.Setup.Chainer.Workflow.ActionExecutionException: The INSTALLSHAREDWOWDIR command line value is not valid. Please ensure the specified path is valid and different than the INSTALLSHAREDDIR path.

------------------

Looking at it, I could narrow the issue down to two possible suspects: a permissions issue or registry key corruption.

++ Checking on permissions, the user was the local administrator and had all the required permissions.

++ So I took a Process Monitor trace during the next launch of the upgrade to figure out at which point, and on which registry key, the process was failing.

On analyzing the trace, I could see that setup.exe was checking the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server\100\VerSpecificRootDir, and could figure out that the WOW32 execution folder of SQL Server 2008 R2 was not set correctly.

I checked the value of the string 'B1D55012528AA294F86D6C035CEAC33B' at the registry key path HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Components\C90BFAC020D87EA46811C836AD3C507F, and it was found to be "C:\Program Files (x86)\Microsoft SQL Server".

As a precaution, I backed up the registry keys.

I then modified the value of the registry key 'VerSpecificRootDir' under HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server\100 from "C:\Program Files (x86)\Microsoft SQL Server\100" to "C:\Program Files (x86)\Microsoft SQL Server".
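
For reference, the same change can be made from an elevated PowerShell prompt; this is just an illustration of the manual edit described above (back up the key first):

reg export "HKLM\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server\100" "$env:TEMP\SQL100_backup.reg" /y
Set-ItemProperty -Path "HKLM:\SOFTWARE\Wow6432Node\Microsoft\Microsoft SQL Server\100" -Name "VerSpecificRootDir" -Value "C:\Program Files (x86)\Microsoft SQL Server"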

After rebooting the server once, the upgrade went through seamlessly, and SQL Server was upgraded from Service Pack 2 to Service Pack 3.

Hope this helps.. Happy troubleshooting!!

Using Azure Functions, Cosmos DB and Powerapps to build, deploy and consume Serverless Apps


Azure Functions can be used to quickly build applications as microservices, complete with turnkey integration with other Azure services like Cosmos DB, Queues, and Blobs through the use of input and output bindings. Built-in tools can be used to generate Swagger definitions for these services, publish them, and consume them in client-side applications running across device platforms.

In this article, an Azure Function App comprising two Functions that perform CRUD operations on data residing in Azure Cosmos DB will be created. The Function App is exposed as a REST-callable endpoint that is consumed by a Microsoft PowerApps application. This use case does not require an IDE for development; it can be built entirely from the Azure Portal and the browser.

[The PowerApps app file, C# script files, and YAML file for the Open API spec created for this article can be downloaded from this GitHub location here]

  1. Creation of a DocumentDB database in Azure Cosmos DB

Use the Azure portal to create a DocumentDB database. For the use case described in this article, a collection (expensescol) is created that stores project expense details, comprising the attributes shown below.

2. Creation of a Function App that implements the Business Logic in Service

Two Functions are created in this Function App using C# Scripts.

  • GetAllProjectExpenses that returns all the Project Expenses Data from the collection in Cosmos DB
  • CreateProjectExpense that creates a Project Expense Record in Cosmos DB

a) Function GetAllProjectExpenses ->

The Input and output Binding configured for this Function:

Apart from the HTTPTrigger input binding for the incoming request, an additional input binding for Cosmos DB is configured that retrieves all the expense records from the database. Due to this binding, all the expense records are available to the Run method through the 'documents' input parameter – see the screenshot of the C# script used in this Function, below.

[Note: The scripts provided here are only meant to illustrate the point, and do not handle best practices, Exceptions, etc]

Refer to the Azure Documentation for detailed guidance on configuring Bindings in Azure Functions, for HTTPTriggers and Azure CosmosDB
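
Since the screenshot of the script is not reproduced here, a minimal run.csx along the lines described might look like the following sketch; the parameter names ('req', 'documents') are assumptions and must match the binding names configured in function.json:

using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;

// HTTP trigger plus a Cosmos DB (DocumentDB) input binding named 'documents'.
public static HttpResponseMessage Run(HttpRequestMessage req, IEnumerable<dynamic> documents, TraceWriter log)
{
    log.Info($"Returning {documents.Count()} project expense records");
    return req.CreateResponse(HttpStatusCode.OK, documents);
}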

b) Function CreateProjectExpense ->

The binding configuration used for this Function is:

Notice that there are 2 output bindings here, one for the HttpResponse and the other is the binding to Cosmos DB to insert the expense record into it.

[Note: When the Run method in a Function is invoked asynchronously, we cannot use an 'out' parameter for the Cosmos DB binding together with an 'out' parameter for the HttpResponse. In such cases, we need to add the document meant for insertion to an IAsyncCollector object reference, 'collector' in this case. Note that the parameter 'collector' is used in the output binding to Cosmos DB, shown above. Refer to the documentation here for more info on scenarios with multiple output parameters]
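
A corresponding run.csx sketch for this Function, again with assumed parameter names that must match the bindings ('req' for the HTTP trigger, 'collector' for the Cosmos DB output), might look like:

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, IAsyncCollector<object> collector, TraceWriter log)
{
    dynamic expense = await req.Content.ReadAsAsync<object>();  // read the expense record from the request body
    await collector.AddAsync(expense);                          // queue the document for insertion into Cosmos DB
    return req.CreateResponse(HttpStatusCode.Created, expense);
}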

3. Test the Functions created 

Use Postman to ensure both the Functions work without errors. The HttpTrigger Url can be obtained from the C# Script Editor View of the Function

4. Generating an OpenAPI (Swagger) Definition for the Function App

A Function App can contain different Functions, each of which could potentially be written in a different programming language. All of these Functions, or individual 'microservices', can be exposed through a single base endpoint that represents the Function App. From the Application Settings, navigate to the 'API Endpoint' tab.

Click the 'Generate API definition template' button to generate a base Swagger definition. However, it lacks the elements required to fully describe the Functions. The definition, described in YAML format, has to be edited manually in the editor pane. The YAML file created for this Function is available along with the other artefacts in this blog post.

Refer to this, this, and this link for guidance on working with YAML to create the Swagger definitions, or on other options for creating them.

[Note: The samples considered in the links above use simple primitive types as parameters in the Method calls. The scenario in this article however deals with Collections, and needs more work to get the Yaml right. Refer to the artefacts download link in this article to view the Yaml that was created for the scenario in this blog post]

[Note: For simplicity in this article, I have considered the option provided by Functions to add the API key in the Request URL, under the key 'code'.  For more secure ways to deal with it, use Azure AD integration or other options]

After the Yaml is created and the definition is complete, test the requests from the Test console on the Web Page, and ensure that the Functions work without errors. Once tested, click on the button 'Export to Power Apps and Flow' to export the Swagger definition and create a Custom connector in the latter.

5. Create a new custom Connection in powerapps.microsoft.com from the connector registered in the previous step. Embed the Security code for the Function App. This gets stored with the connection and automatically included in the request by Powerapps to the REST Services deployed on Azure Functions.

6. Create a new Powerapps App that would consume the REST Services exposed by Azure Functions in the earlier steps

While you could start with a blank Template, it involves some work to create the different Forms required in the App for 'Display All', 'Edit' and 'Browse All' use cases. Powerapps supports the ability to automatically generate all these Forms and provide a complete App, when selecting a Data Source like OneDrive, SharePoint Office 365 Lists, and many others. Since the 'ProjExpensesAPI' Connector we have created is a custom one, this Wizard is not available to create the App automatically.

To work around this, I have created a Custom List in Office 365, that has the same fields as in the Expense data returned by the Function App. I used the wizard to generate a complete App based on the Custom List in Office 365, and then changed all the Data Source references from it to the 'ProjExpensesAPI' Connection.

 

Note in the screenshot above how the logged-in user context can be passed through 'Excel-like' functions to the search box. The data is filtered after it is received from the REST APIs. Notice how our custom API is invoked below, and how the data returned is filtered using the expression shown.

Screenshots of the app with each of the forms are shown below. This app can be run on Windows, Android, or iOS mobile devices.

Test the App to ensure that all the REST API operations like GetAllExpenses and CreateProjectExpense requests work from the App. It can then be published by the user and shared with others in the Organization.

The PowerApps app file is also provided along with the other artefacts in this article.

 


Microsoft Azure - Artificial Intelligence Data Science Stack


Microsoft now has an amazing offering for building and hosting your AI/Data Science solution.

AI Stack Overview

Infrastructure

Azure Batch https://azure.microsoft.com/en-us/services/batch/

Docker Images for AI/Data Science  https://github.com/Azure/batch-shipyard/tree/master/recipes

Data Platforms

SQL Database/SQL Server https://docs.microsoft.com/en-us/sql/sql-server/what-s-new-in-sql-server-2017

Azure Datalake https://azure.microsoft.com/en-us/solutions/data-lake/

Azure Analysis Services https://azure.microsoft.com/en-us/services/analysis-services/

Azure Cosmos DB https://azure.microsoft.com/en-us/services/cosmos-db/

Hardware

FPGA https://azure.microsoft.com/en-gb/resources/videos/build-2017-inside-the-microsoft-fpga-based-configurable-cloud/

GPU http://gpu.azure.com/

Processing

Process data in Azure DataScience VM https://docs.microsoft.com/en-us/azure/machine-learning/machine-learning-data-science-virtual-machine-overview#whats-included-in-the-data-science-vm

Azure Batch AI Training https://batchaitraining.azure.com/

Azure ML Experimentation & Management https://docs.microsoft.com/en-us/azure/machine-learning/

Azure Jupyter Notebooks http://notebooks.azure.com

Frameworks

All of these come as standard on the Azure Data Science VM, which is available on Windows, Ubuntu, or CentOS (see https://docs.microsoft.com/en-us/azure/machine-learning/machine-learning-data-science-virtual-machine-overview), or they can be run in containers using the Azure Batch Shipyard images https://github.com/Azure/batch-shipyard/tree/master/recipes

CNTK https://docs.microsoft.com/en-us/cognitive-toolkit/cntk-on-azure
Tensorflow
R Server
Torch
Theano
Scikit
Keras
Nvidia Digits
CUDA, CUDNN
Spark
Hadoop
 

Services

Machine Learning and Toolkits https://docs.microsoft.com/en-us/azure/machine-learning/

Cognitive Services https://azure.microsoft.com/en-gb/services/cognitive-services/

Bot Framework https://dev.botframework.com

REST APIs intelligence in the cloud http://aka.ms/cognitive

 

Resources

Microsoft AI News

Azure DataScience VM

Container images https://github.com/Azure/batch-shipyard/tree/master/recipes

Announcing AI for Earth: Microsoft’s new program to put AI to work for the future of our planet

Microsoft Build 2017: Microsoft AI – Amplify human ingenuity

Microsoft AI products and services

Project Prague: What is it and why should you care?


Guest blog from Charlie Crisp, Microsoft Student Partner at the University of Cambridge


Charlie has been a Microsoft Student Partner at the University of Cambridge

This year at Build, Microsoft announced a ton of cool new stuff, including the new Cognitive Services Labs – a service which allows developers to get their hands on the newest and most experimental tools which are being developed by Microsoft.

Particularly exciting was the announcement of Project Prague, which aims to empower developers to make use of advanced gesture recognition within their applications without needing to write a single line of code.


And why should you care? Well aside from being ridiculously cool, this is the sort of stuff that even your non-techie friends will want to hear about. So let me set the scene…

The use of keyboards dates back to the 1870s, when they were used to type and transmit stock market text data across telegraph lines, which was then immediately printed onto ticker tape. Mice, on the other hand, took a lot longer to come about, and it wasn't until 1946 that the world was first introduced to the 'trackball' – a pointing device used as an input for a radar system developed by the British Royal Navy.

Ever since, computers have been used primarily with a keyboard and mouse (or trackpad), and advances in technologies such as intelligent pens and gesture control have done little to change this. It is a fact, however, that navigating through different right-click menus and keyboard shortcuts can be very cumbersome and time-consuming.

Gesture control can provide a great alternative way of interacting with a computer in a natural and intuitive way. Whether it’s moving and rotating pictures, navigating through tabs, or inserting emojis, Project Prague allows developers to recognise and react to any gesture which a user can make with their hands.

But the coolest part of this is not how easy this technology is for users, but how easy it is for developers.

Gestures are defined as a series of different hand positions, and this can be done either in code or by using Microsoft’s visual interface. Then it is as simple as adding an event listener which will be triggered whenever that gesture is recognized. Microsoft will even automatically generate visual graphics which will show the user what gestures are available to use in any given program, and what the effects of these gestures are.

If you are even half as excited as I am about this, then I would urge you to check out aka.ms/gestures, which has more information and a series of awesome demos that are well worth a watch. You can even sign up to test out the technology for yourself thanks to the wonders of Cognitive Services Labs! At the very least, it's a great way of really freaking out your grandparents!

If you have found this interesting and want to learn more, then I strongly suggest that you check out https://labs.cognitive.microsoft.com/en-us/project-prague which has documentation, code samples and the SDK.

CIMOL Goes to Seattle: The Journey to Seattle


As planned, team CIMOL traveled today from Jakarta to Seattle. Because one of the team members (Tifani) had to attend her graduation ceremony, our group was split in two: the first group, consisting of Adi, Fery, and Bu Ayu (our mentor), departed on 22 July, while Tifani departs on 23 July.

Today, Adi, Fery, and Bu Ayu flew with ANA at 06:15 in the morning from Jakarta to Tokyo Narita. Check-in and immigration went smoothly.


After a 7.5-hour flight to Tokyo Narita, team CIMOL landed at 15:50 local time and had a two-hour layover there.


After that, team CIMOL flew onward with ANA at 18:05 local time.


After a nine-hour flight, team CIMOL landed safely in Seattle at 11:25 local time on 22 July 2017.

After clearing immigration and customs, the team headed straight to Alder Hall on the University of Washington campus and checked in there. The rest of the day was spent having lunch together, touring the city of Seattle, and having dinner together. We deliberately did not rest right away, so as to avoid jet lag and adjust to the local time immediately.

Before we went to rest, we received word that Tifani had boarded her flight from Jakarta to Tokyo Narita.


We hope Tifani's journey goes as smoothly as ours did.

Tomorrow is registration day for all Imagine Cup 2017 World Finals participants. In the late afternoon there will also be a briefing for participants, dinner, and a group photo. Please keep us in your thoughts so that everything goes smoothly!

Reset lost admin account password


Symptom:

If you have lost your admin account password, or you need to change it for any other reason, follow this article to reset your admin account password.

Resolution:

Option 1: Using Azure Portal

  1. Using the Azure Portal, open your Azure SQL Server blade.
  2. Make sure you are in the Overview blade.
  3. Click on "Reset password" at the top of the overview blade.
  4. Set the new password and click save.

Figure 1 – reset password using Azure Portal.

Option 2 – Using Azure CLI

  1. Open Azure CLI 2.0 – choose the right option for you
    1. On your workstation (Installation instructions here)
    2. On Azure Portal click the CLI button

      Figure 2 – Azure CLI using the Portal

  2. Run the following command, change the names to match your environment.

    az sql server update --resource-group <ResourceGroupName> --name <Servername> --admin-password <NewAdminAccountPassword>

Figure 3 – Output of the CLI on Azure Portal – Blurred

Option 3 – PowerShell

  1. Make sure you have AzureRM PowerShell module installed (installation instructions here)
  2. Run the following PowerShell cmdlets

    Login-AzureRmAccount

    Set-AzureRmSqlServer -ResourceGroupName <ResourceGroupName> -ServerName <ServerName> -SqlAdministratorPassword (ConvertTo-SecureString "<NewAdminAccountPassword>" -AsPlainText -Force)

Figure 4 – PowerShell output – Blurred

Option 4 – Using T-SQL

This is the least common option: if you can already connect to SQL, you have the password for another admin account.

  1. Using any client (SSMS / sqlcmd / PowerShell invoke-SQLcmd cmdlet or any other client application)
  2. Connect to the master database
  3. Run the following T-SQL command

ALTER LOGIN <AdminAccountName> WITH Password='NewAdminAccountPassword';

IP Address Mapping in Power BI


Sam Lester Power BI Blog

I recently assisted in troubleshooting an issue where the error logs contained several unknown IP addresses. During this process, I created a quick dashboard in Power BI to display the location of these IP addresses on a map to get a better understanding of where the machines were located. I used a free service from IPInfoDB, which requires registration to obtain your API key, but is very straightforward and worked very well for this project.

The basis of this solution is calling the free web service, which returns JSON, and then parsing the response with M code to obtain the individual fields (country, latitude, longitude, etc.). Doing this manually through Get Data -> Web and passing the full URL, we see an example of the data returned by the lookup service.

IP Address Lookup

Assuming that your Power BI report contains a column called “IP Address”, the following steps will allow you to create the map of IP address locations.

1. Create a new column that contains the full URL used to lookup each IP address.

= Table.AddColumn(#"Changed Type","FullIPURLCity", each "http://api.ipinfodb.com/v3/ip-city/?key=[URL_Key]&ip="&[IP Address]&"&format=json")

2. Replace the string [URL_Key] with the key obtained during registration (link above).


3. Create the lookup function in M (create a blank query, open Advanced Editor, paste the following code, and rename the function as GetAllFromIP).

let
Source = (FullURL) =>
let
Source = Json.Document(Web.Contents(FullURL)),
#"Converted to Table" = Record.ToTable(Source),
#"Transposed Table" = Table.Transpose(#"Converted to Table"),
#"Promoted Headers" = Table.PromoteHeaders(#"Transposed Table", [PromoteAllScalars=true])
in
#"Promoted Headers"
in
Source
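
Before the final step, the function also needs to be invoked against each row. Assuming the column created in step 1, an added custom column along these lines does it (the previous step name, #"Added FullIPURLCity", will differ in your query):

= Table.AddColumn(#"Added FullIPURLCity", "IPDetails", each GetAllFromIP([FullIPURLCity]))

Expand the new IPDetails column to surface the country, latitude, and longitude fields before continuing.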

4. Click "Close & Apply" to run the lookup function for each of the IP addresses in your report.

IP Address Mapping in Power BI

The sample .pbix file can be downloaded here.

Thanks,
Sam Lester (MSFT)

Using Eclipse and Java to build and host a web app on Azure


Guest blog by David Farkas Microsoft Student Partner at the University of Cambridge


About Me

I started immersing myself in the creative side of technology just a few years ago. I'll be returning to Cambridge in the fall to further my education in Computer Science.

My LinkedIn: https://www.linkedin.com/in/david-farkas-3b00a8a1/

My git: https://github.com/Veraghin

Introduction

Nowadays, the Cloud is increasingly part of the life of all programmers, as most applications are dealing with or running on the internet. Microsoft Azure is one of these Cloud platforms. I’ve spent my first year in Cambridge mostly programming in Java, thus it was a given that I would try and see how Java and Azure could be used together.

The starting point is https://azure.microsoft.com/en-us/develop/java/ , which has a collection of links to detailed tutorials about using Java on Azure. After a bit of research, I decided to build on the guide about creating a basic Azure Web app, using Eclipse.

This tutorial, which can be found at https://docs.microsoft.com/en-us/azure/app-service-web/app-service-web-eclipse-create-hello-world-web-app, gave the backbone to my project.

In this post, I’ll be going over the basics of how to convert an existing Java desktop application to an Applet and my experience on hosting it on Azure.

Project setup

 

I'm assuming you have Eclipse set up already on your computer. You will need the Azure Toolkit for Eclipse installed, which can be found in the Eclipse Marketplace. With it, you can deploy and manage applications running on Azure from the Eclipse IDE itself.


You'll also need to make sure you have the Eclipse Java EE Developer Tools installed, also from the Eclipse Marketplace; they help you create web applications in Eclipse.

My code can be downloaded from https://github.com/Veraghin/GameOfLifeApplet

To set things up, create a Dynamic Web Project in Eclipse, then copy the source code to the new Web Content folder.

The process of converting from a desktop application to an Applet

The original project was an implementation of Conway's Game of Life in Java. Converting an existing Java desktop application that uses Java Swing for the GUI is relatively straightforward: the top class, which extends JFrame, can be changed to extend JPanel, and after removing some calls from the constructor that are not needed for an applet, the JPanel can be set as the content pane of a new skeleton applet class.

In this case, my original top class was GUILife, extending the JFrame Swing class, which had to change to JPanel, as the applet will be embedded in a website and can't have its own window. Also, the constructor had to change from:

public GUILife(PatternStore ps) throws IOException {
    super("Game of Life");
    mStore=ps;
    setDefaultCloseOperation(EXIT_ON_CLOSE);
    setSize(1024,768);
    setLayout(new BorderLayout());
    add(createPatternsPanel(),BorderLayout.WEST);
    add(createControlPanel(),BorderLayout.SOUTH);
    add(createGamePanel(),BorderLayout.CENTER);
}

 



To just:

public GUILife(PatternStore ps) throws IOException {
    mStore=ps;
    setLayout(new BorderLayout());
    add(createPatternsPanel(),BorderLayout.WEST);
    add(createControlPanel(),BorderLayout.SOUTH);
    add(createGamePanel(),BorderLayout.CENTER);
}

The super("Game of Life") call, which would set the window title of the JFrame, is redundant; the HTML will take care of that, along with setting the applet size. The setDefaultCloseOperation call is also unnecessary, as closing is handled by the browser running the application.

The following code is the skeleton applet, into which the former desktop application is inserted as its content panel:

 

package gameOfLife;

import java.io.IOException;

import javax.swing.JApplet;

public class GameOfLifeApplet extends JApplet {
    public void init(){
        try{
            PatternStore starter = new PatternStore("patterns");
            GUILife gui = new GUILife(starter);
            gui.setOpaque(true);
            setContentPane(gui);
        }catch(IOException e){
            e.printStackTrace();
        }
    }
}

 



This could be run as an applet on its own, but I set out to embed it in a website. To achieve that, I’ll use the Dynamic Web project in Eclipse, following the tutorial linked at the top.

The Eclipse-provided Dynamic Web Project template automatically creates an index.jsp file, which is the home page of the website; this is where the applet is embedded:

<object type="application/x-java-applet"
        classid="clsid:8AD9C840-044E-11D1-B3E9-00805F499D93"
        width="1024" height="768">
        <param name="code" value="gameOfLife/GameOfLifeApplet.class">
        <param name="archive" value="GameOfLife.jar">
        <param name="permissions" value="sandbox" />
</object>


To set things up this far, create a Dynamic Web Project in Eclipse, then copy the source code from GitHub to the new Web Content folder.

Project Deployment

From here, the project can be deployed to Azure straight away. Right-clicking the project name and selecting Azure -> Publish as Azure Web App… takes you to the Azure login screen, and then you can create or select the App Service you want to deploy to. The whole process is straightforward and well documented, and it makes deploying straight to Azure an easy process.

When creating a new App service, the Azure portal provides more information, but you can customize all the important parts, such as location, pricing tier, Java and Web container versions straight from Eclipse.


The Final Implementation

This is an unsigned application, so getting it to run requires jumping through a few hoops, but it is all done for the sake of security. Starting with Java 7 Update 51, applets that are not signed by a trusted authority are not allowed to run in the browser, but this can be circumvented by adding the applet's URL to the exception list in the Java Control Panel, which can be done by following this guide to adding a URL to the exception list: https://java.com/en/download/faq/exception_sitelist.xml

More info on the security impact of applets:

https://java.com/en/download/help/jcp_security.xml

https://docs.oracle.com/javase/tutorial/deployment/applet/security.html

The website itself can be found under the following URL, it works under Internet Explorer:

https://webapp-170717105204-gameoflife.azurewebsites.net/

For testing purposes, to see the applet running you can call “appletviewer index.jsp” in the command line in a folder containing the source files. The appletviewer command is part of the Java SDK.

Some pictures of the final version:


Notes on using the XmlSerializer class from an STA thread in a .NET Framework console application


Hello, this is the Visual Studio support team.

In this post, we cover what to be aware of when COM components are used, directly or indirectly (for example via the XmlSerializer class), from a thread marked with the STA attribute in a .NET Framework console application.

 

Points to note

When a COM component that belongs to an STA is created, the COM guidelines require the owning STA thread to run a message pump regularly and process window messages. If no message pump is implemented, the COM component cannot be reached from outside that STA. As a result, especially in .NET Framework console applications, the finalizer thread can hang because it cannot communicate with the STA thread, which may lead to problems such as memory leaks.

The requirement that an STA thread implement a message pump is explained in the following document:

[OLE] INFO: Descriptions and Workings of OLE Threading Models
https://support.microsoft.com/ja-jp/help/150777/info-descriptions-and-workings-of-ole-threading-models

 

A concrete example

When you use certain overloads of the XmlSerializer constructor (*1), the class internally generates and caches an assembly. This processing uses COM internally, so the object that gets created belongs to an STA if it is created from an STA thread, or to the MTA if it is created from an MTA thread.

 

(*1) The behavior in which specific XmlSerializer constructor overloads generate assemblies is documented here:

XmlSerializer Class
https://msdn.microsoft.com/en-us/library/system.xml.serialization.xmlserializer(v=vs.110).aspx
----
Dynamically Generated Assemblies
To increase performance, the XML serialization infrastructure dynamically generates assemblies to serialize and deserialize specified types. The infrastructure finds and reuses those assemblies. This behavior occurs only when using the following constructors:

XmlSerializer.XmlSerializer(Type)
XmlSerializer.XmlSerializer(Type,String)

If you use any of the other constructors, multiple versions of the same assembly are generated and never unloaded, which results in a memory leak and poor performance. The easiest solution is to use one of the previously mentioned two constructors.
----

 

Now imagine a console application that runs for a fairly long time, whose Main method is marked with the [STAThread] attribute and which uses XmlSerializer.

When an XmlSerializer is created with one of the constructors mentioned above, a COM object belonging to the STA is created, as described earlier. The application keeps running; eventually a garbage collection (GC) occurs, and the finalizer thread, which is part of the GC machinery, runs the finalizers of finalizable objects.

Those finalizable objects include the COM object that XmlSerializer uses for its assembly cache (strictly speaking, its managed wrapper, the RCW object). Because this object was created in an STA, the COM infrastructure ensures it only runs on that STA thread, so it cannot be accessed directly from threads outside the STA, such as the finalizer thread. Access from outside the STA is controlled by the COM infrastructure and dispatched to the STA thread via window messages.

The STA thread therefore needs to be able to receive the requests that the finalizer thread posts to it as window messages. If no message pump is implemented, as is common in console applications, the request is never processed and the finalizer thread hangs, waiting indefinitely for a response.

As a result, finalization of finalizable objects stops making progress, which can lead to problems such as memory leaks.

 

Repro code

The behavior described above can be reproduced with sample code like the following:

[STAThread]
static void Main(string[] args)
{
    XmlSerializer serializer = new XmlSerializer(typeof(MyClass), "http://www.microsoft.com");
    serializer = null;

    GC.Collect();
    for (;;)
    {
        System.Threading.Thread.Sleep(100);
    }
}

Build and run the sample code, then use a debugger such as WinDbg to check the state of the finalizer thread.
From the list of managed threads, identify the finalizer thread. Thread 5 is the finalizer thread.

0:009> !sos.threads
ID OSID ThreadOBJ State GC Mode GC Alloc Context Domain Count Apt Exception
0 1 83c 0062f7b0 26020 Preemptive 0259CB5C:00000000 0062aa58 1 STA
5 2 2ef4 0063eb28 2b220 Preemptive 00000000:00000000 0062aa58 0 MTA (Finalizer) 

 

Check the state of the finalizer thread from its call stack.
In the output below, each line shows, from left to right, the frame number, base pointer, return address, and module!function name.
Functions are called from bottom to top.
During the RCW cleanup that starts at frame 15, a cross-apartment call is made at frame 03, and the thread is waiting for a response.
# The following is an example from .NET Framework 4.7; the functions used may differ depending on the .NET Framework version.

0:009> ~5k
# ChildEBP RetAddr
00 0463ee10 77362bf3 ntdll!NtWaitForMultipleObjects+0xc
01 0463efa4 770195bb KERNELBASE!WaitForMultipleObjectsEx+0x103
02 0463effc 76feec6d combase!MTAThreadWaitForCall+0xdb
03 (Inline) -------- combase!MTAThreadDispatchCrossApartmentCall+0xaf5
04 (Inline) -------- combase!CSyncClientCall::SwitchAptAndDispatchCall+0xbd4
05 0463f1a8 76fef80b combase!CSyncClientCall::SendReceive2+0xcbd
06 (Inline) -------- combase!SyncClientCallRetryContext::SendReceiveWithRetry+0x29
07 (Inline) -------- combase!CSyncClientCall::SendReceiveInRetryContext+0x29
08 0463f204 76feda65 combase!DefaultSendReceive+0x8b
09 0463f2fc 76f344b5 combase!CSyncClientCall::SendReceive+0x3a5
0a (Inline) -------- combase!CClientChannel::SendReceive+0x7c
0b 0463f328 777067e2 combase!NdrExtpProxySendReceive+0xd5
0c (Inline) -------- RPCRT4!NdrpProxySendReceive+0x21
0d 0463f570 76f35e20 RPCRT4!NdrClientCall2+0x4a2
0e 0463f590 7703120f combase!ObjectStublessClient+0x70
0f 0463f5a0 76f989e1 combase!ObjectStubless+0xf
10 0463f630 76f98a99 combase!CObjectContext::InternalContextCallback+0x1e1
11 0463f684 73bfeff6 combase!CObjectContext::ContextCallback+0x69
12 0463f784 73bff0ca clr!CtxEntry::EnterContext+0x252
13 0463f7bc 73bff10b clr!RCW::EnterContext+0x3a
14 0463f7e0 73bfeed3 clr!RCWCleanupList::ReleaseRCWListInCorrectCtx+0xbc
15 0463f83c 73bfd7f8 clr!RCWCleanupList::CleanupAllWrappers+0x14d
16 0463f88c 73bfdac8 clr!SyncBlockCache::CleanupSyncBlocks+0xd0
17 0463f89c 73bfd7e7 clr!Thread::DoExtraWorkForFinalizer+0x75
18 0463f8cc 73bd1e09 clr!FinalizerThread::FinalizerThreadWorker+0xba
19 0463f8e0 73bd1e73 clr!ManagedThreadBase_DispatchInner+0x71
1a 0463f984 73bd1f40 clr!ManagedThreadBase_DispatchMiddle+0x7e
1b 0463f9e0 73cba825 clr!ManagedThreadBase_DispatchOuter+0x5b
1c (Inline) -------- clr!ManagedThreadBase_NoADTransition+0x2a
1d 0463fa08 73cba8ef clr!ManagedThreadBase::FinalizerBase+0x33
1e 0463fa44 73be5dc1 clr!FinalizerThread::FinalizerThreadStart+0xd4
1f 0463fadc 75a68744 clr!Thread::intermediateThreadProc+0x55
20 0463faf0 779e582d KERNEL32!BaseThreadInitThunk+0x24
21 0463fb38 779e57fd ntdll!__RtlUserThreadStart+0x2f
22 0463fb48 00000000 ntdll!_RtlUserThreadStart+0x1b

 

How to address it

Implement a message pump on the STA thread so it can respond to requests from outside the apartment.
You can pump messages by periodically executing the following inside the thread's loop:


System.Threading.Thread.CurrentThread.Join(0);
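
Applied to the repro code above, the loop would look like this:

GC.Collect();
for (;;)
{
    // Pump pending messages so that cross-apartment calls (e.g. from the
    // finalizer thread) can be dispatched to this STA thread.
    System.Threading.Thread.CurrentThread.Join(0);
    System.Threading.Thread.Sleep(100);
}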


Alternatively, if you do not specifically need an STA, as in the sample code above, you can avoid the problem by omitting [STAThread] and using the default MTA thread instead.
Note that applications with a UI, such as Windows Forms applications, have a message pump built in by default, so they normally do not hit the problem described in this post.


Run SQL Server 2017 in Docker


Microsoft Japan Data Platform Tech Sales Team

阪本 真悟

 

Introduction

Starting with SQL Server 2017, the SQL Server database engine runs not only on Windows but also on Linux. On this blog we have already covered the underlying architecture, availability configurations on Linux (Always On availability groups), and the Linux support in SQL Server Integration Services, the ETL tool included with SQL Server.

SQL Server on Linux also supports Docker containers. A standard workload such as SQL Server gets the full benefit of the portability and flexibility that Docker containers provide. In this post we look at using SQL Server in a Docker container environment.

First, let's build a Docker container environment on Azure. The Azure Marketplace has a 'Docker on Ubuntu Server' image, so we will use it to stand up the Docker environment.

 

システム要件

まずシステム要件に合わせて VM のサイジングをして下さい。 Docker コンテナ環境のシステム要件は以下の通りです。このブログを書くために VM を作成したのですが、当初デフォルトの「Standard A1 (1core, 1.75GB メモリ)」サイズの VM を起動してしまい、SQL Server のステータスがUPしなくて困った状況に陥りました。システム要件の確認って大事ですね。

• Docker Engine: 1.8 or later
• Free disk space: at least 4 GB
• Memory: at least 4 GB
• Follow the system requirements for SQL Server on Linux

 

Because the last item says to follow the SQL Server on Linux system requirements, let's check those as well.

• Memory: 3.25 GB
• File system: XFS or EXT4
• Free disk space: 6 GB
• Processor speed: 2 GHz
• Processor cores: 2 cores
• Processor type: x64-compatible only

Preparation

The Marketplace offers VMs in many sizes; to meet the requirements above, I chose DS2_V2 this time.

 

Once the server has been provisioned, log in with an SSH client such as PuTTY and first pull the SQL Server container image from Docker Hub. The following Bash command is all it takes.

 

docker pull microsoft/mssql-server-linux

 

 

Note: This was not necessary in the environment created from the Azure Marketplace, but depending on your configuration you may need to run the commands with elevated privileges, for example via sudo.

 

Getting and running the Docker container image

Next, let's run a container from the SQL Server image we just pulled. Execute the following command.

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<YourStrong!Passw0rd>' -p 1433:1433
-d microsoft/mssql-server-linux

 

Here is what each parameter means.

 

-e 'ACCEPT_EULA=Y': Confirms your acceptance of the End-User Licensing Agreement required to use the SQL Server image. Review the EULA and answer Y.

-e 'SA_PASSWORD=<YourStrong!Passw0rd>': Replace <YourStrong!Passw0rd> with a password of your own that satisfies the password policy.

-p 1433:1433: Maps the host's TCP port (the first number) to the container's TCP port (the second number). Here the same TCP port number is used on the host and in the container.

microsoft/mssql-server-linux: Specifies the SQL Server container image. If no version tag is specified, the latest version is always used.

 

After starting the SQL Server Docker container with the command above, check its state with the following command.

docker ps -a

 

The output wraps, which makes it a little hard to read, but if STATUS shows "Up" as in the example, everything is fine. If STATUS shows "Exited", review the system requirements again, such as memory size and free disk space. In particular, make sure you are not running on a Standard A1 with 1.75 GB of memory. :)

 

Connecting to SQL Server on Docker

In SQL Server 2017 CTP 2.0, the SQL Server command-line tools are included in the container image, so let's use them. Connect to the Docker container with the following command.

docker exec -it 'Container ID' "bash"

 

For 'Container ID', specify the ID you confirmed with the docker ps command earlier.

 

Once connected to the container, you can work with the sqlcmd tool that is preinstalled in the container image. Note that sqlcmd is not on the PATH, so you need to specify the command with its full path, as shown below.

/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '<YourPassword>'

 

Let's issue a SELECT statement against SQL Server to list the databases.
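
The screenshot is not reproduced here, but a query such as the following, entered at the sqlcmd prompt, produces that list (sys.databases is the standard catalog view of all databases):

SELECT name FROM sys.databases;
GO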

We can confirm that the system databases are listed. To quit, type the exit command.

 

Instead of working at the command line, you can also connect to SQL Server in the Docker container with the familiar SQL Server Management Studio (SSMS). To do that, first add an endpoint in the Azure portal so that traffic to port 1433 is allowed: SSH is permitted by the initial settings, but SSMS traffic would otherwise be blocked by the firewall.

 

After adding the endpoint, start SSMS and enter connection settings like the following to connect to SQL Server in the Docker container.
Replace the server name and password according to your own environment.
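
For reference, the settings would look roughly like this (the values are placeholders, not the ones from the original screenshot):

Server type: Database Engine
Server name: <public IP address or DNS name of the VM>,1433
Authentication: SQL Server Authentication
Login: SA
Password: <YourStrong!Passw0rd>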

 

It is reassuring to be able to manage it from the familiar UI.

 

Running multiple SQL Server containers

With Docker containers, it becomes easy to run multiple SQL Server instances on the same server. Let's start several SQL Server containers with the following commands. In this example, two SQL Server Docker containers are started, bound to port numbers 1401 and 1402.

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<YourStrong!Passw0rd>' -p 1401:1433
 -d microsoft/mssql-server-linux
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<YourStrong!Passw0rd>' -p 1402:1433
 -d microsoft/mssql-server-linux

 

In no time at all, multiple SQL Server containers are up and running. To use one of them, connect to the individual environment by specifying its IP address and port number. Standing up and managing multiple environments tends to be hard work, but SQL Server on Docker containers makes database management across multiple environments in the enterprise space simpler and more flexible.
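
For example, to reach the container mapped to port 1401 with sqlcmd (the host name below is a placeholder), specify the server as host,port:

/opt/mssql-tools/bin/sqlcmd -S <host IP or DNS name>,1401 -U SA -P '<YourStrong!Passw0rd>'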

Persisting data

In a Docker container environment, data persistence becomes an issue. Data is not lost when a container is stopped and started (docker stop / docker start), but when you remove the container (docker rm), all of the container's data, including the databases, is deleted. Let's persist the databases by using a Docker data volume.

 

Using a data volume

If you start the container with a host-backed directory mounted as a data volume, the data in the volume is preserved even when an individual container is removed.

 

Start the Docker container with the following command.

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<YourStrong!Passw0rd>' -p 1433:1433 -v <Data Volume>:/var/opt/mssql
 -d microsoft/mssql-server-linux

The key part of the command above is the <Data Volume>: specification that follows -v.

If you do not specify <Data Volume>, a directory is created when the container starts and mounted as the data volume, but that volume is deleted together with the container when the container is removed.

If you specify <Data Volume> explicitly, the mounted directory is created on the host's file system, so the data survives even if the container is removed; and because it is isolated under /var/lib/docker, it does not interfere with the host either.
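
As a concrete example using the volume name mssql_data that appears in the next step, the run command becomes:

docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<YourStrong!Passw0rd>' -p 1433:1433 -v mssql_data:/var/opt/mssql -d microsoft/mssql-server-linux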
Check the information about the data volume that was created with the following commands. You can confirm that the <Data Volume> we created, "mssql_data", is mounted at the designated mount point.

docker volume ls

docker inspect <Data Volume>

 

 

If you want to delete the persisted data volume, you can do so with the following command.

docker volume rm <Data Volume>

Backup and restore

Backups and restores can be run with the sqlcmd command, just as in a regular Linux environment, or from SSMS.
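
For instance, from the sqlcmd prompt (the database name and backup file name below are placeholders), a backup can be taken like this:

BACKUP DATABASE [SampleDB] TO DISK = N'/var/opt/mssql/data/SampleDB.bak';
GO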

You can also copy files out of the Docker container with the following command, so file-level backups are possible as well.

docker cp 'Container ID':/var/opt/mssql/data <host directory>

 

In the following example, the SQL Server database files of the container with ID 'c88794bfad57' are copied under /tmp on the host server.
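
The screenshot is omitted here; the command corresponding to that description would be:

docker cp c88794bfad57:/var/opt/mssql/data /tmp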

Summary

We have walked through SQL Server in a Docker container environment step by step, from setup to features you can use in the operations phase.

Opportunities to apply container technology are steadily increasing in the enterprise space as well. Especially when you have to prepare many development environments, SQL Server on Docker lets you build a flexible system that benefits both developers and system administrators.

By all means, take advantage of this new capability available from SQL Server 2017.

 

Related articles

What is SQL Server on Linux? (Part 1)

What is SQL Server on Linux? (Part 2)

AlwaysOn with SQL Server on Linux!

What is SQL Server Integration Services (SSIS) on Linux?

Microsoft 365 announced at Inspire 2017 | Top 4 things for partners to know


At Inspire 2017, Satya introduced Microsoft 365, which brings together Office 365, Windows 10, and Enterprise Mobility + Security, delivering a complete, intelligent, and secure solution to empower employees. Below are the top 4 things partners in Australia need to know about the announcement.

1. Microsoft 365 provides two commercial offerings to support the needs of everyone from the largest enterprise to the smallest business:

  • Microsoft 365 Enterprise is designed for large organisations and integrates Office 365 Enterprise, Windows 10 Enterprise, and Enterprise Mobility + Security to empower employees to be creative and work together, securely.  Microsoft 365 Enterprise replaces Secure Productive Enterprise to double-down on the new customer promise of empowering employees to be creative and work together, securely.
  • Microsoft 365 Business is designed for small-to-medium sized businesses with up to 300 users and integrates Office 365 Business Premium with tailored security and management features from Windows 10 and Enterprise Mobility + Security.  It offers services to empower employees, safeguard the business, and simplify IT management.  Microsoft 365 Business will be available in public preview on August 2.

2. Microsoft 365 Enterprise is offered in two plans—Microsoft 365 E3 and Microsoft 365 E5. Both are available for purchase on August 1, 2017. Microsoft 365 Business will be available in public preview on August 2, 2017. It will become generally available on a worldwide basis in Spring of 2017.

3. As a part of our commitment to small-to-medium sized customers, we also announced three tailored applications that are coming to Office 365 Business Premium and Microsoft 365 Business. These new applications are rolling out in preview over the next few weeks to Office 365 Business Premium subscribers in the U.S., U.K. and Canada, starting with those in the first release program. General availability outside of these markets has not yet been announced.

  • Microsoft Connections —A simple-to-use email marketing service.
  • Microsoft Listings—An easy way to publish your business information on top sites.
  • Microsoft Invoicing—A new way to create professional invoices and get paid fast.

4. Microsoft 365 represents a significant opportunity for partners to grow their businesses through differentiation of offerings, simplification of sales processes and incremental revenue. For more information, please refer to the resources listed below.

Error 32042 is logged in the Skype for Business server event log


This is Matsumoto from the Skype for Business support team.

This post describes what to check when error 32042 is recorded in the [Lync Server] event log.

We are covering error 32042 here because, judging from customer inquiries, it relatively often occurs in situations that the SfB server administrator did not intend
(for example, a certificate was distributed by policy or slipped in during a software installation).

Skype for Business Server 2015 and Lync Server 2013 use server certificates to TLS-encrypt the traffic along the communication path so that the products can be used with confidence.
They therefore check the state of the certificates periodically. If that check finds a certificate other than a root certificate in the [Trusted Root Certification Authorities] store, error 32042 is recorded in the event log.

Log name: Lync Server
Source: LS User Services
Event ID: 32042
Event text:
An invalid HTTPS certificate was received.

Subject name: <subject name of the FE server certificate> Issuer: <issuing CA>
Cause: This can occur when the HTTPS certificate has expired or the certificate is not trusted. The certificate serial number is attached for reference.
Resolution:
Check the remote server and verify that the certificate is valid. Also verify that the complete certificate chain of the issuer is present on the local computer.

Event Error 32042

Symptoms
While error 32042 is being recorded, the following behaviors appear as symptoms.

- The Front End service does not start on the Front End server
- TLS communication between Front End servers can no longer be used

In addition, a KB article has been published for the symptom where the Front End service does not start.

TITLE: Lync Server 2013 Front-End service cannot start in Windows Server 2012
URL: https://support.microsoft.com/ja-jp/help/2795828/lync-server-2013-front-end-service-cannot-start-in-windows-server-2012

 

What to check and how to fix it
Check the certificates on the Front End server.

Log in to the Front End server with a user account that has administrative privileges.
1. Type "mmc.exe" in Search and start it.
2. When the console window opens, open [File] - [Add/Remove Snap-in].
3. In the Add or Remove Snap-ins dialog, select [Certificates] from the available snap-ins and click [Add].
4. In the Certificates snap-in dialog, select [Computer account], then click [Next] and [Finish].
5. In the Add or Remove Snap-ins dialog, click [OK].
6. In the left pane, expand [Console Root] - [Certificates (Local Computer)] - [Trusted Root Certification Authorities] - [Certificates].
7. In the list of root certificates that is displayed, confirm that [Issued To] and [Issued By] match for each certificate.
* In the screenshot, the entries outlined in red do not match.

RootCA

If there are certificates that do not match (that is, anything other than root certificates), continue as follows.
8. Take a screenshot of the current state (just in case you need to restore it later).
9. Move any certificates that do not match to [Intermediate Certification Authorities] - [Certificates] by drag and drop.
10. Once everything other than root certificates has been moved, restart the FE server.
11. After the restart, confirm that the Front End service is running and that error 32042 is not recorded at startup.

* A state in which certificates other than root certificates are placed in the [Trusted Root Certification Authorities] store is incorrect.
    The fix for this issue is therefore to move (remove) those certificates out of the [Trusted Root Certification Authorities] store.
We look forward to your continued use of Microsoft Unified Communications products.

The information in this article (including attachments and linked content) is current as of the date it was written and may change without notice.

So What Is WebAssembly All About?


Every now and then people get excited about a new feature that is being developed as a web standard. One such technology that has been garnering excitement lately is called WebAssembly. It's a new way of running code on the web. Think of it as a new sort of application runtime that is available in the browser. It grew out of asm.js, a subset of normal JavaScript that can be optimized to run extremely quickly (smaller download, much faster parsing in browsers) and that the browser can verify is safe extremely quickly. WebAssembly is the new binary format for delivering that kind of code to the browser. It's a low-level binary code format that is not meant to be read or written by humans. The idea is that you can compile to this format from other languages.

The best way to understand such a technology is to see it working. For this demo I will be using emcc to compile C code to WebAssembly. In this example I am generating the Fibonacci series for a given positive number.
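
The source file itself appears only as an image in the original post; a minimal fibonacci.c along those lines (the function name and the iterative approach are illustrative, not the author's actual code) could look like this:

#include <stdio.h>

/* Iterative Fibonacci: returns the n-th number of the series. */
int fibonacci(int n)
{
    int a = 0, b = 1;
    for (int i = 0; i < n; i++) {
        int next = a + b;
        a = b;
        b = next;
    }
    return a;
}

int main(void)
{
    for (int i = 1; i <= 10; i++)
        printf("%d ", fibonacci(i));   /* print the first ten numbers */
    printf("\n");
    return 0;
}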

I used Emscripten, which is an open-source LLVM-based compiler from C and C++ to JavaScript (C => LLVM => Emscripten => JS). The following command produces the .js, .js.map, and .wasm files:
emcc fibonacci.c -s WASM=1 -o fibonacci.js
Next, I loaded the .js file in the browser, which produced the following results:
As you saw in the example above, I was able to write my code in C and compile it to run in the browser. Here is the browser support for WebAssembly as of June 2017:
At the time of publishing this post, work is under way to compile other languages, such as Rust and Swift, to WebAssembly. Steve Sanderson also has an experimental project called Blazor that shows .NET being compiled to WebAssembly.

Introducing: Analytical Workspaces in Dynamics 365 for Operations


***ANNOUNCING*** the General Availability of Analytical Workspaces & Reports in Dynamics 365 for Operations. Built-in analytical applications are now available as standard as part of the Spring '17 release. The following article offers insights into the Power BI service integration, with direct links to walkthrough guides and best practices published by the Dynamics 365 product group.

What's important to know…?

  • VALUE PROP - To get a general overview of the advantages in using the Power BI service to deliver embedded analytics throughout the organization review the article here.
  • PORTFOLIO - Usage details on the collection of analytical applications delivered as part of the Dynamics 365 for Operations Spring '17 Release (aka v7.2) are available here.
  • AUTHORING - Learn how to use Power BI Desktop to author Analytical solutions in a local development environment using instructions here.
  • CUSTOM SOLUTIONS - To extend the application to include custom solutions, use the developer walk-thru which includes X++ code samples and form control properties available here.

Business Intelligence for the Entire Organization

The following image offers a sneak peek into the built-in visualizations delivered standard as part of the Dynamics 365 for Operations service as of July '17.

Note:  Based on the speed of innovation, this list is subject to change as we continue to deliver advanced analytics directly in the application empowering every level of your organization.

Frequently Asked Questions (FAQ)

Q:   Can I customize the Power BI embedded reports?

A:    Yes, simply install Power BI Desktop onto a 1Box to get started using steps described here.

Q:   Do customers need to purchase a separate Power BI license to use the new embedded analytics?

A:    No, however, a Power BI Pro license is required to connect to Entity Store using Direct Query from PowerBI.com

Q:   Can I perform data mashups using external data in the Embedded Reports?

A:    Not at this time.  Data mashups can be authored on PowerBI.com that include data sourced from the Entity Store.

Q:   Can I secure data to only those companies I have access to?

A:    Yes, the single company view prevents users from accessing data from companies they don't have access to.  For more information on securing custom solutions, follow guidance provided here.

Q:   How is currency displayed across multiple companies?

A:    As a system currency. (System administration > Setup > System parameters)

Q:   Can I drill on summary balances back into Dynamics 365?

A:    You are able to drill into the details within a Power BI report. There is limited support for drill down into Dynamics 365.

Q:   What languages are currently supported?

A:    English only at this time; however, the Power BI team has additional languages planned.

Q:   Can I access Analytical Workspaces & Reports in Local Business Data?

A:    Not at this time.  Systems of Intelligence services are available for cloud-hosted deployments.
