
Using SMO on Azure App Service (Web Apps) with Azure SQL DB


SQL Server Management Objects (SMO) is one of the most widely used ways to interact with SQL Server, because it lets you perform bulk operations as well as management operations directly against the server, instead of the old-school approach of hand-writing queries.

However, when using it you also need to consider whether the environment you are running in is compatible with it in all respects.

If you have an existing SMO-based project that you are moving to Azure App Service and SMO calls are failing after deployment, or you are building an application from scratch and plan to deploy it to Azure App Service, here are a few things you should do:

1. Correct way of using the SMO DLLs/objects:

Refer to the link below and use the SMO NuGet package to get the SMO DLLs and create your objects from it:

https://www.nuget.org/packages/Microsoft.SqlServer.SqlManagementObjects
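As a point of reference, here is a minimal sketch of creating SMO objects from that package; it assumes the Microsoft.SqlServer.SqlManagementObjects NuGet package is installed, and the connection string, server, database, and table names are placeholders you would replace with your own.

// Minimal sketch, assuming the Microsoft.SqlServer.SqlManagementObjects NuGet package;
// the connection string and object names below are placeholders.
using System.Data.SqlClient;
using Microsoft.SqlServer.Management.Common;
using Microsoft.SqlServer.Management.Smo;

class SmoSample
{
    static void Main()
    {
        var sqlConnection = new SqlConnection(
            "Server=tcp:<yourserver>.database.windows.net,1433;Database=<yourdb>;" +
            "User ID=<user>;Password=<password>;Encrypt=True;");

        // Wrap the ADO.NET connection in an SMO ServerConnection and Server object.
        var server = new Server(new ServerConnection(sqlConnection));
        Database db = server.Databases["<yourdb>"];

        // Example of a typical SMO management operation: creating a table.
        var table = new Table(db, "Parts");
        table.Columns.Add(new Column(table, "Id", DataType.Int));
        table.Columns.Add(new Column(table, "Name", DataType.NVarChar(100)));
        table.Create();
    }
}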

 


 

2. Connecting to an on-premises SQL database:

To connect to an on-premises database you don't need to do anything special in code. By using any of the network connectivity options, such as Hybrid Connections or VNet integration, your web application will be able to reach the on-premises SQL database and should be able to execute most SMO operations.

 

3. Connecting to an Azure SQL database:

If you are connecting to an Azure SQL database, you are typically not creating any VNet or other explicit network connection to it, so here are a few things to remember:

(1) Try running the same code and application from your local machine (Visual Studio) and see if you can perform the desired operations. If it works on the local machine but has issues in App Service, move on to the next point; otherwise, first get your code working against the Azure SQL database.

(2) If your code works fine locally but has issues on Azure App Service, here are a few scenarios that can occur:

· Make sure that you have enabled "Allow access to Azure services" in the firewall settings of the Azure SQL database:


 

· The calls you are making may be restricted by Azure App Service because of its sandbox environment:

To test this, try changing the SQL connection to an on-premises SQL database (creating a hybrid connection to the on-premises machine) and see whether the App Service now runs fine and executes the SQL operations.

Interesting things happening here:

The interesting thing here is that App Service, being a PaaS environment, runs in a security-restricted mode called the sandbox, which blocks certain calls related to system, graphics, or security operations.

Read more about sandbox restrictions here:

https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#app-service-plans-and-sandbox-limits

For example, below is sample code which runs fine when connecting to an on-premises SQL database from an Azure App Service.

It also runs fine when connecting to an Azure SQL database while running the code locally.

So you might end up confused about why the code does not work with the Azure SQL DB and Azure App Service combination.

 


In the piece of code exhibiting the behavior above, the offending lines all internally make calls to fetch the name and IP of the machine the code is currently running on. This is a security-related call, and because of those three lines the application breaks, since the call is blocked by the sandbox. As the sandbox article states:

Connection attempts to local addresses (e.g. localhost, 127.0.0.1) and the machine's own IP will fail, except if another process in the same sandbox has created a listening socket on the destination port.

Rejected connection attempts, such as the following example which attempts to connect to 127.0.0.1:80, from .NET will result in the following exception:
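The original example isn't reproduced here, but a minimal sketch of such a rejected attempt, assuming a plain TcpClient, looks like the following; inside the sandbox it typically surfaces as a System.Net.Sockets.SocketException stating that an attempt was made to access a socket in a way forbidden by its access permissions.

// Minimal sketch: inside the App Service sandbox this loopback connection is rejected.
using System.Net.Sockets;

class SandboxLoopbackSample
{
    static void Main()
    {
        using (var client = new TcpClient())
        {
            // Blocked by the sandbox; throws a SocketException.
            client.Connect("127.0.0.1", 80);
        }
    }
}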

 

However, if you are using an on-premises database, the same piece of code will work: in the VNet case the sandbox treats such calls differently, so they are not blocked, as mentioned here:

 

Azure Web Apps may set up their virtual networks, or VNets in order to facilitate connectivity between Azure and on-premise intranets. This special case of network connectivity is also handled differently in the sandbox. In particular, the aforementioned restrictions on private and local addresses are ignored if the target network interface belongs to the app. Other VNet adapters on the same machine cannot be accessed, and all other network limitations still apply.

Additionally, the sandbox automatically affinitizes connect and send operations destined for VNet addresses to the correct VNet interface, in order to improve ease-of-use of the VNet feature.

 

Removing those three lines from the code will make it run fine, and you will be able to create a table or perform any similar SMO operation.

These are the most common scenarios you need to take care of.


Extend Microsoft.AspNetCore.Authentication.OAuth for Reverse Proxy


Recently, a customer said his ASP.NET Core application had always worked well with OAuth authentication, but after he placed a reverse proxy in front of the web application, authentication started failing because of an invalid callback path.

Normally speaking, a callback path is needed when registering an application with the OAuth server. The application also sends the callback path to the OAuth server during the authentication workflow, and the OAuth server verifies the client id, client secret, and callback URL to make sure the application is a registered application.

The following snippet shows how to use the Microsoft.AspNetCore.Authentication.OAuth module for OAuth authentication in an ASP.NET Core application.

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    services.AddAuthentication(options =>
        {
            options.DefaultAuthenticateScheme = CookieAuthenticationDefaults.AuthenticationScheme;
            options.DefaultChallengeScheme = "oauth";
        })
        .AddCookie()
        .AddOAuth("oauth", options =>
        {
            options.ClientId = Configuration["ClientId"];
            options.ClientSecret = Configuration["ClientSecret"];
            options.CallbackPath = new PathString(Configuration["CallbackPage"]);

            options.AuthorizationEndpoint = Configuration["AuthorizationEndpoint"];
            options.TokenEndpoint = Configuration["TokenEndpoint"];
            options.UserInformationEndpoint = Configuration["UserInformationEndpoint"];

            ...
        });
}

The options.CallbackPath in the snippet above is of type PathString, which only accepts a relative path beginning with '/'. The Microsoft.AspNetCore.Authentication.OAuth module will call the BuildRedirectUri method shown below (in AuthenticationHandler.cs) to build the callback URL based on the relative callback path provided in options.CallbackPath.

protected string BuildRedirectUri(string targetPath)
=> Request.Scheme + "://" + Request.Host + OriginalPathBase + targetPath;

In a reverse proxy scenario, suppose the web application's raw domain name is internal.com, while the callback URL registered with the OAuth server uses a different domain such as contoso.com, the domain name of the reverse proxy. Depending on the proxy, some proxies pass the raw domain (internal.com) in the HTTP request's host name instead of contoso.com. Furthermore, if SSL offloading is enabled, the HTTP request scheme will be http even though the end user uses https. That means the callback registered in the OAuth server is https://contoso.com while the application sends http://internal.com for verification.

Luckily, most reverse proxies provide headers that record the original request information, such as X-Forwarded-Host and X-Forwarded-Proto (or equivalent identifiers). To support the reverse proxy, we can extend Microsoft.AspNetCore.Authentication.OAuth to use X-Forwarded-Host and X-Forwarded-Proto when building the callback URL. Lastly, make sure to call AddMyOAuth instead of AddOAuth in ConfigureServices, as shown after the handler code below.

public static class MyOAuthExtensions
{
        public static AuthenticationBuilder AddMyOAuth(this AuthenticationBuilder builder, string authenticationScheme, Action<OAuthOptions> configureOptions)
            => builder.AddOAuth<OAuthOptions, MyOAuthHandler<OAuthOptions>>(authenticationScheme, configureOptions);
}


public class MyOAuthHandler<TOptions> : OAuthHandler<TOptions> where TOptions : OAuthOptions, new()
{
        public MyOAuthHandler(IOptionsMonitor<TOptions> options, ILoggerFactory logger, UrlEncoder encoder, ISystemClock clock)
            : base(options, logger, encoder, clock)
        { }

        // Copy from https://github.com/aspnet/Security/blob/dev/src/Microsoft.AspNetCore.Authentication.OAuth/OAuthHandler.cs
        protected override async Task HandleChallengeAsync(AuthenticationProperties properties)
        {
            if (string.IsNullOrEmpty(properties.RedirectUri))
            {
                properties.RedirectUri = CurrentUri;
            }

            // OAuth2 10.12 CSRF
            GenerateCorrelationId(properties);

            var authorizationEndpoint = BuildChallengeUrl(properties, BuildRedirectUri(Options.CallbackPath));
            var redirectContext = new RedirectContext<OAuthOptions>(
                Context, Scheme, Options,
                properties, authorizationEndpoint);
            await Events.RedirectToAuthorizationEndpoint(redirectContext);
        }

        protected new string BuildRedirectUri(string targetPath)
        {
            var schema = Request.Headers["X-Forwarded-Proto"].Count > 0 ? Request.Headers["X-Forwarded-Proto"][0] : Request.Scheme;
            var host = Request.Headers["X-Forwarded-Host"].Count > 0 ? Request.Headers["X-Forwarded-Host"][0] : Request.Host.ToString();
            return schema + "://" + host + OriginalPathBase + targetPath;         
        }
}
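For completeness, here is a minimal sketch of the call-site change in ConfigureServices; the options are the same as in the first snippet, only the extension method differs, and MyOAuthExtensions/AddMyOAuth are the names defined above.

services.AddAuthentication(options =>
    {
        options.DefaultAuthenticateScheme = CookieAuthenticationDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = "oauth";
    })
    .AddCookie()
    .AddMyOAuth("oauth", options =>
    {
        // Same OAuth options (ClientId, ClientSecret, CallbackPath, endpoints) as before.
        options.ClientId = Configuration["ClientId"];
        options.ClientSecret = Configuration["ClientSecret"];
        options.CallbackPath = new PathString(Configuration["CallbackPage"]);
    });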

AdalException : authentication_ui_failed with ADAL and Xamarin Forms Android


In this post, Premier Developer Consultant Joe Healy identifies a possible error you may get when working with an Android project on Xamarin Forms. Read about his discovery and eventual solution to fix the SSL/TLS implementation issue.


Recently, I was helping a client with an Azure Active Directory integrated project (ADAL rather than MSAL, for various reasons). All was going well as I plugged away at the various integration pieces. We were targeting Xamarin Forms on this project. Luckily, there was a great blog post by Mayur here for us to follow.

Typically, I will start with UWP as a 'sanity check' to make sure things are working right. I provisioned up my Xamarin UWP project and had it running smoothly with no problems. My iOS project also worked smoothly, authenticating against my test O365 environment like a champ.

Read more of Joe’s post here.

Making file type associations enterprise ready


You may have noticed already that file type association has changed fundamentally in Windows 10. It is no longer possible for administrators to set the default application for a certain file type dynamically. The only remaining option is to create an XML file containing the desired FTAs and import it using DISM (only valid for new users) or apply it using a GPO (valid for all users on a PC). Here’s a good technical explanation about how FTA works in Windows 10.

The solution I’m providing here adds a bit of flexibility to the file type association (FTA) topic. It is still a workaround, but it lets you modify file type associations by using a PowerShell script. Make sure to configure your environment as described below.

Prepare your environment

  • Create a default file type association XML, which works with your standard client (click here to see how to export FTA's). Make sure that your FTA XML file only contains extensions you want to modify. Simply remove the lines with extensions you want to leave untouched. You can also deploy an empty XML if you don’t want to assign any file types yet. An empty file would look like this:
    <?xml version="1.0" encoding="UTF-8"?>
    <DefaultAssociations>
    </DefaultAssociations>
  • Use your preferred deployment method to push the file to your clients or add it to your master image. Select a local folder, which is readable but not changeable for users. The C:\ProgramData\YourCompany folder would be a good place. You can refer to it in the GPO with the %ProgramData% variable.
  • Create a GPO which affects all Windows 10 clients and change the setting as shown in the picture. Point the path to the location, where you copied your XML file.

On a client that is in scope of the GPO, do a GPUPDATE and log off/on to apply the changes. The settings of your FTA XML should now take effect.

You can check the registry for verification. If you changed, for example, the assignment for .AVI files in your config file, you should find a registry value HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.avi\UserChoice\ProgId. The data of this value should be the same as the ProgId specified in your XML. And of course .AVI files should open with the application you assigned it to. If this is not the case, check the event log Microsoft-Windows-Shell-Core/AppDefaults for errors.

Add script to application deployments

Let’s assume you deploy an application to a set of clients, which handles a file type you already assigned to an application in your XML. You want to change the default app for this file type. To accomplish this, simply run the Powershell script attached to this article after the installation of the new application. You can also make this part of your app deployment.

Modify-AppAssocXml.ps1 -Path "C:\ProgramData\AppAssoc.xml" -Extension ".avi" -ProgId "VLC.avi" -AppName "VLC Media Player"

This modifies the local application association XML file. It changes .AVI to the new ProgId, or adds a new entry if .AVI is not yet assigned. To use this script, you should understand the values the command accepts.

Path is straightforward. It’s the path to your local FTA config file.
Extension is the file type(s) you want to modify or set. You can specify either a single extension or multiple extensions separated by commas (e.g. ".avi,.mp4,.mpeg").
ProgID is the path to the open handler in the registry, relative from HKEY_CLASSES_ROOT to the key name before Shell\Open\Command. An example: the open handler for “notepad.exe” can be found in HKEY_CLASSES_ROOT\Applications\notepad.exe\shell\open\command. The ProgID for Notepad would therefore be Applications\notepad.exe.
AppName is the file description property of the corresponding EXE file. This value does not affect the file type association, but it must be set.

Unfortunately, it requires GPO foreground processing for the settings to take effect. This means that users need to log off/on after the FTA XML has changed.


Using Graph API version 2


Microsoft introduced Graph API v2 to make authentication more user friendly:

  1. You don't need an Azure account to register your app.
  2. With one API interface you can sign in using both personal Microsoft (Outlook/Live) accounts and work accounts.
  3. You can control your app/user permissions directly from your app.

Please follow the process below to register your apps.

App Registration

The Graph API changed its authentication protocol to support Microsoft and work accounts through a single API.

  1. Create a new app at the link below:

https://apps.dev.microsoft.com/?referrer=https%3a%2f%2fdeveloper.microsoft.com%2fen-us%2fgraph%2fgraph-explorer#/appList 

  2. Add a redirect URL by adding a new platform; select “Allow Implicit Flow” while creating the platform.

  3. Choose the permissions your app requires to communicate with the AD tenant.

  4. Generate a new password and copy it, because it won’t be visible again.

     *** Your Application Id will be the client id and the password will be the client secret for all Graph/AD communication.

  5. Save the changes.

  6. Send the admin consent request to approve the app permissions in your AD tenant; an AD tenant admin has to open the URL below to approve them.

https://login.microsoftonline.com/{TenantName}/adminconsent?client_id={ApplicationId}&state=12345&redirect_uri={RedirectURI}

TenantName example: Contoso.onmicrosoft.com

ApplicationId: your app Id

RedirectURI: the same redirect URL of your app under the platform section

Please update the above fields in the URI before opening it in a browser.

 

For the programmers who are familiar with Graph API v1:

  1. The Client Id will be the Application Id.
  2. The Client secret will be the password.
  3. The tenant will be the same, but you have to grant the appropriate permissions to your app by requesting and approving admin consent.

Requesting and approving the permissions for your app can be done by using the URL below:

https://login.microsoftonline.com/{TenantName}/adminconsent?client_id={ApplicationId}&state=12345&redirect_uri={RedirectURI}
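Once consent has been granted, here is a minimal sketch of acquiring an app-only token from the v2.0 endpoint with plain HttpClient; the tenant, application id, and password values are the placeholders from the registration steps above, not real values.

// Minimal sketch: client credentials flow against the v2.0 token endpoint.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

class GraphTokenSample
{
    static async Task Main()
    {
        var tenant = "contoso.onmicrosoft.com";   // your tenant name
        var clientId = "<ApplicationId>";         // Application Id = client id
        var clientSecret = "<Password>";          // generated password = client secret

        using (var http = new HttpClient())
        {
            // The .default scope picks up the application permissions the admin consented to.
            var body = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["client_id"] = clientId,
                ["client_secret"] = clientSecret,
                ["scope"] = "https://graph.microsoft.com/.default",
                ["grant_type"] = "client_credentials"
            });

            var response = await http.PostAsync(
                $"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token", body);
            var json = await response.Content.ReadAsStringAsync();

            // The access_token in the JSON response is then sent as a Bearer token
            // on calls to https://graph.microsoft.com/v1.0/...
            Console.WriteLine(json);
        }
    }
}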

Building Headers and Footers that work on Classic and Modern sites


One of the partners I consult for is migrating a Fortune 500 financial services company to SharePoint Online. The company wants to take advantage of modern team and communications sites, yet where they need features that aren't available in modern SharePoint, they've decided to stick with classic Publishing sites.

The challenge is: how to build global navigation and footers that will work on both classic and modern sites. There are a few reasons this is important:

  • It provides common navigation across all kinds of sites, making the Intranet easier to use
  • It provides a common footer across all kinds of sites, ensuring compliance messages are delivered consistently
  • It reduces coding and maintenance, because one set of code is used across old and new sites

So I undertook a little proof of concept, and here are the results. The solution is usable as-is if your needs are simple. The real intent, however, is to prove out a pattern for developing any header and footer that will work on both modern and classic sites.


Figure 1

Figure 1 shows how it looks on a classic publishing site. A simple navigation menu is added on the top of the page, and a footer containing a message and a set of links appears at the bottom.

The menu and footer look the same in a modern team site, as shown in Figure 2. It also works on modern list pages in classic Publishing sites, making the experience less jarring when users browse between modern and classic pages. This could also be used in hybrid environments, as the classic solution should work the same on premises as online (though you will probably need a separate set of files to avoid cross-site scripting issues.)


Figure 2

When the screen becomes narrow, the top menu switches to a hamburger style menu that pushes the screen down with a hierarchy of options. Figure 3 shows this on the classic publishing page, which is clearly not responsive.


Figure 3

The hamburger menu looks right at home on the modern site, viewed here in an iPhone emulator.


Figure 4

The menu and footer content are stored in a simple JSON file in SharePoint. While this doesn't provide for security trimming, it is fast, and if there's someone who understands JSON, they can easily keep it up-to-date. (The partner I'm working with has already developed a much cooler and more advanced solution for SPFx using the Term Store).

NOTE: This sample uses features (fetch and ES6 Promises) which are not available in Internet Explorer. If you want to use it in IE, you need to add polyfills for these features. As it happens, these are the same as the ones needed by PnP JS Core; their documentation does a good job of explaining.

Where to get it

Full source code and build/installation instructions are available in the FutureProofHeadings repo on Github.

Approach

My friend and colleague Julie Turner recently published a series of articles called Conquer your dev toolchain in ‘Classic’ SharePoint. She shows how to use SharePoint Framework style tooling such as typescript and webpack on classic sites. This was the starting point for the project, and I recommend anyone who hasn't got this working to go through these articles. This was key to having common code between the classic solution and SharePoint Framework.

From there, the approach was to push as much logic as possible into common code (in the "common" directory in both Classic and Modern folders). A quick run of Beyond Compare confirms they are identical. Only the files shown in blue are unique to their environment.


Figure 5

On the SharePoint Framework side, I generated an Application Customizer extension using React. I developed the header and footer there, then ported them over to a project based on Julie's article. A few tweaks to the configuration files were necessary; they're all in the git repo.

Bootstrapping

In the SharePoint Framework version of the solution, the UI is "bootstrapped" in the CustomHeaderFooterApplicationCustomizer.ts file, which was initially generated by the Yeoman generator. All references to React have been removed from there, however; all it needs to do is find the DOM elements for the header and footer, use a (common) service to read in the data, and the common rendering code takes care of the rest.

@override
public onInit(): Promise<void> {

  const promise = new Promise<void>((resolve, reject) => {

    HeaderFooterDataService.get(url)
    .then ((data: IHeaderFooterData) => {

        const header: PlaceholderContent = this.context.placeholderProvider.tryCreateContent(
        PlaceholderName.Top,
        { onDispose : this._onDispose }
        );
        const footer: PlaceholderContent = this.context.placeholderProvider.tryCreateContent(
        PlaceholderName.Bottom,
        { onDispose : this._onDispose }
        );

        if (header || footer) {
        ComponentManager.render(header ? header.domElement : null,
            footer ? footer.domElement : null, data);
        }

        resolve();
    })
    // (exception handling removed for brevity)
  });

  return promise;
}

In the Classic version of the solution, the UI is "bootstrapped" in bootHeaderFooter.ts. It generates its own DOM elements for the header and footer, and connects them above and below a well-known HTML element, s4-workspace. While there's no guarantee Microsoft will always provide an element with that ID, it's a lot less likely to break than a custom master page.

export class bootstrapper {

  public onInit(): void {

    const header = document.createElement("div");
    const footer = document.createElement("div");

    const workspace = document.getElementById('s4-workspace');

    if (workspace) {

      workspace.parentElement.insertBefore(header,workspace);
      workspace.appendChild(footer);

      const url = ''; // (your JSON file URL)
      HeaderFooterDataService.get(url)
        .then ((data: IHeaderFooterData) => {
          ComponentManager.render(header, footer, data);
        })
        .catch ((error: string) => {
          // (omitted for brevity)
        });
    }
  }
}

Then some inline code uses the old Scripting On Demand library to run the bootstrapper.

(<any>window).ExecuteOrDelayUntilBodyLoaded(() => {
  if (window.location.search.indexOf('IsDlg=1') < 0) {
    let b = new bootstrapper();
    b.onInit();  
  }
})

Everything else in the solution is common.

Installation

Installation on the SharePoint Framework side is pretty standard ... package the solution, upload it to the app catalog, and add it to a site. On the Classic side, I used PnP PowerShell with JavaScript injection.

Write-Output "`n`nAdding script links"
Add-PnPJavaScriptLink -Name React -Url "https://cdnjs.cloudflare.com/ajax/libs/react/15.6.2/react.js" -Sequence 100
Add-PnPJavaScriptLink -Name ReactDom -Url "https://cdnjs.cloudflare.com/ajax/libs/react-dom/15.6.2/react-dom.js" -Sequence 200
Add-PnPJavaScriptLink -Name HeaderFooter -Url "https://<tenant>.sharepoint.com/sites/scripts/scripts/bundleClassic.js" -Sequence 300

Detailed build and installation instructions are in the readme file on Github.

User Interface

The UI is in React and 100% common to the Classic and SPFx versions. There's a component each for the header and footer, plus a small class called ComponentManager that renders them into two DOM elements provided by the bootstrapper.

public static render(headerDomElement: HTMLElement, footerDomElement: HTMLElement, data: IHeaderFooterData): void {

    if (headerDomElement) {
        const reactElt: React.ReactElement<IHeaderProps> = React.createElement(Header, {
            links: data.headerLinks
        });
        ReactDOM.render(reactElt, headerDomElement);
    }

    if (footerDomElement) {
        const reactElt: React.ReactElement<IFooterProps> = React.createElement(Footer, {
            message: data.footerMessage,
            links: data.footerLinks
        });
        ReactDOM.render(reactElt, footerDomElement);
    }
}

You can check the code to see the menus and footer components in React. I think the coolest part is the top menu, which is implemented entirely in CSS based on this brilliant example from Tony Thomas.

Getting the data

Both header and footer data are stored in a single JSON file; there's a sample in the common/sample folder. Here's an excerpt to give you the idea:

{
	"headerLinks": [
		{
			"name": "Home",
			"url": "/sites/pubsite",
			"children": []
		},
		{
			"name": "Companies",
			"url": "#",
			"children": [
				{
					"name": "Contoso",
					"url": "#"
				},
				{
					"name": "Fabrikam",
					"url": "#"
                }
            ]
		}
	],
	"footerMessage": "Contoso corporation, all rights reserved",
	"footerLinks": [
		{
			"name": "Office Developer Home",
			"url": "#"
		}
	]
}

This approach is very simple and fast. The JSON is described by two interfaces in the common/model directory, ILink.ts and IHeaderFooter.ts.

export interface ILink {
    name: string;
    url: string;
    children: ILink[];
}

export interface IHeaderFooterData {
    headerLinks: ILink[];
    footerMessage: string;
    footerLinks: ILink[];
}

A simple service, common/services/HeaderFooterDataService.ts, reads in the JSON using Fetch.

It's worth noticing how the Promise returned by this service interacts with the SPFx application customizer's promise. SPFx expects a promise to be returned by the onInit method; this is used to tell SPFx that the extension is done rendering. onInit creates this promise and hangs onto it, then it gets a new promise from the HeaderFooterDataService.

If the HeaderFooterDataService succeeds, it resolves its promise, and onInit renders the UI and resolves the promise it gave to SPFx. If the service fails, it rejects its promise, and a catch block in onInit logs the error and rejects the promise it gave to SPFx. Here's an excerpt that just shows the promise interaction.

@override
public onInit(): Promise<void> {

  const promise = new Promise<void>((resolve, reject) => {

  HeaderFooterDataService.get(url)
    .then ((data: IHeaderFooterData) => {
        // (render the UI)
        resolve();
    })
    .catch ((error: string) => {
        // (log the error)
        reject();
    });
  });

  return promise;
}

Conclusion

SPFx is based on open source technology which can be used to target any web site, even a classic SharePoint site. By leveraging these same tools outside of SharePoint Framework, developers can reuse their work and provide consistency between classic and modern SharePoint pages.

Thanks for reading, and please let me know if you use this approach in your project, or if you have any feedback or suggestions!

Setting up CI/CD targeting Red Hat OpenShift Kubernetes Using VSTS


In this post, Premier Developer Consultant Najib Zarrari demonstrates how to deploy a containerized ASP.NET Core Web API app into an OpenShift Kubernetes cluster.


The first part of this blog will go over how to create a sample ASP.NET Core web application with Docker support. We will use this as our demo app to deploy to the Kubernetes cluster. Then we will go over how VSTS can be used to create a CI build that will build the application, package the build output into a Docker image, and push the image to Docker Hub. After that, we will point you to resources that will show how you can create a test OpenShift Kubernetes cluster. Finally, we will go over how VSTS Release Management can be used to continuously deploy to the OpenShift Kubernetes cluster. As you might have guessed, this might not be easy to set up. Luckily, the Continuous Integration (CI) and Continuous Deployment (CD) aspects of this are greatly simplified by VSTS, as you will see later. Let’s get to work.

You can read more of Najib’s post here.

Guest Post Karen Drewitt, The Missing Link: Why I’m returning to Microsoft Inspire in 2018!


 


Karen Drewitt
General Manager
The Missing Link

 

Last year, as General Manager of The Missing Link, I attended Microsoft Inspire in Washington DC. Although we have been a premium provider of information technology solutions for 20 years and a Microsoft Partner for all that time, 2017 was the first time I attended the Microsoft global conference.

Why attend in the first place?

At The Missing Link we deal with a lot of vendors but when we find one that is truly aligned to our ‘go-to-markets’, it means we can decide how much time we should invest in that specific vendor. This decision is made, not just in terms of attending conferences, but also around resources for training and making sure we’re engaged at different levels of the business. Last year we saw how closely Microsoft’s product solutions resonated with our client base and, given we are so closely aligned to Microsoft, the decision was made to attend Microsoft Inspire in 2017 thus developing and further investing in the partnership.

I am a firm believer in goal setting. In order to get the most out of Microsoft Inspire, the team at The Missing Link sat down and discussed direction and strategy. We came up with a couple of practices we were interested in developing, and from there we could focus on what we wanted to achieve at the conference. I then set as a goal for Microsoft Inspire to understand who it was that headed that particular division, what their plans were and what they looked for in a partner. (For me the idea of a successful partnership is based on equal contribution so it’s not just about what you can get from the vendor, it’s also what they want from you and that has to align for it to be a successful partnership).

Microsoft Inspire also provides unmatched opportunities for partners looking to grow their business and so the decision to attend was an easy one to make.

The benefits!

  1. It provided an amazing opportunity to engage with a diverse range of members of the Microsoft team. I got to speak to, and spend time with, key Microsoft personnel outside of the team I usually deal with. The face-to-face contact was invaluable and new relationships were developed while old ones were reinforced and strengthened.
  2. Attending Microsoft Inspire gave me the opportunity to take time out from the busyness of the day-to-day to find out what Microsoft’s roadmap is and think about how we can align our business strategies with that. It also allowed me to immerse myself in the Microsoft story for a couple of days so that I could take full advantage of the Vision Keynotes, sessions, workshops, panels and networking events.
  3. There were a lot of business style sessions that didn’t focus on the Microsoft story and their solution set. It was clear Microsoft want to invest in their partners and support them by giving them sessions that are going to actively help them grow their business and do things like develop their marketing. From a global perspective that was really interesting; there were some renowned speakers who gave amazing speeches and I took away a lot from them.
  4. From a personal development perspective it is always good to put yourself in a position of learning. I have been with The Missing Link for 15 years so I found it particularly useful and invigorating to be a part of something different where I could pick up new ideas and learn about ground-breaking developments and the acceleration of digital transformation. Being in a completely different environment took me out of my comfort zone and helped me learn which was an incredible bonus for my personal development.

The icing on the cake!

Looking back at Microsoft Inspire 2017 the one thing that stands out was the time I got to spend with other Australian and New Zealand Partners. Learning best practices and seeing how other businesses are doing things is incredibly valuable and Inspire provides the kind of environment that encourages such sharing.

Even though some of us are competitors in the same space, people were incredibly open about their learnings and sharing what has worked and what has failed. This kind of learning extended above and beyond how they leveraged the Microsoft partnership; it also included their solutions and their challenges, making the learning experience relatable and significant.

It is these partner-to-partner connections, the wealth and depth of learning and the opportunity to drive true business transformation for our business and our customers that has me committed to attending Microsoft Inspire 2018.


Don't miss out: register for Microsoft Inspire today!

ATTENTION GOLD CLOUD COMPETENCY PARTNERS

By attaining Gold status in a cloud competency, you’ve shown your commitment to providing your customers with the best solutions built on Microsoft cloud technology. We’d like to thank you for your achievements by extending to you the discounted price of USD1,995 for a Microsoft Inspire All Access pass when you register before March 31, 2018. This is a savings of USD300 off the current price, only for partners with a gold cloud competency like you. If you have not received your Gold Cloud Competency discount code via email, drop me an email at sarahar@microsoft.com.


Episode 3: IT IQ Series – How the cloud multiplies Australian schools’ resources


Summary: Adopting the cloud can help educational institutions slash costs, increase their level of security and, more importantly, provide future-ready learning outcomes.

Can Australian schools keep up with the costs of and demand for digital learning? Today's educators and staff are incorporating more digital solutions into their lessons and workloads than ever before. That’s putting rising pressure on existing server infrastructure, and existing storage, compute and network resources are already starting to feel the squeeze.

But with school budgets already stretched thin, the economics of adding new servers or replacing existing ones doesn’t add up for many Australian schools. Like their counterparts in other industries, many educators have turned to cloud platforms like Microsoft Azure for a more cost-effective and flexible way of supporting digital education.

“The fact is, schools would prioritise investments in areas that will help improve student grades or better learning outcomes, because those are the metrics that determine the effectiveness of most Australian schools,” says Mark Tigwell, an Azure technologist at Microsoft. “The adoption of the cloud is a cost-effective solution that would allow schools to cope with rising demand for computing power, without compromising the quality and depth of digital experiences in the classroom. You’re effectively multiplying what you can do with your existing budget, both in terms of volume and depth of the digital experiences you offer your students.”

Power that Comes with Reduced Costs

Porting data and operations onto the cloud frees schools from needing on-site server racks or rooms. IT managers no longer need to procure new equipment every few years or maintain as much existing hardware, meaning less expenditure in the long run. On a monthly basis, schools stand to save on utility bills by reducing their fleet of servers, along with the air-conditioning units used to cool them.

“The absence of servers also means one less thing for IT administrators to worry about. They can devote their attention toward supporting the tech needs of educators and students, ensuring lessons aren’t disrupted and thus helping improve learning outcomes,” says Tigwell. “The removal of bulky servers from premises also frees up room that schools can use for storage of additional teaching resources or equipment.”

But the greater advantage that the cloud presents to schools is the ability to adjust usage to correspond with peak periods in the academic calendar. With public clouds like Azure, schools can buy more computing power at the start of the academic year, when there is usually an influx of registrations and documentation, scale up as the school term progresses, and dial usage down to a minimum at the end of the school term.

“Disruptions to lessons often happen when a school or institution’s server infrastructure buckles under the demands of multiple programs, devices and learning management systems. This doesn't just frustrate educators, but also generates a ‘chilling effect’ that limits their confidence in using technology for future lessons,” says Tigwell. “But with Azure, IT managers can anticipate the rise and dips in usage, and plan ahead to the daily needs of the school’s ecosystem. This sort of flexibility just isn’t possible with typical server setups, where procurement and setup takes a considerable amount of time.”

Making Learning Digital and Secure

But the cloud can be so much more. Most cloud services for education come bundled with learning resources and certifications that educators can take advantage of. In Azure’s case, collaborative class activities can be formed around learning material on Microsoft Imagine Academy, while vocational and cloud-based skills can be honed in students through certification on the Microsoft Virtual Academy platform.

“Since the cloud is becoming a fundamental for many businesses, it makes sense to give Australia’s future generations the right start, by introducing them to skills that will be in high demand by the time they graduate,” Tigwell says. “The breadth of courseware on the Microsoft Azure platform can be immediately utilised by educators to train basic skills used in vocations such as computer science, IT administration and data science.”

Data security is as critical for schools as it is for businesses: a breach can compromise school records, cripple services, and potentially disrupt digital learning lessons for students. But instead of committing substantial amounts toward data protection software or services, schools can enjoy the same enterprise-level of security through adoption of the cloud at a far more palatable price-point.

“The thing to keep in mind is that Microsoft has far more investment allocated toward security compared to your average school,” Tigwell explains. “Under our terms in Azure Security, any data belonging to an individual or institution will remain theirs regardless of circumstance, and we keep stringent standards to ensure this continues to be the case. That allows us to offer schools and tertiary institutions the same level of security as we do for some of Australia’s largest enterprises, while maintaining a price tag that’s competitive for most of the education industry.

“What we at Microsoft Azure aim to do is to reduce dependence on existing server infrastructure, along with all the headaches associated with it. That way, every resource and focus within an educational institution can be effectively channeled towards creating the best learning experience for Australian students.”

Watch Mark Tigwell answer your questions about Microsoft Azure and how it can help alleviate the challenges of Australia’s schools on our YouTube channel.

Get started on Microsoft Azure, and learn how the Microsoft Imagine Academy and Microsoft Virtual Academy can help your students prepare for their future. Learn more about Microsoft Azure’s security policies here.

Our mission at Microsoft is to equip and empower educators to shape and assure the success of every student. Any teacher can join our effort with free Office 365 Education, find affordable Windows devices and connect with others on the Educator Community for free training and classroom resources. Follow us on Facebook and Twitter for our latest updates.

Deploy your first smart contract on azure ethereum consortium blockchain


As we know, blockchains like Bitcoin and Ethereum, ledgers for recording virtual currency transactions, are booming; however, blockchain is much more than just virtual currency. It’s a transformational technology with the potential to extend digital transformation beyond a company’s walls and into the processes it shares with suppliers, customers and partners. Microsoft Azure provides a strong backbone for implementing this transformational technology. Azure, as an open, flexible, and scalable platform, supports a rapidly growing number of distributed ledger technologies that address specific business and technical requirements for security, performance, and operational processes.

Financial use cases and implementations of blockchain technology are in the spotlight because of the substantial interest of influential parties such as financial institutions, banks, fintech start-ups and investors. At the same time, non-financial use cases of blockchain are gaining momentum and are equally significant to a number of industries like supply chain, manufacturing, healthcare and many more.

Today, let’s take a scenario from the non-financial sector, i.e. an automaker that releases a product containing defective parts, resulting in costly recalls and repairs. The automaker can use blockchain to trace the supplier of the faulty parts more efficiently, reducing time and labour costs.

Let’s scope the scenario

We will build the automaker scenario on Azure Blockchain. In this blog we will limit our scope to entering raw material details and looking them up in our blockchain network.

Let’s get started.

To get this implemented, at high level we will follow below steps:

  • Create azure ethereum consortium blockchain
  • Create smart contracts
  • Deploy smart contracts on azure ethereum consortium blockchain
  • Test our contract on azure ethereum consortium blockchain.

Prerequisite

You will need:

  • A desktop computer (Host) running on Windows 10 or Mac OSX
  • An active Azure subscription
  • An Active internet connection

Step 1: Create azure ethereum consortium blockchain

Go to https://azure.microsoft.com/en-in/resources/templates/ethereum-consortium-blockchain-network/ and click the deploy button.

It will redirect you to the Azure portal. Fill in the required information and click on "Purchase".

Alternatively, you can go to https://portal.azure.com, create a new resource, and select Ethereum Consortium Blockchain.

On the next few screens, provide all the required information and click on Create. Deployment should take 5-10 minutes to complete.

Now we have created Ethereum Consortium Blockchain.

We will need the RPC details of our blockchain network in later steps, so save these values somewhere for later use. To get those values:

 

Click on "Resource groups" in the left-hand menu and select your resource group. On the "Overview" tab, find the "Deployments" section.

It will show you the details of the deployments, like below.

Select the first deployment, which looks like “microsoft-azure-blockchain.azure-blockchain-servi-<timestamp>”.

In the next window, copy the value of “ethereum-rpc-endpoint” from the output section. We will need this value in the next steps to deploy the contract to the Ethereum Consortium Blockchain network.

It’s time to move to the next step and create the smart contract.

Step 2: Create smart contracts

We will use Solidity to write the smart contract. For more details on Solidity, please go through the Solidity documentation at https://solidity.readthedocs.io/en/develop/. To create a Solidity smart contract you can use Visual Studio Code with the Solidity extension, or Remix. You can download the Solidity extension for Visual Studio Code from https://marketplace.visualstudio.com/items?itemName=ConsenSys.Solidity

You can get more details about Remix from its GitHub repository:

https://github.com/ethereum/remix

 

Today we will use Remix to write the smart contract. We will write one smart contract to push raw material details and search for them in the blockchain.

Open https://remix.ethereum.org/ in your browser. Click the new-file symbol located at the top left of the window, enter Manufacturing_RM.sol as the file name, and click OK.

Copy and paste the code below into the code window:

pragma solidity ^0.4.18;

contract manufacturing_RM_Contract {

    address owner;

    struct Raw_Material
    {
        string PartNumber;
        uint Quantity;
        string Supplier;
    }

    mapping (string => Raw_Material) Raw_Materials;

    address[] public Address_Raw_MaterialsAccts;

    string[] public Raw_MaterialsAccts;

    function setRaw_Material(address _creater, string _PartNumber, uint _Quantity, string _Supplier) public {
        var RM = Raw_Materials[_PartNumber];
        RM.PartNumber = _PartNumber;
        RM.Quantity = _Quantity;
        RM.Supplier = _Supplier;
        Address_Raw_MaterialsAccts.push(_creater) -1;
    }

    function getRaw_Materials() constant public returns (address[]) {
        return Address_Raw_MaterialsAccts;
    }

    function getRaw_Materials(string _PartNumber) constant public returns (string, uint, string) {
        return (Raw_Materials[_PartNumber].PartNumber,
                Raw_Materials[_PartNumber].Quantity,
                Raw_Materials[_PartNumber].Supplier);
    }

    function countRaw_Materialss() constant public returns (uint) {
        return Address_Raw_MaterialsAccts.length;
    }
}

 

In the code snippet above, we have one struct named “Raw_Material”, which holds basic raw material information such as PartNumber, Quantity and Supplier details. You can add more fields as per your requirements.

Remix will auto-compile the code if you have selected “Auto Compile”; otherwise click the compile button to compile your code. Now we will deploy this smart contract to the Ethereum Consortium Blockchain we created in Step 1.

 

Step 3: Deploy smart contracts on azure ethereum consortium blockchain

To deploy the smart contract we will again use Remix. In the Remix browser, click on the “Run” tab, located at the top right side of the screen.

From the “Environment” drop-down, select “Web3 Provider”. It will show a pop-up like the one below.

Click on OK. Now you need to provide the “Web3 Provider Endpoint” for your Ethereum blockchain network. It is the “ethereum-rpc-endpoint” value we saved from the Azure portal in Step 1. Paste this value in the popup window and click on “OK”.

It will now show some values in the “Account” dropdown, which was previously blank. We have connected the Remix IDE to the Ethereum Consortium Blockchain. Let’s deploy the smart contract: click on “Create”.

Now it will show 1 Pending transaction.

It will take a few minutes to deploy the contract; Remix then shows the address of the contract and an interface for the contract functions. It will look like this:

In the console window you can see the details of the transaction. If you want more information, click on the “Details” button.

Now it’s time to test our smart contract. First, we will test the “setRaw_Material” function. In our contract this function accepts four inputs: the address of the transaction creator, the part number, the quantity and the supplier details. You can get the creator address from the “Accounts” drop-down by clicking the copy icon next to it.

There is a textbox next to the “setRaw_Material” button. Write your inputs in that text box; they will look something like this:

"<AccountAddress>", "Tyre",4,"Supplier1"

Replace <AccountAddress> with the value you copied from the “Accounts” drop-down. Now click on the “setRaw_Material” button.

Once it’s done, the console window will show the block number and a few details about the transaction.

Now let’s try to get the details of the record we inserted into our blockchain network.

Click on the “countRaw_Materialss” button. It will return a count of 1, as we have inserted only one raw material record.

Let’s try executing another function, “getRaw_Materials”. In the input box type “Tyre” and click on the “getRaw_Materials” button. It will show all the values you inserted using the “setRaw_Material” function.

Recap:

We created an Ethereum consortium blockchain on Azure, wrote a smart contract using Remix, deployed the smart contract to our Ethereum consortium blockchain, and finally tested it successfully.

Performance when working with files through the ISequentialStream interface on Windows 10


Hello, this is the Platform SDK (Windows SDK) support team.
This time we would like to explain the behavior of the ISequentialStream interface on Windows 10.

 

Symptom
On Windows 10, operating on files through the ISequentialStream interface can take longer than on previous versions of Windows.

 

This is caused by changes made to the internal implementation of ISequentialStream to support UWP, which was introduced in Windows 10; these changes increased the overhead of internal method calls.
For example, this applies when reading a file opened with the SHCreateStreamOnFileEx function via the Read method. Cases where the method is called repeatedly, many times over, are particularly affected by the overhead described above.

 

If performance matters, consider changing your application's implementation so that it calls the methods fewer times, or using the ReadFile and SetFilePointer functions directly.

App Center Errors: Monitoring and Keeping Your Xamarin Apps Healthy


Building and shipping a successful app is a challenge. Monitoring and keeping your app healthy is even more challenging and time-consuming. Once you ship your app into the wild, unexpected errors often occur as real users start engaging with it; staying on top of them is crucial to the success of your app and business. As a developer, you need to have insight into the cause of these issues and how frequently they're happening. Visual Studio App Center recently shipped a feature that gives you just that awareness. We're excited to announce the release of the Errors feature for iOS and Android apps built using Xamarin.

In this post, you'll learn how you can use and make the most of App Center Errors to deal with the errors happening in your Xamarin apps, leading to a better experience for your end users.

Errors: What You Need to Know

Exception handling in C# helps you deal with unexpected or exceptional situations that happen while a program is running, indicating that an error has occurred. To handle these failures, you can use a try/catch block.

Exceptions in C# are defined by type and properties such as the stack trace and message. The caught exception object contains information about the error, such as the state of the call stack and a text description of the error.

The example below shows how to catch and throw an exception of type IndexOutOfRangeException within your app when the index in an array is out of range:
 

int GetInt(int[] array, int index) 
{ 
    try 
    { 
        return array[index]; 
    } 
    catch(System.IndexOutOfRangeException ex) 
    { 
        throw new System.ArgumentOutOfRangeException( 
            "The parameter index is out of range", ex); 
    }
}

 
When an error occurs outside of a try/catch block, it's said to be an uncaught error. These are crashes and will cause your application to exit. By using a try/catch block, you can enclose your code and handle failures as you need. This brings you the following benefits:

  • Improved app reliability and stability.
  • Reduced lag times.
  • Ensures users can access all app functionality.

To learn more about how and when to use exceptions in C#, take a look at the official documentation.

App Center Service for Errors

Visual Studio App Center Diagnostics is divided into Crashes and Errors for Xamarin apps. The Crashes section includes the uncaught errors, which cause the application to exit and are automatically captured when you integrate the App Center SDK. The Errors section in App Center corresponds to the handled errors (known as exceptions in C#); these are reported wherever the app developer chooses to report them. When running the App Center Crashes SDK module in an application, the service will report the tracked errors during the lifetime of the application. These errors are sent to the server when they occur, provided there is a network connection, or the next time the application is started.

When an app is running, a high number of error instances can be generated, and it can get overwhelming to fix all of them. To make the process more manageable, as well as get quick insights, App Center intelligently groups errors based on the similarity of their stack traces to make it easier and faster for you to diagnose and troubleshoot. By grouping them, you'll instantly receive information about the most common root cause of your errors, helping you prioritize which errors need attention first. Also, by tracking the status of each group, you can easily manage which errors have already been fixed, which ones you decide are not relevant and can ignore, and which are still open.

The following image shows the error group Overview page in App Center, where the different generated groups are listed, with counts on number of reports and users affected, status, and time.
 

Fig. 1. Error groups overview in App Center.

 
On top of the error groups, App Center provides you with information on the most affected devices and OS, as seen below.
 

Fig. 2. Statistics for a generated error group in App Center.

 
To get to the root cause of your error and understand why and where it happened in your code, you can easily drill down a detailed stack trace and get information about the device properties, such as model, OS, country, language, etc.
 

Fig. 3. Error Instance Detail Page in App Center.

 
For further debugging, you can attach custom properties to these errors, such as "WiFi status", "File name", "Category", and more.

You can find more details on the feature set available in the Errors documentation.

How to get started with Errors in App Center

Errors in Xamarin apps are now available in preview in App Center.

To start tracking errors, simply follow a few steps to integrate the App Center Crashes SDK. Learn more in the SDK Documentation.

Once you integrate the Crashes SDK, you can use the TrackError method. Here's an example for a common scenario where you divide a number by 0, which would result in a DivideByZeroException error.
 

static void Main() 
{ 
   int X = 15, Y = 0; 
   double output = 0; 
   try 
   { 
       output = X / Y; 
   } 
   catch (DivideByZeroException ex) 
   { 
       Crashes.TrackError(ex); 
   } 
}

 
You can also add custom properties to your errors to get more insights into what's happening. Simply pass a Dictionary of string key/value pairs to the TrackError method we used earlier. In the example code below, the custom properties are used to track the filename, the location, and the type of issue when the error occurred:
 

try
{
    using (var text = File.OpenText("saved_game001.txt")) 
    { 
        Console.WriteLine("{0}", text.ReadLine()); 
        ... 
    } 
} 
catch (FileNotFoundException ex) 
{ 
    Crashes.TrackError(ex, new Dictionary<string, string>{ 
        { "Filename", "saved_game001.txt" }, 
        { "Where", "Reload game" }, 
        { "Issue", "Index of available games is corrupted" } 
    }); 
}

 
As you can see, errors can happen at any point when customers are using your app, and it’s important that you find out about and fix them right away. App Center Errors will enable you to track your app and provide you with key information about issues and errors in your app in an organized, visual, and concise way, so you can easily diagnose and fix problems before more of your customers run into them.

App Center Errors is completely free. Sign up today to integrate the App Center Crashes SDK and start tracking errors in your Xamarin apps! If you have any questions or feedback, please reach out to us via the in-portal support system.
 
 
 
 

Giving feedback


Six months ago I wrote a post on Taking Feedback.  Several people asked me to write a follow up on giving feedback.  Amazing how time flies and somehow I just haven’t gotten around to it – so I’m doing it now.

Here's a key snippet from the Taking Feedback post if you don't want to go read the whole thing...

At some level, all feedback is valid. It is the perception of another person based on some interaction with us. As such it’s important that we listen, understand and think about how we can improve. Yet, not all feedback is to be taken as given – meaning the person giving the feedback may have heard something that wasn’t true, misinterpreted something, or may simply not have the perspective we have. In the end we are the ones to decide what to do with the feedback. We may decide that the feedback is valid and provides clear ideas for improvement. Or we may decide that we disagree with the feedback but it provides insights into how we could do differently to prevent misperceptions. Or we may decide that we simply don’t agree with the feedback and we are going to file it away and keep an eye out for future feedback that might make us revisit that conclusion.

Giving someone feedback is a wonderful thing but it’s also a very hard thing – partly because taking feedback can be so difficult that it makes giving it very stressful.  There are some things I’ve learned over the years about giving feedback that have made it a little bit easier.

There are two kinds of feedback

This is probably the one I fail the most on.  We usually think of feedback as a negative thing – here’s something you can do better.  But positive feedback is equally important – here’s something you did particularly well.  I tend to be so focused on how I and the people around me can do better that I, too often, forget to point out when someone has done something well – or they have some attribute that I really admire.  It’s not that I don’t know it at some subconscious level; it’s just that I’m caught up in the next challenge to tackle and it just doesn’t occur to me to say anything about it.

So, my first piece of advice is to try to be very conscious about positive feedback.  When you see something you like, say so.  Be on the lookout for things to compliment people for.  Do it privately; do it publicly.  Thank people for something you appreciate.  Whether they admit it to themselves or not, everyone likes appreciation and they tend to gravitate to doing things that will earn them more appreciation.  Developing a pattern of recognizing good things will encourage people to do more good things.

At the same time, be careful not to overdo it.  There can be too much of a good thing.  By that, I mean, don’t compliment people for superficial things or things they didn’t really do.  A compliment is most valued when a person feels like they invested energy.  If you compliment people for just anything, then you “cheapen” the feedback and make it mean less when it’s really deserved.

If you are good at giving positive feedback, negative feedback is also easier to give.  People are more likely to respond well to negative feedback if it’s given in an environment where, overall, they feel valued than it is if they feel like they are just always criticized for everything and not valued for anything.

There’s a time and a place for everything

When and where you give feedback is *super* important.  There’s a saying “Public praise and private criticism.”  It’s a good rule to follow.  People really appreciate having their successes publicly celebrated and no one likes being publicly berated.  Beyond that, some other important rules, particularly for negative feedback, are:

  1. Find a time when they are ready to hear it – Unless the feedback is urgently required to avoid a disaster, don’t try to give it when someone is under a great deal of stress (maybe rushing to meet a deadline), frustrated, angry, etc.  Feedback is going to be heard and processed best when the person is relaxed and reflective.  Make sure you have enough uninterrupted time to fully discuss the feedback.  It's a good idea to ask them if they are ready for you to give feedback.
  2. Make sure you are ready to give it – Similarly to #1, don’t try to give feedback when you are angry or frustrated.  Take the time to digest what you need to say – to separate your frustration from an objective assessment of what happened.  Have a calm conversation about what you observed and what could be done differently.
  3. If at all possible, give it in person – Feedback is generally best processed face to face.  It is very easy to read unintended tone in written feedback.  By giving it in person, you can watch for body language to see if the person is hearing something you aren’t intending to say.  Sometimes, of course, it isn’t possible and when it isn’t, you have to be doubly thoughtful about how you say it.  Sometimes I give some initial, very light feedback in writing, with an offer to discuss it at length in person (or via video conference, for remote people).
  4. Give it to the person – It’s amazing to me how often someone will “give feedback” to someone else.  By that, I mean, complain about what someone did to a third person without ever following up with the person themselves.  That’s never going to work and will, in the long run, only create a hostile environment.  Always focus your feedback on the person or people directly involved.  Sometimes it’s necessary and appropriate to share feedback with a broader audience so that everyone can learn from something.  Be careful how you do that because, done wrong, it looks a lot like public criticism; never do it without talking to the people directly involved first.

Focus on what you can directly observe

It’s very important to focus on what you can directly observe.  Try very hard to avoid “I’ve heard…” or even “Susan told me…”.  The problem with relaying feedback from someone else is that you don’t really know what has happened and it’s very hard for you to be constructive.  That said, you will, particularly as a manager, get feedback from 3rd parties and it’s not irrelevant.  I generally try to use it, carefully, as supporting evidence when giving my own feedback.  It helps me understand when things I’ve observed are a pattern vs an anomaly.  If someone comes to you with feedback about someone else, try as hard as you can to find a way to facilitate the feedback being given directly between the people involved, even if you need to participate in the discussion to facilitate it.

I’ve observed that humans have an inherent tendency to want to ascribe motive – to determine why someone did something.  “Joe left me out of that important conversation because he was trying to undermine me.”  Any time you find yourself filling in the because clause, stop.  You don’t know why someone does anything.  That is locked up securely in their head.  When filling in that blank, people usually insert some negative reason that’s worse than reality.  So, when giving feedback, stick to what you can see.  “Joe, you left me out of that important conversation.  I felt undermined by that.  Why did you do it?”  In this example, I articulate exactly what I saw happen and how it made me feel, and I ask Joe to explain why.  Joe may dispute that he left me out – that’s fairly factual and we can discuss the evidence.  And Joe can’t dispute how I felt, at least not credibly.  Try as hard as you can to stick to things you personally observed and stay away from asserting motive.  Have a genuine conversation designed to help you better understand each other’s perspective and what each of you can do better in the future.

Consider your relationship

Your relationship with the recipient of your feedback can make a big difference.  You need to be careful about how it colors what you say.  For instance, as a manager, I always try to be one who is connected to what's going on in the team and give feedback to anyone and everyone on what I see.  Early in my career, I found this can go terribly wrong.  An offhand comment to someone several levels below me in the company can be interpreted as a directive to be followed.  I may have been musing out loud and somehow, accidentally, countermanded several levels of managers.  Try that and see how fast a manager shows up at your door to complain 😊.  Now, I try to be clear when I'm just giving an offhand opinion and when I'm giving direction.  I also tell them to go talk with their manager before acting on what I told them and, often, go tell the manager myself what I said.

This is just one example of how a relationship can affect how feedback is taken.  Feedback from a spouse is different than that from a friend, which is different than that from a parent, which is different than that from a co-worker, etc.

Acknowledge your role

Often, when giving feedback, it’s about some interaction you were party to – and, as they say, it takes two to tango.  There may have been things you did that contributed to whatever happened.  Be prepared to acknowledge them and to talk about them.  Don’t refuse to acknowledge that you may have had a role.  At the same time, don’t allow the person to make it all about you.  You have feedback for them.  Don’t let the conversation become only about you.  Make sure you are able to deliver your feedback too.  You may need to offer to set aside time in the future for the other person to give you feedback so that, for now, you can focus on your feedback.

Retrospectives can be powerful

While most of what I’ve written here focuses on how to give feedback to someone, a great technique to drive improvement is to create an environment where people can critique themselves.  Retrospectives are an awesome tool to get one or more people to reflect on something and make their own suggestions for improvements.  Done right, it is a non-threatening and collaborative environment where ideas and alternate ways of handling things can be explored.  Retrospectives, like all feedback, should focus on what happened and what can be better and avoid accusations, blame, and recrimination.  You can participate in it and contribute your feedback or you can discuss the outcome and help process it for future actions.

Beware of feedback landmines

  1. The feedback sandwich - This is probably one of the hardest ones to get right and depends a lot on you and the person you are talking to.  A feedback sandwich is when you tell someone how good they are, then you tell them something you think they need to improve, then you tell them how good they are again.  There are legitimate reasons to mix both positive and negative feedback, for example, it helps establish the scope of the feedback.  If you only give negative feedback, people can read more into it than you mean.  I often use a mix of positive and negative feedback so that I am clear about the scope of the negative feedback.  “I’m not talking about everything you do, I’m just talking about this specific issue”.  Or, "Here's an example of where you handled something similar well".  However, when it is primarily used to blunt the emotional impact of the feedback, it is dangerous.  Taken too far, it can completely dilute your point and make your feedback irrelevant.
  2. Examples – When giving feedback, it’s often useful to use examples.  Examples help make the feedback concrete.  But, don’t allow the conversation to turn into a refutation of every example.  I’ve been in conversations where the person I’m talking with wants to go through every example I have and explain why my interpretation is wrong.  Be open to being wrong but don’t let it turn into point/counter point.  Examples are only examples to support your feedback.
  3. Comparisons – Be *very* careful about comparing one person to others.  While it’s often useful to suggest better ways of handling something, it’s very dangerous to do it by saying “You should just do it like Sam.”  It creates resentment, among other things.  Sometimes it is appropriate to talk about examples of how you’ve seen something handled well before but don’t let it become a “Sam is better than you” discussion.

Summary

Ironically, just this last weekend, I was having dinner with a friend that I used to work with (she was on my team).  We haven't worked together in many years but we've stayed in touch.  While we were having dinner, she told her husband a story about me.  She said she remembered a time when she had done a review of her project for me and it had not gone well.  After the review, I approached her and asked if she was feeling bad about the review.  She said "Yes" and I said "Good, you should be".  We then went on to discuss what was bad about it and what she could do to improve it.  On the retelling, it sounded harsh.  While I remember the discussion, I don't remember many details but it got me thinking.  On the positive side, it was good for me to approach her separately after the meeting.  It was good for me to start with a question of how she was feeling about it.  I probably could have come up with a better reply than "Good, you should be".  And I do recall we had a good conversation afterwards about how to improve.  If nothing else, this example is proof of how much emotional impact feedback, particularly when not done carefully enough, can have - she has remembered this incident for almost 10 years and I have long forgotten it.

Giving feedback is hard.  There’s no simple rule for it.  It is stressful and can lead to conflict.  The best advice I can give you is:

  1. Give feedback regularly – both positive and negative.
  2. Be careful about when and where you give feedback so you can have a calm and thoughtful conversation.
  3. Focus on things you directly observe and the effects they had on you.  Don’t ascribe motives and make it a personal attack.
  4. Consider your relationship and how it will affect how feedback is heard.
  5. Be aware of your own role and be prepared to discuss it appropriately.
  6. Use retrospectives as a tool for collecting/processing feedback in a non-threatening way.

Lastly, I’ll say, always remember that the purpose of feedback is to help the other person.  If you are giving feedback to make yourself feel better (for example feeling vindicated or superior), you are doomed.  Stop and rethink what you are doing.

As always, I hope this is helpful and feedback is welcome 😊

Brian

#HOWTO – Continuous delivery from GitHub to Azure AppService


A few months ago I spent a couple of evenings implementing an AppService able to deliver events from Visual Studio Online to a Lametric smart clock. I have also shared all the solution code on GitHub.

Well, yesterday, while I was working on other stuff on a customer's AppService, the following icon grabbed my attention:

It allows you to enable continuous deployment from various sources to the AppService.

In the list of available sources I also found GitHub... two seconds later I realized that I already had a perfect playground to test this automagic workflow!

So I went back to my subscription > Lametric AppService and, after a click on [deployment options] > [GitHub] and the corresponding authorization workflow, I was able to select my GitHub solution and the preferred branch to use for the deployment.

Important note: in order for this to work, the Visual Studio solution file (.sln) MUST BE in the GitHub repository root folder!!

After a click on [OK] everything is DONE: the AppService connects to GitHub, downloads the solution code, builds it, and deploys it to PRODUCTION in a couple of minutes!

When the deployment finishes, the active deployment is shown under [Deployment Options] so you know what you have live.

...and if you click on it you can also see what Azure has done under the hood:

Interesting and USEFUL when you have to troubleshoot the process in case of an error.

From now on, every time I push an update to GitHub, well, the magic happens… in a couple of minutes a new version of the app is globally available 🙂 Yes, as scary as it looks...

Luckily I can easily roll back to a previous version if I need to: I just have to select a previous deployment and click on [redeploy].

The March 2018 security update (Sec Patch) for Skype for Business 2015 (Lync 2013) has been released.


This is the Japan Skype/Lync support team.

The March 2018 security update (Sec Patch) for Skype for Business 2015 (Lync 2013) has been released.

TITLE: Description of the security update for Skype for Business 2015 (Lync 2013): March 6, 2018
URL: https://support.microsoft.com/ja-jp/help/4011678

After the update is applied, the file version will be 15.0.5015.1000.

The information in this article (including attachments and linked content) is current as of the date it was written and is subject to change without notice.


The March 2018 security update (Sec Patch) for Skype for Business 2016 has been released.


This is the Japan Skype/Lync support team.

The March 2018 security update (Sec Patch) for Skype for Business 2016 has been released.

TITLE: Description of the security update for Skype for Business 2016: March 6, 2018
URL: https://support.microsoft.com/ja-jp/help/4011725

After the update is applied, the file version will be 16.0.4666.1000.

The information in this article (including attachments and linked content) is current as of the date it was written and is subject to change without notice.

How VSTS is Accelerating the Engineering Group Behind Windows


As part of our engineering processes in Microsoft, we often share best practices and stories of change across the engineering teams in the company. At our latest internal engineering conference, as I listened in on sessions, I was struck by the sheer scale of the effort the Windows and Devices Group (WDG) undertook and the problems they've solved using Visual Studio Team Services (VSTS), and I wanted to write up some of my key takeaways here.

People talking in the exhibition area during 1ES Day 2018

WDG here at Microsoft powers the operating systems of computing devices across the planet. It looks after not only the Windows operating system, but also Xbox, Surface, HoloLens, the Microsoft Store and much more. With over 22,000 employees and 7,000 software developers in the group, it’s larger than many companies.

WDG was formed from divisions across Microsoft. When that many engineers came together from different areas, there were lots of ways of working and lots of different systems and ways to build and deploy software. There was duplication of effort and logistical difficulties in sharing code, processes and learnings. How do you get everyone to work together and across all the other disciplines within the team?

Four years ago, WDG started to adopt VSTS as part of the ‘One Engineering System’ (1ES) effort in Microsoft – an effort to bring together our engineering people, processes and tools across the whole company. The WDG team has been leading much of the performance work we’ve done to migrate teams to Git hosted in VSTS. The Windows core repo contains over 270 GB of source for Windows in a massive mono-repo. That repo alone has over 4,000 engineers working on it (meaning around 400 are actively making changes at any one time).

While Windows core is the largest Git repo, WDG has a lot of other repos and they try to make code more modular and self-contained when it makes sense. Large mono-repos can be good for developer productivity but they come with the cost of some additional complexity along with process and tooling limitations. In total the WDG team edits, reviews, builds and deploys from around 6,000 Git repos across the entire group. When you have that many people working in that many repos, pull requests become essential.

The recent Fall Creators Update to Windows 10 consisted of around 4 million individual commits grouped into around 500,000 pull requests. All changes to the Windows code are reviewed via pull requests which has proven to be the best way to work in any Git team in Microsoft. It has taken significant engineering work to build a pull request system in VSTS that is able to handle pull requests at the scale of a group like WDG. There are many improvements we have made over the years based on our learning such as running a PR queue in the background when you press ‘Merge’ on your pull request to prevent the merge race conditions and collisions that can otherwise happen. The WDG team has also published an extension to the VSTS marketplace to allow merge conflicts to be resolved directly in a pull request from VSTS rather than having to merge them locally and then push back to the server.

WDG also puts a similar level of demand on the VSTS work tracking system. They currently have over 10 million work items tracking bugs, features and tasks etc. Note: If you ever get asked to send a crash report to Microsoft please do! - there is a high probability that data from that crash report will end up in a work item for the WDG team to triage, allocate to an engineer to investigate and create a fix. Those crash reports and the detailed diagnostic information they contain are great for helping us improve our products – just know when you send one of those in you’ve made an engineer’s day easier by helping her track something down that might have been previously reported anecdotally but without detailed diagnostics.

Bringing WDG together in VSTS wasn’t all plain sailing – but like most organizational changes, it’s the people and the processes that are the hardest bits to adjust. We like teams to have a high degree of autonomy here in Microsoft. But when WDG came together from separate groups, those groups would often have different names for very similar things for no good reason – just because that is how it had always been for them. To give just a couple of examples, one team's ‘bug’ was another’s ‘defect’ or ‘issue’. Depending on the team, ‘done’ could mean the work item had a status of ‘Complete’, ‘Completed’ or ‘Closed’ depending on where you worked in the org. When you start scaling that up you get a lot of complexity making it hard for anyone to know how to log a bug and be able to have it flow through the group across responsible teams where necessary. After bringing together engineering leads to drive a process rationalization and bring some alignment across the group, the team was able to dramatically reduce the number of work item types, fields and states. This not only brought greater simplicity to the process and made it much easier for engineers to use – it also helped improve the performance of their VSTS account as the forms no longer had to render hundreds of fields that were rarely (if ever) used. Rationalizing work items also allowed for better communication and reporting about what was happening within the group.

The WDG experience led to direct improvements in VSTS for things like tag support. More subtle changes include how you assign a work item to someone. (A regular drop-down combo works fine for a small team, but not when you could potentially assign the work item to one of the 80,000 VSTS users in Microsoft.)

The WDG team has seen massive improvements in moving to VSTS, not just in pure throughput, but also in the satisfaction levels of engineers on the team. Engineer satisfaction is the one of the most important management metrics for WDG. This also reflects the change of culture not just in their group but also across Microsoft as a whole towards rewarding sharing and reuse. In turn, the WDG team has helped improve VSTS for all our other customers, either by making direct feature contributions as a pull request to the VSTS codebase, by creating extensions and making them publicly available in the VSTS marketplace, or by leading the way so that feature gaps and performance issues are identified well before customers outside Microsoft run into them.

As a best practice, WDG has also released a number of their tools and extensions as open source projects so that customers outside of Microsoft can make use of them. My personal favorites are the Work Item Migrator (a way to copy work item content from one VSTS account into another) and also Mohit Bagra’s work item form extensions which adds one-click power user commands into the VSTS work item form.

It’s been an incredible four years; we've come further than we could have imagined together through the power of DevOps and the benefits of encouraging a culture of sharing inside and outside the company.

The Seven Habits of Highly Effective Developers


If you are looking to maximize your productivity and impact as a modern developer, consider these seven habits shared by App Dev Manager Ketuan Baldwin.


Reaching your full potential as a developer requires you to be highly effective. In this blog, I will discuss some principles that are important for modern developers to be successful. The ideas generated from these principles are based on Stephen Covey's best-selling book, The 7 Habits of Highly Effective People. As developers, our primary goal is to make things easier and/or to create more engaging experiences for users. These seven principles look at a modern approach to achieving that primary goal.

1. Be Proactive with DevOps

a. Being reactive doesn't allow you to be innovative. DevOps gives you control over the process and tools for building, testing, and releasing software applications. For many years, and even today, some teams only release software on the weekend or in the middle of the night. This can be because developers and technology operations resources haven't integrated or accepted a DevOps culture that embraces automating software delivery through continuous integration and deployment.  Being proactive with DevOps increases the reliability of environment resources and can be helpful when automating repeatable tasks.

Check out Visual Studio Team Services for more information about Microsoft's DevOps tools.
clip_image002

2. Begin with Open Source in Mind

a. For a long time at Microsoft, we believed that we could create all the products and tools that would solve any problem by leveraging great dev teams and driving widely adopted products.  More recently, we have changed this way of thinking and embraced Open Source technologies and services as an integral part of developing solutions for customers.  Today's effective developers realize that the .NET platform and Windows Server work well for many solutions, but they are open to using Linux servers and other development platforms to solve problems.

Check out some of the stories on how Microsoft has embraced Open Source tools and services like Kubernetes, Node.js, Chef, etc.

3. Put the Cloud First

a. It's extremely important to consider a Cloud First approach in your app development. The Cloud helps remove barriers and creates flexibility, scalability, and availability for your applications' services. A cloud first approach allows developers to focus on innovation and not managing networks, operating systems, and storage needs-- allowing technology resources to focus on more strategic responsibilities and outcomes.

Check out this Developers Guide to help get you started in the Azure Cloud.
clip_image004

4. Think Containers/Serverless in your Architecture

a. Rethinking the way we design and architect applications for a variety of platforms, devices, services, and consumers can be challenging. When we understand the benefits of containers and closely examine opportunities for serverless computing, we can transform monolithic legacy applications. Serverless applications help reduce code and speed up the development process at scale. Effective developers use containers to maximize deployment flexibility and serverless as an option for integrated scaling, hosting, and monitoring; a small sketch of a serverless function follows after the link below.

Find out more about the Containers and Serverless features in Azure.
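
To make the serverless idea concrete, here is a minimal sketch of an HTTP-triggered Azure Function in C#. The function name, route, and logic are illustrative assumptions rather than anything from the original post; the point is that the platform owns hosting and scaling while your code contains only the business logic.

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HelloFunction
{
    // Hypothetical function: the platform provisions, hosts, and scales it on demand.
    [FunctionName("HelloFunction")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("HelloFunction was invoked.");
        string name = req.Query["name"];
        return new OkObjectResult($"Hello, {name ?? "world"}!");
    }
}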

5. Use Mobile to Understand, and AI to be Understood

a. Now that Mobile is a part of most users' digital experience, it offers an opportunity to empathically listen to and understand customer needs. This can lead to a more powerful and engaging experience with Artificial Intelligence.  Cognitive Services, Machine Learning, and Bot Services give developers new, exciting, and unexpected ways of understanding and interacting with data through voice, video, images, and text.

See how to build these engaging Mobile and AI experiences with Microsoft platforms and tools. A rough sketch of calling one of the Cognitive Services APIs from C# follows below.
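
As an illustration only, here is a hedged sketch of calling the Text Analytics sentiment endpoint with HttpClient. The region, API version in the URL, and the subscription key are assumptions and must be replaced with the values from your own Cognitive Services resource.

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class SentimentClient
{
    // Placeholder endpoint and key; substitute the values from your own Cognitive Services resource.
    private const string Endpoint =
        "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment";
    private const string SubscriptionKey = "<your-text-analytics-key>";

    public static async Task<string> GetSentimentAsync(string text)
    {
        using (var http = new HttpClient())
        {
            http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", SubscriptionKey);

            // Text Analytics expects a batch of documents; here we send a single English document.
            var body = "{\"documents\":[{\"id\":\"1\",\"language\":\"en\",\"text\":\""
                       + text.Replace("\"", "\\\"") + "\"}]}";

            var response = await http.PostAsync(
                Endpoint, new StringContent(body, Encoding.UTF8, "application/json"));
            response.EnsureSuccessStatusCode();

            // The response is JSON containing a sentiment score between 0 (negative) and 1 (positive).
            return await response.Content.ReadAsStringAsync();
        }
    }
}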

6. Synergize through Insights

a. We can gain insights through various forms of telemetry. Independently, each area has limited value, but the synergy of all insights leads to opportunities for new services and a deeper understanding of the customer experience.  When developers have complete visibility into applications, they can monitor events, app performance, exceptions, and session details to help diagnose issues for users across the entire solution stack.

Effective developers build, measure, and learn with Application Insights; a brief sketch follows below.
clip_image006
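
As a minimal sketch (the event name and the wrapped action are made up for illustration), the Application Insights TelemetryClient can record custom events and exceptions from application code so they show up alongside performance and session data:

using System;
using Microsoft.ApplicationInsights;

public static class GameTelemetry
{
    private static readonly TelemetryClient Client = new TelemetryClient();

    public static void SaveGame(Action save)
    {
        // Custom event: a user action you want to correlate with performance and failures.
        Client.TrackEvent("GameSaved");
        try
        {
            save();
        }
        catch (Exception ex)
        {
            // Exception telemetry appears next to events and session details in the portal.
            Client.TrackException(ex);
            throw;
        }
    }
}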

7. Continue to Sharpen the Saw

a. Continue to invest in yourself and gain new relevant skills. Check out some of the digital learning experiences for developers:
Azure Cloud .NET Developers
Azure Cloud AI Developer
Microsoft Professional Program - Front End Development
Microsoft Virtual Academy - DevOps for Developers

These new habits give us something to think about and work towards as we become more effective in our daily activities. Almost 10 years ago, a former colleague of mine, John Powell, wrote a blog on The 7 Habits of Highly Effective Developers that made sense for developers in 2008. While those principles can still be effective, there are a lot of new capabilities and opportunities to consider as a developer today.

Stay #Winning and keep developing amazing experiences my friends…


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

Where to get EWSEditor


A while back EWSEditor check-ins were moved to GitHub. Below are the links for download:

Here is the main page:

    https://github.com/dseph/EwsEditor

You can click "Clone or Download" to get the code.

On the main page there is also a link with the text "release" (it has what looks like a price tag to the left). This link will take you to a page where you can download zip files of the code and of the built application. The built application zip file will have "Bin" in its name. Below is the link to "releases".

    https://github.com/dseph/EwsEditor/releases

Install SQL 2017 CU4 for a non-English locale without internet access (offline installation of SQL 2017 CU4)


Install SQL 2017 CU4 without internet access (offline installation of SQL 2017 CU4)


On Sunday I received a message from James Chang, a Machine Learning specialist at Microsoft, asking how to install SQL 2017 CU4 offline.

After looking into it, it turned out that the SQL 2017 CU4 setup needs to download the Machine Learning (R Server and Python Server) CAB files during installation; because there was no internet access, the installation kept failing.

The error log indicated that the following files needed to be downloaded:
SRO_3.3.3.300_1028.cab
SRS_9.2.0.400_1028.cab
SPS_9.2.0.400_1028.cab
SPO_9.2.0.24_1028.cab

After some back-and-forth confirmation and research, the final solution was as follows.

1. Download the CAB files

Download the required CAB files by following this official document:

Installing machine learning components without internet access

2. Rename the CAB files

These CAB files are named for locale 1033 (English), so if SQL Server is the Traditional Chinese edition, the CU4 setup will not recognize them when you point it at the folder that contains them. You need to manually rename the CAB files, changing 1033 to 1028 (see the sketch below).
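
A small illustrative sketch of the rename in C# (the folder path is a made-up example; only the 1033 to 1028 change matters):

using System.IO;

class RenameCabs
{
    static void Main()
    {
        // Hypothetical folder holding the CAB files downloaded on a machine with internet access.
        const string cabFolder = @"C:\SQL2017CU4Cabs";

        // Rename the English (1033) CAB files so the Traditional Chinese (1028) setup recognizes them.
        foreach (var path in Directory.GetFiles(cabFolder, "*_1033.cab"))
        {
            File.Move(path, path.Replace("_1033.cab", "_1028.cab"));
        }
    }
}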

3. Provide the location of the correct version of the CAB files

Then, during the CU4 setup, specify the folder that contains these CAB files so that the installation can complete successfully.

 

References:

Installing machine learning components without internet access

Do it right! Deploying SQL Server R Services on computers without Internet access

SQL Server 2016: patching CU with R Services
