
Driving industry transformation with Azure


Marty Donovan is both an alumnus of the Premier Developer team and a Senior Program Manager for Industry Experiences.  Not long ago, we featured a post about new content to help customers build solutions for common industry use cases using Azure.

Building on this momentum, check out Marty’s most recent post that goes into more detail on Azure solutions in Banking and Capital Markets, Health and Life Sciences, Insurance, Manufacturing, and Retail and Consumer Goods.

Microsoft is highly focused on solving industry challenges, creating new opportunities and driving digital transformation for all organizations. Through dedicated industry resources from Microsoft and premier partners, new solutions, content, and guidance are released almost daily. This monthly blog aggregates a series of the newest resources focused on how Azure can address common challenges and drive new opportunities in a variety of industries.

Check out the full article here.


Installing SQL Server 2017 for Linux on Ubuntu 18.04 LTS


Prior to SQL Server 2017 for Linux CU10, package dependencies prevented installation on Ubuntu 18.04 LTS. The SQL Server 2017 installation packages have been updated to use the libssl1.0.0 package, allowing installation on Ubuntu 18.04 LTS.

Ubuntu 18.04 LTS ships with version 1.1 of the OpenSSL package, while SQL Server 2017 for Linux had a dependency on OpenSSL 1.0. The correct dependency is the libssl1.0.0 package, which CU10 now declares.

Additional actions may be required on some systems. If libcurl4 is installed, it should be removed and libcurl3 installed, as shown in these example commands:

sudo apt-get remove libcurl4

sudo apt-get install libcurl3

After this step, following the standard set of installation instructions should allow you to install SQL Server.
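Once the packages are installed and the mssql-server service is running, a quick way to confirm that the instance accepts connections is to run a small client program against it. The following C# sketch is only an illustration: it assumes the System.Data.SqlClient package and uses placeholder sa credentials that you would replace with your own.

using System;
using System.Data.SqlClient;

class ConnectivityCheck
{
    static void Main()
    {
        // Placeholder connection details -- replace the server, user, and password with your own.
        var connectionString = "Server=localhost,1433;User Id=sa;Password=<YourStrongPassword>;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var command = new SqlCommand("SELECT @@VERSION", connection))
            {
                // Prints the SQL Server version string, confirming the instance is reachable.
                Console.WriteLine(command.ExecuteScalar());
            }
        }
    }
}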

Upgrade from earlier Ubuntu versions:

With the release of SQL Server 2017 CU10 (build 14.0.3037.1), you can install it on a new Ubuntu 18.04 server. Upgrading from Ubuntu 16.04 to 18.04, however, still results in some issues. As noted earlier, SQL Server 2017 depends on libcurl3. Our testing indicates that upgrading to Ubuntu 18.04 may upgrade the libcurl library to libcurl4, and there is currently no known way to install the two versions side by side. Because of the way package dependencies work, the distribution upgrade replaces libcurl3 with libcurl4 and uninstalls libcurl3's dependent packages, which include SQL Server 2017. The databases themselves are typically not removed, because /var/opt/mssql and other data-related folders are preserved when SQL Server 2017 is uninstalled, but reverting the libcurl library to version 3 after the distribution upgrade and then reinstalling and reconfiguring SQL Server 2017 may result in additional unintended outage. We are currently working on ways to correct this situation.

Support state:

While we have unblocked the installation and use of SQL Server 2017 on Ubuntu 18.04, it has not been fully tested for production use. So, we recommend that you use SQL Server 2017 on Ubuntu 18.04 for non-production purposes only. We will update the support statement for Ubuntu 18.04 to cover full production use once thorough testing has been completed and full support has been documented.

Additional Note:

A few systems may require version 1.0 of the OpenSSL libraries to connect to SQL Server. Switching SQL Server to OpenSSL 1.0 can be done as follows:

  1. Stop SQL Server:
     sudo systemctl stop mssql-server
  2. Open the editor for the service configuration:
     sudo systemctl edit mssql-server
  3. In the editor, add the following lines to the file and save it:
     [Service]
     Environment="LD_LIBRARY_PATH=/opt/mssql/lib"
  4. Create symbolic links to OpenSSL 1.0 for SQL Server to use:
     sudo ln -s /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 /opt/mssql/lib/libssl.so
     sudo ln -s /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0 /opt/mssql/lib/libcrypto.so
  5. Start SQL Server:
     sudo systemctl start mssql-server

 

Mitchell Sternke | Software Engineer

Dylan Gray | Software Engineer

Tejas Shah | Senior Program Manager

Vin Yu | Program Manager

Q&A: How to specialize std::sort by binding the comparison function


This post is part of a regular series of posts where the C++ product team here at Microsoft and other guests answer questions we have received from customers. The questions can be about anything C++ related: Visual C++, the standard language and library, the C++ standards committee, isocpp.org, CppCon, etc. Today's Q&A is by Herb Sutter.

Question

A reader recently asked: I am trying to specialize std::sort by binding the comparison function.
I first tried:

auto sort_down = bind(sort<>,_1,_2,[](int x, int y){return x > y;});

It couldn’t infer the parameter types. So then I tried:

auto sort_down = bind(sort<vector<int>::iterator,function<int(int)>>,
                      _1,_2,[](int x, int y){return x > y;});

Is there a straightforward way to do this?
Another example:

auto f = bind(plus<>(), _1, 1)

Here bind has no trouble deducing the template arguments in this case, but when I use a function template for the original callable, it's not so happy. Just wanting to be consistent with this usage.

Answer

First, the last sentence is excellent: We should definitely be aiming for a general consistent answer where possible so we can spell the same thing the same way throughout our code.

In questions about binders, the usual answer is to use a lambda function directly instead – and usually a generic lambda is the simplest and most flexible. A lambda additionally lets you more directly express how to take its parameters when it’s invoked – by value, by reference, by const, and so forth, instead of resorting to things like std::ref as we do when we use binders.

For the second example, you can write f as a named lambda this way:

auto f = [](const auto& x){ return x+1; };

For the first example, you can write sort_down as a named lambda like this:

auto sort_down = [](auto a, auto b){ return sort(a, b, [](int x, int y){return x > y;}); };

Note the way to give a name to a lambda: assign it to an auto variable, which you can give any name you like. In this case I take a and b by value because we know they’re intended to be iterators which are supposed to be cheap to copy.

The nice thing about lambdas is they allow exactly what you asked for: consistency. To be consistent, code should use lambdas exclusively, never bind. As of C++14, which added generic lambdas, lambdas can now do everything binders can do and more, so there is never a reason to use the binders anymore.

Note that the old binders bind1st and bind2nd were deprecated in C++11 and removed in C++17. Granted, we have not yet deprecated or removed std::bind itself, but I wouldn’t be surprised to see that removed too. Although bind can be convenient and it’s not wrong to use it, there is no reason I know of to use it in new code that is not now covered by lambdas, and since lambdas can do things that binders cannot, we should encourage and use lambdas consistently.

As a side point, notice that the “greater than” comparison lambda

[](int x, int y){return x > y;}

expects integer values only, and because of the glories of the C integer types it can give the wrong results because of truncating (e.g., if passed a long long) and/or sign conversion (e.g., a 32-bit unsigned 3,000,000,000 is greater than 5, but when converted to signed is less than 5). It would be better written as

[](const auto& x, const auto& y){return x > y;}

or in this case

std::greater<>{}

Thanks to Stephan Lavavej for comments on this answer.

Your questions?

If you have any question about C++ in general, please comment about it below. Someone in the community may answer it, or someone on our team may consider it for a future blog post. If instead your question is about support for a Microsoft product, you can provide feedback via Help > Report A Problem in the product, or via Developer Community.

Testing regulation storms with the good ship BDD SpecFlow


Behavior-Driven Development is becoming more critical as the expectations for software quality and fitness for purpose are increasingly regulated by state, federal and international entities.  I was introduced to SpecFlow while working with customers who need to meet SOX compliance, remain highly agile, and prove that they have automated tests enforcing that compliance.  This has led to a series of blog articles centered around BDD.

 

As an example of SpecFlow tests, the following could be actual source code:

 

Feature: Force Product Owners and Development to collaborate through process

Scenario: Describe a specific requirement with a structured language
  Given A product owner wants a specific feature
  And there is an established scrum team who might deliver it
  When the PO writes the first draft of the BDD Gherkin scenario
  And gets feedback from the scrum team
  Then the scrum team will review it
  And rewrite it with the PO until approved
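Each Gherkin step is then backed by a step-definition method in code. The sketch below shows what hypothetical bindings for this scenario could look like using SpecFlow's [Binding], [Given], [When] and [Then] attributes from the TechTalk.SpecFlow namespace; the class name, fields and assertions are illustrative only, not part of the original example.

using System;
using TechTalk.SpecFlow;

[Binding]
public class CollaborationSteps
{
    // Illustrative state only; real bindings would drive the system under test.
    private bool _featureRequested;
    private bool _draftWritten;

    [Given(@"A product owner wants a specific feature")]
    public void GivenAProductOwnerWantsASpecificFeature() => _featureRequested = true;

    [Given(@"there is an established scrum team who might deliver it")]
    public void GivenThereIsAnEstablishedScrumTeam() { /* arrange the team context here */ }

    [When(@"the PO writes the first draft of the BDD Gherkin scenario")]
    public void WhenThePOWritesTheFirstDraft() => _draftWritten = true;

    [When(@"gets feedback from the scrum team")]
    public void WhenGetsFeedbackFromTheScrumTeam() { /* collect feedback here */ }

    [Then(@"the scrum team will review it")]
    public void ThenTheScrumTeamWillReviewIt()
    {
        // Assert against the state captured by the earlier steps.
        if (!_featureRequested || !_draftWritten)
            throw new InvalidOperationException("Scenario preconditions were not met.");
    }

    [Then(@"rewrite it with the PO until approved")]
    public void ThenRewriteItWithThePOUntilApproved() { /* assert the approval workflow here */ }
}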

 

BDD is a huge topic, and I have a very short attention span (as I assume you do too), so I wrote a very short introduction to BDD with SpecFlow at: http://darrenrich.blogspot.com/2018/02/bdd-specflow-vsts-cicd-oh-my.html

 

Next I wanted to ensure that you could get your hands on working code quickly, under 15 minutes actually, and begin to play with BDD driving Selenium browser automation at: http://darrenrich.blogspot.com/2018/03/bdd-specflow-vsts-cicd-fnc15.html

 

If a testing framework doesn't work with VisualStudio.com release definitions, I don't see much value in it.  In this article I go deeper into the ways that SpecFlow BDD can be integrated into your CI/CD pipeline: http://darrenrich.blogspot.com/2018/04/bdd-specflow-vsts-cicd-digging-deeper.html

Build and Release failures when using BitBucket and OAuth in all regions. 08/29 – Mitigating


Update: Wednesday, August 29th 2018 17:20 UTC

The configuration change to mitigate the issue on the VSTS side has been successfully deployed. We believe that for most affected users, the issue should now be resolved.
We are keeping the issue as “Mitigating” because we believe some users may still see intermittent build failures, which we currently suspect are due to a dependency on Bitbucket.
We have engaged Bitbucket in this regard and will be working with them to resolve any remaining issues.

  • Next Update: Before Thursday, August 30th 2018 17:30 UTC

Sincerely,
Niall


Initial notification: Wednesday, August 29th 2018 15:49 UTC

We are investigating Build and Release failures when using BitBucket and OAuth in all regions.

The issue from yesterday (August 28th) reoccurred due to a configuration change that reverted the mitigation steps that were taken yesterday.
The mitigation steps from yesterday are currently being re-run.

  • Next Update: Before Wednesday, August 29th 2018 16:20 UTC

Sincerely,
Niall

APIs now available for complete Driver lifecycle management


One of the consistent themes the Hardware Dev Center team hears from the Partner Community is the need for an easy way to automate the lifecycle of submitting and publishing drivers. This is especially true for partners with large volumes, because they need a way to build, sign, package and publish drivers within their existing build and release management processes. To address this feedback, we published APIs for driver submission in April. Continuing with that story, we have now added APIs for publishing drivers in Hardware Dev Center. These are now available to all partners and allow you to create shipping labels.

What can I do with APIs?

Microsoft Hardware APIs are now available for Hardware Dev Center. You can use these REST APIs to:

  • Submit drivers
  • Download signed drivers
  • Create/upload derived submissions
  • Check status of an existing submission
  • Check status of a shipping label
  • Create a shipping label
  • Edit a shipping label

How to onboard/start using it?

Read through the documentation to understand the methods available, the request and response types for each, and how to call them. The documentation also contains sample code showing how to use the APIs. Since these are REST APIs, you should be able to onboard to them easily without changing the technology you already use in-house.

How do I get access?

The APIs can be accessed using your existing Azure AD account by associating an Azure AD application with your Windows Dev Center account. If you are already using the Microsoft Store analytics API or the Microsoft Store submission API, you can reuse the same credentials to access the Microsoft Hardware API as well.
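As a rough sketch of what calling the API with those Azure AD application credentials can look like, the snippet below acquires a token with the client credentials flow (using the ADAL library) and sends an authenticated GET request. The tenant, client, resource and endpoint values are placeholders; take the real ones from the Hardware Dev Center API documentation.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

class HardwareApiClientSketch
{
    static void Main() => RunAsync().GetAwaiter().GetResult();

    static async Task RunAsync()
    {
        // Placeholders -- substitute your tenant, app registration and the resource/endpoint
        // values documented for the Hardware Dev Center API.
        var authority = "https://login.microsoftonline.com/<tenant-id>";
        var resource = "<resource URI from the documentation>";
        var credential = new ClientCredential("<client-id>", "<client-secret>");

        // Acquire an app-only token with the client credentials grant.
        var authContext = new AuthenticationContext(authority);
        var authResult = await authContext.AcquireTokenAsync(resource, credential);

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", authResult.AccessToken);

            // Hypothetical endpoint; see the API reference for the real routes.
            var response = await client.GetAsync("<endpoint URL from the documentation>");
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}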

What are early adopters saying?

Early adopters have been able to onboard, test, and start using the APIs, saving time and increasing productivity. Some snippets of feedback:

“Prompt and superfast. Status poll is prompt. Able to publish multiple OSes with 2700 HWIDs in one shot. No timeouts or any other issues noticed in the perf”

“We were able to automatically upload and then download our signed package in just 10 mins, and everything was smooth and straightforward. This API will definitely save our Cert Engineers meaningful cycles pretty much every day now that we no longer have to do this manually.”

“The APIs have reduced the cycle time for our end to end signing process from days to 75 minutes.”

What's next?

In the coming months, we will be releasing APIs for advanced driver search. Stay tuned for more updates.

Happy automating!

Azure Functions and Serverless Computing


In my previous blog post, WebJobs in Azure with .NET Core 2.1, I briefly mentioned Azure Functions. Azure Functions are usually small (or somewhat larger) bits of code that run in Azure and are triggered by some event. Azure takes complete care of the entire infrastructure of your Functions, making it a so-called serverless solution. The only thing you need to worry about is your code.

The code samples for this post can be found on my GitHub profile. You shouldn't really need it because I'm only using default templates, but I've included them because I needed them in source control for CI/CD anyway.

Serverless computing

Before we get into Azure Functions I'd like to explain a bit about serverless computing. Serverless computing isn't completely serverless, your code still needs somewhere to run after all. However, the servers are completely managed by your cloud provider, which is Azure in our case.

With most App Services, like a Web App or an API App, you need an App Service Plan, which is basically a server that has x CPU cores and y memory. While you can run your Functions on an App Service Plan it's far more interesting to run them in a so-called "Consumption Plan". With a consumption plan, your resources are completely dynamic, meaning you're only actually using a server when your code is running.

It's cheap

Let me repeat that, you're only actually using a server when your code is running. This may seem trivial, but has some huge implications! With an App Service Plan, you always have a server even when no code is running. This means you pay a monthly fee just to keep the server up and running. See where this is heading? That's right, with a consumption plan you pay only when your code is running because that's the only time you're actually using resources!

There's a pretty complicated formula using running time, memory usage and executions, but believe me, Functions are cheap. Your first million executions and 400,000 GB-s (gigabyte-seconds, roughly memory size multiplied by execution time) are free. Microsoft has a pricing example which just verifies that it's cheap.
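To make that concrete (using the consumption plan list prices at the time of writing, which you should verify on the pricing page): a function that runs 3 million times a month and uses 512 MB for one second per execution consumes 3,000,000 x 0.5 GB x 1 s = 1,500,000 GB-s. After the 400,000 GB-s free grant, that leaves 1,100,000 GB-s x $0.000016 ≈ $17.60, plus (3,000,000 - 1,000,000 free) executions x $0.20 per million = $0.40, so roughly $18 per month.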

It's dynamic

Price is cool, but we ain't cheap! Probably the coolest feature about Functions is that Azure scales servers up and down depending on how busy your function is. So let's say you're doing some heavy computing which takes a while to run and is pretty resource intensive, but you need to do that 100 times in parallel... Azure just spins up some extra servers dynamically (and, of course, you pay x times more, but that's still pretty cheap). When your calculations are complete your servers are released and you stop paying. There is a limit to this behavior. Azure will spin up a maximum of 200 server instances per Function App and a new instance will only be allocated at most once every 10 seconds. One instance can still handle multiple executions though, so 200 servers usually does not mean 200 concurrent executions.

That said, you can't really configure a dynamically created server. You'll have to trust Microsoft to temporarily give you a server that meets your needs. And, as you can guess, those needs had better be limited. Basically, your only requirement should be that your language environment, such as the .NET Framework (or .NET Core) for C#, is present. One cool thing though: next to C#, Functions can be written in F# and JavaScript, and Java is coming in v2. There are some other languages that currently run in v2, like Python, PHP, and TypeScript, but it looks like Microsoft isn't planning on fully supporting these languages in the near future.

Azure Functions in the Azure Portal

Let's create our first Azure Function App. Go to App Services and create a new one. When you create an App Service you should get an option to create a Function App (next to Web App, API App, etc.). The creation screen looks pretty much the same as for a regular web app, except that Function Apps have a Consumption Plan as a Hosting Plan (which you should take) and you get the option to create a Storage Account, which Azure needs to store your Functions.

Create a new Function App

This will create a new Function App, a new Hosting Plan, and a new Storage Account. Once Azure has created your resources, look up your Function App among your Azure resources. You should see your version of "myfunctionsblog" (which should be a unique name). Now, hover over "Functions" and click the big plus to add a Function.

Azure Functions

In the next form, you can pick a scenario. Our options are "Webhook + API", "Timer" or "Data processing". You can also pick a language, "CSharp", "JavaScript", "FSharp" or "Java". Leave the defaults, which are "Webhook + API" and "CSharp" and click the "Create this function" button.

Editing and running your Azure Functions

You now get your function, which is a default template. You can simply run it directly in the Azure portal and you can see a log window, errors and warnings, a console, as well as the request and response bodies. It's like a very lightweight IDE in Azure right there. I'm not going over the actual code because it's pretty basic stuff. You can play around with it if you like.

Next to the "Save" and "Run" buttons you see a "</> Get function URL" link. Click it to get the URL to your current function. There are one or more function keys, which give access to just this function; there are one or more host keys, which give access to all functions in the current host; and there's a master key, which you should never share with third parties. You can manage your keys from the "Manage" item in the menu on the left (with your Functions, Proxies, and Slots).

Consuming Azure Functions

The Function we created is HTTP triggered, meaning we must do an HTTP call ourselves. You can, of course, use a client such as Postman or SoapUI, but let's look at some C# code to consume our Function. Create a (.NET Core) Console App in Visual Studio (or use my example from GitHub) and paste the following code in Program.cs.

using System;
using System.Net.Http;
using System.Text;

namespace ConsoleApp1
{
    class Program
    {
        static void Main(string[] args)
        {
            using (var client = new HttpClient())
            {
                // Replace with your own URL.
                string url = "https://myfunctionsblog.azurewebsites.net/api/HttpTriggerCSharp1?code=CJnVDV8gELVapT7gNoMexKLI1j7zFNX6FJBOP7lMBC71vkvuAT9J4A==";
                var content = new StringContent("{\"name\": \"Function\"}", Encoding.UTF8, "application/json");
                client.PostAsync(url, content).ContinueWith(async t =>
                {
                    var result = await t.Result.Content.ReadAsStringAsync();
                    Console.WriteLine("The Function result is:");
                    Console.WriteLine(result);
                    Console.WriteLine("Press any key to exit.");
                    Console.ReadKey();
                }).Wait();
            }
        }
    }
}

If your Function returns an error (for whatever reason) your best logging tool is Application Insights, but that is not in the scope of this blog post.

Of course, if you have a function that's triggered by a timer or a queue or something else you don't need to trigger it manually.
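If you do want to see what a non-HTTP trigger looks like, here is a minimal sketch of a timer-triggered function, modeled on the default v2 template; the function name and CRON schedule are just examples.

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;

public static class TimerFunctionExample
{
    // Runs every five minutes; the schedule is a six-field CRON expression.
    [FunctionName("TimerFunctionExample")]
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTimer, ILogger log)
    {
        log.LogInformation($"Timer trigger executed at: {DateTime.UtcNow:O}");
    }
}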

Azure Functions in Visual Studio 2017

Creating a Function in the Azure Portal is cool, but we want an IDE, source control, and CI/CD.  So open up VS2017 and look for the "Azure Functions" template (under Visual C# -> Cloud). You can choose between "Azure Functions v1 (.NET Framework)" and "Azure Functions v2 Preview (.NET Standard)". We like .NET Standard (which is the common denominator between the .NET Framework, .NET Core, UWP, Mono, and Xamarin) so we pick the v2 Preview.

Again, go with the HTTP trigger and find your Storage Account. Leave the access rights on "Function". The generated function template is a little bit different than the one generated in the Azure portal, but the result is the same. One thing to note here is that Functions actually use the Microsoft.Azure.WebJobs namespace for triggers, which once again shows the two can (sometimes) be interchanged.

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using System.IO;

namespace FunctionApp
{
    public static class Function1
    {
        [FunctionName("Function1")]
        public static IActionResult Run([HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)]HttpRequest req, ILogger log)
        {
            log.LogInformation("C# HTTP trigger function processed a request.");

            string name = req.Query["name"];

            string requestBody = new StreamReader(req.Body).ReadToEnd();
            dynamic data = JsonConvert.DeserializeObject(requestBody);
            name = name ?? data?.name;

            return name != null
                ? (ActionResult)new OkObjectResult($"Hello, {name}")
                : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
        }
    }
}

When you run the template you get a console which shows the Azure Functions logo in some nice ASCII art as well as some startup logging. Windows Defender might ask you to allow access for Functions to run, which you should grant.

Now if you run the Console App we created in the previous section and change the functions URL to "http://localhost:7071/api/Function1" (port may vary) you should get the same result as before.

Deployment using Visual Studio

If you've read my previous blog posts you should be pretty familiar with this step. Right-click on your Functions project and select "Publish..." from the menu. Select "Select Existing" and enable "Run from ZIP (recommended)". Deploying from ZIP will put your Function in a read-only state, but it most closely matches your release using CI/CD. Besides, all changes should be made in VS2017 so they're in source control. In the next form find your Azure App and deploy to Azure.

If you've selected Azure Functions v2 earlier you'll probably get a dialog telling you to update your Functions SDK on Azure. A little warning here, this could mess up already existing functions (in fact, it simply deleted my previous function). As far as I understand this only affects your current Function App, but still use at your own risk.

Once again, find your Functions URL, paste it in the Console App, and check if you get the desired output.

Deployment using VSTS

Now for the good parts, Continuous Integration and Deployment. Open up Visual Studio Team Services and create a new build pipeline. You can pick the default .NET Core template for your pipeline, but you should change the "Publish" task so it publishes "**/FunctionApp.csproj" and not "Publish Web Projects". You can optionally enable continuous integration in the "Triggers" tab. Once you're done you can save and queue a build.

The next step is to create a new release pipeline. Select the Azure "App Service deployment" template. Change the name of your pipeline to something obvious, like "Function App CD". You must now first add your artifact and you may want to enable the continuous deployment trigger. You can also rename your environment to "Dev" or something.

Next, you need to fill out some parameters in your environment tasks. The first thing you need to set is your Azure subscription and authorize it. For "App type" pick the "Function App". If you've successfully authorized your subscription you can select your App Service name from the drop-down. Also, and this one is a little tricky, in your "Azure App Service Deploy" task, disable "Take App Offline" which is hidden under the "Additional Deployment Options". Once you're done, and the build is finished, create a new release.

When all goes well, test your changes with the Console App.

If everything works as expected try changing the output of your Function App, push it to source control, and see your build and release pipelines do their jobs. Once again, test the change with your Console App.

Working around some bugs

So... A header that mentions bugs and workarounds is never a good thing. For some reason, my release kept failing due to an "invalid access to memory location". Probably because I had already deployed the app using VS2017. I also couldn't delete my function because the trash can icon was disabled. Google revealed I wasn't the only one with that problem. So anyway, I am currently using a preview of Azure Functions v2 and I'm sure Microsoft will figure this stuff out before it goes out of preview.

Here's the deal: you probably have to delete your Function App completely (you can still delete it through your App Services). Recreate it, go to your Function App (not the function itself, but the app hosting it; also see the next section on "Additional settings"), and find the "Function app settings". Here you can find a switch between "~1" and "beta" (which are v1 and v2 respectively). Set it to "beta". Now deploy using VSTS. Publishing from VS2017 will cause your release to fail again.

Bottom line: don't use VS2017 to deploy your Function App!

Additional settings

There's just one more thing I'd like to point out to you. While Azure Functions look different from regular Web Apps and Web APIs they're still App Services with a Hosting Plan. If you click on your Function App you land on an "Overview" page. From here you can go to your Function app settings (which includes the keys) and your Application settings (which look a lot like Web App settings). You'll find your Application settings, like "APPINSIGHTS_INSTRUMENTATIONKEY", "AzureWebJobsDashboard", "AzureWebJobsStorage" and "FUNCTIONS_EXTENSIONS_VERSION".

Another tab is the "Platform features" tab, which has properties, settings and code deployment options (see my post Azure Deployment using Visual Studio Team Services (VSTS) and .NET Core for more information on deployment options).

Functions Platform features

Wrap up

Azure Functions are pretty cool and I can't wait for v2 to get out of preview and fully support .NET Standard as well as fix the bugs I mentioned. Now, while Functions may solve some issues, like dynamic scaling, it may introduce some problems as well.

It is possible to create a complete web app using only Azure Functions. Whether you'd want that is another question. Maybe you've heard of micro-services. Well, with Functions, think nano-services. A nano-service is often seen as an anti-pattern where the overhead of maintaining a piece of code outweighs the code's utility. Still, when used wisely, and what's wise is up to you, Functions can be a powerful, serverless, asset to your toolbox. If you want to know more about the concepts of serverless computing I recommend a blog post by my good friend Sander Knape, who wrote about the AWS equivalent, AWS Lambda, The hidden challenges of Serverless: from VM to function.

Don't forget to delete your resources if you're not using them anymore or your credit card will be charged.

Happy coding!

How to Add an Azure AD Role to an Enterprise Application (Service Principal)


Introduction

This post is meant to help users assign administrative roles to Enterprise Applications/Service Principals so that they can perform duties that would otherwise require a user with elevated permissions. This is convenient when you want to use a service principal to reset a password, or to perform some other activity that requires admin privileges, programmatically and without an interactive sign-in (using the client credentials grant flow).

In this post we will go over installing MSOnline (MSOL) PowerShell module, finding the Object ID for your Enterprise Application, and then giving the Enterprise Application an administrative role.

 

Note: We will be using the MSOnline (MSOL) PowerShell cmdlets, which are somewhat dated. As of 8/29/2018 they have not been deprecated yet; however, please be sure to check the status of the MSOL library. The history of the AAD libraries can be found here: https://social.technet.microsoft.com/wiki/contents/articles/28552.microsoft-azure-active-directory-powershell-module-version-release-history.aspx

We will be using Version 1.1.166.0 (PowerShell V1 General Availability)

You can also utilize AAD PowerShell V2.0; we will go over this as well.

 

 

Prerequisite

In order to add an Application role to a Service Principal, you will need to have the proper permissions to assign roles to objects. Per the documentation here : https://docs.microsoft.com/en-us/azure/active-directory/users-groups-roles/directory-assign-admin-roles#details-about-the-global-administrator-role

 

You will need to be a Global Administrator in order to assign roles to the Enterprise Application. Please be sure to have a Global Administrator perform the steps that grant the Enterprise Application the administrative privilege.

 

Getting Started with MSOL

In order to add the application role to a service principal, we will have to utilize the older MSOL PowerShell cmdlets.

In order to install MSOL, open up PowerShell and type in:

 

Install-Module -Name MSOnline

 

You can find the library in the PowerShell gallery here : https://www.powershellgallery.com/packages/MSOnline/1.1.183.17

 

After you have installed MSOL, you will need to login to your Azure Active Directory using MSOL. In order to do this use the command :

 

Connect-MsolService

 

There should now be a popup asking you to login to Azure. Be sure to use a global admin account, otherwise you won’t be able to follow the next step to give an enterprise application an administrative role.

 

Getting Started with AAD PowerShell V2.0

This document goes over installing AAD PowerShell V2.0:

https://docs.microsoft.com/en-us/powershell/azure/active-directory/install-adv2?view=azureadps-2.0

Similar to installing MSOL.

You will utilize the commands :

Install-Module AzureAD

Connect-AzureAD

 

Getting the ObjectID of the Enterprise Application

Now we need to get the Object ID of the Enterprise Application. There are two ways to do this: you can get the Object ID from the PowerShell cmdlets, or you can go to the Azure Portal and get it from the Enterprise Application under the Properties blade.

 

Using MSOL Powershell

Here is a short script you can use to get the object ID of the Enterprise Application, assuming you know the Application ID of the app registration.

 

$tenantID = "<ID for Tenant>"
$appID = "<Application ID>"
$msSP = Get-MsolServicePrincipal -AppPrincipalId $appID -TenantID $tenantID
$objectId = $msSP.ObjectId

 

Using AAD V2.0 PowerShell

You can also utilize AAD PowerShell V2.0 using the commands below:

 

$mysp = Get-AzureADServicePrincipal -searchstring <your enterprise application name> 

$mysp.ObjectId

Now we have the service principal stored in the variable $mysp. We will use it later to associate a role to the enterprise application.

Using the Azure Portal

To get the ObjectID through the Azure Portal, you will need to go to portal.azure.com. From there go to Azure Active Directory on the left side bar. Then to Enterprise Applications > All Applications > (Your Enterprise Application to set to an Admin Role) > Properties > Object ID.

 

IN AAD portal

 

Enterprise Application

 


 

 

Assigning an Administrative Role for an Enterprise Application

First please make sure you have the Administrative Role Name on hand as you will need it in order to add the Admin Role to the Enterprise Application. You can find all the roles here: https://docs.microsoft.com/en-us/azure/active-directory/users-groups-roles/directory-assign-admin-roles

 

In this example we will be using the Helpdesk Administrator role.

 

Using MSOL to Add a Role Member

All we have to do is run the MSOL PowerShell cmdlet Add-MsolRoleMember.

This PowerShell Cmdlet is described in detail here : https://docs.microsoft.com/en-us/powershell/module/msonline/add-msolrolemember?view=azureadps-1.0.

 

For our purposes, we can use this one-liner to give the Enterprise Application the admin role:

 

Add-MsolRoleMember -RoleName "Helpdesk Administrator" -RoleMemberType ServicePrincipal -RoleMemberObjectId $objectId

 

Replace $objectId with the enterprise application's Object ID if you don't have the variable saved.

We have successfully added the admin role to the enterprise application.

Note:  This may require some time to take effect.

 

Using Azure AAD Powershell V2 to Add a Role Member

When using AAD PowerShell V2, you will need the object ID of the AAD role; you can utilize the command below to get it.

 

$myAADRole = Get-AzureADDirectoryRole | Where-Object {$_.displayName -eq 'Helpdesk Administrator'}

$myAADRole.ObjectId

 

Now that we have the object ID of the AAD role, we can use both object IDs to add the role to the enterprise application with the command below:

 

Add-AzureADDirectoryRoleMember -ObjectId $myAADRole.ObjectId -RefObjectId $mysp.ObjectId

 

Replace the variables with the object IDs if you don't have them saved.

We have successfully added the admin role to the enterprise application.

Note:  This may require some time to take effect.

 

Conclusion

In this article we have gone over downloading MSOL and AAD V2.0, connecting with MSOL and V2.0, getting the object ID of the enterprise application, getting the application role in MSOL and AAD V2.0, and giving the enterprise application the admin role using both AAD V2.0 and MSOL. You should now have an enterprise application that can perform the actions that typically require a user with the admin privileges described in the role. If you have any more issues, feel free to open a support ticket or comment below, and either one of our support engineers or I will try to assist you.

Hosted Mac Build Pool Outage in South Central US – 08/29 – Mitigated


Final Update: Wednesday, August 29th 2018 23:21 UTC

The rollback of the agent update has completed. We have confirmed that the issue has been resolved.

Sincerely,
Tom


Update: Wednesday, August 29th 2018 22:48 UTC

We are mitigating the issue currently by rolling back an Agent update. We will confirm back once the mitigation is fully deployed.

Sincerely,
Tom


Initial notification: Wednesday, August 29th 2018 21:44 UTC

We're investigating an outage with the Mac Hosted Build Pool in South Central US.

It's suspected that a recent deployment caused the issue, and the team is evaluating options for mitigation.

  • Next Update: Before Wednesday, August 29th 2018 22:15 UTC

Sincerely,
Eric

Early technical preview of JDBC 7.1.0 for SQL Server released


We have a new early technical preview of the JDBC Driver for SQL Server. Precompiled binaries are available on GitHub and also on Maven Central.

Below is a summary of the new additions to the project, changes made, and issues fixed.

Added

  • Added support for LocalDate, LocalTime and LocalDateTime to be passed as 'type' in ResultSet.getObject() #749
  • Added support to read SQL Warnings after ResultSet is read completely #785

Fixed Issues

  • Fixed Javadoc warnings and removed obsolete HTML tags from Javadocs #786
  • Fixed random JUnit failures in framework tests #762

Changed

  • Improved performance of readLong() function by unrolling loop and using bitwise operators instead of additions #763
  • Removed logging logic which caused performance degradation in AE #773

Getting the Preview
The latest bits are available on our GitHub repository and Maven Central.

Add the JDBC preview driver as a dependency in your Maven project by adding the following to your POM file.

Java 8:

<dependency>
    <groupId>com.microsoft.sqlserver</groupId>
    <artifactId>mssql-jdbc</artifactId>
    <version>7.1.0.jre8-preview</version>
</dependency>

Java 10:

<dependency>
    <groupId>com.microsoft.sqlserver</groupId>
    <artifactId>mssql-jdbc</artifactId>
    <version>7.1.0.jre10-preview</version>
</dependency>

We provide limited support while in preview. Should you run into any issues, please file an issue on our GitHub Issues page.

As always, we welcome contributions of any kind. We appreciate everyone who has taken the time to contribute to the project thus far. For feature requests, please file an issue on the GitHub Issues page to help us track and follow up directly.

We would also appreciate if you could take this survey to help us continue to improve the JDBC Driver.

Please also check out our tutorials to get started with developing apps in your programming language of choice and SQL Server.

David Engel

Wednesday Featured Forums: .NET Framework Forum


Good Day All!

We are back with Wednesday's Featured Forums, where we write about a Microsoft forum that we think is great. Today we will be looking at the .NET Framework forum.

First, what is the .NET Framework? The .NET Framework, created by Microsoft, is a collection of Application Programming Interfaces (APIs) and a shared library of code that developers can call when developing applications, so that they don't have to write the code from scratch. It is a layer between the operating system and the programming language that supports many languages such as C#, VB.NET, C++ and F#. The first version of the .NET Framework was released in 2002 and was called .NET Framework 1.0. Since then the .NET Framework has come a long way.

In the .NET Framework, the library of shared code is named the Framework Class Library (FCL). The bits of code in the shared library can perform all kinds of different functions. Say, for example, a developer needs to display a string on the screen. Instead of writing that code themselves, and then writing all the little bits and pieces that interpret what a string is and how to display it, they can use code from the library (Console.WriteLine();) that performs that function.
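For instance, a minimal C# program that leans on the FCL for console output is just a few lines:

using System;

class Program
{
    static void Main()
    {
        // Console.WriteLine comes from the Framework Class Library --
        // no custom display code has to be written.
        Console.WriteLine("Hello from the .NET Framework!");
    }
}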

Let's learn some more about .NET Framework:

  • Previously the .NET Framework was only for Windows users. Now, thanks to .NET Core, a set of tools consisting of runtime, library and compiler components, you can create apps that run on Windows, Mac OS X and Linux.
  • Several parts of .NET were made available under open source licenses, meaning any developer can contribute to it. For example, the C# compiler Roslyn was made open source by Microsoft under the Apache License. The source code of Roslyn can be downloaded from GitHub, and guidelines on how to participate and contribute were made available.
  • .NET has a huge collection of predefined class libraries (pre-written code) that has support for simple and complex data structures. Essentially, that means you can rely on the work of hundreds of other developers and pull in already-written code into your own programs. .NET even has specific libraries for security, encryption, and database access.

This is one of the most widely used MSDN forums, and it contains questions and answers related to these programming languages:

  • Visual C#
  • Visual Basic
  • .NET Framework Class Libraries
  • Common Language Runtime Internals and Architecture

Among these, let's look at the sub-forums: the Visual C# forum contains 108,425 pages and the Visual Basic forum contains 85,781 pages of questions and answers, which is big! More information about the Visual C# forum can be found in this featured post: Wednesday Featured Forums: Visual Studio Languages Forum.

Now that we have a brief overview let's move to the .NET Framework Forums. To access the forums you can click on the link below:

.NET Framework

That is all for now and see you on the next one. Good Luck!

Thank You,

-Ninja Sabah

Debugging Node.js applications on Azure App Services Linux/containers



This is a continuation of an earlier post where I explained how to enable logging for Node.js applications on App Service Windows.  In this blog, you will learn how to debug Node.js applications hosted on Azure App Service Linux / Azure Web App for Containers.

Enabling logs on the Linux offerings of App Service is fairly easy and pretty straightforward compared to Windows.

Enable Logging on Linux

  • Navigate to your web app from the Azure portal
  • Choose the Diagnostic Logs blade. Here we have only one option for enabling logs, which is Docker container logs, unlike Windows, where we have the storage option as well. As of today, we do not have the option of storing logs in blob storage on App Service Linux.
  • Choose File System and specify the retention period of your choice. This will delete logs that are older than the specified number of days.


  • This will gather all the STDOUT and STDERR logs of the container into the /home/LogFiles folder
  • Once you have configured logging using the step above, you will be able to access these log files by following the steps below
  • Click on Debug Console –> Bash. If it is App Service Linux, you can alternatively choose the SSH option
  • Please note that SSH works only for App Service Linux. For Web App for Containers, you would need to configure it by following the official documentation here
  • Navigate to the LogFiles folder by running the command “cd LogFiles”
  • Now run the command “ls -lrt” to list all the log files with the latest one at the bottom. The stderr and stdout logs are stored in the file that ends with default_docker.log, as shown below


  • You can view this by running the command cat <filename>. If your application writes anything to stdout/stderr, it will be seen in this log file.


  • You can also download it using FTP; the steps to follow are specified here
  • If you are only looking for the latest Docker logs, the quickest way is to go to the Kudu console and click on the Current Docker Logs option as shown below. However, this will only give information related to the container, not the stdout/stderr logs.


  • Once you click on this link, you will be redirected to a new page with all the details related to the latest docker logs


  • Now, copy and paste the highlighted link into a new tab and you should see output like the below


[Skype for Business Online] About the ClientVersion returned by Get-CsUserSession


Hello, this is the Japan Lync/Skype for Business support team.
This time we will cover the Client Version information returned by Get-CsUserSession in Skype for Business Online.

After connecting to Skype for Business Online with PowerShell, you can output each user's session information with the Get-CsUserSession cmdlet. For reference, here is how the From/To Client Version shown in the returned user sessions appears for each Lync/Skype for Business client.

■ Desktop clients
* The "X" portions depend on the client version.

・Skype for Business 2016:
UCCAPI/16.0.XXXXX.XXXXX OC/16.0.XXXXX.XXXXX (Skype for Business)

・Skype for Business 2015
UCCAPI/15.0.XXXX.XXXX OC/15.0.XXXX.XXXX (Skype for Business)

・Lync 2013
UCCAPI/15.0.XXXX.XXXX OC/15.0.XXXX.XXXX (Microsoft Lync)

・Skype for Business for Mac
RTCC/7.0.0.0 UCWA/7.0.0.0 SfBForMac/16.X.XX.0000 (Mac OSX XX.XX.X)

■ Mobile clients
* The "X" portions depend on the client version.

・Skype for Business for iOS (iPhone)
RTCC/7.0.0.0 UCWA/7.0.0.0 iPhoneLync/6.XX.X.0000 (iPhone iOS XX.X.X)

・Skype for Business for iOS (iPad)
RTCC/7.0.0.0 UCWA/7.0.0.0 iPadLync/6.XX.X.0000 (iPad iOS XX.X.X)

・Skype for Business for Android
RTCC/7.0.0.0 UCWA/7.0.0.0 AndroidLync/6.XX.X.XX (XXXXXXX Android X.X)

■ Other
* The "X" portions may vary depending on the environment.

・Skype MeetingApp (WebApp)
RTCC/7.0.0.0 UCWA/7.0.0.0 LWA/7.0 (EWP=XXXX.XXXX.XXXX; windows X.X ; IE 7.0; os64Browser32; XX.X.X; XXXXX)

・OWA (Outlook on the Web)
RTCC/7.0.0.0 UCWA/7.0.0.0 SkypeWeb/0.X.XXX master SWX (X.XXX.XXX - Exchange)

The content of this information (including attachments and linked destinations) is current as of the date of writing and is subject to change without notice.

Configuring node.js application on Azure App Service with Azure Blob Storage



In this article, we will learn about configuring Node.js applications deployed on Azure App Service to write their logs to Azure Blob storage.

If you would like to store the application logs for your Node.js applications in Azure Blob storage, follow the steps below. There are two main steps:

  1. Enabling from Portal
  2. Configuring from the code

Enabling from Portal

  • Navigate to your web app from the Azure portal

  • Choose the Diagnostic Logs blade. For application-level logging, choose Blob storage as shown below. If you would like to use the file system instead, please check out the blog post here


Configuring from the code

  • In your application, install the packages winston and winston-azure-blob-transport by running the commands below


npm install winston
npm install winston-azure-blob-transport

  • Now, use the code below in order to configure them from the application


var winston = require("winston");
require("winston-azure-blob-transport");

var logger = new (winston.Logger)({
    transports: [
      new (winston.transports.AzureBlob)({
        account: {
          name: process.env.ACCOUNT_NAME,
          key: process.env.ACCOUNT_KEY
        },
        containerName: process.env.CONTAINER_NAME,
        blobName: "test.log",
        level: "info"
      })
    ]
  });

logger.info("Hello!");


  • test.log in the above code snippet is the name of the blob where the logs get stored.
  • You would be able to get the Account name, Account key and the container name from the Azure storage account view
  • You would need to make use of Storage Explorer in order to check the log file as shown below.



Using Azure AD B2B collaboration support for Google IDs and Conditional Access


Hi,

Alex Simons announced the public preview of Azure AD B2B Collaboration support for Google IDs, so I thought I'd share my findings on configuration and the user experience, and bring Azure AD Conditional Access into the mix.

The first step to setting this up is to add Google as an identity provider in your Azure AD tenant. This has been beautifully documented by the Azure AD team here.

Once you have followed those steps then you are ready to roll...

 

 

Now it's time to add a guest user. There are numerous ways to invite guest users but for the purposes of this post I'm inviting from the Azure AD Portal.

NOTE: At the time of writing, only @gmail.com users are supported.

 

 

Once invited, the user receives an invitation.

 

 

The user then clicks "Get Started" and then they are prompted to authenticate with their GoogleID.

 

 

Once successfully authenticated the user then has to review the permissions and "Accept".

 

 

Access is then provisioned.

 

 

The user is now logged on using their Google ID! Now let's have a look at what that account looks like in Azure AD.

 

 

The user is now provisioned in Azure AD as a "Guest" user and the "Source" is Google.

 

 

You may be aware of a recent public preview which enables you to target Conditional Access policies at "guest users". This is a great feature and allows you to target all guest users (including Google IDs) to provide additional protection when they are accessing your organisation's resources. Let's set up a Conditional Access policy which will enforce Azure Multi-Factor Authentication for all guest users.

 

 

Create a new Conditional Access Policy and select "All guest users (preview)", make the relevant "Cloud app" assignments and any additional conditions you would like to test.

NOTE: This will apply to all users with the userType attribute set to guest

 

 

Now select "Require multi-factor authentication" as a control to enforce when granting access. As always with Conditional Access make sure you understand the impact of the policy and always test before enabling in production.

Let's have a look at the user experience. Access the resource (at this stage the URL must include the tenant context, e.g. https://myapps.microsoft.com/?tenantid=<tenant id>) and the user is prompted for authentication.

 

 

Once the user has entered their GoogleID credentials they are prompted with the following.

 

 

The user is now forced to register their MFA details, undergo verification and then they get access to the resource.

 

 

On subsequent authentications, the user is prompted for MFA.

 

 

In summary, invited users can now use their Google ID to access Azure AD integrated resources. Your organisation also has the ability to enforce a stronger level of authentication with Conditional Access to ensure a higher level of protection for your company's resources.

Thanks for reading!!

MD

Follow me on Twitter for future posts.

The early history of Windows file attributes, and why there is a gap between System and Directory



Let's look at the values for the basic Windows file attributes.
There's a gap where 8 should be.

FILE_ATTRIBUTE_...   Value
READONLY             0x00000001
HIDDEN               0x00000002
SYSTEM               0x00000004
(gap)                0x00000008
DIRECTORY            0x00000010
ARCHIVE              0x00000020



Rewind to CP/M.




CP/M supported eleven attributes:

Name              Meaning
F1, F2, F3, F4    User-defined
F5, F6, F7, F8    Interface-defined
T1                Read-only
T2                System
T3                Archive



The operating system imposed no semantics for user-defined attributes.
You can use them for whatever you want.



The meanings of the
interface-defined attributes were defined by each operating system
interface.
Think of them as four bonus flag parameters for each syscall
that takes a file control block.
You could set interface-defined attributes before calling
a function, and that passed four additional flags in.
Or the function could manipulate those attributes before returning,
allowing it to return four flags out.
Interface-defined attributes are always clear on disk.



The read-only bit marked a file as read-only.



The system bit had two effects:
First, it hid the file from directory listings.
Second, if the file belonged to user 0,¹ then the file was
available to all users.
(This was handy for program files.)



The archive bit reported whether the file has been backed up.



These attributes were retrofitted onto the existing directory
structure by taking over the high bits of the eleven filename
characters!
That's why they are named F1 through F8
(high bits of the eight-character file base name)
and T1 through T3
(high bits of the three-character extension, also known as the file type).



You can infer from this that CP/M file names were limited to 7-bit ASCII.



Anyway, MS-DOS 1.0 split the dual meaning of the
system attribute into two attributes
(hidden and system),
and even though it didn't implement the read-only attribute,
it reserved space for it.



That explains why the first three attributes are read-only (1),
hidden (2), and system (4).



MS-DOS 2.0 most notably added support for subdirectories,
but another feature that came along was volume labels.
Since there was no space for the volume label in the
disk header, the volume label was added as a directory
entry in the root directory, with a special attribute
that says "This is a volume label, not a file."²


The next attributes became volume label (8),
directory (16), and archive (32).


Win32 adopted the same file attribute values as MS-DOS
and 16-bit Windows, presumably in an effort to minimize
surprises when porting code from 16-bit to 32-bit.
The volume label attribute disappeared from Win32,
but the bits for directory and archive were left at their
original values to avoid problems with programs that
operated with file attributes.
Those programs contained their own definitions for the
file attributes because 16-bit Windows didn't provide any.
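As a quick way to see these values from managed code today, the System.IO.FileAttributes enum still mirrors the historical numbers; a minimal C# sketch:

using System;
using System.IO;

class AttributeValues
{
    static void Main()
    {
        // The .NET FileAttributes enum preserves the historical bit values.
        Console.WriteLine($"ReadOnly  = 0x{(int)FileAttributes.ReadOnly:X8}");   // 0x00000001
        Console.WriteLine($"Hidden    = 0x{(int)FileAttributes.Hidden:X8}");     // 0x00000002
        Console.WriteLine($"System    = 0x{(int)FileAttributes.System:X8}");     // 0x00000004
        Console.WriteLine($"Directory = 0x{(int)FileAttributes.Directory:X8}");  // 0x00000010
        Console.WriteLine($"Archive   = 0x{(int)FileAttributes.Archive:X8}");    // 0x00000020

        // Checking an attribute on a real path is a simple bit test.
        var attributes = File.GetAttributes(Environment.SystemDirectory);
        Console.WriteLine((attributes & FileAttributes.Directory) != 0
            ? "The system directory has the Directory attribute set."
            : "Unexpected: Directory attribute not set.");
    }
}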



¹
CP/M supported up to 16 users, numbered 0 through 15.
When you started the computer, you were user 0,
but you could change users by saying
USER n.
Files belonging to other users were hidden,
except that system files belonging to user 0 were
available to everyone.
Anybody could change to any user at any time,
so this was a file organization feature, not a security feature.
In practice, nobody really used it because floppy discs
were so small that it was
easier to organize your files by putting them on different
floppies
than by trying to remember which user you used for each file.



²
Windows 95 later repurposed the volume label attribute
to mark directory entries as being used for long file names.
Disk utilities often parsed directory entries directly,
so any change in the disk format was a compatibility risk.
The choice to use the volume label attribute for this
purpose came after a lot of experimentation to find the
least disruptive file format for long file names.
It turns out that most low-level disk utility
programs ignored anything marked with the
volume label attribute.

Guest post by Ferdinand Stipberger: Teamwork in schools can work


Teamwork and working on projects together: that is something we teachers demand from our students again and again throughout their school careers. In doing so, they acquire skills that are not only enormously important later in their working lives, but are already needed today in the day-to-day work of many companies. Working on projects together not only saves time, it also improves the quality of the end product. Teams are formed whenever new topics need to be tackled, different opinions and approaches need to be tested for feasibility and, in a school context, creative, well-thought-out lesson content needs to be produced.

Although we demand it from our students, teamwork is still a foreign concept in many staff rooms. Many teachers remain lone fighters, preparing their lessons on their own without help from others, with the drawback that the same lesson content is worked out several times by different people. That is not only ineffective and a waste of resources; in the worst case it can also mean that lessons become boring for our students because they are not very student-centered or varied. In everyday school life there is sometimes simply not enough time to implement innovative ideas on your own. On top of that, the practical skills needed for implementation are often still missing, especially when it comes to digital media. So why not pool our strengths and form teams in which skills and knowledge are available in all areas, and make use of the full range of possibilities?

That is exactly what the Gregor-von-Scherr-Realschule in Neunburg vorm Wald (Bavaria) attempted with the introduction of the new LehrplanPlus curriculum in grade 5. It set the course for more collaboration. The mathematics and English departments wanted to jointly adapt the content of the new curriculum to the challenges of 21st-century learning. In English, a grade-level team in Neunburg worked internally within the school, while the mathematics teachers additionally entered into a cooperation with the Realschule Pfuhl (Bavaria), which is about 280 km away. Both schools set out to jointly design the grade 5 mathematics content and to teach according to it.

For collaborative work, however, structures have to be created in the school that actually make working together possible. In Neunburg, the foundation was laid in the timetable, into which a shared meeting period for each grade-level team was built. This time was to be used for creating and reworking curriculum content as well as for exchange and organization.

The grade-level teams then had to organize themselves on a common platform. While the English team used a shared OneNote notebook for exchange and as a material pool, the mathematics teachers chose Microsoft Teams. Here, the OneNote Staff Notebook offered the best way to consistently create lesson content together in the Collaboration Space. Individual sections were created for the topics to be covered, which every colleague could fill with content. From this loose collection of ideas, the lesson content was then created. In addition, each colleague could gather and organize their own ideas in their private section. Using the calendar, milestones could be set with little effort, discussions could take place in chat, links could be shared, and online meetings could be called. Compared to a shared OneNote notebook, Microsoft Teams made it possible to access files more flexibly and to work collaboratively on the planned materials.

The aim of the cooperation was to take turns creating a teaching sequence (corresponding to one chapter of the curriculum) and to teach grade 5 mathematics using the flipped classroom concept. What mattered was learning from one another, developing the teaching further, and drawing on the strengths and experience of the colleagues involved in order to create content that a single teacher could not manage alone, content that is also unproblematic in terms of copyright. In addition, the goal was to get the students to communicate about lesson-related matters, to give their creativity as much free rein as possible, to question critically what is right or wrong, and to work cooperatively on possible solutions.

For each lesson topic, either a prompt video or a prompt worksheet was to be created that would lead the students toward the new topic and steer them in a common direction without giving away too much of the subject matter. The subject-specific content was then worked out together in the following lesson and made available to the students again as a summary in a video. To create the videos, the colleagues mainly used PowerPoint and its ability to narrate individual slides and annotate them with handwritten notes (examples: https://youtu.be/gqTwZ8VkQNE or https://youtu.be/bqH9bB-EvFI). For the students, the teams created a shared pool of exercises from the common textbook, differentiated by level of difficulty, which the students had to work through in class. This made for a more efficient use of lesson time: instead of the teacher, the students moved into the center of the lesson. Matching solutions, extension exercises for faster learners, and topic-related math games were also included.

The created content then only had to be made available to the students. For this, the learning platform Mebis was used, which is provided to Bavarian schools free of charge and in accordance with the applicable data protection regulations. All content could now be collected and organized in an editorial course by all the colleagues involved. For each individual class, the respective subject teacher drew on this editorial course and used it to create their own course, which they could supplement further if desired. Once finished, the course could be made available to the class.

Besides the positive effect that every colleague was given a basic framework of learning content for an entire school year, it was above all the saving in work that became noticeable. In the phases in which a school was not directly busy creating the current content, the upcoming topic could be spread across several shoulders, which made it possible to produce more elaborate content as well. Incidentally, all subsequent colleagues can now draw on the content that was created and build their own courses on top of it.

In addition, other colleagues quickly realized that this way of teaching offers a big advantage for substitute lessons in particular. Whereas they previously had to ask what the class they were covering was currently working on, they could now simply continue the work.

So teamwork can succeed. Of course, as a teacher you first have to be willing to use "someone else's" content, or even to be "criticized" for your own work. Looking back, however, I can only draw a positive conclusion. In addition to the many ideas that emerged from the collaboration, the different topics revealed a wide variety of approaches. This opens up the view beyond your own universe a little and broadens your horizons.

Nevertheless, the mentality of sharing and collaboration has not yet taken hold in everyone's minds. There is still a long way to go, but it is well worth going. My recommendation for anyone who wants to try it: find like-minded people. First bring those colleagues into the team who are genuinely keen to work this way. Make others curious and invite them to sit in on your lessons.

Ferdinand Stipberger

Ferdinand Stipberger has been a teacher since 2000, currently at a Realschule in Bavaria, where he teaches mathematics, physical education, and information technology. Alongside his work as a teacher, he offers workshops and training on many aspects of digitalization and the use of digital media in the classroom.

Improving your productivity in the Visual Studio Editor


Over the last few updates to Visual Studio 2017, we’ve been hard at work adding new features to boost your productivity while you’re writing code. Many of these are the result of your direct feedback coming from the UserVoice requests, Developer Community tickets, and direct feedback we’ve encountered while talking to developers like you.

We are so excited to share these features with you and look forward to your feedback!

Multi-Caret Support

One of our top UserVoice items asked for the ability to create multiple insertion and selection points, often shortened to be called multi-caret or multi-cursor support. Visual Studio Code users told us they missed this feature when working in Visual Studio. We heard you opened single files in Visual Studio Code to leverage this feature or installed extensions such as MixEdit, but in Visual Studio 2017 Version 15.8, you won’t need to do this anymore. We’ve added native support for some of the top requested features in the multi-caret family and we’re just getting started.

There are three main features we’d like to highlight. First, you can add multiple insertion points or carets. With Ctrl + Alt + Click, you can add additional carets to your document, which allows you to add or delete text in multiple places at once.

GIF showing how to add carets in multiple locations

Second, with Ctrl + Alt + . you can add additional selections that match your current selection. We think of this as an alternative to find and replace, as it allows you to add your matching selections one by one while also verifying the context of each additional selection. If you’d like to skip over a match, use (Ctrl + Shift + Alt + .) to move the last matching selection to the next instance.

Lastly, you can also grab all matching selections in a document at once (Ctrl + Alt + Shift + ,) providing a scoped find and replace all.

Quick Commands

Just like papercuts, smaller missing commands hurt when you add them up! We heard your pain, so in the past few releases, we’ve tried to address some of the top features you’ve asked for.

Duplicate line

The reduction of even a single keystroke adds up when multiplied across our userbase, and one place we saw an opportunity to optimize your workflow was in duplicating code. The classic Copy + Paste worked in many cases, but we also heard in feedback that you wanted a way to duplicate a selection without affecting your clipboard. One scenario where this often popped up was when you wanted to clone a method and rename it by pasting a name you had previously copied.

To solve this issue, we introduced Duplicate Code (Ctrl + D) in Visual Studio 2017 version 15.6 which streamlines the process of duplicating your code while leaving your clipboard untouched. If nothing is selected, Ctrl + D will duplicate the line the cursor is in and insert it right below the line in focus. If you’d like to duplicate a specific set of code, simply select the portion of code you want to duplicate before invoking the duplicate code command.

Expand/Contract Selection

How do you quickly select a code block? In the past, you could incrementally add to your selection word by word or perhaps you used a series of Shift plus arrow keystrokes. Maybe you took that extra second to lift your hand off the keyboard so you could use a mouse instead. Whatever the way, you wanted something better. In Visual Studio 2017 version 15.5, we introduced expand/contract selection, which allows you to grow your selection to the next logical code block (Shift + Alt + +) and decrease it by the same block if you happen to select too much (Shift + Alt + -).

GIF showing expand/contract selection, which allows you to grow your selection to the next logical code block and decrease it by the same block

Moving between issues in your document

You’ve been able to navigate to Next Error via Ctrl + Shift + F12 but we heard this experience was sometimes jarring as Next Error might jump you all around a solution as it progressed through issues in the order they appeared in the Error List. With Next/Previous Issue (Alt + PgUp/PgDn) you can navigate to the next issue (error, warning, suggestion) in the current document. This allows you to move between issues in sequential versus severity order and gives you more progressive context as you’re moving through your issues.

Go To All – Recent Files and File Member search

You can now view and prioritize search results from recent files. When you turn on the recent files filter, the Go To All results will show you a list of files opened during that session and then prioritizes results from recent files for your search term.

Additionally, Go To Member is now scoped to the current file by default. You can toggle this default scope back to solution level by turning off Scope to Current Document (Ctrl + Alt + C).

Go To Last Edited Location

We all know the feeling of starting to write a feature and then realizing we need some more information from elsewhere in the solution. So, we open another file from Solution Explorer or Go to Definition in a few places and suddenly, we're far off from where we started with no easy way back unless you remember the name of the file you were working in originally. In Visual Studio 2017 version 15.8, you can now go back to your last edited location via Edit > Go To > Go To Last Edit Location (Ctrl + Shift + Backspace).

Expanded Navigation Context Menu

Keyboard profiles for Visual Studio Code and ReSharper

Learning keyboard shortcuts takes time and builds up specific muscle memory so that once you learn one set, it can be difficult to retrain yourself when the shortcuts change or create mappings that match your previous shortcuts. This problem came to light as we heard from users who frequently switch between Visual Studio and Visual Studio Code, and those who used ReSharper in the past. To help, we’ve added two new keyboard profiles, Visual Studio Code and ReSharper (Visual Studio), which we hope will increase your productivity in Visual Studio.

Keyboard profiles for Visual Studio Code and ReSharper

C# Code Clean-up

Last, but certainly not least, in Visual Studio 2017 version 15.8, we've configured Format Document to perform additional code cleanup on a file, such as removing and sorting usings or applying code style preferences. Code cleanup will respect settings configured in an .editorconfig file, or lacking that rule or file, those set in Tools > Options > Text Editor > C# > [Code Style & Formatting]. Rules configured as none in an .editorconfig will not participate in code cleanup and will have to be individually fixed via the Quick Actions and Refactorings menu.

Options dialog showing format document options for C# Code Clean-up
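For illustration, an .editorconfig fragment of the kind code cleanup respects might look like the sketch below; the specific rules and severities are example choices, not a recommended set.

root = true

[*.cs]
indent_style = space
indent_size = 4

# Sorting preference applied when code cleanup sorts usings
dotnet_sort_system_directives_first = true

# Code style preferences (severity can be silent, suggestion, warning, or error)
dotnet_style_qualification_for_field = false:suggestion
csharp_style_var_when_type_is_apparent = true:suggestion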

Update and Give Feedback

With Visual Studio 2017 version 15.8, you'll have access to all the features above and more, so be sure to update to take advantage of everything Visual Studio has to offer.

As you test out these new features, use the Send Feedback button inside Visual Studio to provide direct feedback to the product team. This can be anything from an issue you’re encountering or a request for a new productivity feature. We want to hear all of it so we can build the best Visual Studio for you!

Allison Buchholtz-Au, Program Manager, Visual Studio Platform

Allison is a Program Manager on the Visual Studio Platform team, focusing on streamlining source control workflows and supporting both our first and third party source control providers.

Azure Data Architecture Guide – Blog #2: On-demand big data analytics


In our second blog in this series, we'll continue to explore the Azure Data Architecture Guide! Find the first blog in this series here:

The following example is a technology implementation we have seen directly in our customer engagements. The example can help lead you into the ADAG content to make the right technology choices for your business.

On-demand big data analytics

Create cloud-scale, enterprise-ready Hadoop clusters in a matter of minutes for batch and real-time data processing. With Azure, you can build your entire big data processing and analytics pipeline from massive data ingest to world-class business intelligence and reporting, using the technology that's right for you.

Highlighted services

Related ADAG articles

Please peruse ADAG to find a clear path for you to architect your data solution on Azure:

 

Azure CAT Guidance

"Hands-on solutions, with our heads in the Cloud!"
