
Top Stories from the Microsoft DevOps Community – 2018.08.17


Another Friday means another happy hour! No, not the time when I hit the bar after work — the time when I catch up on the news around DevOps for VSTS and Azure. Here's what I've been reading (and, this week, writing!).

Jekyll with VSTS and Azure
I love GitHub Pages, but sometimes you need a more complex setup, like a proper deployment pipeline with approval gates. I demonstrate my CI/CD pipeline based around Jekyll — the app that runs GitHub Pages — using VSTS to deploy to Azure.

An Interview with Jez Humble on Continuous Delivery, Engineering Culture, and Making Decisions
When any of the fine folks at DORA — DevOps Research and Assessment — talk, I listen. In this interview, Jez Humble gives his thoughts on the state of the art of delivering software.

VSTS Extension – Tagging all resources within a Resource Group
Peter Rombouts has many ARM templates, and needs to tag all the resources. He's created a VSTS extension to simplify this and add tags to resources within a Resource Group automatically.

Private Sitecore nuget feeds using VSTS
Sitecore cleverly uses NuGet package feeds for their packages. So if you need some more advanced package management, like different feeds for pre-release and released packages, Bas Lijten shows you how to use VSTS Package Management to build custom views for NuGet feeds.

Hard lessons in asynchronous JavaScript code
Knock knock. Who's Race condition! there? (A little programmer humor makes threading problems easier to deal with.) In this enlightening article, Jesse Houwing explains how he debugged and fixed a race condition in his popular VSTS extension.

When should we scan for vulnerabilities in our build using WhiteSource Bolt?
It's important to keep your code secure and free of known security vulnerabilities. Willy-Peter Schaub takes a look at scanning for vulnerabilities as part of your CI/CD process.


The Merge Agent fails with ‘Reason:Invalid date format’.


Taiyeb Zakir
Microsoft SQL Server Escalation Support Services

I recently worked on a case where the Merge Agent was failing with this error:

>>>

2018-04-09 19:44:40.123 The merge process is retrying a failed operation made to article 'Project' - Reason: 'Invalid date format'.

2018-04-09 19:44:40.123 OLE DB Distributor 'USSECCMPSQCE202INST2': {call sys.sp_MSadd_merge_history90 (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}

2018-04-09 19:44:40.311 OLE DB Subscriber 'DERUSCMPSQCE103INST2': {call sys.sp_MSadd_merge_history90 (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}

2018-04-09 19:44:40.389 Percent Complete: 0

2018-04-09 19:44:40.389 The merge process could not enumerate changes at the 'Publisher'. When troubleshooting, restart the synchronization with verbose history logging and specify an output file to which to write.The merge process is retrying a failed operation made to article 'Project' - Reason: 'Invalid date format'.

2018-04-09 19:44:40.389
>>>

We needed to find out what row was generating the "Invalid date format" error.

We increased the reporting level for the Merge Agent and saw this in the replmerg.log on the Subscriber:

>>>

Oledbcon                      , 2018/05/09 14:37:56.284, 16988,  4357,  S5, PROFILER:0 Spid:221 , Srv:USSECCMPSQCE202INST2, Db:distribution                  , RPC:{call sys.sp_MSadd_merge_history90 (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}

Oledbcon                      , 2018/05/09 14:37:56.315, 5648,  4357,  S5, PROFILER:0 Spid:271 , Srv:USSECCMPSQCE202INST2, Db:SQTool                        , RPC:create table #retry_table_agent (tablenick int NOT NULL, rowguid uniqueidentifier ROWGUIDCOL default newid() not null, generation int NOT NULL, errcode int NOT NULL, errtext nvarchar(255) NULL, type tinyint NOT NULL)

Oledbcon                      , 2018/05/09 14:37:56.424, 5648,  4357,  S5, PROFILER:0 Spid:271 , Srv:USSECCMPSQCE202INST2, Db:SQTool                        , RPC:create procedure dbo.#insert_retry_proc_agent (
@tablenick int,
@rowguid uniqueidentifier,
@generation int,
@errcode int,
@errtext nvarchar(255),
@type tinyint ) as
update #retry_table_agent set tablenick=@tableni

Oledbcon                      , 2018/05/09 14:37:56.518, 5648,  7257,  S5, PROFILER:1 Spid:271 , Param1                  , DBTYPE_I4           , Len:4      , Val:96089000

Oledbcon                      , 2018/05/09 14:37:56.518, 5648,  7257,  S5, PROFILER:1 Spid:271 , Param2                  , DBTYPE_GUID         , Len:16     , Val:cb258cdd-933f-e811-80f6-549f35cdbf10

Oledbcon                      , 2018/05/09 14:37:56.518, 5648,  7257,  S5, PROFILER:1 Spid:271 , Param3                  , DBTYPE_I8           , Len:8      , Val:3039010000000000

Oledbcon                      , 2018/05/09 14:37:56.518, 5648,  7257,  S5, PROFILER:1 Spid:271 , Param4                  , DBTYPE_I4           , Len:4      , Val:0

Oledbcon                      , 2018/05/09 14:37:56.518, 5648,  7257,  S5, PROFILER:1 Spid:271 , Param5                  , DBTYPE_WVARCHAR     , Len:38     , Val:Invalid date format

Oledbcon                      , 2018/05/09 14:37:56.518, 5648,  7257,  S5, PROFILER:1 Spid:271 , Param6                  , DBTYPE_I1           , Len:1      , Val:02

Oledbcon                      , 2018/05/09 14:37:56.518, 5648,  4357,  S5, PROFILER:0 Spid:271 , Srv:USSECCMPSQCE202INST2, Db:SQTool                        , RPC:{call [#insert_retry_proc_agent] (?,?,?,?,?,? )}

Oledbcon                      , 2018/05/09 14:37:56.628, 5648,  7257,  S5, PROFILER:1 Spid:197 , Param2                  , DBTYPE_I4           , Len:4      , Val:49

Oledbcon                      , 2018/05/09 14:37:56.628, 5648,  7257,  S5, PROFILER:1 Spid:197 , Param3                  , DBTYPE_I4           , Len:4      , Val:3

Oledbcon                      , 2018/05/09 14:37:56.628, 5648,  7257,  S5, PROFILER:1 Spid:197 , Param4                  , DBTYPE_WVARCHAR     , Len:214    , Val:The merge process is retrying a failed operation made to article 'Project' - Reason: 'Invalid date format'.
>>>

The error was coming from the Project table, from the row with rowguid 'cb258cdd-933f-e811-80f6-549f35cdbf10'.

We ran a SELECT query for that rowguid to check the row on both the Publisher and the Subscriber, but the row looked fine.
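For reference, the check looked roughly like the following. This is only a sketch, not the exact query from the case; it assumes the merge replication ROWGUIDCOL on the article is the usual rowguid column, and it was run on both the Publisher and the Subscriber:

-- Sketch: inspect the suspect row on both the Publisher and the Subscriber.
-- Assumes the ROWGUIDCOL on the Project article is named rowguid.
SELECT *
FROM dbo.Project
WHERE rowguid = 'cb258cdd-933f-e811-80f6-549f35cdbf10';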

Looking at the table schema, we found that the Project table had an nvarchar(max) column, which is treated as BLOB data. The Merge Agent performs BLOB optimization by default for such columns, and in this case it modified the metadata incorrectly, which caused the error. For more details on BLOB optimization, check the documentation and look for @stream_blob_columns.

To resolve the issue, we could use any of these workarounds:

  • Change nvarchar(max) to a fixed-length nvarchar(n)
  • Change @stream_blob_columns to false
  • Don't publish the nvarchar(max) column

We changed @stream_blob_columns to false, which resolved the issue:

DECLARE @publication AS sysname;
DECLARE @article AS sysname;
SET @publication = N'test_pub';
SET @article = N'ReplTest';
USE SQLTool;
EXEC sp_changemergearticle
    @publication = @publication,
    @article = @article,
    @property = N'stream_blob_columns',
    @value = N'false';
GO
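To confirm the change, you can check the article's current settings with sp_helpmergearticle, whose result set includes a stream_blob_columns column. A sketch, run at the Publisher on the publication database:

-- Sketch: verify that stream_blob_columns is now 0 (false) for the article.
EXEC sp_helpmergearticle
    @publication = N'test_pub',
    @article = N'ReplTest';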
 

Issue with security update for the Remote Code Execution vulnerability in SQL Server 2016 SP2 (CU): August 14, 2018


On Tuesday August 14, we published a Security Update for six different releases of SQL Server 2016 and 2017. For one of those releases, SQL Server 2016 SP2 CU (KB4293807), we inadvertently published the update with additional undocumented trace flags that are normally not on by default. We are working on replacing the update in the next few days. If you installed KB4293807 and are experiencing issues, please uninstall the update until the replacement update (KB4458621) is available.

 

Thank you

SQL Server Release Services

Experiencing Alerting failure issue in Azure Portal for Many Data Types – 08/18 – Resolved

Final Update: Sunday, 19 August 2018 03:18 UTC

We've confirmed that all systems are back to normal with no customer impact as of 08/19, 02:20 UTC. Our logs show the incident started on 08/18, 01:10 UTC, and that during the 25 hours and 10 minutes it took to resolve the issue, some customers who had configured availability tests to run from US regions would have experienced failures in the Azure Portal.
  • Root Cause: The failure was due to issues with one of the back-end services.
  • Incident Timeline: 25 Hours & 10 minutes - 08/18 1:10 UTC through 08/19 2:20 UTC

We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Praveen


Update: Sunday, 19 August 2018 01:16 UTC

We continue to investigate issues within Application Insights. The root cause is not fully understood at this time. Some customers who configured availability tests to run from US regions continue to experience issues in the Azure Portal. We are working to establish the start time of the issue; initial findings indicate that the problem began at 08/18 01:10 UTC. We currently have no estimate for resolution.
  • Work Around: none
  • Next Update: Before 08/19 04:30 UTC

-Praveen


Initial Update: Saturday, 18 August 2018 22:13 UTC

We are aware of issues within Application Insights and are actively investigating. Some customers in US regions may experience availability test issues in the Azure Portal.
  • Work Around: none
  • Next Update: Before 08/19 01:30 UTC

We are working hard to resolve this issue and apologize for any inconvenience.
-Praveen


Design against crime & Microsoft Azure with Shiny



Guest blog by Mariam Elgabry, Microsoft Student Partner at the UCL Crime Science Department, on using Azure to host a Shiny web app.

Microsoft student partners IN ACTION!

Mariam Elgabry is a Microsoft Student Partner at University College London, completing her Masters of Research in Security and Crime Science whilst entering her PhD on future crimes. She graduated from Imperial College London with a Masters in Bioinformatics and was a Microsoft Student Partner Lead for the years 2016 – 2017, establishing the 3Hack, an interdisciplinary pre-hack for the Microsoft ImagineCup, and TourdeTech, an international initiative for computer science outreach, and was selected to attend the MVP Summit in 2016 in Seattle, USA. Mariam is an acting Special Sergeant at the Metropolitan Police of London, volunteering in her spare time. She has also completed an internship at Toren Consulting, during which she developed a Shiny app that produces an interactive map of local crime per London neighbourhood and generates a report on the prevalent crimes and hot spot areas. It is this app that we will use below to learn how to deploy a Shiny app onto Microsoft Azure services.

toren consulting

Toren Consulting Ltd (torenconsulting.co.uk) is a specialist security design consultancy for the built environment. The company was founded in 2017 by Mark Tucknutt MPhys CPhys MSc MSyI (LinkedIn) and provides security advice and design services to the property sector. Toren supports office, hotel and residential developments and fit-outs from concept design through detailed engineering design to construction and handover.

One of Toren’s frequent tasks is to deliver crime risk assessments, often in support of a project’s sustainability objectives via a BREEAM Security Needs Assessment. A key aspect of this is consideration of reported crime in the area local to a project, including a review of data available via police.uk. We engaged with UCL to find a student from the Security and Crime Science course who could help us to improve our crime risk assessment workflow. Mariam has developed for us the app described in this post, which we expect will both improve the quality of our crime risk assessments and reduce the time taken to produce them.

DEPLOYING A SHINY APP ONTO MICROSOFT AZURE Step-by-step

After developing your awesome shiny new app, you'll have to host it through a web service if you want others to share in its awesomeness. That's where Microsoft Azure comes in, providing a secure platform to host your application on. Essentially, what you'll learn today is a step-by-step process (with a few tangents here and there on the obstacles I faced, which will hopefully help you if you face them as well!) for building a Shiny Server that runs on Linux as a web interface hosting your R Shiny app. So let's get started!

First and foremost, you'll need to set up your MSDN account with Microsoft Azure; as a Microsoft Student Partner (and as a new user!) you get various credits, so make sure you make good use of them! Once you are all set up, you will have to create a new Ubuntu virtual machine (VM); ideally you want the latest stable version. Select the name and size, remember the username you've created, and select Password as your authentication option. Include the SSH and HTTP ports in your configuration (we will later select port 80 for our endpoint). Complete your setup and wait patiently until Azure has worked its magic and has, voilà, created your VM!


As Azure is completing your configuration and creating your VM, make sure you have PuTTY downloaded and installed (www.putty.org/). Grab the IP address from the "Overview" section of your newly deployed VM and copy and paste it into PuTTY.


This will prompt you to login with the username and password that you created on your Azure account. Once you’ve gotten the green light there (I know, it’s so confusing typing in the password without the cursor moving!), we need to update and upgrade to the latest versions. Type in the following commands in your command line:

$ sudo apt-get update

$ sudo apt-get upgrade

If it says some packages have not been upgraded, manually upgrade the ones it points out to you. We then need to get cracking on installing R and, of course, installing shiny:

$ sudo apt-get install r-base

$ sudo su - -c "R -e \"install.packages('shiny', repos='http://cran.rstudio.com/')\""

The shiny package does the reporting and the server allows for the hosting, so now we need our server! Configure it all with the following commands:

$ sudo apt-get install gdebi-core

$ wget https://download3.rstudio.org/ubuntu-14.04/x86_64/shiny-server-1.5.7.907-amd64.deb

$ sudo gdebi shiny-server-1.5.7.907-amd64.deb

This will produce your new best friend, the .conf file, where you will define everything about how you want your app to run – so get acquainted with it fast! You now need to open your .conf file and change the port on which your Shiny Server accepts traffic. You will be changing the default port of 3838 to port 80 (or whichever you prefer). Use the following command to do so:

$ sudo nano /etc/shiny-server/shiny-server.conf

(You can sudo vi or vim or whatever editor suits you best – I quite like nano! Sudo everything as an administrator, or else it will tell you that you are not authorised to make such changes; keep this in mind when you're trying to look at your log files later down the line during the debugging phase…)

Good advice also states, with every change, restart your server:

$ sudo restart shiny-server

Now, if everything has gone according to plan, type your IP address into your browser, followed by a colon and the port you just authorised (80), and you should see a lovely Shiny welcome page!


You will most likely have an error on the right of your page in the two panels. These panels hold a demonstration of two shiny apps that use rmarkdown – so let’s go ahead and install that into our VM so we can see them on our welcome page!

$ sudo su - -c "R -e \"install.packages('rmarkdown', repos='http://cran.rstudio.com/')\""


If at any point (not that this happened to me..no…not at all..), after trying to run and unrun things, the homepage disappears (again…not that this EVER happened to me..), check the configuration on your Azure portal Overview page: the settings of your DNS and assigned name. I changed the IP assignment from dynamic to static and ran through the installation process again – if something is already installed, it won't re-install it, but if there is an update available it will update it. So it really doesn't hurt…


Once all that works out, refresh your page and (hopefully) voila! You should see the applications!

My app – i want my app!

Up until now we're doing great; everything is set up and working, but we want to deploy our own app – the mission continues! Firstly, you'll need to copy over the necessary files from your local host onto your brand spanking new (and shiny) VM. Just in case you're such a noob that you don't know your local host name (once again…not that any of this relates to me and what I had to deal with through deployment..), open up your PC command line (on Windows, type cmd in your Start menu) and type the following command:

$ ipconfig /all

On Windows, you can change this to something you like and can remember (and type faster) by going to Settings > System > About (who doesn't like naming their computer?!).

If you’re a FURTHER noob (and I am not saying I came across these issues whatsoever…she says as she whistles away) and forgot your password (because you use the high tech facial rec or easier-to-remember PIN instead) then open your command line as an Administrator (right click on command line and select “Run as Administrator”) and type in the following command:

$ net user Administrator *

This will allow you to set up your new password, re-type it and it will overwrite your previous one – keep it safe and for the love of God, remember it!

Right, back to Shiny app deployment. Once you've collected yourself, scp your files over (I did it from my computer's command line to the VM, rather than from PuTTY, but either way works):

$ scp -r file_to_be_copied user@DNSname:

(remember, you’re copying your files reclusively using scp -r, stating the file and then telling it where to send it off in the format of username setup on your azure “at” the DNS name or IP address again taken from y our overview page on your Azure, semicolon path – here I would suggest you send it to /srv/shiny-server and then mkdir a folder to keep your app files in – for this app I created the folder “TorenApp”)

Once you have all the necessary files (namely, your ui.R and server.R scripts needed to run your app – I also have an .Rmd file as users of my app can download written content, using the knitr package), open the .conf file to edit the directory of your app, in order to display it on the webpage. Under location, site_dir needs to be altered to app_dir. This instructs the Shiny Server to attempt to serve a single application hosted at the given directory:

# Define the location '/TorenApp'
location /TorenApp {
  app_dir /srv/shiny-server/TorenApp;
}

If you reload your server (sudo reload shiny-server) and refresh your page, you should see your app – if you're super-duper lucky! If not, move on to the debug section below!

the torturous debugging phase

This phase is never easy, because your app will function exactly how you want it on your local machine, which may be running a different operating system to, let's say, Linux, which is your current VM OS! The best way to debug is by checking your log files and looking at where the system breaks by addressing the errors. Use these commands to get to your log files:

$ cd /var/log/shiny-server

$ nano appName-shinyuser-yyyymmdd-hhmmss-41509.log

Add the following line into your code (make sure to remove this later):

options(shiny.sanitize.errors = FALSE)

Edit your shiny server .conf file and add the following line after "run_as" :

$ sudo nano /etc/shiny-server/shiny-server.conf

preserve_logs true;

If you’re not that comfortable to work on command line, download X2go (https://wiki.x2go.org/doku.php), which is a lovely interface.

The go-to things I’d make sure I do in debugging if things aren’t working are:

· For one, make sure you have two files for your app (ui.R and server.R). I had mine in a single file and ran it that way locally, but the server expects two separate files in order to run and host the app properly.

· It goes without saying that all your dependencies should be explicitly downloaded through your script – don’t be clever, pacman does NOT work. What worked for me was manually installing every package onto the VM and only loading library calls in the script. You only have to do this once, so might as well do it and create an environment your app runs on.

· Make sure the R version you used on your local host is the same as the one you have downloaded onto your VM, so the dependencies work the same. I, for example, had to update to the latest version of R – you'd think the latest would be downloaded by default when installing R, but for some reason it was running an older version.

· Check that your script doesn't include any relative paths, and explicitly write out the new ones.

Well that’s a wrap! Hope you’ve found this blog interesting and please do get into contact if you face any other issues that aren’t mentioned here or that aren’t covered in the following (awesome) tutorials and blogs!

Awesome tutorials / documentation to help!

https://www.rstudio.com/products/shiny/download-server/

https://www.digitalocean.com/community/tutorials/how-to-set-up-shiny-server-on-ubuntu-14-04

https://sqlbits.com/Sessions/Event14/Shiny_dashboards_in_R

http://docs.rstudio.com/shiny-server/

https://stephlocke.info/Rtraining/shiny-server_build.html

https://www.top-password.com/knowledge/forgot-local-password-but-remember-pin.html

https://knowledge.autodesk.com/customer-service/network-license-administration/get-ready-network-license/getting-network-license-file/finding-your-host-name-and-id

How to make your classroom more accessible and inclusive




Mark Anderson is a former teacher and school leader and now award-winning author, blogger, speaker, thought-leader and trainer around all things to do with teaching, learning and effective use of technology in the classroom.

Mark firmly believes that education is a force for good and under his moniker of the ICT Evangelist he strives to demonstrate how technology is something that can help to make the big difference to the lives of learners and teachers alike.

He’s taking over as our guest editor over the summer with a series of blog posts highlighting the great things you can do with technology so that it can have the impact it so rightly should!

 



 

One of the most beautiful parts of the role of the teacher is the ability you have to make a difference. I am a fervent believer that education is a force for good, and the opportunity to make that difference in the world, for me, makes teaching one of the best and most rewarding jobs in the world. That doesn't mean it is an easy job by any stretch of the imagination: teachers and middle leaders work on average 54 hours per week, and senior leaders on average 60 hours per week (source: DfE's Workload Challenge poll). When thinking about implementing any new strategy, anything that adds to an already heavy workload is going to struggle to make an impact; something has to give.

When it comes to using technology in education there are lots of ways in which the workload of teachers, middle and senior leaders can be streamlined and made more efficient. On top of this, technology for education has never been more affordable and better value for money than it is currently.

 

 

People often talk about the potential of edtech to support and enhance teaching and learning; rarely, though, do we truly see technology transforming lives. One area where edtech truly can transform lives is when it is placed in the hands of learners who have accessibility and disability needs or other additional learning needs. For many of these learners, even just accessing the written word (without technology) can be nigh on impossible. When we start adding technology to the mix, there are so many amazing tools that can help with this.

In recent years, Microsoft has really pushed to the forefront of free and readily available technologies that support learners and make classrooms more inclusive and accessible. One of the most popular developments to come from Microsoft has been the Learning Tools, such as Immersive Reader, which was developed to assist learners with accessing and reading text in many of the different tools within Office 365.

Immersive Reader can recognise text. It can read text to you at various speeds in a variety of voices. The text can be displayed in different ways, such as highlighting nouns, syllables or adjectives. It can change the background of the page the text sits on too. It has a huge number of superb features which can open up the written word to those who might otherwise find it difficult or impossible to access.

 

 

Artificial Intelligence, more often referred to as AI, underpins lots of other little tweaks and fantastic tools from Microsoft to help you and your learners. For example, have you noticed the 'QuickStarter' option in PowerPoint? When you next need to create a presentation, try choosing the 'QuickStarter' option. Add in the keywords for your presentation, choose a layout and away you go. Boom! The bare bones of an already beautiful presentation have been made for you, thus speeding up the whole process of creating your resources for your learners.

Another superb tool released by Microsoft is the award-winning 'Seeing AI' app, which can describe your surroundings to you, read text off a page, or describe someone right down to their age and emotional expression. Superb for the low-vision community, this research project from within the UK Microsoft team has received a number of awards and recognitions since its launch earlier this year.

 

 

Another brilliant AI-powered feature which can help your classroom be even more accessible and inclusive is 'Presentation Translator for PowerPoint', part of 'Microsoft Translator'. Learn more here about the ways in which the Rochester Institute of Technology is using custom speech models and Microsoft Translator to make learning for their deaf or hard-of-hearing students more accessible than ever.

You can find out more about Microsoft's accessibility tools, products and developments by visiting their accessibility site here.


Follow Mark Anderson on social now! 

Twitter > @ICTEvangelist

Instagram > @ICTEvangelist

Facebook > /theictevangelist

LinkedIn > /themarkanderson

Blog > ictevangelist.com

 



Next on the Menu – A new tool! xel2sql


Based on a request by Microsoft Test Consultant Robert George, this SQL Snacks™, along with a tool I am releasing to the community, will allow you to run a Transact-SQL workload on Azure SQL Database, capture the xel files to Azure Blob Storage, and then process them to produce an executable Transact-SQL script that duplicates the captured workload.

The source code (quite simple) is available on my GitHub repo: https://github.com/bobtaylor29708/xel2sql
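For context, the capture side of this workflow is an Extended Events session on the Azure SQL Database whose event_file target writes the .xel files to blob storage. The following is only a minimal sketch (it is not part of xel2sql itself); it assumes a database-scoped credential for the container has already been created, and the storage account, container, and session names are placeholders:

-- Sketch: capture batch and RPC completions to .xel files in Azure Blob Storage.
CREATE EVENT SESSION [capture_workload] ON DATABASE
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.sql_text)),
ADD EVENT sqlserver.rpc_completed
    (ACTION (sqlserver.sql_text))
ADD TARGET package0.event_file
    (SET filename = 'https://<storageaccount>.blob.core.windows.net/<container>/capture_workload.xel');
GO

ALTER EVENT SESSION [capture_workload] ON DATABASE STATE = START;

Once the workload has been captured, stop the session and hand the resulting .xel blobs to xel2sql for processing.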

 

The PowerPC 600 series, part 11: Glue routines



The PowerPC has a concept of a "glue routine". This is a little block of code to assist with control transfer, most of the time to allow a caller in one module to call a function in another module. There are two things that make glue routines tricky: jumping to the final target, and juggling two tables of contents (the caller's and the callee's).

Registers r11 and r12 are available to glue routines as scratch registers. You can use them in your code, but be aware that they may be trashed by a glue routine, which means in practice that they are good only until the next taken jump instruction. (We saw earlier that r12 is used by prologues, but since prologues run at the start of a function, and you must have jumped there, prologues are welcome to use r12 as a scratch register because any valid caller must have assumed that r12 could have been trashed by a glue routine anyway.)

Let's take care of the easy case first: Suppose the routines share the same table of contents. This is usually the case if the caller and callee are in the same module. A glue routine may become necessary if a branch target ends up being too far away to be reached by the original branch, and the linker needs to insert a glue routine near the caller that in turn jumps to the callee. (On the Alpha AXP, this is called a trampoline.)



    bl      toofar_glue
    ...

toofar_glue:
    lwz     r11, n(r2)          ; r11 = original jump target (toofar)
    mtctr   r11                 ; ctr = original jump target (toofar)
    bctr                        ; and jump to toofar



Exercise: We had two choices for the register to use for the indirect jump. We could have used ctr or lr. Why did we choose ctr?

Next is the hard part: a glue routine that needs to connect functions that may have different tables of contents. This sort of thing happens if you naïvely import a function.



    bl      toofar_glue
    ...

toofar_glue:
    lwz     r11, n(r2)          ; r11 = function pointer
    lwz     r12, 0(r11)         ; r12 = code pointer
    stw     r2, 4(r1)           ; save caller's table of contents
    mtctr   r12                 ; ctr = code for target
    lwz     r2, 4(r11)          ; load callee's table of contents
    bctr                        ; and jump to toofar



The inter-module glue function sets up both the code pointer and the table of contents for the destination function. But there's the question of what to do with the old table of contents. For now, we save it in one of the reserved words on the stack, but we're still in trouble because the callee will return back to the caller with the wrong table of contents. Oh no!

The solution is to have the compiler leave a nop after every call that might be to a glue routine that jumps to another module. If the linker determines that the call target is indeed a glue routine, then it patches the nop to lwz r2, 4(r1) to reload the caller's table of contents. So from the caller's perspective, calling a glue routine looks like this:



; before
    bl      toofar              ; not sure if this is a glue routine or not
    nop                         ; so let's drop a nop here just in case

; after the linker inserts the glue routine
    bl      toofar_glue         ; turns out this was a glue routine after all
    lwz     r2, 4(r1)           ; reload caller's table of contents



The system also leaves the word at 8(r1) available for the runtime, but I don't see any code actually using it.¹ The remaining three reserved words in the stack frame have not been assigned a purpose yet; they remain reserved.

If the compiler can prove² that the call destination uses the same table of contents as the caller, then it can omit the nop.



The glue code saves the table of contents at 4(r1), but the calling function may have already saved its table of contents on the stack, in which case saving the table of contents again is redundant. On the other hand, if a function does not call through any function pointers, then it doesn't explicitly manage its table of contents because it figures the table of contents will never need to be restored. So there's a trade-off here: Do you force every function to save its table of contents on the stack just in case it calls a glue routine (and teach the linker how to fish the table of contents back out, so it can replace the nop with the correct reload instruction)? Or do you incur an extra store at every call to a glue routine? Windows chose the latter. My guess is that glue routines are already a bit expensive, so making them marginally more expensive is better than penalizing every non-leaf function with extra work that might end up not needed after all.³



Exercise: Discuss the impact of glue routines on tail call elimination.


¹ My guess is that intrusive code coverage/profiling tools may use it as a place to save the r11 register, thereby making r11 available to increment the coverage count. But I haven't found any PowerPC code coverage instrumented binaries to know for sure.

² Microsoft compilers in the early 1990's did not support link-time code generation, so the compiler can prove this only if the function being called resides in the same translation unit as the caller.

³ It's possible to eliminate most glue routines with sufficient diligence: Explicitly mark your imported functions as __declspec(dllimport) so that they aren't naïvely-imported any more. The only glue routines remaining would be the ones for calls to functions that are too far away.


Reaching Azure disk storage limit on General Purpose Azure SQL Database Managed Instance


Azure SQL Database Managed Instance is a SQL Server implementation in the Azure cloud that keeps all database files on Azure storage. In this post you will see how Managed Instance allocates disks in the storage layer and why this is important.

Azure SQL Database Managed Instance has a General Purpose tier that separates the compute and storage layers, with the database files placed on Azure Premium disks. Managed Instance uses pre-defined Azure disk sizes (128 GB, 256 GB, 512 GB, etc.) for every file, so each file is placed on the single smallest disk that can fit the file at its current size.

This is important because every Managed Instance has up to 35 TB of internal storage. This means that once you provision a Managed Instance you have two storage limits:

  1. Managed Instance user storage: the Managed Instance storage size that you choose in the portal, and the amount of storage you pay for
  2. Internal, physically allocated Azure Premium disk storage, which cannot exceed 35 TB

When you create database files, they are allocated on Azure Premium disks whose sizes are greater than the file sizes, so Managed Instance has some "internal fragmentation" of files. This is implemented because Azure Premium Disk storage offers a fixed set of disk sizes, and Managed Instance tries to fit each database file on the matching disk.

The sum of the allocated disk sizes cannot be greater than 35 TB. If you reach this limit, you might start getting errors even if you haven't reached the user-defined Managed Instance storage limit.

In this post, you will see some scripts that can help you determine whether you are reaching this storage limit.

First, we will create a schema and a view that wraps the standard sys.master_files view and returns the allocated disk size for every file:

CREATE SCHEMA mi;
GO
CREATE OR ALTER VIEW mi.master_files
AS
WITH mi_master_files AS
(
    SELECT *, size_gb = CAST(size * 8. / 1024 / 1024 AS decimal(12,4))
    FROM sys.master_files
)
SELECT *, azure_disk_size_gb = IIF(
    database_id <> 2,
    CASE WHEN size_gb <= 128 THEN 128
         WHEN size_gb > 128 AND size_gb <= 256 THEN 256
         WHEN size_gb > 256 AND size_gb <= 512 THEN 512
         WHEN size_gb > 512 AND size_gb <= 1024 THEN 1024
         WHEN size_gb > 1024 AND size_gb <= 2048 THEN 2048
         WHEN size_gb > 2048 AND size_gb <= 4096 THEN 4096
         ELSE 8192
    END, NULL)
FROM mi_master_files;
GO

Now we can see the size allocated for the underlying Azure Premium Disks for every database file:

SELECT db = db_name(database_id), name, size_gb, azure_disk_size_gb
from mi.master_files;

The sum of the Azure disk sizes should not exceed 35 TB; otherwise you will hit the Azure storage limit errors. You can check the total allocated Azure storage space using the following query:

SELECT storage_size_tb = SUM(azure_disk_size_gb) /1024.
FROM mi.master_files

Using this information, you can find out how many additional files you can add on a Managed Instance (assuming that each new file will be smaller than 128 GB):

SELECT remaining_number_of_128gb_files = 
(35 - ROUND(SUM(azure_disk_size_gb) /1024,0)) * 8
FROM mi.master_files

This is an important check because if this count reaches zero, you will not be able to add more database files to the instance.
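As a small addition to the scripts above (this query is not from the original post), you can also break the allocation down per database to see how much of the 35 TB internal limit is consumed by each database and by this internal fragmentation:

-- Sketch: per-database actual file size vs. allocated Azure disk size.
SELECT db = DB_NAME(database_id),
       total_size_gb = SUM(size_gb),
       total_azure_disk_size_gb = SUM(azure_disk_size_gb)
FROM mi.master_files
GROUP BY database_id
ORDER BY total_azure_disk_size_gb DESC;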

Bing.com runs on .NET Core 2.1!


Bing.com is a cloud service that runs on thousands of servers spanning many datacenters across the globe. Bing servers handle thousands of users' queries every second from consumers around the world doing searches through their browsers, from our partners using the Microsoft Cognitive Services APIs, and from the personal digital assistant, Cortana. Our users demand both relevancy and speed in those results, thus performance and reliability are key components in running a successful cloud service such as Bing.

Bing's front-end stack is written predominantly in managed code layered in an MVC pattern. Most of the business logic code is written as data models in C#, and the view logic is written in Razor. This layer is responsible for transforming the search result data (encoded as Microsoft Bond) to HTML that is then compressed and sent to the browser. As gatekeepers of that front-end platform at Bing, we consider developer productivity and feature agility as additional key components in our definition of success. Hundreds of developers rely on this platform to get their features to production, and they expect it to run like clockwork.

Since its beginning, Bing.com has run on the .NET Framework, but it recently transitioned to running on .NET Core. The main reasons driving Bing.com's adoption of .NET Core are performance (a.k.a serving latency), support for side-by-side and app-local installation independent of the machine-wide installation (or lack thereof) and ReadyToRun images. In anticipation of those improvements, we started an effort to make the code portable across .NET implementations, rather than relying on libraries only available on Windows and only with the .NET Framework. The team started the effort with .NET Standard 1.x, but the reduced API surface caused non-trivial complications for our code migrations. With the 20,000+ APIs that returned with .NET Standard 2.0, all that changed, and we were able to quickly shift gears from code modifications to testing. After squashing a few bugs, we were ready to deploy .NET Core to production.

ReadyToRun Images

Managed applications often can have poor startup performance as methods first have to be JIT compiled to machine code. .NET Framework has a precompilation technology, NGEN. However, NGEN requires the precompilation step to occur on the machine on which the code will execute. For Bing, that would mean NGENing on thousands of machines. This coupled with an aggressive deployment cycle would result in significant serving capacity reduction as the application gets precompiled on the web-serving machines. Furthermore, running NGEN requires administrative privileges, which are often unavailable or heavily scrutinized in a datacenter setting. On .NET Core, the crossgen tool allows the code to be precompiled as a pre-deployment step, such as in the build lab, and the images deployed to production are Ready To Run!

Performance

.NET Core 2.1 has made major performance improvements in virtually all areas of the runtime and libraries; a great treatise is available on a previous post in the blog.

Our production data resonates with the significant performance improvements in .NET Core 2.1 (as compared to both .NET Core 2.0 and .NET Framework 4.7.2). The graph below tracks our internal server latency over the last few months. The Y axis is the latency (actual values omitted), and the final precipitous drop (on June 2) is the deployment of .NET Core 2.1! That is a 34% improvement, all thanks to the hard work of the .NET community!

The following changes in .NET Core 2.1 are the highlights of this phenomenal improvement for our workload. They're presented in decreasing order of impact.

  1. Vectorization of string.Equals (@jkotas) & string.IndexOf/LastIndexOf (@eerhardt)

Whichever way you slice it, HTML rendering and manipulation are string-heavy workloads. String comparisons and indexing operations are major components of that. Vectorization of these operations is the single biggest contributor to the performance improvement we've measured.

  2. Devirtualization Support for EqualityComparer<T>.Default (@AndyAyersMS)

One of our major software components is a heavy user of Dictionary<int/long, V>, which indirectly benefits from the intrinsic recognition work that was done in the JIT to make Dictionary<K, V> amenable to that optimization (@benaadams)

  3. Software Write Watch for Concurrent GC (@Maoni0 and @kouvel)

This led to reduction in CPU usage in our application. Prior to .NET Core 2.1, the write-watch on Windows x64 (and on the .NET Framework) was implemented using Windows APIs that had a different performance trade-off. This new implementation relies on a JIT Write Barrier, which intuitively increases the cost of a reference store, but that cost is amortized and not noticed in our workload. This improvement is now also available on the .NET Framework via May 2018 Security and Quality Rollup

  4. Methods with calli are now inline-able (@AndyAyersMS and @mjsabby)

We use ldftn + calli in lieu of delegates (which incur an object allocation) in performance-critical pieces of our code where there is a need to call a managed method indirectly. This change allowed method bodies with a calli instruction to be eligible for inlining. Our dependency injection framework generates such methods.

  5. Improve performance of string.IndexOfAny for 2 & 3 char searches (@bbowyersmyth)

A common operation in a front-end stack is searching for ':', '/', '/' in a string to delimit portions of a URL. This special-casing improvement was beneficial throughout the codebase.

In addition to the runtime changes, .NET Core 2.1 also brought Brotli support to the .NET Library ecosystem. Bing.com uses this capability to dynamically compress the content and deliver it to supporting browsers.

Runtime Agility

Finally, the ability to have an xcopy version of the runtime inside our application means we're able to adopt newer versions of the runtime at a much faster pace. In fact, if you peek at the graph above, you'll see we took the .NET Core 2.1 update worldwide in a regular application deployment on June 2, which is two days after it was released!

This was possible because we were running our continuous integration (CI) pipeline with .NET Core's daily CI builds testing functionality and performance all the way through the release.

We're excited about the future and are collaborating closely with the .NET team to help them qualify their future updates! The .NET Core team is excited because of our large catalog of functional tests and an additional large codebase to measure real-world performance improvements on, as well as our commitment to providing Bing.com users with fast results and our own developers with the latest software and tools.

This blog post was authored by Mukul Sabharwal (@mjsabby) from the Bing.com Engineering team.

Unified Service Desk 4.0 is released – A modern, unified, adaptable, and reliable offering


Continuing towards our goal of bringing the best and brightest Dynamics 365 experiences to our users, enabling our developer community to build and deploy robust solutions, and providing our users and administrators with modern, unified, adaptable and reliable Unified Service Desk experiences, we have released the latest version: Unified Service Desk 4.0.

Download the latest version of the product.

The highlights of the release are as follows:

Unified Interface in Unified Service Desk

With the release of Dynamics 365 (online), version 9.0, we've introduced a new user experience, Unified Interface, which uses responsive web design principles to provide an optimal viewing and interaction experience for any screen size, device, or orientation. Unified Service Desk supports apps built using the Unified Interface framework. That is, you can load a URL or page from Dynamics 365 that is built on the Unified Interface framework.

 

For more information, see Support for Unified Interface Apps in Unified Service Desk

Web Client- Unified Interface Migration Assistant

The Web Client - Unified Interface Migration Assistant for Unified Service Desk is a tool that helps you seamlessly migrate your existing Unified Service Desk configurations from the Dynamics 365 Web Client to a Dynamics 365 Unified Interface App.

For more information, see Web Client - Unified Interface Migration Assistant

Prevent Accidental Closure of Unified Service Desk

While working in Unified Service Desk, if you accidentally select the X (Close) button, you may lose all your unsaved work. The Close Confirmation Window is introduced to prevent accidental closure of the Unified Service Desk client application.

For more information, see How to configure close confirmation window to prevent accidental closure of Unified Service Desk

Unified Interface KM Control

The Unified Interface KM Control hosted control is introduced for your knowledge base search experience with Unified Service Desk. You must configure the Unified Interface KM Control when you are using a Unified Interface App in Unified Service Desk.

For more information, see Unified Interface KM Control (Hosted Control)

Preview Feature: Unified Service Desk Administrator App 

With Unified Service Desk 4.0, you can use Unified Service Desk Administrator App built on the Unified Interface framework to administer and manage the Unified Service Desk client application.

The Unified Service Desk Administrator app is built on the Unified Interface framework, which has a new user experience - Unified Interface - which uses responsive web design principles to provide an optimal viewing and interaction experience for any screen size, device, or orientation. The Unified Service Desk Administrator app brings rich experience to administer and manage your Unified Service Desk client application.

For more information, see Preview feature - Unified Service Desk Administrator app

Preview feature: Unified Interface Settings

Unified Interface Settings is a new configuration element introduced in the Unified Service Desk Administrator App. This configuration element lets you configure a default Unified Interface App for your agents so that when they sign in to Unified Service Desk, they land straight in that app. This saves time and enhances the agents' sign-in experience in Unified Service Desk.

For more information, see Preview feature - Set default Unified Interface App using Unified Interface Settings

Preview feature: Stack notification in Unified Service Desk 

You can configure stack notifications in Unified Service Desk to display popup notification messages to your customer service agents that contain general information, or customer- or process-related information, that the agent can act on.

This facilitates simultaneous toast notifications in a multi-session environment.

For more information, see Stack notifications

Preview feature: Switch between local sessions, and between local and global session 

When you are working on a case (local session) and want to review your Dashboard (global session) or another case (local session), you can easily switch from the case to the Dashboard or another case without affecting your session timer. That is, when you switch away from a local session, your session timer is paused until you switch back to that session. This helps measure agents' productivity efficiently.

For more information, see SwitchSession Action in Session Tabs (Hosted Control)

Call to Action:

You are encouraged to validate the latest release in your environments and plan for an upgrade of the Unified Service Desk client. New and existing customers can use the best practice analyzer tool to validate their solutions and deployments for adherence to best practices. Existing customers on earlier versions of Dynamics 365, or with Unified Service Desk solutions built around the legacy web-client experience, can use the migration tool to migrate their solutions to the Unified Interface experience. To learn more about the new features and enhancements, see the documentation for each feature above.

 

(This blog has been authored by Kumar Ashutosh and Karthik Balasubramanian)

Docker Hub’s scheduled downtime on 25 August: potential impacts to App Service customers


Docker has scheduled a maintenance window for Docker Hub on Saturday August 25th, which has potential impacts to App Service customers.

For Web App for Containers (using a custom Docker image), customers will not be able to create new web apps using a Docker container image from Docker Hub during the maintenance window. Customers can still create new apps using Docker images hosted on Azure Container Registry or a private Docker registry. For App Service on Linux (using non-preview built-in stacks), customers will not be impacted, as we have Docker container images cached on our Linux workers.

To avoid unnecessary service interruptions, we recommend that Web App for Containers customers not make any changes to or restart their apps during the Docker Hub maintenance window, or that they use an alternative Docker registry for that period.

Experiencing Data Access Issue in Azure Portal for Many Data Types – 08/20 – Resolved

Final Update: Monday, 20 August 2018 11:59 UTC

We've confirmed that all systems are back to normal with no customer impact as of 08/20, 10:14 UTC. Our logs show the incident started on 08/20, 08:47 UTC, and that during the 1 hour and 27 minutes it took to resolve the issue, 352 customers experienced data access issues in the portal.
  • Root Cause: The failure was due to issues with the configuration of one of our back-end services.
  • Incident Timeline: 1 Hour & 27 minutes - 08/20, 08:47 UTC through 08/20, 10:14 UTC

We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Anmol


Initial Update: Monday, 20 August 2018 09:45 UTC

We are aware of issues within Log Analytics and are actively investigating. Some customers in the Southeast Asia and West Europe regions may experience data access issues.
  • Work Around: None
  • Next Update: Before 08/20 12:00 UTC

We are working hard to resolve this issue and apologize for any inconvenience.
-Abhijeet



Microsoft Office Specialist World Championship 2018: A Look Back


School and university students from all over the world met in Florida from July 29 to August 1 to take part in the Microsoft Office Specialist World Championship 2018. Lisa Grau was among them: "I can really only wish everyone the chance to qualify for the MOS World Championship! That was a trip I will definitely never forget," the student reports.

The worldwide competition is presented by Certiport, Inc. In the MOS World Championship, school and university students can demonstrate their knowledge of Microsoft Office Word, Excel, or PowerPoint (2013 or 2016). In some countries, a national-level competition precedes the MOS World Championship, and the best participants in the national MOS championships automatically advance to the MOS World Championship. In addition to a certificate, a medal, and a trophy, the winners in each category receive prize money of up to 7,000 US dollars.

Long live Harry Potter!

A total of 152 finalists took part in this year's MOSWC, and 18 winners were recognized. The competition took place at the Hilton Orlando Lake Buena Vista in Orlando, Florida. "The whole event was organized with incredible attention to detail and great care. Alongside various speeches, for example by Story Musgrave or by previous MOSWC participants who gained incredible opportunities through the competition (one participant was offered a full-time job at the age of 15), the official parts were held in Harry Potter style," reports Lisa Grau. Motivation and fun were an important part of the event, the student says. "The welcome speech revolved entirely around the theme 'Everybody needs a passion', and the party on Monday evening was Harry Potter themed as well."

The well-thought-out task design also made the event great fun.

"The tasks only stated the goal, and you had to find the way there yourself," says Lisa Grau. "The programs are extended with various functions, so you can't learn everything in advance; thinking outside the box is also required."

MOSWC participants from more than 50 countries

The MOS World Championship is not just about the young participants' Office skills, however, but also about global networking. 51 countries were represented at this year's MOS World Championship. "It was especially lovely to see how the most diverse traditions and cultures come together and get along with one another, and even embrace the differences.

Friendships were made that will probably last a very long time, and memories were created that will certainly never be forgotten," says Lisa Grau.

How do I apply for the MOS World Championship 2019?

School and university students can qualify for the MOS World Championship 2019 by taking a Microsoft Office Specialist exam in Word, Excel, or PowerPoint (2013 or 2016). Participants must be between 13 and 22 years old on the qualifying date. Registration usually takes place in the summer; the exact date is published on the MOS World Championship website. One thing is already certain, though: the 2019 finalists will be heading to New York, to the New York Marriott Marquis.

Continuous Integration, Deployment and Test Automation for Dynamics CRM


In this post, App Dev Manager Kamal Yuvaraj explores CI/CD and test automation for Dynamics CRM.


A successful agile software development process enables shorter development cycles, which means a faster time to market. DevOps adoption is key to achieving a successful agile software process. Dynamics CRM has its challenges, especially when it comes to the continuous integration and deployment (CI/CD) process. Through this series I will share my journey of implementing a CI/CD process for Dynamics CRM, with the process workflow, a POC, and tools. I am sure this will give developers and architects confidence in implementing a successful CI/CD process for CRM. Multiple organizations have adopted this process with great success. The end goal of adopting this process is to enable a repeatable and reliable release pipeline for a Dynamics CRM application.

Challenges

My journey started when I was assigned as DevOps lead for a Dynamics CRM implementation project. Having come from a custom development background, I was used to automated, continuous build and deployment every time code is committed into source control. I had a few key challenges with Dynamics CRM:

  • Solution files were manually extracted and imported into the target environment as the deployment process
  • No unit testing or validation for the deployed solution
  • Multiple deployment processes were followed across release environments. For example, in the Dev and SIT environments the solution was migrated manually, while in the UAT, Pre-Prod and Prod environments a DB compare was applied to promote changes
  • Reference data was manually entered in each environment
  • Multiple developers working in the same organization were overwriting each other's changes
  • CRM is a product, so the team must work within the exposed APIs for automation

Rules and Term definitions

Before we start solving the problem, let us take a moment to define some rules of engagement and common terminology. The rules also illustrate the end goal of the process.

Rules

  • Everything must be in source control: for example, plugin code, solution files, reference data, user roles, etc.
  • Check in regularly; every change should trigger the feedback process
  • The feedback process should be short and should validate the committed component
  • Track changes and roll back as needed

Dynamics CRM CICD process

The diagram below illustrates the basic flow of an application's CI/CD process.

A basic continuous integration system


Dynamics CRM Workflow


Pre-Commit: This is the development stage where changes are made to the application. At the start of a sprint, a user story is assigned to a developer, who uses this environment to make the changes and validate them. Unit tests are developed in parallel as part of test-driven development. The XRM interface was used to validate the entity and attribute changes. I have written a framework which simplifies access to XRM, which I will be sharing as part of this series.

Commit: This is a completely automated phase which begins as soon as the developer commits the changes into version control. The result is reported with a pass or fail status. This stage also generates a solution validation script, integrates the checked-in solution, and validates the solution after deployment.

Acceptance: This is the phase of deployment where validation is more detailed and can take hours to complete. It is executed against the latest version of the solution that passed the commit phase. In my project this phase was a daily job that ran regression tests using Selenium scripts and validated the integration with multiple systems. Again, the result is pass or fail, with a published report. If the run passes, the deployable artifacts are versioned and stored in a binary repository; this version will be used for the production release.
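
One possible sketch of that last step is shown below; the versioning scheme, paths, and feed URL are assumptions rather than the project's real values.

# Sketch of versioning the validated artifact and handing it to a binary repository.
# BUILD_BUILDID is the VSTS build id variable; the paths and version scheme are placeholders.
$version  = "1.0.$($env:BUILD_BUILDID)"
$artifact = ".\out\ContosoSales.zip"
$target   = ".\drop\ContosoSales.$version.zip"

Copy-Item $artifact $target

# Publish the drop folder as a VSTS build artifact, or wrap it in a package and push it
# to a VSTS package feed, so the same versioned bits can be promoted to production later.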

Having defined the basic CI/CD workflow, in my future posts let's investigate the flow of developer activities and the build process in each of the above environments. I will dive into each of the phases (Pre-Commit, Commit, and Acceptance) with a detailed explanation of the process and sample code showing how to automate the steps.


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality. Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.


Visual Studio for Mac version 7.6


Today we are announcing the release of Visual Studio for Mac version 7.6. Our focus with this release has been to improve product reliability in various areas, with a special focus on the code editing experience. We have also made several fixes that improve IDE performance. Finally, we’ve extended our support for Azure functions with the addition of new templates and the ability to publish your function to Azure from within the IDE.

This post highlights the major improvements in this release. To see the complete list of changes, check out the Visual Studio for Mac version 7.6 Release Notes. You can get started by downloading the new release or updating your existing install to the latest build available in the Stable channel.

Improving reliability of the Code Editor

We’ve focused our attention on improving the reliability of the code editor in Visual Studio for Mac and have addressed several issues, including a number of fixes for problems that many of you reported.

Improving performance of the IDE

One of the most frequently reported issues in previous releases has been editor performance. Having a fast and reliable code editor is a fundamental part of any IDE and an important part of any developer's workflow, so we’ve made some improvements in this area:

  • We improved tag-based classification for C# with PR #4740 by reusing existing Visual Studio for Windows code, which should improve typing performance in the editor.
  • We now support no-op restore of NuGet packages when opening a solution. This change speeds up NuGet restores on solution load.

We’ve also added many more small fixes that improve startup time and reduce memory consumption of the IDE.

Richer support for Azure Functions

Azure Functions are a great way to get a serverless function up and running in just a few minutes. With this release, we have introduced new templates for you to choose from when creating your Azure Functions project:

New Project Dialog showing how to configure Azure Functions project

These new templates allow you to configure access rights, connection strings, and any other binding properties that are required to configure the function. For information on selecting a template, refer to the Available function templates guide.

Another major part of the Azure Functions workflow that we are introducing with this release is publishing functions from Visual Studio for Mac to Azure. To publish a function, simply right-click the project name and select Publish > Publish to Azure. You’ll then be able to publish to an existing Azure App Service or use the publishing wizard to create a new one:

New App Service Dialog showing how to create new app service on Azure

For information on publishing to Azure from Visual Studio for Mac, see the Publishing to Azure guide.

Share your Feedback

Addressing reliability and performance issues in Visual Studio for Mac remains our top priority. Your feedback is extremely important to us and helps us prioritize the issues that are most impacting your workflow. There are several ways that you can reach out to us:

  • Use the Report a Problem tool in Visual Studio for Mac.
    • We are enhancing the Report a Problem experience by allowing you to report a problem without leaving the IDE. You’ll have the ability to automatically include additional information, such as crash logs, that will help our Engineering team narrow down the root cause of your report more effectively. This will be introduced in an upcoming servicing release to 7.6 that will be available in the Stable channel within the next few weeks.
  • You can track your issues on the Visual Studio Developer Community portal where you can ask questions and find answers.
  • In addition to filing issues, you can also add your vote or comment on existing issues. This helps us assess the impact of the issue.
Dominic Nahous, Senior PM Manager, Visual Studio for Mac
@VisualStudio

Dominic works as a PM manager on Visual Studio for Mac. His team focuses on ensuring a delightful experience for developers using a Mac to build apps.

Resolved: Issue with security update for the Remote Code Execution vulnerability in SQL Server 2016 SP2 (CU): August 14, 2018


We have replaced KB4293807 with KB4458621 resolving the issue described below. Updated SQL Server 2016 SP2 CU packages are now available on the Microsoft Download Center and Microsoft Update as outlined in KB4458621.

If you have previously installed KB4293807, it is recommended that you install KB4458621 as soon as possible.

PowerShell PowerTip: searching and installing modules on the command line


PowerShell 5+ ships with the PowerShellGet module, which lets us search for and install modules using cmdlets. The default NuGet repository is the PowerShell Gallery, but you can add others yourself (including custom repositories for internal modules).

There are a lot of reasons this could help you:

  • You need a custom module installed on a machine for a remote script. This lets us build in the logic to detect whether it is there and install it if it's not. Additionally, we can use remoting to install a module on a bunch of machines at once.
  • It will install your modules to the best-practice install location by default: C:\Program Files\WindowsPowerShell\Modules
  • It feels more like the Linux terminal for interactive work

There are a lot of cmdlets in the module, but the ones you'll use the most will be:

  1. Find-Module
  2. Install-Module (you'll probably want to run this as an admin)

You can also specify the scope on Install-Module to put a module in your per-user module location when it is just for you on that machine.
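
Putting that together, here is a minimal sketch of the workflow; the Pester module and the machine names are just example placeholders.

# Search the PowerShell Gallery for a module (Pester is just an example)
Find-Module -Name Pester

# Install for all users (run elevated); lands under C:\Program Files\WindowsPowerShell\Modules
Install-Module -Name Pester

# Or install only for the current user, no admin required
Install-Module -Name Pester -Scope CurrentUser

# Install-if-missing pattern, handy inside remote or deployment scripts
if (-not (Get-Module -ListAvailable -Name Pester)) {
    Install-Module -Name Pester -Force
}

# Same logic pushed to several machines at once via remoting (names are placeholders)
Invoke-Command -ComputerName server01, server02 -ScriptBlock {
    if (-not (Get-Module -ListAvailable -Name Pester)) {
        Install-Module -Name Pester -Force
    }
}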

Hope that helps, tune in more often to get short and sweet PowerTips!

Notes on Writing GPD Files for Printer Drivers


In this post, we briefly cover some points to note when writing the GPD files used with Universal printer drivers.

A GPD file is a text-based description of a printer's characteristics, commands, features, and so on, and there are many predefined keywords. For an overview of GPD files, please refer to Introduction to GPD Files; the key point here is that these predefined keywords are case-sensitive, so care is required.

Among the Standard Features are "Orientation", "InputBin", "PageSize", and "MediaType". If you write one of these in all capitals, for example "ORIENTATION", it will not be recognized correctly as a Standard Feature.
The following is an excerpt from the bitmap.gpd file included in the Windows Driver Samples; every keyword shown in it (which is nearly all of them) is case-sensitive.

*%******************************************************************************************
*%                                      Paper Size
*%******************************************************************************************
*Feature: PaperSize
{
    *rcNameID: =PAPER_SIZE_DISPLAY
    *DefaultOption: LETTER
    *Option: LETTER
    {
        *rcNameID: =RCID_DMPAPER_SYSTEM_NAME
        *switch: Orientation
        {
            *case: PORTRAIT
            {
                *PrintableArea: PAIR(9500, 12500)
                *PrintableOrigin: PAIR(400, 400)
                *CursorOrigin: PAIR(300, 300)
                *Command: CmdSelect
                {
                    *Order: DOC_SETUP.12
                    *Cmd: ""
                }
            }

If your driver does not behave as expected even though you believe the file is written correctly, please check the casing of your keywords once more against the following references:

General Attributes
https://docs.microsoft.com/en-us/windows-hardware/drivers/print/general-attributes

Standard Features
https://docs.microsoft.com/en-us/windows-hardware/drivers/print/standard-features

Standard Options
https://docs.microsoft.com/en-us/windows-hardware/drivers/print/standard-options

WDK Support Team, 祝田

Experiencing Data Access Issue in Azure and OMS portal for Log Analytics in East US – 08/21 – Resolved

Final Update: Tuesday, 21 August 2018 04:33 UTC

We've confirmed that all systems are back to normal, with no customer impact, as of 08/21, 03:35 UTC. Our logs show the incident started on 08/21, 03:00 UTC, and that during the 35 minutes it took to resolve the issue, customers with workspaces in East US experienced data access issues in the portal.
  • Root Cause: The failure was due to an issue in one of our dependent platform services.
  • Incident Timeline: 35 minutes - 08/21, 03:00 UTC through 08/21, 03:35 UTC

We understand that customers rely on Azure Log Analytics as a critical service and apologize for any impact this incident caused.

-Leela

