
Booting Windows 10 natively from a .VHDX drive file


This post is an update (using Windows 10 and a newer version of Convert-WindowsImage.ps1) of a similar post on my blog about booting natively from a .VHDX file:

https://blogs.msdn.microsoft.com/cesardelatorre/2014/10/18/booting-windows-8-1-update-natively-from-a-vhdx-image/

I’m also publishing this for my own records and for folks asking me about it, as it is not a super straightforward procedure.

This procedure is very useful when you need to boot Windows natively but also need multiple environments (for example, when using BETA/RC versions of Visual Studio or other dev tools, or simply dual/multiple boots with different configurations and software installed) without compromising UI performance the way you do with Hyper-V or any other virtual machine environment.

Doing this, you don’t have to give up any performance; this is the real thing! You boot natively. This is NOT a VM (Virtual Machine) booting from Hyper-V.

This is a native boot, but instead of booting from files in a partition, you boot from files placed within a .VHDX file. Still, you boot natively!

Why would you want to boot natively? Here are a few reasons:

– You need to use Android emulators on top of a hypervisor like Hyper-V (nested virtualization doesn’t have great performance).

– If you want to deploy mobile apps from Visual Studio (Xamarin apps to Android or Windows devices, for instance), you’d need to connect those mobile devices to a USB port, but Hyper-V VMs don’t support USB connections to devices.

– You want a good UI/graphics experience, as much as your PC can offer, with your graphics card not limited by any hypervisor like Hyper-V.

– In any case where a Hyper-V VM doesn’t give you the performance you need, while you still want to handle multiple environments on the same machine (although you’d be able to boot just one of them at a time, of course).

Here’s some additional info if you want to know more about “Virtual Hard Disks (.VHD/.VHDX) with Native Boot”: http://technet.microsoft.com/en-us/library/hh824872.aspx

In the past, I used to follow more complex steps to create a Windows 8 or Windows 7 .VHD master image and then boot my machine natively from it by configuring the boot options with bcdedit. Here’s my old post:  http://blogs.msdn.com/b/cesardelatorre/archive/2012/05/31/creating-a-windows-8-release-preview-master-vhd.aspx?wa=wsignin1.0

This is the way I currently do it. I’m using the .VHDX format, but you could also specify a .VHD (the older format) if you’d like.

Here are the steps. Pretty simple, actually!

1. You need a Windows .ISO image, like a “Windows 10 x64 – DVD (English)” from an MSDN subscription, or any other version (any Windows 10 edition, x64 or x86).

2. Download Convert-WindowsImage.ps1 from the Microsoft TechNet Gallery ( https://gallery.technet.microsoft.com/scriptcenter/Convert-WindowsImageps1-0fe23a8f ) and copy it to a temporary directory. You can also download it from the .ZIP download in my blog, where I already wrote the function execution line.

[Another way to create the .VHDX, which I haven’t tested, instead of using that PowerShell script, is the DISM tool (Deployment Image Servicing and Management) from the Windows ADK.]
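For reference, that untested DISM route would look roughly like this (the drive letters, the size and the install.wim path inside the mounted .ISO are placeholders):

REM Create, attach, partition and format an empty dynamic .VHDX with diskpart
diskpart
create vdisk file="C:\VHDs\Windows10.vhdx" maximum=150000 type=expandable
attach vdisk
create partition primary
format fs=ntfs quick
assign letter=V
exit

REM Apply the Windows image from the mounted .ISO (here D:) onto the new V: drive
dism /Apply-Image /ImageFile:D:\sources\install.wim /Index:1 /ApplyDir:V:\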

3. Start the PowerShell console in administrator mode

4. Before executing the PowerShell script, you’ll need to allow script execution in your machine or user policy. To allow it at the local machine scope, run the following command in the PowerShell console. IMPORTANT: run PowerShell with admin rights (“Run as Administrator”):

Set-ExecutionPolicy Unrestricted -Scope LocalMachine

[screenshot]

If you don’t run that command or don’t have that policy in place, you’ll get an error like the following when trying to execute any PowerShell script:

[screenshot]

For more info about those policies, read the following: http://technet.microsoft.com/library/hh847748.aspx

5. Edit the Convert-WindowsImage.ps1 file with Windows PowerShell ISE (or with any editor; even NOTEPAD works for this).

If using Windows PowerShell ISE, you’d better run it with admin rights (“Run as Administrator”) so you can run the script directly with F5 afterwards.

Then, add the following line at the end of the script (or update it with your .ISO image name and settings if you got my updated file):

Convert-WindowsImage -SourcePath .\en_windows_10_enterprise_x64_dvd.iso -VHDFormat VHDX -SizeBytes 150GB -VHDPath .\Windows10_Enterprise_x64_Bootable.vhdx

[screenshot]

6. Now, run the script either from Windows PowerShell ISE (with F5) or from a plain PowerShell command line (in both cases with admin privileges).

It’ll execute like in the following screenshot. Be patient: it takes a while, as it has to copy all the files from the Windows .ISO image to the logical drive based on the newly created .VHDX file.

[screenshot]

 

Since my .VHDX is dynamic and not yet mounted, its size was just under 8GB! 🙂

[screenshot]

7. MOUNT the .VHDX as a drive on your machine

Right-click the .VHDX file and select Mount. In my case, F: was the mounted drive.
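Alternatively, you can mount it from an elevated PowerShell prompt with the built-in Mount-DiskImage cmdlet (the path below assumes you kept the file name from step 5 and copied it to a C:\VHDs folder):

Mount-DiskImage -ImagePath "C:\VHDs\Windows10_Enterprise_x64_Bootable.vhdx"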

8. Set the BOOT files within the .VHDX

The following steps are needed to make your computer able to boot from the .VHDX file:
a. Open an administrative command prompt via WIN+X, Command Prompt (Admin)
b. Type bcdboot F:\Windows to create the boot files in your .VHDX drive.

[screenshot]

 

9. SAVE/COPY YOUR “MASTER .VHDX IMAGE FILE”!!!

At this point you have a “MASTER IMAGE .VHDX” that you can use on different machines/hardware: since you haven’t spun it up yet, it doesn’t contain any hardware-specific drivers. Copy the Windows10_Enterprise_x64_Bootable.vhdx somewhere else so you can re-use it on multiple machines, or on the same machine for multiple environments.

 

10. Change the Boot Loader description to the name you’d like for the boot option

Type bcdedit /v, search for the boot loader pointing to the .VHDX, and copy its GUID.

[screenshot]

 

With that GUID identifier you can change the description in your boot list by typing something like:

bcdedit /set {bd67a0a8-a586-11e6-bf4e-bc8385086e7d} description "Windows 10 Enterprise – VHDX boot"

(Of course, you’ll have a different GUID; use yours.)

[screenshot]

 

Check again with bcdedit /v that the description for your new boot loader has changed:

[screenshot]

 

11. Re-enable Hyper-V if you had it enabled in your original boot partition

If you had Hyper-V configured on your computer, don’t forget to re-enable the hypervisor launch type:

bcdedit /set hypervisorlaunchtype auto

Messing with the startup rebuilds your boot configuration data store, but the rebuild doesn’t know that Hyper-V needs specific settings enabled in that store in order to start the hypervisor. In any case, this step is unrelated to the rest of the procedure; you only need it if you also have Hyper-V installed.

 

12. YOU CAN NOW RESTART YOUR COMPUTER AND FINISH THE WINDOWS INSTALLATION.

When you reboot your machine, you’ll be able to select the new NATIVE WINDOWS BOOT, but from a .VHDX, like in the following screenshot!

[screenshot: dual-boot menu]

All that’s left is the final Windows installation: detecting devices, applying drivers and final configuration/personalization, and YOU ARE GOOD TO GO!

 

Additionally, bcdedit has many useful options, like copying an entry so it points to another .VHDX that you just copied onto your hard drive, etc. Just type bcdedit /? to check it out, or see other options that I explain at the end of my old post: http://blogs.msdn.com/b/cesardelatorre/archive/2012/05/31/creating-a-windows-8-release-preview-master-vhd.aspx?wa=wsignin1.0

 

 

END OF PROCEDURE

/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

 

CONFIGURING OTHER MACHINES OR MULTIPLE BOOT LOADERS FROM .VHDXs

If you copy the “MASTER .VHDX”, you could re-use it for multiple boots, even for other machines.

Here’s the procedure once you have an existing MASTER .VHDX already created.

First, copy and rename the .VHDX depending on what you will install, like “Windows_10_for_Testing_Betas.VHDX” or whatever. In my screenshots I’m still using a similar name to before, though.

1. Check initial boot loaders

You can configure the boot options of Windows by using the command-line tool bcdedit.exe.

bcdedit /v

Let’s say you start on another computer with a single boot from a single regular partition; you’ll see a description similar to the following:

[screenshot]

You can see that I currently have just a single boot loader, booting from the C: partition.

2. Create a second BOOT LOADER by copying the current Windows Boot Loader. Type:

bcdedit /copy {current} /d "Windows 10 .VHDX Boot"

That command copies the current boot loader but gives the copy a different DESCRIPTION. Also, very important: when you copy any BOOT LOADER, the new copy gets a new GUID identifier, which is what you are going to use.

Then type bcdedit /v again to see the newly created BOOT LOADER:

[screenshot]

You can see how now you have a second BOOT LOADER (#2 BOOT) with a different GUID than the original (#1 BOOT).

It also has the new description applied like “Windows 10 .VHDX Boot”. You’ll see that description when selecting the Boot option when starting your machine.

However, you are still not done, as that second BOOT LOADER is still pointing to the C: partition, and you want it pointing to the .VHDX file!

 

3. Copy the new GUID (from BOOT #2) with the mouse so you can use it in the next step. In this case I copied: {bd67a0a4-a586-11e6-bf4e-bc8385086e7d}

 

4. To point BOOT LOADER #2 to your .VHDX file, type the following two commands:

bcdedit /set {My_new_GUID_Number} device vhd=[C:]\VHDs\Windows10_Enterprise_x64_Bootable.vhdx

bcdedit /set {My_new_GUID_Number} osdevice vhd=[C:]\VHDs\Windows10_Enterprise_x64_Bootable.vhdx

Note the difference: “device” vs. “osdevice”.

[screenshot]

Now, you are done with the “hard” configuration.

Check that you have this new boot entry from Computer properties –> Advanced System Settings –> Advanced –> Startup and Recovery –> Settings button:

[screenshot]

You can just reboot the machine and select the BOOT option for your new .VHDX, and it’ll boot natively from that .VHDX!

 

Other BCDEDIT configurations:

You can update your boot loaders with commands like the following using the GUID of the BOOT LOADER you want to change:

TO CHANGE THE DESCRIPTION

bcdedit /set {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx} description "Windows 7 .VHD Image"

COPY

bcdedit /copy {Original_GUID_Number} /d "my new description"
or
bcdedit /copy {current} /d "my new description"
or
bcdedit /copy {default} /d "my new description"


Dynamics Retail Discount – Discount Offer


Discount offer is the simple discount in the Dynamics Retail Discount Engine. We sometimes also call it a periodic discount.

Each discount offer can have multiple discount line definitions, each of which specifies

  1. Which products to discount – we can configure it in one of three ways: product, variant, or category.
  2. Discount method: one of offer price, amount off ($ off), or percentage off (% off).

Given we allow multiple discount line definitions, it is possible to define, as an example, 10% off for all computer accessories in one line, and 15% off for a specific keyboard.

Discount offer is boring – it should be – so to make it a bit more interesting, I will share a bit of technical implementation detail: after we get the discounts from the database and before we start the calculation, we prepare the data structures, mostly lookups, for each discount. At this stage, when we see a product covered by multiple line definitions in one discount, we compare them, pick the best one and remove the rest. As always, compounding can make things more complicated: which one is better, amount off or percentage off, may be conditional on what has been applied earlier.

For example, say a $50 ergo keyboard is covered by two discount line definitions in one compoundable discount offer: 10% off and $4 off. If the keyboard is not covered by other discounts, then 10% off is better ($5 versus $4), while if we have applied a $20 discount before, then $4 off is better (10% of the remaining $30 is only $3).
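A minimal sketch of that comparison logic (a hypothetical helper, not the actual engine code):

// Pick the better deduction when the discount compounds on top of
// previously applied discounts.
static decimal BestDeduction(decimal originalPrice, decimal alreadyDeducted,
                             decimal percentageOff, decimal amountOff)
{
    decimal remaining = originalPrice - alreadyDeducted;
    decimal percentageDeduction = remaining * percentageOff / 100m;
    return Math.Max(percentageDeduction, amountOff);
}

// $50 keyboard, nothing applied yet: 10% ($5.00) beats $4 off.
//   BestDeduction(50m, 0m, 10m, 4m)  == 5.0m
// After a $20 discount: 10% of the remaining $30 is $3.00, so $4 off wins.
//   BestDeduction(50m, 20m, 10m, 4m) == 4.0m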

One option to address the issue is to enforce one discount method per discount offer; in other words, all discount line definitions for the same discount offer have the same discount method. With that restriction, the discount line definition comparison for each product becomes unconditional. We did not enforce it in the Dynamics Retail solution because it would be a strong restriction for a small nuance. Even so, I recommend avoiding multiple discount methods in one discount offer, especially in cases where one product is covered by multiple discount line definitions.

At the time of writing, we do not support quantity control, but it is an open option. If we were to add quantity control, it would no longer be a simple discount.

Note on the AX6 and AX7 difference: in AX6 we had offer price including tax as a discount method, while in AX7 we got rid of that unnecessary complexity.

Experiencing Data Access Issue in Azure Portal for Many Data Types – 01/22 – Resolved

Final Update: Sunday, 22 January 2017 00:04 UTC

We identified an issue within Application Insights and have actively mitigated it. We’ve confirmed that all systems are back to normal with no customer impact as of 01/22, 12:00 AM UTC. Our logs show the incident started on 1/21, 11:45 PM UTC and that during the 15 minutes that it took to resolve the issue, 6% of customers experienced Data Access Issues in the Azure Portal.
  • Root Cause: The failure was due to an issue in one of our dependent platform services.
  • Incident Timeline: 15 minutes – 1/21, 11:45 PM UTC through 1/22, 12:00 AM UTC

We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Sapna



Automated backups configuration fails when configured from Azure portal


In this post, we would like to explain one of the interesting issues that we encountered while using the automated backup feature for a VM from the Azure portal (the option is found under VM > SQL Server Configuration > Automated Backup).

Symptoms

Automated backups cannot be configured from the portal for an Azure VM created using the ARM (Azure Resource Manager) model. Configuration fails with the following error:

• TYPE
Microsoft.Compute/virtualMachines/extensions
• RESOURCE ID
/subscriptions/6c28b945-6d98-403d-8936-5e658f228a0f/resourceGroups/Group/providers/Microsoft.Compute/virtualMachines/LTO-CT-SQL/extensions/SqlIaasExtension
• STATUSMESSAGE
{ "status": "Failed", "error": { "code": "ResourceDeploymentFailure", "message": "The resource operation completed with terminal provisioning state 'Failed'.", "details": [ { "code": "VMExtensionHandlerNonTransientError", "message": "Handler 'Microsoft.SqlServer.Management.SqlIaaSAgent' has reported failure for VM Extension 'SqlIaasExtension' with terminal error code '1009' and error message: 'Enable failed for plugin (name: Microsoft.SqlServer.Management.SqlIaaSAgent, version 1.2.10.0) with exception Command C:\Packages\Plugins\Microsoft.SqlServer.Management.SqlIaaSAgent\1.2.10.0\enable.cmd of Microsoft.SqlServer.Management.SqlIaaSAgent has exited with Exit code: 255'" } ] } }
• RESOURCE
LTO-CT-SQL/SqlIaasExtension
• OPERATION ID
B3B967D4EF42741A

Cause

The SQL Server IaaS Agent service was disabled and didn’t start due to insufficient permissions.

Resolution

We can reproduce the issue with the following method.

We deployed a VM on our end, navigated to VM > SQL Server Configuration > Automated Backup, and the configuration failed with a similar error:

[screenshot]

Error", "message": "Handler 'Microsoft.SqlServer.Management.SqlIaaSAgent' has reported failure for VM Extension 'SqlIaasExtension' with terminal error code '1009' and error message: 'Enable failed for plugin (name: Microsoft.SqlServer.Management.SqlIaaSAgent, version 1.2.10.0) with exception
statusMessage:{"status":"Failed","error":{"code":"ResourceDeploymentFailure","message":"The resource operation completed with terminal provisioning state 'Failed'.","details":[{"code":"VMExtensionHandlerNonTransientError","message":"Handler 'Microsoft.SqlServer.Management.SqlIaaSAgent' has reported failure for VM Extension 'SqlIaasExtension' with terminal error code '1009' and error message: 'Enable failed for plugin (name: Microsoft.SqlServer.Management.SqlIaaSAgent, version 1.2.10.0) with exception Command C:\Packages\Plugins\Microsoft.SqlServer.Management.SqlIaaSAgent\1.2.10.0\enable.cmd of Microsoft.SqlServer.Management.SqlIaaSAgent has exited with Exit code: -532462766'"}]}}

We then went to the VM and checked the event viewer application and system logs and found the below errors:

The Microsoft SQL Server IaaS Agent service failed to start due to the following error:
The service did not start due to a logon failure.

The SQLIaaSExtension service was unable to log on as NT Service\SQLIaaSExtension with the currently configured password due to the following error:
Logon failure: the user has not been granted the requested logon type at this computer.

Service: SQLIaaSExtension
Domain and account: NT Service\SQLIaaSExtension

This service account does not have the required user right “Log on as a service.”

The above clearly indicates that the SQLIaaSExtension account needs to be granted that right in the local security policy.

We went to Run > secpol.msc > Security Settings > Local Policies > User Rights Assignment > Log on as a service (in the right pane), right-clicked it, opened its properties, and added this account.

[screenshot: secpol.msc]

We then tried to configure the automated backup again and didn’t see the error in Event Viewer.
To figure out where this account is used, we looked at services.msc and found that it is used by the Microsoft SQL Server IaaS Agent service.
We saw the service was in a stopped state.

[screenshot: services.msc]

Researching this, we found that the SQL Server IaaS Agent service helps automate administrative tasks, for example running jobs, monitoring SQL Server, and processing alerts. When we enable Automated Backup on a virtual machine, the extension is installed automatically, but in our scenario the service didn’t start due to the account permission issues.

We started the service, then tried to configure the automated backups from the Azure portal again, and it completed successfully without any errors. If it still fails even after that, the next step is to look at C:\WindowsAzure\Logs and C:\Packages\Plugins on the IaaS VM for any errors.

More Information:

The Microsoft SQL Server IaaS Agent service must be running for automated backups to be enabled and to function. When we enable Automated Backup on our virtual machine, the extension is installed automatically.
Automated Backup automatically configures Managed Backup to Microsoft Azure for all existing and new databases on an Azure VM running SQL Server 2014 Standard or Enterprise. This lets us configure regular database backups that use durable Azure blob storage. Automated Backup depends on the SQL Server IaaS Agent Extension.

Related articles:

More information on Automated Backups: https://azure.microsoft.com/en-in/documentation/articles/virtual-machines-windows-sql-automated-backup/

More information on IAAS Agent Service: https://azure.microsoft.com/en-in/documentation/articles/virtual-machines-windows-sql-server-agent-extension/

 

Written by:
Ujjwal Patel, Support Engineer, SQL Server Support

Reviewed by:
Raghavendra Srinivasan, Sr. Support Engineer, SQL Server Support

 

Automate SQL server backup file removal/deletion from Azure blob storage


In this post, we would like to explain one of the interesting issues that we encountered while working with Azure backups and restores.

Symptoms

The .bak files that were backed up to Azure blob storage using backup-to-URL or managed backup cannot be deleted through maintenance plans or any other option within SQL Server.

Cause

At this time there is no built-in functionality to automate the deletion of backup files in an Azure blob storage account/container.

Resolution

You can back up and restore to Azure blob storage using maintenance plans, but you cannot use the Maintenance Cleanup Task to clear the data from blob storage the way you can on-premises.
The way to achieve this is a PowerShell script. The script below deletes files that were last modified more than one day before the date it is run; the one day can be changed to any number of days your requirements call for.
We created a new storage account for testing this by using:

#To create a new storage account
New-AzureRmStorageAccount -ResourceGroupName resource-Test1 -Name bkuptourl -Type Standard_LRS -Location NorthEurope

#Script to delete backup files older than one day
$container = "bkup"
$StorageAccountName = "bkuptourl"
$StorageAccountKey = "xsVyDSvy48113b37ZEu0/VNkYAz9R81cO7UwOTp4qhDYU9zNbLAjioiOh3FVnzhO8n3tDYOyWnSkn=="
$context = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $StorageAccountKey
# List all blobs in the container; to target only backups, filter the names,
# e.g. append: | Where-Object { $_.Name -like "*.bak" } (or "*.trn" for log backups)
$filelist = Get-AzureStorageBlob -Container $container -Context $context
foreach ($file in $filelist | Where-Object { $_.LastModified.DateTime -lt (Get-Date).AddDays(-1) })
{
    $removefile = $file.Name
    if ($removefile -ne $null)
    {
        Write-Host "Removing file $removefile"
        Remove-AzureStorageBlob -Blob $removefile -Container $container -Context $context
    }
}

Sample output

[screenshot: sample output]

 

Written by:
Ujjwal Patel, Support Engineer, SQL Server Support

Reviewed by:
Raghavendra Srinivasan, Sr. Support Engineer, SQL Server Support

Understanding the Stock Loss


Q: When does a Stock Loss become a Loss?

A: Here is a suggestion that may help you decide whether to sell: pretend that you don’t own the stock and you have $2,800 in the bank. Then ask yourself, “Do I really want to buy this stock now?” If your answer is no, then why are you holding onto it?

Q: When Must You Cut Your Losses in a Stock?

A: Golden Rule: Always Limit Your Losses to 7% or 8% of Your Cost, without Exception

Two Zillow Presentations that Explain how Zillow works


I discovered the following two presentations that explain how Zillow works:

Zillow: Disrupting the Real Estate Marketplace with Data

  • Stan Humphries, Chief Economist from Zillow, talks about what data Zillow has, how Zillow thinks about big data, what applications Zillow builds on top of the data, a little bit about the technical infrastructure, and what questions the data allows Zillow to answer in terms of the real estate marketplace.

How Zillow uses Machine Learning to Transform Real Estate

  • Zillow pioneered providing access to unprecedented information about the housing market. Long gone are the days when you needed an agent to get comparables and prior sale and listing data. And with more data, data science has enabled more use cases. Jasjeet Thind explains how Zillow uses Spark and machine learning to transform real estate.

SQL Server Service fails to start after applying patch. Error: CREATE SCHEMA failed due to previous errors.


 

In this post we would like to explain one of the interesting issues that we encountered while upgrading a SQL Server Instance.

Symptoms

SQL Server Service fails to start after applying SQL patch due to misconfiguration in MSDB.

Error:

2016-06-28 19:23:41.22 spid5s    Script level upgrade for database 'master' failed because upgrade step 'msdb110_upgrade.sql' encountered error 2714, state 6, severity 25. Severity: 16, State: 0.
2016-06-28 19:23:41.22 spid5s    CREATE SCHEMA failed due to previous errors.
2016-06-28 19:23:41.22 spid5s    Error: 912, Severity: 21, State: 2.
2016-06-28 19:23:41.22 spid5s    Script level upgrade for database 'master' failed because upgrade step 'msdb110_upgrade.sql' encountered error 2714, state 6, severity 25. This is a serious error condition which might interfere with regular operation and the database will be taken offline. If the error happened during upgrade of the 'master' database, it will prevent the entire SQL Server instance from starting. Examine the previous error log entries for errors, take the appropriate corrective actions and re-start the database so that the script upgrade steps run to completion.

2016-06-28 19:23:41.22 spid5s    Error: 3417, Severity: 21, State: 3.
2016-06-28 19:23:41.22 spid5s    Cannot recover the master database. SQL Server is unable to run. Restore master from a full backup, repair it, or rebuild it. For more information about how to rebuild the master database, see SQL Server Books Online.

2016-06-28 19:23:41.22 spid5s    SQL Server shutdown has been initiated.

Cause

  • The upgrade script [msdb110_upgrade.sql] executes during the first restart of SQL Server after the service pack installation.
  • This script hits an exception while re-creating the database role “DatabaseMailUserRole”.
  • This is because the schema named “DatabaseMailUserRole” was owned by a database role other than “DatabaseMailUserRole” (the DBO role in our case).

Resolution

1. Start the SQL Server service using trace flag -T902 (902: skips execution of upgrade scripts during SQL Server startup)

[screenshot]

Or follow these steps:

· Open SQL Server Configuration Manager.

· In SQL Server Configuration Manager, click SQL Server Services.

· Double-click the SQL Server service.

· In the SQL Server Properties dialog box, click the Advanced tab.

· On the Advanced tab, locate the Startup Parameters item.

· Add ;-T902 to the end of the existing string value, and then click OK.

· Restart the SQL server service

2. Connect to the SQL instance and backup the MSDB database

3. Manually delete the schema named “DatabaseMailUserRole”:

Management Studio > Expand the MSDB database > Security > Schemas > Look for DatabaseMailUserRole

[screenshot]

4. Delete the schema named DatabaseMailUserRole.

5. Restart the SQL Server service.

[screenshot]

More information

  • Starting with SQL Server 2008, whenever we upgrade or patch SQL Server, setup upgrades only the binaries, not the databases and their objects.
  • Once the upgrade completes and the service restarts for the first time, the database upgrade starts, using the script msdb110_upgrade.sql, which is located under:

C:\Program Files\Microsoft SQL Server\MSSQLXX.YYYY\MSSQL\Install

XX = SQL version:
SQL 2008/R2 > 10
SQL 2012    > 11
SQL 2014    > 12

--------------------------------------------------------------
-- Database Mail roles and permissions
--------------------------------------------------------------
-- Create the DatabaseMailUserRole role
IF (EXISTS (SELECT *
            FROM msdb.dbo.sysusers
            WHERE (name = N'DatabaseMailUserRole')
              AND (issqlrole = 1)))
BEGIN -- if there are no members in the role, then drop and re-create it
    IF ((SELECT COUNT(*)
         FROM msdb.dbo.sysusers su,
              msdb.dbo.sysmembers sm
         WHERE (su.uid = sm.groupuid)
           AND (su.name = N'DatabaseMailUserRole')
           AND (su.issqlrole = 1)) = 0)
    BEGIN
        EXECUTE msdb.dbo.sp_droprole @rolename = N'DatabaseMailUserRole'
        EXECUTE msdb.dbo.sp_addrole @rolename = N'DatabaseMailUserRole'  -- << ************** Point of failure
    END
END
ELSE
    EXECUTE msdb.dbo.sp_addrole @rolename = N'DatabaseMailUserRole'

  • The command sp_addrole fails because it also creates a schema named “DatabaseMailUserRole”, which already exists in the MSDB database.
  • The preceding sp_droprole command was unable to delete the schema “DatabaseMailUserRole” because the schema was owned by another database role (the DBO role in our case).

Steps to repro the error:

1. Change the schema owner of “DatabaseMailUserRole” to DBO

USE [msdb]
GO
ALTER AUTHORIZATION ON SCHEMA::[DatabaseMailUserRole] TO [dbo]
GO

2. Try to execute the statements below, and we hit the same error:

EXECUTE msdb.dbo.sp_droprole @rolename = N'DatabaseMailUserRole'
EXECUTE msdb.dbo.sp_addrole @rolename = N'DatabaseMailUserRole'  -- << ************** Point of failure

[screenshot]



Written by:
Ujjwal Patel, Support Engineer, SQL Server Support
Reviewed by:
Raghavendra Srinivasan, Support Engineer, SQL Server Support

Error: Could not deploy package. Unable to connect to target server.


In this post we would like to explain one of the interesting issues that we encountered while deploying a DACPAC from SqlPackage.exe.

Symptoms

A DACPAC extracted from SQL Server 2012 cannot be deployed to SQL Server 2014 from custom .NET code or from the SqlPackage.exe command line:

[screenshot]

C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin>SqlPackage.exe /Action:Publish /SourceFile:"C:\temp\AgentLink2_11.0.6020.dacpac" /tsn:"RAGHAVSDC" /TargetDatabaseName:TestACM

Publishing to database 'TestACM' on server 'RAGHAVSDC'.
The dac history table will not be updated.
Initializing deployment (Start)
Initializing deployment (Failed)
*** Could not deploy package.
Unable to connect to target server.

Cause

There is no DAC folder at C:\Program Files (x86)\Microsoft SQL Server\120 on the system, only at C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin (so we can successfully publish to SQL 2012, but not to SQL 2014).

Resolution

To reproduce the issue, find the DAC folder at C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin.

Open a CMD with administrator privileges, navigate to that path, and run SqlPackage.exe to publish to a SQL 2014/2016 server; you will get the same exact error:

“*** Could not deploy package.

Unable to connect to target server.”

At first glance this looks like a connectivity error, but that is not the case here. We tested connectivity from multiple machines and found no issue with it. The solution to the problem is to install the DAC Framework ( https://www.microsoft.com/en-in/download/details.aspx?id=42293 ); once installed, the DAC folder appears at C:\Program Files (x86)\Microsoft SQL Server\120\DAC.

We can then publish the DACPAC with SqlPackage.exe from the 120 location, and it publishes successfully.

[screenshot]
More Information:

In the above scenario, we noticed that a DACPAC must be published with the SqlPackage.exe version that matches the target version of SQL Server.

If the DACPAC was taken from SQL 2012, we can publish it to any higher version of SQL Server, but to publish to SQL 2014 it must be published from the 120 folder (C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin). If we are trying to publish the DACPAC taken from SQL 2012 to SQL 2016, we need to publish the package from the 130 folder (C:\Program Files (x86)\Microsoft SQL Server\130\DAC\bin).

A DACPAC is a feature of the Data-tier Application framework that allows us to capture the schema of a database. In simple terms, it is only the database schema (definitions without the data), which can be used on higher versions of SQL Server. SqlPackage.exe is a utility that allows us to automate database development and deployment in our environment.

Related articles:

SQLPackage.exe: https://msdn.microsoft.com/en-us/library/hh550080(v=vs.103).aspx

Data Tier Applications: https://msdn.microsoft.com/en-us/library/ee210546.aspx

Design and Implementation for DACPAC: https://technet.microsoft.com/en-us/library/ee210546(v=sql.110).aspx

DAC Framework download: https://www.microsoft.com/en-in/download/details.aspx?id=42293

Written by – Ujjwal Patel, Support Engineer.
Reviewed by – Raghavendra, Sr. Support Engineer.

What’s New in C# 7.0


[Original post]: What’s New in C# 7.0

[Original publish date]: August 24, 2016

 

Here is a description of all the language features coming in C# 7.0. With the release of Visual Studio “15” Preview 4, most of these features are available to try out. Now is a good time to introduce them, and for you to let us know what you think.

C# 7.0 adds a number of new features, with a focus on data consumption, code simplification and performance. Perhaps the biggest features are tuples, which make it easy to return multiple results, and pattern matching, which simplifies code that branches on the “shape” of data. We hope that, combined, they make your code more concise and efficient, and you happier and more productive. Use the “send feedback” button at the top of the Visual Studio window to tell us if a feature doesn’t work as expected, or to share your thoughts on how a feature could be further improved. There are still many features not implemented in Preview 4. We plan to ship the features described below in the final release; if we can’t ship some of them in time, we will let you know in the release notes, and we will also tell you if anything changes. In the end, some features may be modified or dropped.

If you are interested in the design process behind these features, you can find design notes and discussions on the Roslyn GitHub site.

We hope you have a pleasant experience with C# 7.0.

 

Out variables

In the current version of C#, using out parameters isn’t as fluid as we’d like. Before you can call a method with out parameters, you first have to declare variables to pass to them. You typically aren’t initializing these variables (they are going to be overwritten by the method, after all), and you can’t use var to declare them; you have to spell out the full type.

In C# 7.0 we add out variables: the ability to declare a variable right at the point where it is passed as an out argument.
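For example, assuming a Point type with a GetCoordinates(out int x, out int y) method:

// Before C# 7.0: out variables have to be declared up front
public void PrintCoordinatesOld(Point p)
{
    int x, y; // predeclared, and var is not allowed here
    p.GetCoordinates(out x, out y);
    Console.WriteLine($"({x}, {y})");
}

// C# 7.0: declare the out variable at the point of use
public void PrintCoordinates(Point p)
{
    p.GetCoordinates(out int x, out int y); // "out var x" works too
    Console.WriteLine($"({x}, {y})");
}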

Since the out variables are scoped to the enclosing block, subsequent lines can use them. Most kinds of statements don’t establish their own scope, so out variables declared in them are usually introduced into the enclosing scope.

Note: In Preview 4 the scope rules are more restrictive: out variables are scoped to the statement they are declared in. So the example above won’t actually work until a later release.

Because out variables are declared directly as out arguments, the compiler can usually tell what their type should be (unless there are conflicting overloads), so you can use var instead of the actual type to declare them.

A common use of out parameters is the Try… pattern, where a boolean return value indicates success, and out parameters carry the results obtained:
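Along the lines of:

public void PrintStars(string s)
{
    if (int.TryParse(s, out var i)) { Console.WriteLine(new string('*', i)); }
    else { Console.WriteLine("Cloudy - no stars tonight!"); }
}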

Note: Here, i is only used within the if branch, so Preview 4 handles this fine.

We also plan to allow “wildcards” as out parameters, written as *, letting you ignore out results you don’t care about.

Note: It is still uncertain whether wildcards will make it into C# 7.0.

 

Pattern matching

C# 7.0 introduces the notion of patterns. Abstractly speaking, patterns are syntactic elements that can test whether a value has a certain “shape”, and extract information from it when it does.

Examples of patterns in C# 7.0:

  • Constant patterns of the form c (where c is a constant expression in C#), which test whether the input is equal to c.
  • Type patterns of the form T x (where T is a type and x is an identifier), which test whether the input has type T and, if so, extract it into a fresh variable x.
  • Var patterns of the form var x (where x is an identifier), which always match and simply put the input into a fresh variable x.

This is just the beginning: patterns are a new kind of language element in C#, and we expect to add more kinds of patterns in the future.

In C# 7.0 we are enhancing two existing language constructs with patterns:

  • The right-hand side of an is expression can now be a pattern, instead of just a type.
  • case clauses in switch statements can now match on patterns, not just constants.

In future releases we expect to add more places where patterns can be used.

 

Is-expressions with patterns

Here is an example of using is expressions with constant patterns and type patterns:
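Something like:

public void PrintStars(object o)
{
    if (o is null) return;      // constant pattern "null"
    if (!(o is int i)) return;  // type pattern "int i"
    Console.WriteLine(new string('*', i));
}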

As you can see, pattern variables (the variables introduced by a pattern) are similar to the out variables described earlier: they can be declared in the middle of an expression and used within the nearest enclosing scope. And just like out variables, pattern variables are mutable.

Note: As with out variables, stricter scope rules apply in Preview 4.

Patterns and Try-methods often go well together.

 

Switch statements with patterns

We’re generalizing the switch statement so that:

  • You can switch on any type (not just primitive types)
  • Patterns can be used in case clauses
  • Case clauses can have additional conditions on them

Here’s a simple example:
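A sketch, assuming simple Circle and Rectangle shape classes with the obvious properties:

static void PrintShape(object shape)
{
    switch (shape)
    {
        case Circle c:
            Console.WriteLine($"circle with radius {c.Radius}");
            break;
        case Rectangle s when (s.Length == s.Height):
            Console.WriteLine($"{s.Length} x {s.Height} square");
            break;
        case Rectangle r:
            Console.WriteLine($"{r.Length} x {r.Height} rectangle");
            break;
        default:
            Console.WriteLine("<unknown shape>");
            break;
        case null:
            throw new ArgumentNullException(nameof(shape));
    }
}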

There are several things to note about this newly extended switch statement:

  • The order of case clauses now matters: just like catch clauses, case clauses are no longer necessarily disjoint, and the first one that matches gets picked.
  • The default clause is always evaluated last: even though the null case above comes last, it will be checked before the default clause is picked. This is for compatibility with existing switch semantics. However, good practice would usually have you put the default clause at the end anyway.
  • The null clause at the end is not unreachable: this is because type patterns follow the example of the current is expression and do not match null. This ensures that null values aren’t accidentally snapped up by whichever type pattern happens to come first; you have to be more explicit about how to handle them (or leave them for the default clause).

 

Tuples

It is common to want to return more than one value from a method. The options available today are less than optimal:

  • Out parameters: clumsy to use (even with the improvements described above), and they don’t work with async methods.
  • System.Tuple<…> return types: verbose to use, and they require the allocation of a tuple object.
  • Custom-built transport types for the method: a lot of code overhead for a type whose only purpose is to temporarily group a few values.
  • Anonymous types returned through a dynamic return type: high performance overhead and no static type checking.

To do better, C# 7.0 adds tuple types and tuple literals:
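A sketch (LookupName and its body are placeholders for a real data lookup):

// Tuple return type with optional, descriptive element names
(string first, string middle, string last) LookupName(long id)
{
    // stand-in for retrieving the values from data storage
    return (first: "John", middle: "Q", last: "Public"); // tuple literal with element names
}

var names = LookupName(42);
Console.WriteLine($"found {names.first} {names.last}.");  // access elements by their given names
Console.WriteLine($"found {names.Item1} {names.Item3}."); // Item1..ItemN always work as well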

This method now effectively returns three strings, wrapped up as elements in a tuple value.

The caller of the method receives a tuple and can access its elements individually. Item1 etc. are the default names for tuple elements and can always be used, but they aren’t very descriptive, so you can optionally give the elements better names in the tuple type, as in the example above; the recipient of the tuple then has those more descriptive names to work with. You can also specify element names directly in tuple literals, as the return statement above shows.

Generally you can assign tuple types to each other regardless of the names: as long as the individual elements are assignable, tuple types convert freely to other tuple types. There are some restrictions, especially for tuple literals, that warn or error in the case of common mistakes, such as accidentally swapping the names of elements.

Note: These restrictions are not yet implemented in Preview 4.

Tuples are value types, and their elements are public and mutable. Two tuples are equal (and have the same hash code) if all their elements are pairwise equal (and have the same hash codes).

This makes tuples useful for many situations where you want multiple values. For instance, if you need a dictionary with multiple keys, use a tuple as your key and everything works out right. If you need a list with multiple values at each position, use a tuple, and searching the list will work correctly.

Note: Tuples rely on a set of underlying types that aren’t included in Preview 4. To make the feature work, you can easily get them via NuGet:

  • Right-click the project in Solution Explorer and select “Manage NuGet Packages…”
  • Select the “Browse” tab, check “Include prerelease” and select “nuget.org” as the “Package source”
  • Search for “System.ValueTuple” and install it

 

Deconstruction

Another way to consume tuples is to deconstruct them. A deconstructing declaration is a syntax for splitting a tuple into its parts and assigning those parts individually to fresh variables. You can declare a distinct type for each variable, use the var keyword for each of them, or even put a single var outside the parentheses as an abbreviation. You can also deconstruct into existing variables with a deconstructing assignment:
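The different forms look like this (reusing the LookupName sketch from above; id1 and id2 are arbitrary ids):

(string first, string middle, string last) = LookupName(id1); // deconstructing declaration
(var f, var m, var l) = LookupName(id1);                      // var for each variable
var (first2, middle2, last2) = LookupName(id1);               // single var outside the parentheses
(first, middle, last) = LookupName(id2);                      // deconstructing assignment into existing variables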

Deconstruction is not just for tuples. Any type can be deconstructed, as long as it has an (instance or extension) deconstructor method of the form Deconstruct(out T1 x1, …, out Tn xn). The out parameters constitute the values that result from the deconstruction. (Why does it use out parameters instead of returning a tuple? So that you can have multiple overloads for different numbers of values.)
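For instance, a hand-rolled Point type might look like this:

class Point
{
    public int X { get; }
    public int Y { get; }

    public Point(int x, int y) { X = x; Y = y; }

    // Deconstructor: the out parameters carry the deconstructed values
    public void Deconstruct(out int x, out int y) { x = X; y = Y; }
}

// usage:
// (var myX, var myY) = new Point(3, 4); // calls Deconstruct(out myX, out myY)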

It is a common pattern to have constructors and deconstructors be symmetric in this way.

As for out variables, we plan to allow “wildcards” in deconstruction for the variables you don’t care about.

Note: It is still uncertain whether wildcards will make it into C# 7.0.

 

Local functions

Sometimes a helper function only makes sense inside a single method that uses it. You can now declare such functions inside other function bodies as local functions.

Parameters and local variables from the enclosing scope are available inside a local function, just as they are in lambda expressions. As an example, methods implemented as iterators commonly need a non-iterator wrapper method for eagerly checking the arguments at the time of the call (the iterator itself doesn’t run until MoveNext is called). Local functions are perfect for this scenario:
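A sketch of that pattern:

public IEnumerable<T> Filter<T>(IEnumerable<T> source, Func<T, bool> filter)
{
    if (source == null) throw new ArgumentNullException(nameof(source));
    if (filter == null) throw new ArgumentNullException(nameof(filter));
    return Iterator(); // argument checks run eagerly, at call time

    IEnumerable<T> Iterator() // local function; closes over source and filter
    {
        foreach (var element in source)
        {
            if (filter(element)) yield return element;
        }
    }
}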

If Iterator had been a private method next to Filter, it would have been possible for other members to accidentally use it directly (without argument checking). Also, it would have needed to take all the same arguments as Filter, instead of having them just be in scope.

Note: In Preview 4, local functions must be declared before they are called. This restriction will be loosened so that they can be called as soon as the local variables they read from are definitely assigned.

 

Literal improvements

C# 7.0 allows _ to occur as a digit separator inside number literals. You can put them wherever you want between the digits, to improve readability; they have no effect on the value. Additionally, C# 7.0 introduces binary literals, so that you can specify bit patterns directly instead of having to know hexadecimal notation by heart:
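For example:

var d = 123_456;           // digit separator in a decimal literal; the value is unchanged
var x = 0xAB_CD_EF;        // works in hex literals too
var b = 0b1010_1011_1100;  // new binary literal, spelling out the bit pattern directly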

 

Ref returns and locals

Just like you can pass things by reference (with the ref modifier) in C#, you can now also return them by reference, and store them by reference in local variables:
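A sketch:

public ref int Find(int number, int[] numbers)
{
    for (int i = 0; i < numbers.Length; i++)
    {
        if (numbers[i] == number)
        {
            return ref numbers[i]; // return the storage location itself, not the value
        }
    }
    throw new IndexOutOfRangeException($"{nameof(number)} not found");
}

int[] array = { 1, 15, -39, 0, 7, 14, -12 };
ref int place = ref Find(7, array); // 'place' aliases the array slot holding 7
place = 9;                          // writing through the alias replaces 7 with 9 in the array
Console.WriteLine(array[4]);        // prints 9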

This is very useful for passing around placeholders into big data structures. For instance, a game might hold its data in a big preallocated array of structs (to avoid garbage collection pauses). Methods can now return a reference directly into such a struct, through which the caller can read and modify it.

There are some restrictions to ensure this is safe:

  • You can only return refs that are “safe to return”: ones that were passed to you, and ones that point into fields on objects.
  • Ref locals are initialized to a certain storage location and cannot be mutated to point to another.

 

Async return types

Up until now, async methods in C# had to return void, Task or Task<T>. C# 7.0 allows other types to be defined in such a way that they can be returned from async methods.

For instance, we plan to have a ValueTask<T> struct type. It is built to prevent the allocation of a Task<T> object in cases where the result of the async operation is already available at the time of awaiting. For many async scenarios, ones involving buffering for example, this can drastically reduce the number of allocations and lead to significant performance gains.

Note: Async return types are not yet available in Preview 4.

 

More expression-bodied members

Expression-bodied methods, properties and so on were a big hit in C# 6.0, but we didn’t allow them in all kinds of members. C# 7.0 adds accessors, constructors and finalizers to the list of things that can have expression bodies:
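A sketch (requires using System.Collections.Concurrent; the id scheme here is just illustrative):

class Person
{
    private static ConcurrentDictionary<int, string> names = new ConcurrentDictionary<int, string>();
    private static int nextId;
    private int id = nextId++;

    public Person(string name) => names.TryAdd(id, name); // expression-bodied constructor
    ~Person() => names.TryRemove(id, out _);              // expression-bodied finalizer
    public string Name
    {
        get => names[id];                                 // expression-bodied getter
        set => names[id] = value;                         // expression-bodied setter
    }
}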

Note: These additional kinds of expression-bodied members are not yet available in Preview 4.

This is an example of a feature that was contributed by the community, not by the Microsoft C# team. Open source!

 

Throw expressions

It has always been easy to throw an exception in the middle of an expression: just call a method that does it for you! But in C# 7.0 we are also allowing throw directly as an expression in certain places:
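For example:

class Person
{
    public string Name { get; }

    public Person(string name) => Name = name ?? throw new ArgumentNullException(nameof(name));

    public string GetFirstName()
    {
        var parts = Name.Split(' ');
        return (parts.Length > 0) ? parts[0] : throw new InvalidOperationException("No name!");
    }
}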

Note: Throw expressions are not yet available in Preview 4.

SharePoint Crawl log and the mysterious SearchID…

$
0
0

Hi Search Folks,

A very quick post to clarify the mysterious SearchID GUID seen when troubleshooting Crawl Errors.

Whenever an item fails to crawl, the URL view page (Central Administration / Search Service Application) provides you with an error message, but also with a SearchID GUID.

[screenshot: crawl error with SearchID]

Well, mystery solved: the SearchID GUID is the actual Correlation Id you can use to find out more details about the crawl error. You can then issue a simple PowerShell command to extract the much-needed ULS entries (note the lowercasing):

Merge-SPLogFile -Path "D:\temp\sitemastercrawlerror.log" -Correlation "3006F2D8-20F6-40F1-86A7-87FFC4C5238E".ToLowerInvariant()

As long as you have the ULS still available in your farm, you can start troubleshooting very quickly any crawl error.

One more thing…

For those who like to practice their PS abilities, you may consider automating the extraction of crawl errors out of the crawl log output. Remember that the crawl log object is available through the SP object model (SP2013+) and allows you to retrieve all crawl-log-related data. You can then parse the ErrorDesc column to extract the SearchID and issue Merge-SPLogFile automatically.

Example of dumping crawl log errors to a CSV file (adapt the SSA name, the content source and the output file path):

$ssa = Get-SPEnterpriseSearchServiceApplication | Where-Object {$_.Name -eq "Search Service Application"}
$id = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa | ? { $_.Name -eq "Intranet"}
$log = New-Object Microsoft.Office.Server.Search.Administration.CrawlLog $ssa
$dt = $log.GetCrawledUrls($false, 1000000, $null, $false, $id.Id, 2, -1, [System.DateTime]::MinValue, [System.DateTime]::MaxValue)
$dt | Export-CSV -Path "Crawllogs.csv" -NoTypeInformation

 

That’s it. I hope this helps you in your SharePoint Search journey.

Keep your Search up !

Dynamics Retail Discount – Quantity Discount


I wish I had a chance to re-design quantity discount for Dynamics Retail Discount Engine.

Let’s review the basics first.

Discount method

One discount method for one quantity discount: either unit price, or percentage off. (We could add unit amount off in the future.)

Quantity Level

Once a product reaches a quantity level, we apply the discount to all quantities of the product. We allow multiple quantity levels. Recently, we added the restriction that the discount cannot get worse as the quantity level increases.

Product setup

As in discount offer, we can configure a discount line definition in three ways: product, variant and category, and as in discount offer, we can have multiple discount line definitions.

If we configure a line for a single product (non-master product), or variant, then it is a straightforward quantity discount: if you reach a quantity level, you get the discount for the product.

If we configure a line for a category or a master product, then it is actually a mix and match with variable quantity, disguised as a quantity discount. For example, it is straightforward to configure the wine discount of buying 6+ bottles and getting an additional 10% off – a common promotion in many US grocery stores – with quantity level 6 and the wine category.

In short, we are mixing two types of discounts in quantity discount. It gets more complicated when we have multiple discount line definitions. By design, we treat each discount line definition as an independent discount for competition. If the discount is compoundable and multiple discount lines in the same discount cover the same product, they cannot compound on top of each other for that product. The current setup can create non-obvious and non-trivial internal competition when multiple discount lines in the same discount cover the same product.

By now, you can see why I wish I could re-design it in two different types of discounts.

Related: Dynamics Retail Discount – Discount Offer

Screencast/Video: Building a Java-based RESTful Service and Running it in a Docker Linux Container using the Azure Container Service


Published on Jan 22, 2017


[screenshot]

Figure 1: 1 Hour Presentation

Session Abstract

This session is really about the future of computing and how web-based applications will be run at scale on large clusters that provide built-in scalability, failure recovery, and optimal price performance characteristics.

This presentation will provide both conceptual and concrete, hands-on guidance on taking a Java-based (could be PHP, Node, etc.) RESTful application, packaging it up to run inside a container, and ultimately running it on a large, cloud-based Mesosphere DC/OS cluster. The presentation will begin with a brief overview of the history of containerization and cluster orchestration technologies, and will conclude with a soup-to-nuts live demonstration of all the steps required to containerize a web-based application and run it at scale in a fault-tolerant and vendor-neutral (100% OSS) way.

The demos will include cluster management automation, including Marathon and Mesos (DC/OS).
You can see my work here: http://blogs.msdn.microsoft.com/allth

This presentation will help new and existing developers better understand how companies such as Twitter, AirBnB and Netflix run web-based applications at massive scale, leveraging containerization and cluster orchestration technologies. Participants will better understand the trends and future direction of the large public and private clouds.

This session is about automatic scaling, failure recovery, and optimal utilization of compute, storage, and networking. Participants will walk away with a very hands-on and concrete understanding through live presentations about how all the pieces fit together.

OBJECTIVE #1: Discuss the role of containers and the value they provide for building and deploying applications, citing the work I did with McDonald’s to help them design their next point-of-sale software system to run in 34,000 stores worldwide

OBJECTIVE #2: Illustrate how to take a Java-based Restful service (Spring) and create a Tomcat-based container, encapsulating the dependencies within the container

OBJECTIVE #3: Discuss and illustrate the role of container orchestration and large distributed clusters, using the Azure Container Service as the example and the Data Center Operating System (DC/OS)

How is this perf testing thing actually working?


This post is #3 in a series of posts about performance testing.

Post #1 was all about setting up an instance of NAV on Azure and get perf tests up running.

Post #2 was all about scaling the number of users and running multi-tenancy.

But what actually happens when running perf tests?

When running a perf test called OpenCustomerList it doesn’t take a lot of thinking to figure out what the test does, but how does it do it?

The CreateAndPostSalesOrder test will Create a Sales Order and Post it, but how?

The core

The core of perf testing is really to simulate users and user actions. This can of course be done using tools that control either the Windows Client or the Web Client in a browser and simulate key presses and mouse events. There are a lot of tools with this functionality, but I would argue that they all depend very much on how the client is implemented and how the rendering is done.

Perf testing is slightly different.

If you investigate the settings in the perf test solution you will find a setting called NAVClientService, which is set to

https://<your public dns name>/NAV/WebClient/cs

cs?

If you try to open this Url in a browser it will fail.

If you remove cs, then you have the Url for the Web Client, so what is this cs?

cs is short for Client Services and in order to communicate with the Client Services endpoint of your NAV Server, you will have to have the Microsoft.Dynamics.Framework.UI.Client.dll, which you will find on the DVD, in the Test Assemblies folder:

[screenshot: Test Assemblies folder]

You should not try to communicate with this endpoint “manually”.

The Client Services endpoint allows you to create a new client. Unlike SOAP or OData web services, Client Services opens a session on the server. You tell the server which action you want to invoke, and the server tells you whether you need to display a new page to the user. Doing perf testing, we of course do not render the pages physically, but we do everything needed to make the NAV Server think there are real users behind a real client using the software.

The Visual Studio Solution in Github

The solution you cloned in part #1, has 3 projects:

  • Microsoft.Dynamics.Nav.LoadTest
  • Microsoft.Dynamics.Nav.TestUtilities
  • Microsoft.Dynamics.Nav.UserSession

The first project is where the scenarios are defined and the test mix.

The other 2 projects are there to make the communication with the Client Services endpoint a little easier.

Let’s follow the flow when we run CreateAndPostSalesOrder.

The Test Method looks like this:

[TestMethod]
public void CreateAndPostSalesOrder()
{
    TestScenario.Run(OrderProcessorUserContextManager, TestContext, RunCreateAndPostSalesOrder);
}

TestScenario.Run is a helper function, which takes a UserContextManager, a TestContext and the actual test method as parameters.

The UserContextManager is a class responsible for creating users of a certain role. TestScenario.Run will only ask the UserContextManager for a new UserContext if no users are available in the pool and the number of simultaneous users hasn’t been reached yet.

TestContext is the test context provided by Visual Studio Load Test Framework.

TestScenario.Run

Let’s look at what TestScenario.Run actually does:

public static void Run(UserContextManager manager, TestContext testContext, Action<UserContext> action, string actionName = null)
{
    var userContext = manager.GetUserContext(testContext);
    var formCount = userContext.GetOpenFormCount();
    action(userContext);
    userContext.CheckOpenForms(formCount);
    userContext.WaitForReady();
    manager.ReturnUserContext(testContext, userContext);
}

Line by line:

  1. Get a UserContext from the UserContextManager, either by getting one from the pool of users or by creating a new user.
  2. Remember the current number of open forms for later
  3. Perform the actual test scenario
  4. Check whether the test scenario closed all the forms that were opened; throw an error if not (this ensures session health)
  5. Wait for the UserContext to be ready
  6. Return the UserContext to the pool of UserContexts

The UserContextManager

The test scenario runner consults the UserContextManager twice. Once for getting a UserContext and once for returning the “used” UserContext to the manager. The Github sample implements two UserContextManagers, one with NAVUserPassword Authentication and one with Windows Authentication. In the sample we then instantiate one of these with proper parameters.

The UserContextManager is also responsible for distributing users between multiple tenants (if running multi-tenancy) and for selecting the company. The GitHub sample doesn’t really implement this, but in Post #2 you will see an example of how you could implement it in the UserContextManager. In real-life load testing you will probably find yourself creating at least one new UserContextManager class, deriving from one of the existing classes and implementing other ways of managing users, tenants and companies; you shouldn’t need to modify the base objects in the UserSession project.

The two methods you want to override are

/// <summary>
/// Get the UserName for the current virtual user
/// </summary>
/// <param name="testContext">current test context</param>
/// <returns></returns>
protected abstract string GetUserName(TestContext testContext);

/// <summary>
/// Create a new user context for the current virtual user
/// </summary>
/// <param name="testContext">current test context</param>
/// <returns></returns>
protected abstract UserContext CreateUserContext(TestContext testContext);

GetUserName is currently only used in CreateUserContext.

In the GitHub sample, when using Nav User Password authentication, GetUserName will check whether you are running load tests. If that is the case it will append the load test user id (0, 1, 2, 3,…) to the default username (from settings, ex. admin0, admin1, admin2, …). If you right-click a test and select Run Selected Test, the GitHub sample will just connect with the default username from settings.

In the GitHub sample, CreateUserContext will transfer TenantId and Company as static fields from the UserContextManager to the UserContext class. This is where you would implement your own distribution mechanism in your own UserContextManager class.

Note: in NAV 2017 there seems to be a bug which means that the company selection won’t actually be used. All users will connect to the company specified in their User Personalization.

The actual test scenario: RunCreateAndPostSalesOrder

RunCreateAndPostSalesOrder is called with the UserContext and, as stated earlier, the actual test scenario simulates what the user is doing, not by invoking key presses and mouse clicks, but by performing the logical interactions that the user performs. If you think about it, the user might be able to do a million things with NAV, but at the interaction level there are only so many things:

  • Enter values in controls
  • Inspect values in controls
  • Activate controls
  • Invoke actions

There are probably more, but for now we will settle with this.

You might be thinking: Hey, on my phone, I can swipe left on a customer and stuff happens, but if you think about it, this is just a different way of invoking an action, which is specific to a phone display target. You cannot perform a swipe through Client Services, but you can invoke the same action as the swipe performs.

Closing a Page is invoking an action (different in different display targets)

Opening a Page is not an interaction the user typically performs. The typical interaction is to invoke an action which, as a side effect, opens a Page (due to some PAGE.RUN code in the action). Yes, I know you can open a specific page by changing the URL in the Web Client, but it isn’t the typical navigation paradigm.

With this in mind, let’s look at the RunCreateAndPostSalesOrder code:

public void RunCreateAndPostSalesOrder(UserContext userContext)
{
    // Invoke using the new sales order action on Role Center
    var newSalesOrderPage = userContext.EnsurePage(SalesOrderPageId, userContext.RoleCenterPage.Action("Sales Order").InvokeCatchForm());
    // Start in the No. field
    newSalesOrderPage.Control("No.").Activate();
    // Navigate to Customer field in order to create record
    newSalesOrderPage.Control("Customer").Activate();
    var newSalesOrderNo = newSalesOrderPage.Control("No.").StringValue;
    TestContext.WriteLine("Created Sales Order No. {0}", newSalesOrderNo);
    // select a random customer
    var custno = TestScenario.SelectRandomRecordFromListPage(TestContext, CustomerListPageId, userContext, "No.");
    // Set Customer to a Random Customer and ignore any credit warning
    TestScenario.SaveValueAndIgnoreWarning(TestContext, userContext, newSalesOrderPage.Control("Customer"), custno);
    TestScenario.SaveValueWithDelay(newSalesOrderPage.Control("External Document No."), custno);
    userContext.ValidateForm(newSalesOrderPage);
    // Add a random number of lines between 2 and 5
    int noOfLines = SafeRandom.GetRandomNext(2, 6);
    for (int line = 0; line < noOfLines; line++)
    {
        AddSalesOrderLine(userContext, newSalesOrderPage, line);
    }
    // Check Validation errors
    userContext.ValidateForm(newSalesOrderPage);
    PostSalesOrder(userContext, newSalesOrderPage);
    // Close the page
    TestScenario.ClosePage(TestContext, userContext, newSalesOrderPage);
}

The first thing that happens here is:

userContext.RoleCenterPage.Action("Sales Order").InvokeCatchForm()

Locate the Sales Order Action on the Role Center, Invoke it and catch the Form that it opens.

This call is encapsulated in a call to EnsurePage, which basically checks whether the page opened by the action is the SalesOrderPage. If this is not the case, the method will throw an exception.

var newSalesOrderPage = userContext.EnsurePage(SalesOrderPageId, userContext.RoleCenterPage.Action("Sales Order").InvokeCatchForm());

This means that we can continue our test scenario flow, knowing that newSalesOrderPage is indeed the Sales Order Page.

The first thing we do in the newSalesOrderPage is to activate the No. field. It is the responsibility of the display target to activate the first control and since we are the display target, we have to do this:

newSalesOrderPage.Control("No.").Activate();

Next thing is activating the Customer control which, as all NAV users will know, means that the actual record is created and the Sales Order No. is filled out. After activating the Customer control, we can inspect the No. control and get the new Sales Order No. (and write it to the test output).

newSalesOrderPage.Control("Customer").Activate();
var newSalesOrderNo = newSalesOrderPage.Control("No.").StringValue;
TestContext.WriteLine("Created Sales Order No. {0}", newSalesOrderNo);

Next thing is to simulate the user pressing the drop down button and select a random customer. In this sample we don’t actually invoke the drop down but instead we select a random customer from the list page that lies behind the drop down.

var custno = TestScenario.SelectRandomRecordFromListPage(TestContext, CustomerListPageId, userContext, "No.");

Next thing – set the value of the customer in the Customer field:

TestScenario.SaveValueAndIgnoreWarning(TestContext, userContext, newSalesOrderPage.Control("Customer"), custno);

The SaveValueAndIgnoreWarning method saves the value in a field (with delay) and, if that causes a dialog to pop up, automatically tries to press Ignore. If there isn’t an Ignore button on the dialog, the function throws and the test fails. This ensures that things like credit limit warnings don’t prevent our tests from running.

After setting the Customer, set the External Document No. to the customer no as well (or any random number really):

TestScenario.SaveValueWithDelay(newSalesOrderPage.Control("External Document No."), custno);

SaveValueWithDelay will save the value in a control and sleep for 400ms. This delay is set in DelayTiming.cs and can of course be changed.

The next thing that happens is not really a user interaction, but it ensures that we don’t have any validation errors before starting to add lines to the sales order:

userContext.ValidateForm(newSalesOrderPage);

Next up is adding the lines, in the sample we add a random number of lines:

int noOfLines = SafeRandom.GetRandomNext(2, 6);
for (int line = 0; line < noOfLines; line++)
{
    AddSalesOrderLine(userContext, newSalesOrderPage, line);
}

AddSalesOrderLine does much the same as above. The only differences are getting the current line and adding a think delay after filling out the line.

After this, check for validation errors, post the order and close the page.

Adding a line

When dealing with lines (repeaters), you need to find the repeater and then find the right line. In the sample project this is done by:

// Get Line
var itemsLine = newSalesOrderPage.Repeater().DefaultViewport[line];

If you are going to add more than 5 lines, you will need to scroll down to the desired line (exactly like a user would do in the UI) and then get the desired line. In the NAVLoadTest repository you will find a sample on how this is done:

var repeater = newSalesOrderPage.Repeater();
var rowCount = repeater.Offset + repeater.DefaultViewport.Count;
if (line >= rowCount)
{
    // scroll to the next viewport
    userContext.InvokeInteraction(new ScrollRepeaterInteraction(repeater, 1));
}
var rowIndex = (int)(line - repeater.Offset);
var itemsLine = repeater.DefaultViewport[rowIndex];

If you look into the Repeater() method, it is an extension method on ClientLogicalForm that finds the first ClientRepeaterControl in the control tree under the page (including sub-pages):

return form.ContainedControls.OfType<ClientRepeaterControl>().First();

If you want to find a different repeater (if multiple exist), you will have to write your own extension method to do that.
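For example, a variant that picks the n'th repeater by index could look like this. It is a sketch modeled on the method above and is not part of the sample project:

public static ClientRepeaterControl Repeater(this ClientLogicalForm form, int index)
{
    // Same lookup as above, but select the repeater at the given index instead of the first.
    return form.ContainedControls.OfType<ClientRepeaterControl>().ElementAt(index);
}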

After getting the repeater, we need to find the correct line, potentially scrolling down first, and then get the desired line. The line has controls just like the page, meaning that you can do things like this on a line:

// set Type = Item
TestScenario.SaveValueWithDelay(itemsLine.Control("Type"), "Item");
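Filling out the rest of a line follows the same pattern. A sketch, assuming standard Sales Line controls; the item number and quantity used here are illustrative:

TestScenario.SaveValueWithDelay(itemsLine.Control("No."), "70000");
TestScenario.SaveValueWithDelay(itemsLine.Control("Quantity"), "1");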

Posting the Order

Posting the order seems straightforward, but it is a little more complicated than that. Locate the Post… action and invoke it:

postConfirmationDialog = newSalesOrderPage.Action("Post...").InvokeCatchDialog();

On the postConfirmationDialog, locate the OK button and press that.

ClientLogicalForm dialog = userContext.CatchDialog(postConfirmationDialog.Action("OK").Invoke);

If pressing OK on the confirmation dialog causes a dialog to popup, press No on that:

if (dialog != null)
{
    // The order has been posted and moved to the posted invoices tab, do you want to open...
    dialog.Action("No").Invoke();
}

You probably get the picture by now: every time the user is expected to do something, you need to code it.

Isn’t there an easier way?

Yes and no. I have helped a few partners write performance tests: I asked them to describe their scenarios (if possible with a few videos of users doing the actual work), and then we wrote the code based on these descriptions/recordings. It does, however, take a lot of time.

At Directions US 2016, I talked to the guys from ClickLearn (https://www.clicklearn.dk/dynamics/nav/). ClickLearn is a tool specialized in creating documentation and videos based on user scenarios. They demoed a recorder that could capture user interactions in NAV, and I proposed that they add support for generating C# code for the load test framework in their app.

By Directions EMEA 2016, ClickLearn demonstrated that they were now able to create C# scenarios based on their recordings, which is very cool. I will test this and write a blog post on how to use it to create a first stab at your scenarios, after which you can fix small issues manually.

Next blog post on performance testing will be around how to use/utilize this functionality.

The NAVLoadTest repository

The NAVLoadTest repository has primarily been maintained by David Worthington and has some cool samples on how to do things:

  • Selecting a customer using the drop down on the customer
  • Filtering a list on a column
  • Scroll the repeater
  • and other things

I don’t think the repository is updated to NAV 2017, but a lot of the things in the repo is still good samples, and the majority of things in the API has not changed. There are however changes in the UI between NAV 2016 and NAV 2017, meaning that the user would have to do slightly different things.

Videos

There are a few cool videos on Youtube showing how to write load tests. These videos were also created by David:

Enjoy

Freddy Kristiansen
Technical Evangelist

NRF 2017 – The Microsoft Booth


Back from the NRF (National Retail Federation) Retail's Big Show in New York, which brings together more than 30,000 people and 3,000 retail brands every year.

Three themes presented

1/ The challenges of exploiting data, and AI with its many uses (optimizing offers and inventory, personalizing messages, suggesting cross-sells, chatbots, etc.).

2/ Equipping in-store sales staff, both with work tools (point of sale, digital workplace, calendar management, etc.) and with customer knowledge (purchase history, wish lists, loyalty programs, etc.), thereby enabling clienteling.

3/ Optimizing processes and inventory to enable omnichannel sales (direct or in-store) and to optimize costs.


 

Below is a summary of the demos and ISVs present on the Microsoft booth.

Harness the Power of Data

Cognizant: "Digital Store of the Future Platform"

  • Customer journey integrating beacons and profile recognition
  • An augmented reality app to see, directly on your smartphone, the promotions around you in the store
  • A BI platform for the store manager

Esri: geographic information system.

  • Map-based data visualization to better understand the health of your business, market potential and more, all geolocated.

Plexure: intelligent marketing, from IoT to CRM

  • Analyzes and cross-references digital data, public data, and the in-store experience (via sensors such as beacons) to deliver a personalized customer experience (discount coupons, etc.)
  • PaaS platform with a very well documented API and SDK
  • Runs on Azure
  • Available in AppSource
  • References: McDonald's, 7-Eleven, IKEA

Neal Analytics: machine learning to determine the best product mix to offer in a store.

  • Optimizes restocking based on customer data combined with other information (season, weather, geography, etc.)
  • Primarily a consulting offering
  • Demo: optimizing the restocking of drink dispensers at Mars Drinks
  • References: Mars Drinks, Coca-Cola

Deliver Unified Commerce

FreedomPay: a PCI-compliant payment solution built on Azure for a more secure and more personalized payment experience.

Orckestra: a modular connected-commerce platform

  • An orchestrator that capitalizes on your existing systems
  • Tactical and/or incremental deployment
  • PaaS platform, extensible in .NET
  • Complete coverage of connected commerce
  • Agility without compromising the future
  • Runs on Azure
  • Integrates with ERPs, and particularly well with AX

Episerver: a CMS, marketing, commerce, and personalization platform

  • 30,000 websites worldwide
  • $18 billion in revenue across all its omnichannel sites
  • Runs on Azure

Sitecore: a 360° view of the customer journey, adapting digital content and offers to each consumer's profile.

  • Integrated management platform: content, data, commerce, and deliverables
  • A leader in web content management
  • Runs on Azure

Microsoft Dynamics 365 for Retail. Several scenarios were presented:

  • Inventory management across all distribution channels (retail, direct sales, and wholesale): procurement, restocking, stock, order management and more, plus Power BI for an end-to-end view of the processes.
  • Assortment and catalog management, category management, pricing, and promotions.
  • A 360° view of customers (omnichannel): purchase history, wish lists, loyalty programs, plus the Cortana Analytics Suite for sales recommendations.
  • POS: point of sale supported on all devices and operating systems, online and offline.

Empower your employees

Azure: identity and security with Azure Active Directory, for device management and federation with customers and partners.

Footmarks: a beacon platform

StaffHub in O365: a presentation of StaffHub

  • launched last week in O365
  • in-store schedule management and a communication platform for the sales force

Unily: a digital workplace for in-store sales forces, built on SharePoint

 

Optimize Operations

Smart Buildings: how IoT can reduce energy consumption in your premises and stores.

  • Lessons learned from deployments on the Microsoft campus in Redmond
  • A demonstration of the Azure IoT platform

Mojix: using blockchain to reduce transaction costs and make the supply chain more reliable and agile

Azure: showcasing the Cortana Intelligence Suite to predict next best actions, and a secure platform for processing point-of-sale data

 

Customer Journey

The Customer Journey area tied together several of the ISVs presented above. A video presentation of the experience: https://mediastream.microsoft.com/events/2017/1701/NRF/player/NRF1003-MTC.html

Innovation Zone

Acuity Brands: wireless LED lighting that saves energy and embeds in-store geolocation technology

Lakeba Shelfie robot: a robot and drone that scan shelves to optimize shelf layouts and store and warehouse inventories

Powershelf: shelf optimization via weight sensors that detect out-of-stock items, with label updates and alerts to in-store teams, plus an assessment of the potential losses in dollars via Power BI

FaceCake: AR for virtually trying on clothes via Kinect or directly through your phone's camera

 

Naturally, I am at your disposal if you have questions or are looking for contacts.



7 great ways to hit the ground running this school year


Guest post by Helen Gooch, Microsoft Fellow and Master Trainer. Connect with Helen on the Educator Community. 

List of new students for your class. Check! 

Classroom arranged for optimal learning styles. Check! 

Bulletin boards done. Check! 

Now let’s discover seven great ways to hit the ground running this school year by getting your curriculum ready, getting to know your students, and helping your class get started with technology. 

Put together some tools that will help you deliver highly effective lessons, and you will be set up for the year ahead and hit the ground running. Having an arsenal of teaching tools at hand will help you stay in touch and on point this year.

  1. Start your OneNote Class Notebook, move content over, and close old notebooks 

A centralised place for putting together thoughts and ideas can make the school year a lot easier. Many teachers use OneNote to organise curriculum, including previously used lesson plans, quizzes, and tests. A well-organised OneNote notebook is a springboard you can simply copy and reuse each year when you start new class notebooks, and update as you discover new ideas and lessons. With its organisational features, information can be kept on file and easily retrieved when needed.

Here’s how to set up and organise lesson plans for the year using OneNote:

  • Start by setting up that new Class Notebook in your Office 365 account by clicking on the Class Notebook tile in the app launcher (upper right corner). Find in-depth instructions on how to use the Class Notebook wizard and create your Class Notebook.
  • After you have finished with the Class Notebook creation wizard, you have the option of clicking on “Open in OneNote” from the top menu and working from your OneNote desktop. 
  • To move content from previous OneNote notebooks you wish to use again, simply right-click on any pages or sections and copy into the current year’s notebook. 
  • When you no longer need to see older OneNote notebooks in your list, simply follow these directions in OneNote 2016 help. You have not deleted your notebook; you’ve just closed it so that it isn’t visible in this list. Because it lives in the cloud, anyone who had access to the notebook still has access to it. 

For more ideas, download the new sample teacher notebook. And to learn more about saving valuable classroom time by distributing assignments and content between notebooks, check out this recent blog post, watch this instructional Office Mix, and download the add-in (free for OneNote 2013 or 2016). 

  2. Find lessons and classroom activities

Lesson plans have to be written, but where do you find great new ideas for developing a lesson aligned to content standards that will engage your students?

By collaborating with other teachers, you are able to compare and improve upon existing lesson plans by tapping into tried and tested methods. 

The Microsoft Educator Community is a phenomenal place to start.  You can quickly click on “Find a Lesson,” and filter by your subject and grade level. For free ideas, videos, samples, and training, just visit the Microsoft Educator Community’s OneNote “One Stop.” 

If your students would benefit from a virtual field trip, Skype in the Classroom provides that opportunity. No permission slips, no long bus rides, and no collecting fees to cover your trip. 

Finally, MIE Expert Tammy Dunbar just started a blog series on how to use Microsoft tools for delivering your interactive lessons. Check it out here: Lesson Planning with Microsoft: Introducing the Lesson! 

screenshot-2016-12-06-14-32-14

Learn from other teachers about better ways to teach and deliver learning experiences.

  3. Survey your students' interests

Gaining an understanding of students' interpretations of lessons, and their feedback on lesson plans, enables a stronger connection and a better understanding of what works in the classroom. A simple way to do this is by gathering vital data.

Send a Form out as a learning inventory to capture your students’ attention while you learn about what types of books they enjoy reading, how they prefer to learn or study, and how to help them achieve.

Check out this recent blog post by MIE Expert Laura Stanner, on how she is using new features in Microsoft Forms to individualise instruction. 

feedback, learning, students

Gain insight into student learning with feedback.

Another great way to learn about your students is to have them use Sway to “Sway my summer,” or create a “Who am I?” as a classmate introduction. 

 


  4. Find documents and assignments from the start

Keeping track of in-progress documents is convenient when teaching cumulative lesson plans. The new Office.com home page now displays all the key apps for learning, as well as your recent documents, making the work you did last class, yesterday, or the day before easier to find.

This saves valuable class time when starting a new day and returning to a lesson already introduced. It is also a great springboard when students log into Office 365 and need to open a new Word document to begin an essay or jump into their teacher’s lesson at the beginning of class.

Keep track of work-in-progress plans.

  5. Create documents easily from a browser

Students also need a convenient way to create and access their documents directly from the browser, even when they aren't on the Office.com home page. There is an Office Online extension for the Microsoft Edge browser, where you can easily create new Office documents, access your recent files, and open content stored on your OneDrive. This also works in the Chrome browser.

Make accessing documents easy.

  6. Keep notes and content organised in different ways for students

Teach your students how to use OneNote, whether they are taking notes on their own notebook, or using a Class Notebook created by their teacher.

By providing flexibility in idea sharing, different learning and teaching tactics can be delivered – from embedding videos, to capturing web resources, to inking and annotating right in their notebook, to recording voice notes, and the list goes on! But first, students must learn to use OneNote. And believe it or not, even shared iPad schools or Chromebook schools can use OneNote! When students sign in to the shared iPad or a Chromebook, they can access all their notebooks on the OneNote iPad app or a Chrome browser. The OneNote Notebook just lives in the cloud.

Check out this great site for students (and teachers): http://www.onenoteineducation.com/students

Share this new sample student notebook with your students to get them started with OneNote today!

  7. Go paperless this school year

Many students who love OneNote will also love using Office Lens to scan and digitize nearly anything and then send it to their notebooks without the need for any scanner. Plus, no more heavy backpacks full of paper!

Office Lens is now available for all Windows 10 devices—joining the iOS and Android apps – so any student can start scanning hard copy documents or whiteboards today. Just scan content with your device and then send it into OneNote, including your Class Notebooks, to reference and search on it later.

Once you send your content to OneNote, you can even search the text in the image using magical optical character recognition.

In addition to OneNote, you can send your content to be retyped in Word, sketched in PowerPoint, be searchable on OneDrive or sent via Outlook. The latest Office Lens updates released this week for all mobile phones let you just sign in with your school account and your scans will benefit from the easy sharing and security within your school’s Office 365 environment.

Check out this recent blog post to learn more about Office Lens as a “pocket scanner” with Office 365.

Students and teachers can get Office Lens for free today on their devices: iOS |Android | Windows

 

scanning, student notes, learning plan

Office Lens is an app that’s like having a scanner in your pocket.

These are just a few of the great ways to hit the ground running this year, to shake up your approach, and to engage students in learning. Pick your favourite, and incorporate that idea into a lesson soon. Then come back and adopt another idea, and another. Before you know it you will be using most — if not all — of these ideas, and your students will thank you!

Let us know your tips in the comments section.

To get more tips and tricks for this school year, check out our blog post “10 great ways to rock back to school with Microsoft” and follow us on Twitter at @MSAUedu

Original link – https://blogs.technet.microsoft.com/microsoft_in_education/2016/09/08/7-great-ways-school-year/ 

Putting my VB6 Windows app in the Windows 10 Store


[Original post]: Putting (my VB6) Windows Apps in the Windows 10 Store – Project Centennial

[Originally published]: September 14, '16

 

Today I noticed that Evernote is now in the Windows Store. I went to the Store, downloaded Evernote, and ran it. There was no Next -> Next -> Next -> Finish style installation; it just installed and ran fine. It's a Win32 app, and it appears to use NodeWebKit for part of its UI. But it's a Windows app, just like VB6 apps, .NET apps, and UWP (Universal Windows Platform) apps, so I think that's pretty cool. And because Evernote is a Store app, it can use new Windows 10 features such as live tiles and notifications, and it stays up to date.

By using the Universal Windows Platform to create and package applications, the Windows Store is slowly starting to expand and take in existing applications and games. This is the "Project Centennial" they announced at the developer conference. It lets you bring any Windows app into the Windows Store, which is simply very cool. Apps there are safer, won't mess up your machine, and can be installed and uninstalled quickly.

In that article you can learn the details of how your app gets converted. One of the biggest benefits of the Windows Store is precisely that Store apps don't mess up your machine.

These apps run in a special environment in which access to the app's file system and registry is redirected. The file Registry.dat is used for registry redirection; it is actually a registry hive, so you can view it in the Windows Registry Editor (Regedit). The AppData folder is the only part of the file system that is redirected, and it redirects to the same location where all UWP applications store their application data. This location is known as the local app data store, and you can access it through the LocalFolder property. That way, without you doing anything, your code is already reading and writing application data in the correct location, and you can also write there directly. One benefit of file system redirection is that applications uninstall cleanly.
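For reference, from UWP code the local app data store mentioned above is reached like this (a one-line sketch):

var localFolder = Windows.Storage.ApplicationData.Current.LocalFolder; // where redirected AppData reads and writes land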

The "DesktopAppConverter" is now published in the Windows Store, although it currently runs from a command prompt. If your Windows desktop application has a "silent installer," you can run the DesktopAppConverter to produce an APPX package, which you can then upload to the Windows Store.

Note: this "Centennial" technology ships in the Windows 10 Anniversary Update; if you haven't updated to that version yet, you can follow this article to update now.

You can also create Store-publishable apps with third-party tools such as InstallShield and WiX, so your existing MSI-based app can be converted into a UWP package and published to the Store.


 

It looks like there are several ways to turn your existing Windows application into an app that can live in the Windows 10 Store. You can use the DesktopAppConverter to run your silent installer. Once your application is a Store app, you can add code that "lights it up" with live tiles, notifications, and other features. There is a sample on GitHub showing how to add tiles or a background task: https://github.com/Microsoft/DesktopBridgeToUWP-Samples. If you have both a Windows Store version and a traditionally installed Windows desktop version of your app, you can compile with [Conditional("DesktopUWP")].
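A minimal sketch of how that conditional compilation could be structured; the symbol name follows the post, while the class and method here are illustrative:

using System.Diagnostics;

public static class StoreFeatures
{
    // Calls to this method are compiled away unless the DesktopUWP symbol is defined,
    // so the traditionally installed desktop build simply skips the Store-only code.
    [Conditional("DesktopUWP")]
    public static void RegisterLiveTile()
    {
        // Store-only light-up code (live tiles, notifications, ...) goes here.
    }
}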

If your application has no installer at all and is just a simple xcopy-deployed app, it's even easier. I proved that by getting a VB6 app successfully installed on my Windows 10 machine.

Note: I'm using VB6 as a fun and cool example. VB6 hasn't been supported for a long time, but apps created with it still run fine on Windows because they are Win32 applications. To me that means that if I have a VB6 program and want to publish it in the Store to broaden its reach, I absolutely can.

I made a little VB6 program, project1.exe.


 

Following the HelloWorld sample, the contents of my AppxManifest.xml file are as follows.

(The AppxManifest.xml contents were shown as a screenshot in the original post.)

The folder contains the project executable project1.exe along with the logo and some image files.

If I had a silent installer I could run the DesktopAppConverter now, but since I only have an xcopy app, I'll run the following command on my local machine to test it.
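The command itself appeared as a screenshot in the original post. For a loose, unpackaged layout like this one, it would presumably be the standard developer-mode registration of the manifest, something like:

Add-AppxPackage -Register .\AppxManifest.xml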

Now my little VB6 application is installed locally and shows up in the Start menu.


When I'm ready to use Visual Studio to package my application for submission to the Store, I'll follow this guide. Alternatively, it can be done manually on the command line with the MakeAppx and SignTool tools.

"C:\Program Files (x86)\Windows Kits\10\bin\x86\makeappx" pack /d . /p Project1.appx

 

Next I would need to purchase a code-signing certificate; for now I'll sign with a local, fake certificate.

"C:\Program Files (x86)\Windows Kits\10\bin\x86\makecert" /n "CN=HanselmanVB6" /r /pe /h 0 /eku "1.3.6.1.5.5.7.3.3,1.3.6.1.4.1.311.10.3.13" /e 12/31/2016 /sv MyLocalKey1.pvk MyLocalKey1.cer
"C:\Program Files (x86)\Windows Kits\10\bin\x86\pvk2pfx" -po -pvk MyLocalKey1.pvk -spc MyLocalKey1.cer -pfx MyLocalKey1.pfx
certutil -user -addstore Root MyLocalKey1.cer

Note: make sure the Identity in the app's manifest matches the CN of the signing certificate; it is the full string obtained from the certificate. Otherwise you will see strange entries in the Event Viewer under Microsoft | Windows | AppxPackagingOM | Microsoft-Windows-AppxPackaging/Operational, for example:

"error 0x8007000B: The app manifest publisher name (CN=HanselmanVB6, O=Hanselman, L=Portland, S=OR, C=USA) must match the subject name of the signing certificate exactly (CN=HanselmanVB6)."

I'll use a command line like the following. Remember, Visual Studio can hide a lot of these steps for you, but the benefit of doing it by hand is that you get a good look at the details.

"C:\Program Files (x86)\Windows Kits\10\bin\x86\signtool.exe" sign /debug /fd SHA256 /a /f MyLocalKey1.pfx Project1.appx
The following certificates were considered:
Issued to: HanselmanVB6
Issued by: HanselmanVB6
Expires: Sat Dec 31 00:00:00 2016
SHA1 hash: 19F384D1D0BD33F107B2D7344C4CA40F2A557749
After EKU filter, 1 certs were left.
After expiry filter, 1 certs were left.
After Private Key filter, 1 certs were left.
The following certificate was selected:
Issued to: HanselmanVB6
Issued by: HanselmanVB6
Expires: Sat Dec 31 00:00:00 2016
SHA1 hash: 19F384D1D0BD33F107B2D7344C4CA40F2A557749
The following additional certificates will be attached:
Done Adding Additional Store
Successfully signed: Project1.appx
Number of files successfully Signed: 1
Number of warnings: 0
Number of errors: 0

Now I have a locally developer-signed Appx package with a VB6 application inside. If I double-click it I get the Appx installer, but what I really want is to sign it with a real certificate and publish it to the Windows Store!


Here's a screenshot of the application running. I think this is a really nice user experience.

(Screenshot: the VB6 app running as a Store-installed application.)

In my opinion these are still early days, and I look forward to the time when I can go to the Windows Store and download some of my favorite software, such as Open Live Writer, Office, and Slack. Now is the time to start exploring these tools.

 

Related links:

Apache Kafka for HDInsight (public preview) (1)


Microsoft Japan Data Platform Tech Sales Team

高木 英朗

 

Apache Kafka, a popular distributed streaming platform, has been released on Microsoft Azure HDInsight as Kafka for HDInsight. As of the time of this post (2017/01/23), it is in public preview.

For more information about HDInsight, please see the following.

 

What is Apache Kafka?
Apache Kafka is an open-source distributed streaming platform for processing the very large volumes of data generated by real-time applications and similar workloads. It provides a message broker with a pub/sub messaging model in which you can publish and subscribe to named data streams.

Kafka has four core APIs:

  • Producer API: publishes stream data to topics
  • Consumer API: subscribes to stream data from topics
  • Streams API: performs real-time stream processing (transformation, conversion, etc.) without using Spark Streaming, Storm, or the like
  • Connector API: connects to other data sources (such as RDBMSs) to exchange data

(Figure: Kafka's four core APIs. From <https://kafka.apache.org/intro>)

 

Topics
Kafka sends and receives stream data by category, called a "topic." Topics are partitioned within the cluster, and the number of partitions can be specified when the topic is created. When a producer writes records, they are appended to the end of a partition. Replicating partitions across nodes provides fault tolerance. Within each partition, records are assigned sequence numbers, so order is preserved. A producer can choose which partition of a topic a record is written to, either round-robin or according to a function based on a key within the record.
Unlike a typical message queue, Kafka does not delete a record once a consumer has read it; records are retained according to the retention period.

(Figure: anatomy of a topic's partitioned log. From <https://kafka.apache.org/intro>)

 

Partitions and load balancing
Partitions are an important mechanism for load balancing. Kafka can spread the read load for a topic across multiple consumers. You assign a label called a consumer group, and multiple consumers that use the same group share the read load. Because consumers read at partition granularity, a consumer group cannot contain more consumers than there are partitions; each partition of a topic is handled by exactly one consumer within the consumer group.
(Figure: consumer groups. From <https://kafka.apache.org/intro>)

Record order is guaranteed only within a partition, not across partitions.
When the producer writes a record, you can specify a key for the partitioning scheme so that records with the same key land in the same partition; use this mechanism where ordering matters. If you need ordering across all records, you have to use a single partition.
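As an illustration of key-based partitioning, here is a minimal producer sketch in C# using the Confluent.Kafka client. The broker address, topic, and key are placeholders, and the choice of client library is an assumption; any Kafka client that exposes a message key behaves the same way:

using System.Threading.Tasks;
using Confluent.Kafka;

class KeyedProducerSample
{
    static async Task Main()
    {
        var config = new ProducerConfig { BootstrapServers = "mykafkabroker:9092" };
        using var producer = new ProducerBuilder<string, string>(config).Build();

        // Records sharing the key "device-42" hash to the same partition,
        // so consumers see them in the order they were produced.
        for (int i = 0; i < 10; i++)
        {
            await producer.ProduceAsync("mytopic",
                new Message<string, string> { Key = "device-42", Value = $"reading {i}" });
        }
    }
}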

 

Kafka for HDInsight
Kafka for HDInsight adds Apache Kafka to HDInsight. Through HDInsight, you can use a managed, highly scalable, highly available Kafka service on Microsoft Azure.
(Figure: Kafka for HDInsight architecture. From <https://azure.microsoft.com/ja-jp/services/hdinsight/apache-kafka/>)

With Kafka, proven in large-scale systems, now available on HDInsight, it becomes easier to integrate with a variety of OSS analytics platforms, broadening how you can put your data to work. You can see companies that have adopted Kafka here.

In the next post, I'll show how to actually deploy and run Kafka for HDInsight.

Related articles

David Chappell: how does the SaaS model change our business? Part three.


Three posts ago I wrote about the opportunities and challenges of the SaaS model, as an excerpt from David Chappell's How SaaS Changes an ISV's Business: A Guide for ISV Leaders. (As a reminder: David Chappell is an independent consultant based in the United States but active in many countries around the world. His work focuses on helping people and organizations get to know and understand new technologies and their impact.)

As a continuation of the previous two posts (first, second), this post shares another excerpt from that study, continuing to explore how the SaaS model can change a software company's business:
1. The impact of our SaaS application's average price on our business. Monthly fees for SaaS applications span a very wide range: from as little as $5 all the way up to $499 or even higher. The price at which we can sell our service fundamentally determines how we acquire customers, how we stay in touch with them, and so on:
(The pricing table was included as an image in the original post.)

2. Marketing: if our SaaS product's price sits in the top row of the table above, our marketing won't differ much from what we've done so far (the most significant new area will be digital marketing, if we haven't done it yet). However, if we can offer our product at a low price, we need to build our marketing around a completely new approach. Then the product or company website becomes the key: it is our marketer and our salesperson in one. The site must be prepared so that potential customers can find it, try the application without any interaction on our part, and clearly understand the value of using it. We need to apply the digital marketing toolbox. And we must measure everything: how many visitors came to the site, how many of them tried the application, how many of those subscribed, and so on.

3. Support. Customer support is one of the easiest things to stumble on in the SaaS business. We should strive to keep support costs as low as possible. A few key areas:

  • fully self-service payment and billing;
  • high availability and ease of use;
  • transparent, honest, open communication in case of bugs and outages;
  • self-service support: a wiki or FAQ page.

4. The SaaS business can also affect our company's day-to-day operations:

  • Billing;
  • The cost of the infrastructure running the SaaS app, which splits into two parts:
    • data center / own infrastructure costs;
    • the cost of continuous operation (e.g., always having an engineer on call who can intervene as quickly as possible in case of an unexpected outage).

5. Software development: since our SaaS application runs on infrastructure we manage (as opposed to an application running on our customers' clients or servers), we can update it much more easily and, more importantly, more frequently. New features and fixes can thus reach the production application very quickly (on a biweekly or monthly cadence). A very important consequence is that the classic waterfall methodology cannot really support this practice. So, if we haven't already, we absolutely must switch to an agile methodology and start using the services and tools that support it (e.g., VSTS). We should also get familiar with the methodologies that support rapid application development and release: DevOps, continuous integration, continuous delivery, and so on.

 

 

Microsoft Graph – sample code for manipulating a Range with the Excel REST API (C#)


Hello, this is Kengo Mori (kenmori) from Office Developer Support.

In this post, I'll walk through the experience of actually developing, in C#, a program that uses the Microsoft Graph Excel REST API to manipulate a Range object at a specified address.

Since it's written as a walkthrough, even readers new to this should be able to gain development experience and understanding by working through the whole post. As with the previous OneDrive API walkthrough, this post favors keeping the code as simple as possible for understanding the Excel REST API rather than emphasizing a realistic implementation scenario. Exception handling and the like are not included, so when you write real code, please treat this code purely as a reference.

The Excel REST API presupposes the OneDrive API: it can only be used against files obtained under the OneDrive API endpoint. Office 365 is a prerequisite, but once you get used to it, I think it offers higher development productivity for automation requirements than implementing something with OpenXML and the like.
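For example, once a workbook is reachable under the OneDrive endpoint, its worksheets can be listed with a request like this (an illustrative request; Book1.xlsx is a placeholder file name):

GET https://graph.microsoft.com/v1.0/me/drive/root:/Book1.xlsx:/workbook/worksheets
Authorization: Bearer {access-token}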

Preparation

Based on the earlier post, complete the registration of an application in Azure AD. At least the following two delegated permissions are required:

・Have full access to all files user can access
・Sign users in

Then make a note of the client ID and the redirect URI.

 

Development steps

1. Start Visual Studio and create a Windows Forms application.
2. In Solution Explorer, right-click [References] and click [Manage NuGet Packages].

3. Search for ADAL and install Microsoft.IdentityModel.Clients.ActiveDirectory.

4. Click [OK], then click [I Accept].
5. Next, in the same way, search for Newtonsoft and install Newtonsoft.Json.
6. Next, design the form.

(Screenshot: the designed form with the controls listed below.)

Control list

  • ExcelTestForm form
  • fileListCB combo box
  • workSheetsCB combo box
  • refreshBtn button
  • saveBtn button
  • rangeGV grid view

7. Right-click the project and click [Add] – [New Item].
8. Add MyFile.cs.
9. Write definitions like the following (used for JSON conversion):

using Newtonsoft.Json;
using System.Collections.Generic;

namespace ExcelAPITest
{
    public class MyFile
    {
        public string Name { get; set; }
    }

    public class MyFiles
    {
        public List<MyFile> Value;
    }

    public class WorkSheet
    {
        public string Name;
    }

    public class WorkSheets
    {
        public List<WorkSheet> Value;
    }

    public class Range
    {
        public List<List<string>> Values;
        public List<List<string>> Formulas;
    }
}

10. Go to the form's code.
11. Add the following using directives:

using Microsoft.IdentityModel.Clients.ActiveDirectory;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;

12. Add the following member variables to the form.
Note: for clientid and redirecturi, use the values you registered in Azure AD beforehand.

        const string resource = "https://graph.microsoft.com";
        const string clientid = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx";
        const string redirecturi = "urn:getaccesstokenfordebug";
        // Uncomment to test a user from a tenant outside the SSO domain in an ADFS environment
        // const string loginname = "admin@tenant.onmicrosoft.com";
        string AccessToken;
        // Holds the most recently retrieved range; referenced by DisplayData, saveBtn_Click, and GetRangeData.
        Range range;

13. Double-click the form in the designer and implement the Load event handler:

        private async void ExcelTestForm_Load(object sender, EventArgs e)
        {
            // You can also set these in the designer.
            rangeGV.AllowUserToAddRows = false;
            rangeGV.AllowUserToDeleteRows = false;
            rangeGV.ColumnHeadersVisible = false;
            rangeGV.RowHeadersVisible = false;

            AccessToken = await GetAccessToken(resource, clientid, redirecturi);
            DisplayFiles();
        }

        private async Task<string> GetAccessToken(string resource, string clientid, string redirecturi)
        {
            AuthenticationContext authenticationContext = new AuthenticationContext("https://login.microsoftonline.com/common");
            AuthenticationResult authenticationResult = await authenticationContext.AcquireTokenAsync(
                resource,
                clientid,
                new Uri(redirecturi),
                new PlatformParameters(PromptBehavior.Auto, null)
                // Uncomment to test a user from a tenant outside the SSO domain in an ADFS environment
                //, new UserIdentifier(loginname, UserIdentifierType.RequiredDisplayableId)
            );
            return authenticationResult.AccessToken;
        }

        private async void DisplayFiles()
        {
            using (HttpClient httpClient = new HttpClient())
            {
                httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", AccessToken);
                HttpRequestMessage request = new HttpRequestMessage(
                    HttpMethod.Get,
                    new Uri("https://graph.microsoft.com/v1.0/me/drive/root/children?$select=name")
                );
                var response = await httpClient.SendAsync(request);
                MyFiles files = JsonConvert.DeserializeObject<MyFiles>(response.Content.ReadAsStringAsync().Result);
                fileListCB.Items.Clear();

                foreach (MyFile file in files.Value)
                {
                    if (file.Name.ToLower().EndsWith(".xlsx"))
                    {
                        fileListCB.Items.Add(file.Name);
                    }
                }
                if (fileListCB.Items.Count > 0)
                {
                    fileListCB.SelectedIndex = 0;
                }
            }
        }

14. Double-click the SelectedIndexChanged event of fileListCB and implement the handler:

        private async void fileListCB_SelectedIndexChanged(object sender, EventArgs e)
        {
            string fileLeafRef = fileListCB.SelectedItem.ToString();

            using (HttpClient httpClient = new HttpClient())
            {
                httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", AccessToken);

                HttpRequestMessage request = new HttpRequestMessage(
                    HttpMethod.Get,
                    new Uri(string.Format("https://graph.microsoft.com/v1.0/me/drive/root:/{0}:/workbook/worksheets", fileLeafRef))
                );

                var response = await httpClient.SendAsync(request);
                // Clear the items displayed in the UI controls
                workSheetsCB.Items.Clear();
                rangeGV.Rows.Clear();
                rangeGV.Columns.Clear();

                WorkSheets worksheets = JsonConvert.DeserializeObject<WorkSheets>(response.Content.ReadAsStringAsync().Result);
                foreach (WorkSheet worksheet in worksheets.Value)
                {
                    workSheetsCB.Items.Add(worksheet.Name);
                }

                if (workSheetsCB.Items.Count > 0)
                {
                    workSheetsCB.SelectedIndex = 0;
                }
            }
        }

15. Double-click the SelectedIndexChanged event of workSheetsCB and implement the handler:

        private void workSheetsCB_SelectedIndexChanged(object sender, EventArgs e)
        {
            DisplayData();
        }

        private async void DisplayData()
        {
            string fileLeafRef = fileListCB.SelectedItem.ToString();
            string WorkSheetName = workSheetsCB.SelectedItem.ToString();
            using (HttpClient httpClient = new HttpClient())
            {
                httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", AccessToken);
                HttpRequestMessage request = new HttpRequestMessage(
                    HttpMethod.Get,
                    new Uri(string.Format("https://graph.microsoft.com/v1.0/me/drive/root:/{0}:/workbook/worksheets(%27{1}%27)/range(address=%27A1:E5%27)", fileLeafRef, WorkSheetName))
                );

                var response = await httpClient.SendAsync(request);
                range = JsonConvert.DeserializeObject<Range>(response.Content.ReadAsStringAsync().Result);
                RenderRangeData(range);
            }
        }

        private void RenderRangeData(Range range)
        {
            rangeGV.Columns.Clear();
            // Column names are dummies (column headers are hidden)
            rangeGV.Columns.Add("A", "A");
            rangeGV.Columns.Add("B", "B");
            rangeGV.Columns.Add("C", "C");
            rangeGV.Columns.Add("D", "D");
            rangeGV.Columns.Add("E", "E");
            rangeGV.Rows.Clear();
            foreach (var RowData in range.Values)
            {
                int rowIndex = rangeGV.Rows.Add();
                var Row = rangeGV.Rows[rowIndex];
                for (int j = 0; j < RowData.Count; j++)
                {
                    Row.Cells[j].Value = RowData[j];
                    if (RowData[j] != range.Formulas[rowIndex][j])
                    {
                        Row.Cells[j].ReadOnly = true;
                    }
                }
            }
        }

16. Double-click refreshBtn and implement the Click event:

        private void refreshBtn_Click(object sender, EventArgs e)
        {
            DisplayData();
        }

17. Double-click saveBtn and implement the Click event:

        private async void saveBtn_Click(object sender, EventArgs e)
        {
            string fileLeafRef = fileListCB.SelectedItem.ToString();
            string WorkSheetName = workSheetsCB.SelectedItem.ToString();

            using (HttpClient httpClient = new HttpClient())
            {
                httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", AccessToken);
                HttpRequestMessage request = new HttpRequestMessage(
                    new HttpMethod("PATCH"),
                    new Uri(string.Format("https://graph.microsoft.com/v1.0/me/drive/root:/{0}:/workbook/worksheets(%27{1}%27)/range(address=%27{1}!A1:E5%27)", fileLeafRef, WorkSheetName))
                );
                range = GetRangeData();
                request.Content = new StringContent(JsonConvert.SerializeObject(range), Encoding.UTF8, "application/json");
                var response = await httpClient.SendAsync(request);
                MessageBox.Show(response.StatusCode.ToString());
            }
        }

        private Range GetRangeData()
        {
            int rowIndex = 0;
            Range temprange = new Range();
            List<List<string>> formulas = new List<List<string>>();
            List<List<string>> values = new List<List<string>>();
            foreach (DataGridViewRow Row in rangeGV.Rows)
            {
                List<string> formulasInRow = new List<string>();
                List<string> valuesInRow = new List<string>();
                for (int j = 0; j < rangeGV.Columns.Count; j++)
                {
                    string CellData = (Row.Cells[j].Value == null) ? "" : Row.Cells[j].Value.ToString();
                    valuesInRow.Add(CellData);
                    if (Row.Cells[j].ReadOnly)
                    {
                        formulasInRow.Add(range.Formulas[rowIndex][j]);
                    }
                    else
                    {
                        formulasInRow.Add(CellData);
                    }
                }
                formulas.Add(formulasInRow);
                values.Add(valuesInRow);
                rowIndex++;
            }
            temprange.Values = values;
            temprange.Formulas = formulas;
            return temprange;
        }

Build the solution and verify the behavior of the Excel REST API.

How it works

・The first combo box (top left) is populated with the .xlsx files stored in OneDrive.
・The second combo box (top left) is populated with the worksheet names.
・Based on the selected file name and sheet, the cells (a 5x5 block from the top left) are displayed.
・Cells containing formulas are read-only.
・Clicking the [Save] button saves the changes.

(Screenshot: the application displaying the A1:E5 range.)

A major characteristic of the Excel API is that updates are applied through a co-authoring session, so there is no conflict just because someone happens to have the Excel file open. In this sample the updated cells are the 5x5 block at the top left (A1:E5), so edits other users make to any other cells are preserved.

(Screenshot: a co-author's update notification in Excel Online.)

The figure above shows the screen while another user was editing in the browser. When the application clicked Save in a separate session, the co-author's update was announced at the top right of the sheet.

References

If you want to go further with the Excel REST API, the following resources are useful:

Title: Working with Excel in Microsoft Graph
URL: https://graph.microsoft.io/ja-jp/docs/api-reference/v1.0/resources/excel

Title: New additions to the Excel REST APIs on the Microsoft Graph endpoint
URL: https://dev.office.com/blogs/additions-to-excel-rest-api-on-microsoft-graph

Title: Power your Apps with the new Excel REST API on the Microsoft Graph
URL: https://dev.office.com/blogs/power-your-apps-with-the-new-excel-rest-api

As mentioned in the previous post, see the following for Json.NET documentation:

Title: Json.NET Documentation
URL: http://www.newtonsoft.com/json/help/html/Introduction.htm

Title: Serializing and Deserializing JSON
URL: http://www.newtonsoft.com/json/help/html/SerializingJSON.htm

To reduce development effort, I recommend working out the REST calls you will use in advance with tools such as Graph Explorer, Fiddler, or Postman before writing the application. For debugging techniques, see:

Title: Handy tools for developing with Microsoft Graph
URL: https://blogs.msdn.microsoft.com/office_client_development_support_blog/2016/12/13/tools-for-development-with-microsoft-graph/

I plan to publish more Excel REST API samples over several future posts.

That's all for this post.
