Create class presentations anywhere, anytime with the PowerPoint Web App
Top 10 Microsoft Developer Links for Monday, February 10, 2014
- Brian Harry: Application Insights Visual Studio Add-in preview
- Brian Harry: Visual Studio Online Update – Feb 10
- Scott Hanselman: If you had to start over, what technologies would you learn in 2014?
- Andrew Chan: .NET Memory Analysis: Object Inspection
- Avneep Dhanju: JSON Debugger Visualizer in Visual Studio 2013
- Subodh Raghav: How to Create a Web service in ASP.NET
- Bill Wilder for Coding Out Loud: Dumping objects one property at a time?
- Visual Studio Blog: Visual Studio 2013 arriving soon for Windows Embedded Compact 2013
- Ardalis: Working with Kendo UI Templates
- Damien Edwards: Checklist: What NOT to do in ASP.NET
"Property 'SAP__Origin' is invalid"
Symptoms:
After using DuetConfig.exe with the -ImportBdc command to import models into SharePoint, you may see the following error in the SAP logs when trying to use Duet Enterprise functionality:
"Property 'SAP__Origin' is invalid"
Cause:
The LsiUrl value used when importing the model into SharePoint was incorrect: it was missing the required multi-origin (;mo) identifier, which routes the request to the SAP service that expects the 'SAP__Origin' property sent with the Duet Enterprise request from SharePoint.
Resolution:
Remove and re-import the models using the correct LsiUrl syntax as documented in "Table 2" of the Duet Enterprise 2.0 SAP Configuration Guide:
For example, here are the URLs for Workflow and Subscription:
LSI URL for Workflow:
https://<host>:<port>/sap/opu/odata/IWWRK/DUET_WORKFLOW_CORE;mo;c=SHAREPOINT_DE/
LSI URL for Subscription:
https://<host>:<port>/sap/opu/odata/IWBEP/SUBSCRIPTIONMANAGEMENT;mo;v=2;c=SHAREPOINT_DE/
Here is an example of a correct ImportBdc command for Workflow:
DuetConfig.exe -ImportBdc -FeatureName Workflow -LsiUrl https://SAPGATEWAY.domain.com:8001/sap/opu/odata/IWWRK/DUET_WORKFLOW_CORE;mo;c=SHAREPOINT_DE -BdcServiceApplication "Business Data Connectivity Service" -UserSubLsiUrl https://SAPGATEWAY.domain.com:8001/sap/opu/odata/IWBEP/SUBSCRIPTIONMANAGEMENT;v=2;mo;c=SHAREPOINT_DE
What's the size of my TFS Databases?
How many times have you been caught trying to figure out the current size of your TFS databases? More often than not, this comes up just before a TFS migration or a backup procedure. Here's a nifty T-SQL script to get your answer.
/*==============================================*/
-- Please use judiciously, as Microsoft does not recommend hitting the databases directly.
-- Determines the size of each TFS database on the SQL Server instance.
-- This is especially useful for determining how much space you need for a backup, migration, etc.
DECLARE @SQL VARCHAR(MAX)

SELECT @SQL = COALESCE(@SQL + CHAR(13) + 'UNION ALL', '')
    + ' SELECT ''' + name + ''' AS DBNAME,'
    + ' SUM(size * 8 / 1024.0) AS MB FROM ' + QUOTENAME(name) + '.dbo.sysfiles'
FROM sys.databases
WHERE name LIKE 'tfs_%'
ORDER BY name

EXECUTE (@SQL)
/*==============================================*/
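If you prefer running it from PowerShell, here is a minimal sketch; it assumes the SQLPS module (which provides Invoke-Sqlcmd) is installed, that you saved the script above to a file, and it uses a hypothetical instance name:

# Run the saved script against the TFS data tier and show the largest databases first.
Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance 'TFSSQL01' -InputFile '.\Get-TfsDbSizes.sql' |
    Sort-Object MB -Descending |
    Format-Table DBNAME, MB -AutoSize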
Keep it handy for other systems why don't ya :-)
C#/XAML App suspend: Bug/Feature – OnNavigatedFrom called during app Suspend
Half past Bold?
Dawn of a new day: Power BI for Office 365 is here!
Today, we announced that Power BI for Office 365 – our self-service business intelligence solution designed for everyone – is generally available. We have written about the incredible power of Power BI on this blog before, while it was in preview. But now Power BI is available and empowers all kinds of business users to find relevant information, pull data from Windows Azure and other sources, and prepare compelling business intelligence models for analysis, visualization, and collaboration.
Some of the resources you can leverage include:
Plus, you can read what the press is saying by going here. It really is the dawn of a new day as we have transformed the way people will realize value from data and ushered in the age of modern business intelligence. May the cloud be with you! – Jon C. Arnold
Safer Internet Day - Microsoft welcomes Queensland Police Service, Western Australia Police and Neighbourhood Watch Australasia to ThinkUKnow
Microsoft welcomes Queensland Police Service, Western Australia Police and Neighbourhood Watch Australasia to the ThinkUKnow program. The partnership will help deliver more training to more parents on this extremely important topic, resulting in a safer online environment for children. We are proud to be a part of the program supporting Safer Internet Day.
RegSaveKey() is failing with error ERROR_NO_SYSTEM_RESOURCES (1450)
One of our senior EMEA Windows SDK engineers, Nitin Dhawan, got an interesting issue where an application was failing to save a registry key to a file on the local disk using RegSaveKey(). Investigating further, he found that RegSaveKey() was failing with Win32 error 1450, as reported by GetLastError().
Winerror.h describes error 1450 as follows:
//
// MessageId: ERROR_NO_SYSTEM_RESOURCES
//
// MessageText:
//
// Insufficient system resources exist to complete the requested service.
//
#define ERROR_NO_SYSTEM_RESOURCES 1450L
Registry APIs use paged-pool memory for their operations. Paged-pool memory is limited and shared by multiple components on the system, so if the memory allocation requested by RegSaveKey() cannot be satisfied, the call fails with ERROR_NO_SYSTEM_RESOURCES. This can happen even when paged-pool memory is available but is fragmented such that no single free block is large enough to fulfill the request.
In this case, RegSaveKey() was failing with ERROR_NO_SYSTEM_RESOURCES because the registry key the application was trying to save to a file on the local disk was about 16 MB in size.
Note: This link on MSDN has information about Registry Element Size Limits (Windows). The recommendation is to keep a registry value under 1 MB; any value larger than 2 KB should be stored in a file, with the application using the file rather than the registry.
The error ERROR_NO_SYSTEM_RESOURCES may occur in other scenarios as well; see the KB articles linked below for references:
You receive error 1450 "ERROR_NO_SYSTEM_RESOURCES" when you try to create a very large file in Windows XP
http://support.microsoft.com/kb/913872
Backup program is unsuccessful when you back up a large system volume
http://support.microsoft.com/kb/304101
Written by: Nitin Dhawan, Senior Support Engineer
Blog Reviewed by: Jeff Lambert (Sr. Escalation Engineer)
February Updates to the Windows Azure Toolkit for Eclipse – SSL Support, plus new JDKs, Windows Azure configurations and more
Microsoft Open Technologies, Inc., has released the February preview of the Windows Azure Toolkit for Eclipse. This release includes multiple updates since our October 2013 release, including SSL support, additional support for the latest versions of GlassFish and the Azul Zulu OpenJDK package, a new option to choose the A5 instance on Windows Azure, Windows Server 2012 R2, some tweaks to the menu, and a new “Auto” option for private endpoints. Have a look at the documentation update for full details.
Support for SSL
Instead of having the user manually configure every Java-based web application server running on Windows Azure to accept SSL certificates and authentication, which varies from server to server, our engineering team has developed what we call SSL offloading. Offloading allows you to easily enable HTTPS support (one of our most requested features) without requiring configuration of SSL on your Java application server. Instead, SSL authentication is set up by the Toolkit automatically, using IIS and Application Request Routing (ARR) under the hood in your VM. After decryption, your Java web application server receives (and responds with) plain HTTP. This also works in conjunction with sticky sessions for session persistence and the ACS filter for user authentication.
To enable SSL offloading, select the Worker Role you want to work with in Role Properties, then click on Enable SSL Offloading (HTTPS), as shown below. You will be asked to confirm an endpoint change to 443 (HTTPS) and provide a certificate. Note that this change will only happen for this role. This allows you to have some roles without SSL, for example for a Website home page, but other roles with SSL enabled for access only by authenticated users or requiring more secure communication.
Customizable certificate name (CN) in the self-signed certificate creation UI
You may already be familiar with the Toolkit’s UI for easily generating self-signed certificates for testing purposes. (It’s recommended that you use a certificate issued by a recognized SSL certificate provider for staging and production, to avoid users seeing browser warnings about untrusted connections and unsigned certificates.)
Previously, all new certificates were generated with the same hard-coded Common Name (CN). Now you can specify your own name, which helps you track and manage multiple certificates used for different purposes (like SSL vs. Remote Desktop) in the Windows Azure portal. Here’s a sample of the enhanced UI in action:
Support for GlassFish OSE 4
GlassFish OSE 4 joins the multiple versions of Tomcat, Jetty, JBoss and GlassFish OSE 3 as the latest option to include as part of your deployment package. As before, you can test your deployments locally in Eclipse with full emulation before you deploy.
Here’s the full list of Application Servers recognized by the Toolkit in this release:
More Options for Azul’s Zulu Open JDK package
In July we announced a partnership with Azul Systems, and in September Azul Systems released Zulu, an OpenJDK build for Windows Azure leveraging the latest advancements from the open source community. Zulu has been an option under the 3rd party JDK deployment options since September’s announcement, and since then Zulu v7 update 40 and now update 45 have been added as options that the Toolkit can deploy automatically under the hood in Windows Azure, without you having to download them to your local computer first.
Here’s an example of the new JDK selection, showing a deployment being configured with the latest version of Zulu selected to be part of the deployment package:
In other OpenJDK news, Java developers working with the latest Azul Zulu OpenJDK v1.7 package on 64 bit Windows Server machines can now automate the process using the Microsoft Web Platform Installer (WebPI). Full details here.
New Features when Publishing: Select a Target OS
In the October release we moved the target OS setting from the project properties to a more prominent place in the publishing process. In this release we’ve added Windows Server 2012 R2 as a target OS option:
A5 VM support
In this release we’ve also added support for the Windows Azure A5 instance configuration. The A5, A6, and A7 instances provide larger amounts of memory, better suited for high-throughput applications. Detailed configurations of these instances are available here.
A new Toolbar button look, and a new button for creating Self-Signed Certificates
The engineering team engaged a designer to create new menu icons matching the standard Eclipse “flat” look. They’re all still there in the same place, but they look a little different now. We’ve also added a button for the self-signed certificate creation wizard. The graphical face-lift of the Toolkit is still a work in progress, with more coming later, primarily motivated by Windows Azure’s own latest graphical scheme updates.
Set private endpoint ports to Auto
Now you can set a private port to “auto” for input endpoints and internal endpoints, the equivalent of using “*” as the private port number in CSDEF. Previously, you could only assign a specific port number. The auto setting lets you rely on Windows Azure, when appropriate, to assign a free port number to that endpoint.
Getting the Toolkit
Here are the complete instructions to download and install the Windows Azure Toolkit for Eclipse, as well as updated documentation.
Ongoing Feedback
We listen and respond to the community; you are our compass that tells us we’re going in the right direction! We value your feedback on how we can make it easier to test and deploy your Java applications on Windows Azure, and we appreciate code contribution proposals. As always, let us know how the latest release works for you and how you like the new features! To send feedback or questions, just use MSDN Forums or Stack Overflow.
Support for Disjunctive/OR Filter Clauses in the Graph Directory Service
We continue to listen to customers and improve the Graph Directory Service development experience. Today we'd like to announce support for queries with disjunctive/OR filter clauses. The previous version of the service allowed filter clauses to be joined solely by the AND logical operator, i.e. such that all filter clauses must evaluate to true for the directory objects to match. With the support of both AND and OR logical operators, more comprehensive filtered searches can be applied to the directory.
Disjunctive/OR Filter Clause Support
The logical OR operator allows for a series of filter clauses to evaluate to true if at least one of the clauses evaluates to true. A simple example of such a filter is as follows:
GET /contoso.com/users?api-version=2013-11-08&$filter=startswith(displayName,'james') or startswith(givenName,'james')
This request returns all User objects that have either a displayName or a givenName value starting with "james". Note that the api-version argument in the above example is 2013-11-08, but the logical OR operator is usable across all service versions, e.g. 2013-04-05.
Paging
Please note that paging across filtered searches is not currently possible. The service will return the first page of results along with an "odata.nextLink" property value to inform the client that additional pages are available; however, following this relative URI will result in a 400 Bad Request error. We are actively working on addressing this limitation and will make an announcement when it is resolved.
Additional Examples
A common usage scenario for user interfaces is to offer a simple search using a single text field. Such an interface might make the following request, given an input of "mary":
GET /contoso.com/users?api-version=2013-11-08&$filter=startswith(displayName,'mary') or startswith(givenName,'mary') or startswith(surname,'mary') or startswith(mail,'mary') or startswith(userPrincipalName,'mary')
Note that there are many properties that might contain the text the user is searching for. Another similar case might process the input text "John Riddell" by searching for Contact objects either matching the displayName, or when the first token matches the givenName and the second token matches the surname:
GET /contoso.com/contacts?api-version=2013-11-08&$filter=startswith(displayName,'John Riddell') or (startswith(givenName,'John') and startswith(surname,'Riddell'))
To find an enabled user that has an email address or user principal name matching "jonlawr@contoso.com":
GET /contoso.com/users?api-version=2013-11-08&$filter=accountEnabled eq true and (userPrincipalName eq 'jonlawr@contoso.com' or mail eq 'jonlawr@contoso.com')
To find a user having a SMTP proxy address or userPrincipalName matching "william@contoso.com":
GET /contoso.com/users?api-version=2013-11-08&$filter=userPrincipalName eq 'william@contoso.com' or proxyAddresses/any(x:startswith(x,'smtp:william@contoso.com'))
Note the "any" function syntax, which must be used when searching against a multi-valued property such as proxyAddresses. As another example, if you have a number of userPrincipalName values and you would like to find any matching users:
GET /contoso.com/users?api-version=2013-11-08&$filter=userPrincipalName eq 'mary@contoso.com' or userPrincipalName eq 'jonlawr@fabrikam.com' or userPrincipalName eq 'james@contoso.com'
The above examples do not include all object types, but such queries can be applied to other object types, with their respective properties.
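For illustration, here is a minimal PowerShell sketch that issues one of these filtered queries over HTTP. The https://graph.windows.net host, the contoso.com tenant, and the $token variable (an OAuth2 access token acquired beforehand) are assumptions for the sketch, not part of the examples above:

# Build the $filter expression, escape it for the query string, and call the service.
$filter = "startswith(displayName,'mary') or startswith(givenName,'mary')"
$uri = 'https://graph.windows.net/contoso.com/users?api-version=2013-11-08&$filter=' +
    [Uri]::EscapeDataString($filter)
(Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $token" }).value |
    Select-Object displayName, userPrincipalName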
Filter Expression Constraints
There are a few constraints that must be adhered to when performing a filtered search. These are as follows:
- When the search expression contains only AND-joined filter clauses, the maximum allowed number of clauses is 25.
- When the search expression contains at least one set of OR-joined filter clauses, the maximum allowed number of clauses is 10.
- The maximum allowed expression height is 3, where an example of a tree of height 3 is ((A and B) or (C and D)).
These constraints are imposed so that we can maintain efficient response guarantees. If any one of these constraints is violated, an HTTP 400 Bad Request error response is returned, along with an appropriate error message. We are actively working on improving the developer experience, so please let us know if any of these constraints impede your ability to obtain the desired search results.
Filterable Properties
The following sub-sections list, for each object type, the properties that can be used in a $filter query argument as of this posting (February 2014). The lists are irrespective of service version: if the type and property are present in a given service version, the property should be filterable via $filter.
Application
- appId
- availableToOtherTenants
- identifierUris
Contact
- city
- country
- department
- dirSyncEnabled
- displayName
- givenName
- jobTitle
- lastDirSyncTime
- proxyAddresses
- state
- surname
Device
- accountEnabled
- alternativeSecurityIds
- deviceId
- devicePhysicalIds
- dirSyncEnabled
- displayName
- lastDirSyncTime
Group
- dirSyncEnabled
- displayName
- lastDirSyncTime
- mailNickname
- proxyAddresses
- securityEnabled
ServicePrincipal
- accountEnabled
- appId
- displayName
- publisherName
- servicePrincipalNames
- tags
User
- accountEnabled
- city
- country
- department
- dirSyncEnabled
- displayName
- givenName
- immutableId
- jobTitle
- lastDirSyncTime
- proxyAddresses
- state
- surname
- usageLocation
- userPrincipalName
- userType
As always, we'd love to hear any feedback from the community!
Thanks,
Robert
Azure Active Directory Team
Microsoft Corporation
APPLICATION DEVELOPMENT – February 2014 Readiness Update
What’s new in Visual Studio 2013
Whether you develop for the web, Windows, or Windows Phone, and whether you are using C#, XAML, HTML, or Visual Basic, this no-cost, on-demand course is for you.
You’re Invited to the Microsoft AX Enterprise Academy Presales Advanced Workshop (Sydney)
We are pleased to invite you to a workshop designed to equip presales consultants to successfully match product capabilities to a prospective customer’s needs.
This 3-day workshop is designed to equip the presales consultant with:
· The ability to understand and apply Microsoft Dynamics AX 2012 technology and functionality to customer scenarios and needs.
· The ability to demonstrate the technology and functionality in Microsoft Dynamics AX 2012, allowing the customer to innovate in their business with a modern architecture and user-friendly ERP solution.
· The ability to articulate the value of Microsoft Dynamics AX 2012 over competitive ERP products.
Registration: CLICK TO REGISTER
Date: May 6-8, 2014
Time: 8:00am – 5:00pm
Location: Microsoft North Ryde Sydney
Fee: $950
Audience: Microsoft Dynamics AX Presales Specialists
Level: 300
Pre-Requisites
· Students must have completed training in, and mastered, one of these functional areas of the solution: Financials, Supply Chain Management, or Reporting.
· At least one Microsoft Dynamics AX 2012 certification
· Experience in a Microsoft Dynamics AX 2012 project. You may have been a lead or had a supporting role; either way, real-life experience is highly recommended in order to get the most out of the workshop.
How to Register
To register for the course, you will first be asked to complete an assessment. Your responses will be evaluated, and upon approval you will be able to complete your registration. Please do not make travel arrangements until your participation has been confirmed.
Dynamics AX Workflow Processor Demo Shortcut
If you work with Dynamics AX, you are probably familiar with the Tutorial_WorkflowProcessor form. Many processes are driven by the workflow engine within Dynamics AX, and in a production system, workflow tasks are executed by the "Workflow messaging processing" batch job. However, the minimum recurrence interval supported by the AX batch job infrastructure is 1 minute, so if you are doing a demonstration, you could wait up to a minute for a workflow to complete. Fortunately, you can open the Tutorial_WorkflowProcessor form that ships with AX from the AOT to speed up the process. In the past, I kept a development workspace open to launch this form. This week, someone gave me a tip that makes the form easier to access.
First, open the development environment and add a new node under AOT\Menu Items\Display. Set the Label property to "Workflow processor" and the Object property to "Tutorial_WorkflowProcessor". Now go to AOT\Menus\GlobalToolsMenu. Add a new Menu Item and set the MenuItemName property to "Tutorial_WorkflowProcessor" (the one you just created). Now, you have easy access to the form everywhere in AX via File > Tools > Workflow processor.
How to relocate the Package Cache
Visual Studio can require a lot of space on the system drive, in part because its deployment packages are cached locally. Based on years of data collected from customers’ installations through the Customer Experience Improvement Program, we took advantage of the package caching feature in Burn – the Windows Installer XML (WiX) chainer – to eliminate most errors during repair, servicing, and even uninstall. This was not a popular decision with some customers. For years even I pushed back against caching Visual Studio deployment packages because of the impact on drive space, but as hard drive capacities increased, market studies showed little reason not to cache given the increased reliability of deployment.
We understand, however, that many customers have smaller SSDs, and while sizes will increase and costs decrease over time, some customers are blocked from installing Visual Studio today or have little space left after a successful installation.
I have submitted a feature proposal to WiX to allow control over the Package Cache location, but until then there is a workaround that really highlights some of the virtualization features in Windows.
Disclaimer
This practice has received some testing and has been running under load for a reasonable amount of time. While it uses documented features of Windows, it is not an officially supported practice and may leave your machine in a corrupted state should you disconnect the secondary drive or fail to properly secure the folder mount point and its contents.
Workaround
You can move the contents of the Package Cache to a partition on another drive – copying the ACL and owner, which is very important both for security and because some programs may not trust an ACL or owner different from what they expect – and then mount that partition into an empty folder. But rather than dedicate an entire partition on a physical disk – which isn’t always easy to reconfigure on the fly – we will create a virtual disk (VHD) on another drive.
By creating an expandable virtual disk, we can declare a maximum size much larger than necessary – even larger than the host drive itself – while consuming only as much room as the content requires, growing as the content grows. This way, should you ever need to allocate more space, you can simply dismount the VHD, move it to another disk, and remount it. No need to recopy files.
Mounting a VHD into an empty directory also maintains the mount across reboots – something not currently supported when mounting a VHD for drive access.
System requirements
Support for creating and mounting VHDs is built into Windows 7 and newer, with support for VHDs larger than 2 TB added in Windows 8 using the newer VHDX format.
Manual walkthrough
To show in more depth how this works – and how you might adapt it for your own use – I will use a couple of built-in programs: diskpart.exe and mountvol.exe. We could also do all of this in PowerShell with the right Windows features enabled, but I will cover that in the scripted section.
- Open an elevated command prompt.
- Run diskpart.exe to start the disk partitioning utility:
diskpart
- Create a large (ex: 1TB), expandable VHD on whatever secondary disk (ex: X:) you prefer with security matching the source directory’s security:
create vdisk file="X:\Cache.vhd" type=expandable maximum=1048576 sd="O:BAG:DUD:P(A;;FA;;;BA)(A;;FA;;;SY)(A;;FRFX;;;BU)(A;;FRFX;;;WD)"
- Select the VHD and create a partition using all available space:
select vdisk file="X:\Cache.vhd"
attach vdisk
create partition primary
- Format the volume that was created automatically and temporarily assign a drive letter (ex: P:):
format fs=ntfs label="Package Cache" quick
assign letter=P
exit
- After exiting diskpart.exe, move any existing per-machine payloads from the Package Cache along with their security settings:
robocopy "%ProgramData%\Package Cache" P:\ /e /copyall /move /zb
- Recreate the Package Cache directory and set up the ACL and owner as before:
mkdir "%ProgramData%\Package Cache"
echo y | cacls "%ProgramData%\Package Cache" /s:"O:BAG:DUD:PAI(A;OICIID;FA;;;BA)(A;OICIID;FA;;;SY)(A;OICIID;FRFX;;;BU)(A;OICIID;FRFX;;;WD)"
- Run mountvol.exe without any parameters first and look for the volume name that has the drive letter you assigned to the VHD, then use that volume name with mountvol.exe again to mount the volume into the empty Package Cache directory:
mountvol
mountvol "%ProgramData%\Package Cache" \\?\Volume{a525b826-8a0c-11e3-be94-00249b0716f5}\ - Run diskpart.exe again and remove the drive letter assignment from the volume (should be in partition 1 of the VHD):
select vdisk file="X:\Cache.vhd"
select partition 1
remove letter=P
exit
- Non-boot VHDs are not automatically mounted, so before you reboot you need to make sure the VHD will be mounted again whenever the machine is started. Write a simple script for diskpart.exe to execute on startup. If you’re doing this on a laptop, edit the scheduled task afterward to allow it to run on batteries.
echo select vdisk file=X:\Cache.vhd > X:\Cache.txt
echo attach vdisk >> X:\Cache.txt
schtasks /create /ru system /sc onstart /rl highest /tn "Attach Package Cache" /tr "%SystemRoot%\System32\diskpart.exe /s X:\Cache.txt"
After exiting diskpart.exe, you now have a VHD hosted on another drive mapped into the new Package Cache directory, and the VHD will be remounted into the directory whenever the machine is rebooted. If you look at the mount point, you will also see that its icon, description, and size are different.
The size reported in Windows Explorer is merely the maximum, not how much space is consumed within the virtual disk. In fact, you probably care less about that than about how much space the VHD itself is consuming. Browse to the folder where you created the VHD (ex: X:\Cache.vhd) and you will see the actual file size.
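From PowerShell, a quick equivalent (using the example path from above):

# Report the actual on-disk size of the expandable VHD, not its declared maximum.
'{0:N1} MB' -f ((Get-Item X:\Cache.vhd).Length / 1MB)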
When bundles have been uninstalled and packages removed from the Package Cache, you can attempt to compact the VHD to reclaim space on the host disk.
- Open an elevated command prompt.
- Run diskpart.exe:
diskpart
- Select the VHD and attempt to compact it:
select vdisk file="X:\Cache.vhd
compact vdisk
exit
Should you ever need to move the file, you can use mountvol.exe to dismount the VHD, copy it to another attached drive, and remount the VHD.
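If you have the Hyper-V module for Windows PowerShell described in the next section, a minimal sketch of that move might look like the following; the Y: drive is hypothetical, and the folder mount point should reconnect on remount because the volume GUID travels with the VHD:

# Dismount the VHD, move it to another attached drive, and remount it (hypothetical paths).
Dismount-VHD -Path X:\Cache.vhd
Move-Item X:\Cache.vhd Y:\Cache.vhd
Mount-VHD -Path Y:\Cache.vhd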
Scripted solution
Once the Hyper-V Module for Windows PowerShell is installed on supported Windows platforms, you can easily script this and execute it on remote machines running an elevated WinRM endpoint. PowerShell is a powerful object-oriented shell that provides the same compositional techniques as any modern programming language.
I have created an example PowerShell script you can use locally on any Windows 7 machine or newer, or run on remote machines using the Invoke-Command cmdlet.
As in the manual walkthrough, when bundles have been uninstalled and packages removed from the Package Cache, you can compact the VHD to reclaim space on the host disk – this time from PowerShell.
- Open an elevated PowerShell prompt.
- Run the following command to get and optimize (compact) the VHD you passed as a parameter to Move-WixPackageCache.ps1 (ex: X:\Cache.vhd):
get-vhd X:\Cache.vhd | dismount-vhd -passthru | optimize-vhd -passthru | mount-vhd
Summary
Mounting virtual disks hosted on other attached disks can be an effective workaround for reducing space used on the system drive. We will continue to explore options for locating more data on the chosen installation drive, but there will always be some components that need to be installed to the Windows or Program Files directories. There are known issues when redirecting those and other directories to a drive other than where Windows is installed, so leaving plenty of space on your system drive is always recommended.
TurboTax 2013 installation Error: ".NET Framework Verification Tool can't be found"
// This is not really a technical blog post, but I decided to write something because I was frustrated.
It is tax season again, and being eager to get some money back, I bought TurboTax 2013 Premier in January but found out that I couldn't install it. The error is ".NET Framework Verification Tool can't be found". I guessed it would be a common problem, so I searched for it and found a few links at the top of the results. I tried everything in the popular links below, and none of it worked. I even uninstalled the .NET Frameworks and reinstalled them. I also tried the blog suggested by Intuit. I worked on it for a couple of hours, but the error persisted. I almost gave up and thought about installing it on my corporate machine. But as a software engineer I believed there must be a way, and I wanted to solve the issue since I knew I could use the Orca tool to modify .msi files. So I started to explore the files on the CD, and it turned out the solution is super simple. I didn't even need to use Orca.
https://ttlc.intuit.com/questions/1900095-troubleshooting-problems-with-the-microsoft-net-framework
http://blog.crosbydrive.com/?p=281
The solution is to run the file "TurboTax 2013\TurboTax 2013 Installer.exe" directly, which skips the annoying verification step. I believe this is a bug in TurboTax 2013, and I am disappointed that Intuit didn't fix it or even provide a solution for it.
Updated Yammer App for SharePoint
I have had a few customers ask me why their Yammer App for SharePoint is no longer loading the initial configuration screen that allows you to choose what the app is connected to.
After adding the Yammer App to a page on your site you should see…
but instead you see this screen forever…
All you need to do is install the updated Yammer App for SharePoint from the SharePoint App Store.
After the latest update is installed you should have version 1.0.0.5 as shown in the App Details pane…
and your embedded Yammer apps should load as expected.
DevCon 2014: the most resort-like conference yet
Dear software developers and testers! Preparations for DevCon 2014, our largest conference designed especially for you, are in full swing! Today we are ready to share a description of the venue: the Yakhonty nature resort.
DevCon 2014 is a unique event that, for the third year in a row, will be held outside the city at the picturesque Yakhonty countryside resort.
To remember (or find out) how it went last year at DevCon 2013, take a look at the 2013 photo and video report.
See you soon at the conference! A reminder: tickets for the previous DevCon sold out three months before the conference. Hurry to reserve your place at DevCon 2014!
What’s new in the Azure Mobile Services client SDKs 1.1.3
Version 1.1.3 of the client SDKs for the following platforms has been released:
Managed (C# / VB)
- New authentication provider: WindowsAzureActiveDirectory
- When logging in with the overload that takes the provider as a string, both "WindowsAzureActiveDirectory" and "AAD" are supported (case-insensitive).
- Fixed issue 213, which prevented custom APIs from sending query string parameters starting with ‘$’
iOS
- Added support for both versions of the provider name for Windows Azure Active Directory (@"WindowsAzureActiveDirectory" and @"aad")
JavaScript (Windows Store / HTML)
- Added support for both versions of the provider name for Windows Azure Active Directory ('WindowsAzureActiveDirectory' and 'aad')
- The HTML/JS SDK can be found in the CDN at http://ajax.aspnetcdn.com/ajax/mobileservices/MobileServices.Web-1.1.3.min.js.
Android
- New authentication provider: WindowsAzureActiveDirectory
- When logging in with the overload that takes the provider as a string, both "WindowsAzureActiveDirectory" and "AAD" are supported (case-insensitive).
- Support for system properties in tables: __createdAt, __updatedAt and __version.
- Support for conditional updates, triggered by the version member of types. Two new exception types can be returned when a conditional update fails: MobileServicePreconditionFailedException and MobileServicePreconditionFailedExceptionBase, which expose a property containing the object returned by the service for the failed update.
DSC Diagnostics Module– Analyze DSC Logs instantly now!
Introduction
xDscDiagnostics is a PowerShell module consisting of two simple functions that can help analyze DSC failures on your machine: Get-xDscOperation and Trace-xDscOperation. These functions help identify all the events from past DSC operations run on your system, or on any other computer (note: you need a valid credential to access remote computers). Here, we use the term DSC operation to describe a single unique DSC execution from start to end. For instance, Test-DscConfiguration would be a separate DSC operation. Similarly, every other cmdlet in DSC (such as Get-DscConfiguration, Start-DscConfiguration, etc.) would be identified as a separate DSC operation.
The two cmdlets are summarized here and explained in more detail below. Help for each cmdlet is available when you run Get-Help <cmdlet name>.
Get-xDscOperation
This cmdlet lets you find the results of the DSC operations that ran on one or more computers, and returns an object containing the collection of events produced by each DSC operation.
For instance, in the following output, we ran three commands; the first passed and the others failed. These results are summarized in the output of Get-xDscOperation.
Figure 1 : Get-xDscOperation that shows a simple output for a list of operations executed in a machine
Parameters
- Newest – Accepts an integer value indicating the number of operations to display. By default, it returns the 10 newest operations. For instance:
Figure 2 : Get-xDscOperation can display the last 5 operations’ event logs
- ComputerName – Accepts an array of strings, each containing the name of a computer from which you’d like to collect DSC event log data. By default, it collects data from the host machine. To enable this feature, you must run the following command on each remote machine, in elevated mode, so that the firewall will allow collection of events:
New-NetFirewallRule -Name "Service RemoteAdmin" -Action Allow
- Credential – A parameter of type PSCredential that grants access to the computers specified in the ComputerName parameter (a combined usage sketch follows this list).
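Here is a minimal sketch combining these parameters (the computer names are hypothetical):

# Get the 5 newest DSC operations from two remote machines, using an explicit credential.
$cred = Get-Credential
Get-xDscOperation -Newest 5 -ComputerName 'Server01','Server02' -Credential $cred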
Returned object
The cmdlet returns an array of objects each of type Microsoft.PowerShell.xDscDiagnostics.GroupedEvents. Each object in this array pertains to a different DSC operation. The default display for this object has the following properties:
- SequenceID: Specifies the incremental number assigned to the DSC operation based on time. For instance, the most recently executed operation has a SequenceID of 1, the second-to-last DSC operation has a SequenceID of 2, and so on. This number is another identifier for each object in the returned array.
- TimeCreated: A DateTime value that indicates when the DSC operation began.
- ComputerName: The computer name from where the results are being aggregated.
- Result: A string value, "Failure" or "Success", indicating whether that DSC operation had an error.
- AllEvents: This is an object that represents a collection of events emitted from that DSC operation.
For instance, if you’d like to aggregate the results of the last operation from multiple computers, the output looks like this (a drill-down sketch follows the figure):
Figure 3 : Get-xDscOperation can display logs from many other computers at once.
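To drill into one operation from such output, here is a small sketch using the SequenceID and AllEvents properties described above:

# Grab the most recent operation (SequenceID 1) and list the events it emitted.
$ops = Get-xDscOperation
($ops | Where-Object { $_.SequenceID -eq 1 }).AllEvents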
Trace-xDscOperation
This cmdlet returns an object containing a collection of events, their event types, and the message output generated by a particular DSC operation. Typically, when you find a failure in any of the operations using Get-xDscOperation, you will want to trace that operation to find out which of its events caused the failure.
Parameters
- SequenceID: The integer value assigned to an operation on a specific computer. By specifying a sequence ID of, say, 4, the trace for the DSC operation that was 4th from the last will be output.
Figure 4: Trace-xDscOperation with sequence ID specified
- JobID: The GUID value assigned by the LCM to uniquely identify an operation. When a JobID is specified, the trace of the corresponding DSC operation is output (see the sketch following Figure 6).
Figure 5: Trace-xDscOperation taking JobID as a parameter, outputting the same record as above – the operation simply has two identifiers, JobID and SequenceID
- ComputerName and Credential: These parameters allow the trace to be collected from remote computers. As with Get-xDscOperation, it is necessary to run the following command on the remote machine so that the firewall allows collection of events:
New-NetFirewallRule -Name "Service RemoteAdmin" -Action Allow
Figure 6: Trace-xDscOperation running on a different computer with the -ComputerName option
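Putting the two identifier styles together, a minimal sketch (the GUID is only a placeholder):

# Trace the 4th most recent operation by sequence ID, or a specific operation by job ID.
Trace-xDscOperation -SequenceID 4
Trace-xDscOperation -JobID '00000000-0000-0000-0000-000000000000'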
Note: Since Trace-xDscOperation aggregates events from the Analytic, Debug, and Operational logs, it will prompt the user to enable these logs. If the logs are not enabled, an error message is displayed stating that these events cannot be read until the logs are enabled. However, the trace from the other logs is still displayed, so this error can be ignored.
Returned object
The cmdlet returns an array of objects, each of type Microsoft.PowerShell.xDscDiagnostics.TraceOutput. Each object in this array contains the following fields:
- ComputerName: The name of the computer from where the logs are being collected.
- EventType: An enumerator field containing information on the type of event. It can be any of the following:
a. Operational: The event is from the Operational log.
b. Analytic: The event is from the Analytic log.
c. Debug: The event is from the Debug log.
d. Verbose: These events are output as verbose messages during execution, and they make it easy to identify the sequence of events that are published.
e. Error: These are error events. Note that by looking at the error events first, you can usually find the reason for the failure right away.
- TimeCreated: A DateTime value indicating when the event was logged by DSC.
- Message: The message that DSC logged into the event log.
There are some fields in this object that are not displayed by default but provide more information about the event. These are:
- JobID: The job ID (in GUID format) specific to that DSC operation.
- SequenceID: The SequenceID unique to that DSC operation on that computer.
- Event: The actual event logged by DSC, of type System.Diagnostics.Eventing.Reader.EventLogRecord. This can also be obtained by running the Get-WinEvent cmdlet, as in the blog linked here. It contains more information, such as the task, event ID, and level of the event.
Hence, we can obtain more information on the events by saving the output of Trace-xDscOperation into a variable. To display all the events for a particular DSC operation, the following command suffices:
(Trace-xDscOperation -SequenceID 3).Event
This displays the same result as the Get-WinEvent cmdlet, as in the output below.
Figure 7 : Output that is identical to a get-winevent output. These details can be extracted using the xDscDiagnostics module as well
Ideally, you would first use Get-xDscOperation to list the last few DSC configuration runs on your machines. Then you can dissect any single operation (using its SequenceID or JobID) with Trace-xDscOperation to find out what it did behind the scenes.
In summary, xDscDiagnostics is a simple tool for extracting the relevant information from DSC logs so that you can diagnose operations across multiple machines easily. We encourage you to use it to simplify your experience with DSC.
Inchara Shivalingaiah
Software Developer
Windows PowerShell Team