
Retail Location Analytics using Power BI


This blog post was authored by Shish Shridhar, Director of Business Development - Retail, Microsoft

By combining demographic data like median income, education levels, and median age with customer purchasing data such as preferences, past purchases, and online behavioral data, retailers gain a more in-depth understanding of customer needs and wants than past purchase data alone can provide. Power BI provides powerful capabilities for combining data from various sources and enabling visual correlation; you can also use the Data Analysis Toolpak in Excel and run a correlation coefficient on the combined data.

To test this out, I started with the assumption that Seattle has the most Starbucks and that demographics affect the number of stores. To find out the answer, I used Power BI and Excel. Here is what I did:

I looked for OData sources relevant to retail and found one via Socrata: https://opendata.socrata.com/

 

I ran a search for Starbucks to see if there was a list of Starbucks locations from around the world. I did find an OData source with a listing of all the Starbucks locations around the world: https://opendata.socrata.com/Business/All-Starbucks-Locations-in-the-World/xy4y-c4mk

 

Here is the OData source link to access the data: http://opendata.socrata.com/OData.svc/xy4y-c4mk

Using Power Query for Excel, I was able to access the data using the OData option. This returns 20,621 rows of data containing details of Starbucks locations around the world:

 

To get better insights from the data, I used Power View for Excel to create visualizations. A quick drag and drop of Brand against the count of StoreIds showed me the brands represented in the data:

 

I was curious about the countries with the most Starbucks, so I dragged in the Country information along with a count of the StoreIds. Here is the result:

 

And interestingly, Seattle is not the city with the most Starbucks, as I’d assumed:

 

Power Map for Excel enables visualizing this data on a Map as a layer of information:

I was able to obtain US Census Data from Neustar and I imported this data into Excel. This data included Zip codes as well as detailed information about every Zip code. I could potentially use this information to correlate things like median age, median income, population around each of the Starbucks in the US. The Data looks like this:

 

When I overlay the Census Data on top of the Starbucks Store Locations, I get a visual correlation between demographics data and Starbucks locations:

 

Here is a Power Map for Excel video of two layers: Starbucks store locations with Median Income by Zip code:

There are several sources of interesting public data that you can use to analyze retailers: proximity analysis of retailers and their competition using data from Yelp and Foursquare; correlating retail Yelp ratings and Foursquare check-ins against demographic data; and correlating weather data against store performance.

Here's the actual live visualization I created with Power View:

You can check out some more examples at my blog.

.NET Micro Framework now supports Visual Studio 2013


Today the .NET Micro Framework team is releasing a beta update of the .NET Micro Framework SDK that adds support for Visual Studio 2013. The release also contains other improvements that will benefit developers and hardware partners, making the install and update experience better.

Check out the .NET Micro Framework Team blog, and the Netmf.com site to learn more about .NET Micro Framework and this release. Read the Microsoft Open Technologies blog to learn more about this open source project and community engagement.

You can download the .NET Micro Framework SDK 4.3.1 (SDK R2 Beta) update from our Codeplex site. Please try it out, provide feedback and start contributing to the open source project.

Support for Visual Studio 2013

The .NET Micro Framework SDK now supports Visual Studio 2013. That’s welcome news, since we’ve heard many requests for this support. In the process of integrating the SDK into Visual Studio 2013, we adopted a new architectural approach that decouples the .NET Micro Framework SDK from Visual Studio. You can now use the Visual Studio version that works best for you, or multiple versions at once. .NET Micro Framework Visual Studio integration is delivered via a Visual Studio VSIX package, which is independent of a particular Visual Studio version.

The new approach also helps hardware partners. .NET Micro Framework hardware vendors can now support multiple Visual Studio versions with a given piece of hardware and firmware. That also streamlines the overall experience for app developers, too.

A first glimpse at the upcoming support for Visual Studio “14”

The .NET Micro Framework team is looking ahead and has already started to enable support for Visual Studio “14”. There still is more work to do to fully support Visual Studio “14” but you can already give it a try if you are an early adopter. Please let us know what you think.

.NET Micro Framework is Open Source

The .NET Micro Framework is an open source project from Microsoft, licensed as Apache 2. It is developed by Microsoft engineers assigned to Microsoft Open Technologies and by others in the maker community. Hardware makers are able to use the .NET Micro Framework code from the Codeplex project without any additional license or paying any fee to Microsoft.

Next Steps

The .NET Micro Framework SDK 4.3.1 (R2 Beta) release brings key improvements and updates. Visual Studio 2013 support has been a common request, which the .NET Micro Framework team is glad to deliver. But the team doesn’t want to stop here and is already at work on many more updates and improvements. We want to hear about your experience with the new release, with the work in progress, and what you would like to see coming. Drop the team a note, either by commenting here, on the .NET Micro Framework Team Blog, or on the Codeplex discussion forum.

Issues with OneDrive for business and Document Cache–Don’t mix C2R and MSI installs


With the latest updates to Office, an issue rears its ugly head if you’ve mixed both C2R (Click-to-Run) and MSI installs of any Office 2013 product.  That means Office, Visio, Project, SharePoint Designer, and the OneDrive for Business sync client.

 

If you get into this mess, uninstall either all the C2R installs or all the MSI installs, then get them all consistent:

 

#1 – can’t mix click-to-run and MSI installs on the same machine:

If any are mixed, you need to uninstall.
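
If you are not sure which flavor is on the machine, a quick check along these lines can help before you start removing things. This is only a hedged sketch: the ClickToRun registry path is what Office 2013 C2R installs typically use, and the uninstall-key filter is illustrative, so adjust both for your environment:

# Click-to-Run footprint (path assumed for Office 2013 C2R installs)
Test-Path 'HKLM:\SOFTWARE\Microsoft\Office\15.0\ClickToRun'

# MSI-based Office 2013 products, via the uninstall registry keys (filter is illustrative)
$uninstallKeys = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
                 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'
Get-ItemProperty $uninstallKeys -ErrorAction SilentlyContinue |
    Where-Object { $_.DisplayName -like '*Office*2013*' -or $_.DisplayName -like '*Visio*2013*' -or $_.DisplayName -like '*Project*2013*' } |
    Select-Object DisplayName, DisplayVersion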

To start fresh:

1. Run http://support.microsoft.com/kb/2739501

a. Both MSI then C2R

2. Run ROIScan to ensure nothing is left:

a. http://gallery.technet.microsoft.com/office/68b80aba-130d-4ad4-aa45-832b1ee49602

3. Once you’re all clean:

a. C2R

i. Use the https://portal.office.com/OLS/MySoftware.aspx

ii. SharePoint designer is under “Tools”

iii. Visio is there if your org has a license..

b. MSI

i. Get them all in FULL downloads from MSDN, or wherever you obtain your installs from

# For Microsoft internal folks: for step #1, there is OffScrub in the toolbox.

Global configuration for WebAPI services to add custom extensions


When you create a WebAPI service, you must configure it in code by adding to the generated Register method. There does not seem to be a way to establish a policy-based configuration for ALL WebAPI services on the box. After coming up empty on Bing (and Google!), I decided to read up on Katana and found a hook at the OWIN host level which can be used to inject configuration into these WebAPI apps.


First, write a simple class as follows, give it a strong name by signing it, and then GAC it.


using System.Web.Http;
using Microsoft.AspNet.Identity;
using Microsoft.Owin.Security.Cookies;
using Owin;

namespace WebApi
{
    public class WebApiConfiguration
    {
        // Called by the OWIN host when the owin:appStartup appSetting points at this class
        public void Configuration(IAppBuilder app)
        {
            var config = GlobalConfiguration.Configuration;
            ConfigureAuth(app);
            ConfigureExtensions(config);
        }

        // Inject the enterprise-wide filters, handlers and formatters into every WebAPI app
        public static void ConfigureExtensions(HttpConfiguration config)
        {
            config.Filters.Add(new MyAuthFilter());
            config.MessageHandlers.Add(new MyDelegatingHandler());
            config.Filters.Add(new MyExceptionFilterAttribute());
            config.Filters.Add(new MyActionFilterAttribute());
            config.Formatters.Add(new MyMediaTypeFormatter());
        }

        public void ConfigureAuth(IAppBuilder app)
        {
            app.UseCookieAuthentication(new CookieAuthenticationOptions());
            app.UseExternalSignInCookie(DefaultAuthenticationTypes.ExternalCookie);
        }
    }
}

 

… other classes left out for brevity.


The main method is Configuration which gets called by the OWIN host – but only if it finds a particular setting in the appSettings configuration. What was surprising (to me) is that you can have a ‘global’ set of appSettings in the global web.config file. Just add the following to the web.config file located in c:\windows\microsoft.net\framework\v4.0.30319\config:

 

<appSettings>
  <add key="owin:appStartup" value="WebApiConfiguration, WebApiExtension, Version=1.0.0.0, Culture=neutral, PublicKeyToken=fc3a6ca1f8e86de8" />
</appSettings>


Now any WebAPI app on that box will automatically pull in your custom extensions, so you can have your own hooks into these apps. From an enterprise perspective, you can plug in your own tracing, exception handling, action filters, authentication, and so on, and the app developer need not do anything special, so there is nothing new to learn. It was pointed out to me that this setting will override the Startup class in the app. In our case this is a good thing, as we want to enforce certain behaviors on the app.

BTW I used Web API 2.2 for this POC.

Using signed PowerShell scripts with configuration items and applications


In the past, if you wanted to use a signed PowerShell script as a configuration item detection method or deployment type detection method, you would see an error on the client when it tried to process the script. Configuration Manager 2012 R2 CU2 has a fix in place to mitigate this problem and allow for using signed PowerShell scripts in configuration items and detection methods. You can download CU2 from here: http://support.microsoft.com/kb/2970177. Note that the signer of the script must be pre-trusted or else the script will continue to fail.
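
If you still need to produce the signed detection script in the first place, something along these lines works with a code-signing certificate that the clients already trust; the script file name below is just an example:

# Sign a detection/remediation script with a code-signing certificate from the
# current user's store (the file name is illustrative)
$cert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert | Select-Object -First 1
Set-AuthenticodeSignature -FilePath .\Detect-MyApp.ps1 -Certificate $cert

# Optionally verify the signature before importing the script into the console
Get-AuthenticodeSignature .\Detect-MyApp.ps1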

For this new functionality to work, both the administrator console and the client need to be updated to R2 CU2, as this fix requires changes to both pieces of the product.

First let's talk about configuration item detection methods. To add a signed PowerShell script, when editing a discovery or remediation script you must use "Open" to add the signed script. You cannot copy and paste it. There's a UI change in that the script becomes read only in the admin console until you "Clear" it or change the data type. This is by design. Once the configuration item gets down to the client it should process the signed PowerShell script without any issues.

Things are more complicated with deployment type detection methods. At this time there is no UI support for signed PowerShell scripts, but fortunately the client-side fix that allows signed PowerShell scripts applies to both applications and configuration items. You can use the PowerShell script attached to this blog post to add a signed PowerShell script to a deployment type in a way the client will be able to process properly. The way it works is that you edit the top few lines of the script to match your environment's site code and provider host name. Please also read the information in the header of the script carefully; notably, it will overwrite any previous script in your deployment type. If you edit the script in the admin console after adding it using this script, the script will fail on the client. There are examples in the header of the PowerShell script showing how to use it.

Feel free to let me know if you have any further questions!

Unity Microsoft Virtual Academy now Available!

The Microsoft Virtual Academy for Developing 2D & 3D Games with Unity is now online! Fellow Microsoft Evangelists Adam Tuliper, David Crook, Dave Voyles, and Jason Fox, Indie Game Designer/Artist Matt Newman, Lead Unity Evangelist Carl Callewaert, and of course myself got together and recorded this live about two weeks ago. If you weren't able to catch it then, it's now available for everybody to check out! You can read more by going to the "Unity Microsoft Virtual...

Game Design: Introduction to C++ and DirectX Game Development Jump Start

I am not sure that the Introduction to C++ and DirectX Game Development Jump Start is really a 100-level course, but it isn’t a 200-level class either, so I would say it’s a 150-level course.  The nice thing about the training at the Microsoft Virtual Academy is that it doesn’t have click traps or a lot of commercials.  (And yes, it is ONE big commercial, but no annoying pop-up ads.) This great class is awesome and when you are done you have a nice shell that you can use to make a Windows Store...

Introductory videos on the main Azure features

Our team is using a recording studio to produce videos for the Microsoft Virtual Academy in a better, more interactive format. For anyone who has already watched an MVA video, it is normally a narrated PowerPoint deck plus demo recordings. Our goal is to raise ...

Run Profile Script Builder


The following script can be used to build a basic run profile script for your current environment. The generated script is set up so that a single run profile script runs a delta import/sync cycle during the week, runs a full sync cycle on the first run every Saturday, and a delta cycle on every run after that.

Two files are required to build your basic run profile script.

1. The MAScriptCreator.ps1 file, which does a WMI call to gather the management agents defined in the WMI namespace. The script then builds the run profile script using the MAScript.txt file.

2. The MAScript.txt file, which is used as the base template for the MAScriptCreator.ps1 file.

Both files can be found in the attached ZIP File

MAScriptCreator.ps1

function Get-ScriptDirectory
{
 $Invocation = (Get-Variable MyInvocation -Scope 1).Value
 Split-Path $Invocation.MyCommand.Path
}
$dir = Get-ScriptDirectory

# Read the template that the final run profile script is built from
# (avoid $input, which is a reserved automatic variable in PowerShell)
$template = [System.IO.File]::ReadAllText("$dir\MAScript.txt")

# Query WMI for the management agents defined on this sync server
$names = get-wmiObject -query "Select * from MIIS_ManagementAgent" -Namespace "root/microsoftidentityintegrationserver" | select Name

# Build a quoted, comma-separated list of MA names, e.g. "MA1", "MA2"
$sb = New-Object System.Text.StringBuilder
foreach($name in $names)
{
 [void]$sb.Append('"')
 [void]$sb.Append($name.Name)
 [void]$sb.Append('"')
 [void]$sb.Append(', ')
}

# Trim the trailing ", "
$sb.Length = $sb.Length - 2

# Substitute the MA list into the template and write out the generated script
$template = $template.Replace("[MANAMES]", $sb.ToString())
echo $template > "MyNewMAScript.ps1"
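
To generate your script, run the creator from the folder that holds both files; it writes the result to the current directory:

# Produces MyNewMAScript.ps1 with [MANAMES] replaced by the management agents
# found on this sync server
.\MAScriptCreator.ps1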

 

MAScript.txt

############
# PARAMETERS
############

$params_ComputerName = "."          # "." is the current computer
$params_delayBetweenExecs = 5       # delay between each execution, in seconds
$params_numOfExecs = 1              # number of executions, 0 for infinite
$params_keepRunHistoryDays = 7      # number of days of run history to keep

$MAS = @([MANAMES])                 # [MANAMES] is replaced by MAScriptCreator.ps1
$FullCycleprofilesToRun = @("FI", "FS", "EX", "DI")
$DeltaCycleprofilesToRun = @("DI", "DS", "EX", "DI")

############
# FUNCTIONS
############

$line = "-----------------------------"

function Write-Output-Banner([string]$msg)
{
 Write-Output $line,("- "+$msg),$line
}

function Clear-Run-History
{
 #--------------------------------------------------------------------------------------------------------------------
 Clear-Host
 $DeleteDay = Get-Date
 $DayDiff = New-Object System.TimeSpan $params_keepRunHistoryDays, 0, 0, 0, 0
 $DeleteDay = $DeleteDay.Subtract($DayDiff)
  
 Write-Host ""
 Write-Host "Deleting run history earlier than or equal to:" $DeleteDay.toString('MM/dd/yyyy')
 $lstSrv = @(get-wmiobject -class "MIIS_SERVER" -namespace "root\MicrosoftIdentityIntegrationServer" -computer ".")
 Write-Host "Result: " $lstSrv[0].ClearRuns($DeleteDay.toString('yyyy-MM-dd')).ReturnValue
 Write-Host ""
 #--------------------------------------------------------------------------------------------------------------------
 Trap
 {
   Write-Host "`nError: $($_.Exception.Message)`n" -foregroundcolor white -backgroundcolor darkred
   Exit
 }
 #--------------------------------------------------------------------------------------------------------------------
}

function Get-IsSomebodyRunning
{
    # Returns $true if any of the configured management agents has a run in progress
    foreach($maName in $MAS)
    {
        $MA = $maObjects | ? {$_.Name -eq $maName}
        if($MA -ne $null)
        {
            $MARunStatus = $MA.RunStatus()
            If ($MARunStatus.ReturnValue -eq "in-progress")
            {
                Write-Host "$($MA.Name) is running"
                return $true
            }
        }
    }
    return $false
}

############
# DATA
############
$numOfExecDone = 0
############
# PROGRAM
############
$maObjects = get-wmiObject -query "Select * from MIIS_ManagementAgent" -Namespace "root/microsoftidentityintegrationserver"

# Wait until none of the configured management agents has a run in progress
while(Get-IsSomebodyRunning)
{
    Start-Sleep 10
}

Clear-Run-History
do
{
 Write-Output-Banner("Execution #:"+(++$numOfExecDone))
 foreach($maName in $MAS)
 {
  $MA = $maObjects | ? {$_.Name -eq $maName}
  if($MA -ne $null)
  {
   Write-Output-Banner("MA: "+$maName)
            $date = [DateTime]::Now

            $file = Get-ChildItem "lastRun.txt" -EA SilentlyContinue
            if($file -ne $null)
            {
                $lastRun = [DateTime]::Parse([System.IO.File]::ReadAllText("lastRun.txt"))
            }

            $needsFullImport = $file -eq $null -or $lastRun.Date -lt [DateTime]::Now.AddDays(-6)

            if($date.DayOfWeek -eq "Saturday" -and $needsFullImport)
            {
                $profilesToRun = $FullCycleprofilesToRun
                $dateString = [DateTime]::Now.Date.ToString()
                echo $dateString > "lastRun.txt"
            }
            else
            {
                $profilesToRun = $DeltaCycleprofilesToRun
            }

            $maType = $MA.Type
            #Do something with this

   foreach($profileName in $profilesToRun)
   {
    Write-Output (" "+$profileName),"  -> starting"
    $datetimeBefore = Get-Date;
    $result = $MA.Execute($profileName);
    $datetimeAfter = Get-Date;
    $duration = $datetimeAfter - $datetimeBefore;
    if("success".Equals($result.ReturnValue))
    {
     $msg = "done. Duration: "+$duration.Hours+":"+$duration.Minutes+":"+$duration.Seconds
    } else
    {
     $msg = "Error: "+$result
    }  
    Write-Output ("  -> $msg")
   }
  }
  else
  {
   Write-Output ("Not found MA type :"+$maName);
  }
 }
  
 $continue = ($params_numOfExecs -EQ 0) -OR ($numOfExecDone -lt $params_numOfExecs)

 if($continue)
 {
  Write-Output-Banner("Sleeping "+$params_delayBetweenExecs+" seconds")
  Start-Sleep -s $params_delayBetweenExecs
 }

}
while($continue)

A Big Get-Together for Microsoft Student Partners (MSP) Asia-Pacific!


 

Hello! This is Matsubara from Microsoft Student Partners (MSP).

 

Let me start with a question:
can you tell what is happening in this photo?

 

It is an online meeting that brought together Microsoft Student Partners (MSP) from across Asia-Pacific!

 

The first MSP Asia-Pacific Meeting was held on Saturday, September 20, and about 60 MSPs from all over Japan and around the world took part.

Since the 60 participants joined online, the scale of it may not quite come across from the Tokyo office...

But it started from an MSP's idea, it was the first attempt organized entirely by MSPs, and having that many MSPs gathered on the other side of the screen is quite an achievement.

 

The participating countries were as follows (in no particular order):

 Japan

 China

 Singapore

 Malaysia

 Philippines

 Vietnam

 Thailand

 Nepal

 New Zealand

 

Each country reported on its local MSP activities, and we discussed future MSP activities across Asia-Pacific.

It is great to hear that fellow MSPs are active all over Asia!

There were many moments where I thought, "That initiative or event looks interesting; could we do it in Japan too?", so it was a great learning experience.

 

The "MSP Asia-Pacific" collaboration will continue from here. Exactly how we will collaborate is... still a secret!

 

Everyone, please look forward to it!

 

Top 10 Microsoft Developer Links for Thursday, September 25, 2014


If a process crashes while holding a mutex, why is its ownership magically transferred to another process?


A customer was observing strange mutex ownership behavior. They had two processes that used a mutex to coordinate access to some shared resource. When the first process crashed while owning the mutex, they found that the second process somehow magically gained ownership of that mutex. Specifically, when the first process crashed, the second process could take the mutex, but when it released the mutex, the mutex was still not released. They discovered that in order to release the mutex, the second process had to call ReleaseMutex twice. It's as if the claim on the mutex from the crashed process was secretly transferred to the second process.

My psychic powers told me that that's not what was happening. I guessed that their code went something like this:

// this code is wrong
bool TryToTakeTheMutex()
{
 return WaitForSingleObject(TheMutex, TimeOut) == WAIT_OBJECT_0;
}

The code failed to understand the consequences of WAIT_ABANDONED.

In the case where the mutex was held by the first process when it crashed, the second process will attempt to claim the mutex, and it will succeed, and the return code from WaitForSingleObject will be WAIT_ABANDONED. Their code treated that value as a failure code rather than a modified success code.

The second program therefore claimed the mutex without realizing it. That is what led the customer to believe that ownership was being magically transferred to the second program. It wasn't magic. The second program misinterpreted the return code.

The second program saw that TryToTakeTheMutex "failed", and it went off and did something else for a while. Then the next time it called TryToTakeTheMutex, the function succeeded: It was a successful recursive acquisition, but the program thought it was the initial acquisition.

The customer didn't reply back, so we never found out whether that was the actual problem, but I suspect it was.
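
For what it's worth, here is a minimal sketch of handling the abandoned case correctly. It is a managed-code illustration in PowerShell rather than the customer's Win32 C++ (the mutex name is illustrative); in .NET the same condition surfaces as an AbandonedMutexException instead of a WAIT_ABANDONED return code, and, just like WAIT_ABANDONED, the wait has still succeeded:

# Sketch: the abandoned case is still a successful acquisition
$mutex = New-Object System.Threading.Mutex($false, 'Global\ExampleMutex')   # name is illustrative
$acquired = $false
try {
    $acquired = $mutex.WaitOne(5000)          # $false only on timeout
}
catch [System.Threading.AbandonedMutexException] {
    # The previous owner exited without releasing; we now own the mutex anyway
    $acquired = $true
}
if ($acquired) {
    try {
        # ... use the shared resource ...
    }
    finally {
        $mutex.ReleaseMutex()                 # release exactly once per acquisition
    }
}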

How to always open the Role Center on startup


Role Centers are role-specific home pages that provide an overview of information that pertains to a user's job function in the business or organization. With Dynamics AX 2012, when a user logs in, the application displays the area page of the last visited module, and not the Home/Role Center page.

Dynamics AX 2012 was designed this way as most of the users prefer to find the application where they left it, and don't want to have to navigate back to the module they were using.

Unfortunately there isn't any parameter to change this behavior and some customers would like to always have the Home/Role Center displayed when they log in.

We thought about various workarounds to make this work so the Role Center always displays on login, and the simplest seems to be the following:

  1. Open a Developer workspace
  2. In the AOT, find the class named Application
  3. Double-click the startup method
  4. Add the following line of code at the end of the method: infolog.navPane().selectedGroup('Home');
  5. Save and compile

I hope this is helpful!

Bertrand

 

 

Controlled Vocabulary 101 … typed at the most stunning office!


Yesterday I enjoyed listening to Hyper-V, PowerShell and other 933>|m (aka geeky) MVPs sharing their knowledge, experience and passion at the Canadian MVP Days Community Roadshow 2014.
(Photos: WP_20140924_003, WP_20140924_005, WP_20140924_021, WP_20140924_038)

During one of the breaks, I needed a quick reboot and sat at the harbour, in what must be the most beautiful office I have ever had the pleasure to work in :)
(Photo: WP_20140924_031)

While enjoying the tranquil beauty, I decided to answer a question I received from a colleague about CV Tags.

controlled vocabulary 101

We use the Controlled Vocabulary (CV) Outlook Add-In to decorate our emails with a CV tag, which (if consistent) can be used to effectively drive mail rules. Email rules allow you to organise your email into folders, raise triggers and increase your productivity even when having to deal with (lots of) email.

Our project teams are geographically distributed, part-time, volunteer driven and competing with family and job responsibilities. As we cannot simply pick up the phone (we could, but waking Brian at 3AM is not generally a good idea), or fire up a messenger conversation, our core collaboration tool of choice is therefore email, resulting in email, lots of email. To process the “normal” email and the “Rangers” email effectively we have become reliant on the Controlled Vocabulary and consistent tags.

Peruse FAQ – How can I determine which of the 100’s of ALM Ranger emails is important to “Gregg”? for more details on why we use it.

if you are an alm ranger, where do you find the bits?

Simple:

  1. Close Outlook
  2. Install Controlled Vocabulary
  3. Download and run this configuration file: MSCommunity
  4. Select the buttons (i.e. champs, rangers) you wish to add and click Add Selected
  5. Start Outlook and use the Controlled Vocab menu to create emails and meeting invites

if you are an alm ranger, how do you use it?

To …  
Send a general email or get email usage guidance. (Screenshot 185)
  • Select Controlled Vocab tab.
  • Select ALM Rangers button.
  • Select Email Usage Guide to get guidance or a tag, i.e. Chatter, to send a general chatting-type email.
  • Note that list of addressees and priority of email can be preconfigured.
  • Revise the CV tag style subject [ Chatter Rangers ] PleaseCompleteSubject and replace the PleaseCompleteSubject placeholder with your subject.
  • HINT:
    Create an email rule to delay emails with the PleaseCompleteSubject tag in the subject line to ensure you do not forget to update the subject.
Schedule a meeting. (Screenshot 183)
  • Select Controlled Vocab tab.
  • Select ALM Rangers button.
  • Revise the CV tag style subject [ Chatter Rangers ] PleaseCompleteSubject and replace the PleaseCompleteSubject placeholder with your subject.
  • Select Meeting and the type, i.e. kick-off, to create a meeting.
  • Revise the CV tag style subject [ Kick-off Rangers ] PleaseCompleteSubject and replace the PleaseCompleteSubject placeholder with your meeting subject.
  • EXAMPLE:
    [ Kick-off Rangers ] vsarDevOps – Unicorn
Send an ALM technology email. (Screenshot 184)
  • Select Controlled Vocab tab.
  • Select ALM Rangers button.
  • Select Visual Studio ALM.
  • Select the relevant technology, i.e. Build.
  • Revise the CV tag style subject [ ALM Build Rangers ] PleaseCompleteSubject and replace the PleaseCompleteSubject placeholder with your meeting subject.
  • EXAMPLE:
    [ ALM Build Rangers ] How about updating the guidance?
Send a Ranger project email. (Screenshot 186)
  • Select Controlled Vocab tab.
  • Select ALM Rangers button.
  • Select Project Collaboration.
  • Revise the CV tag style subject [ vsar@@ ] PleaseCompleteSubject, replace the PleaseCompleteSubject placeholder with your subject and the @@ with the project code.
  • EXAMPLE:
    [ vsarDevOps ] Unicorn rocks!
    HINT:
    vsar prefix = Visual Studio ALM Rangers

Most importantly, do not forget to create mail rules to filter and/or prioritise emails, based on the CV tag. For example, move all incoming and outgoing emails with the CV Tag [ vsarDevOps ] to the vsarDevOps mailbox folder.

Common questions we get:

  • Where do I find the project code to replace the @@ in the project collaboration vsar@@ placeholder?
    The project codes are shared at the kick-off meetings, are the same as the folder name in source control and, worst case, can be confirmed with the project lead or program manager of the team.
  • Why do we not have the project codes in the vocabulary? Why must I replace @@ with every email?
    Simplicity! We have numerous projects, which would result in a long list. We also have a lot of project code churn, which would result in continuous vocabulary maintenance and require users to refresh the vocabulary.

… what if you do not use CV?

Generally not much happens … unless you email someone who has managed to effectively reduce mail inbox maintenance using a platter of mail rules, reliant on CV tags.

Initially replies may mention “+ CV tag” and get progressively more aggressive. Worse, your emails may get lost in a lo……………………………………………………………..ng queue of “untagged email”, resulting in delayed responses.

Remember … tag it!

… what if you are not an alm ranger?

Download it, evaluate it and enjoy the productivity gain it delivers in high-volume email collaboration environments.

last but not least

Thank you Michael Fourie for this great tool!


FREE Game Templates in Construct 2


Hello guys, I have been creating several game templates that should help you get to a faster, more polished product, or help you learn Construct 2 much faster.

In the following weeks I should find some time to polish these templates and get them all up to date. You will notice that some are more polished than others in their behaviors and graphics. Also, to get them ready for GitHub, I should make certain that I have translated all the comments and create a nice description for each project.

In any case, I hope you enjoy the templates, many more should be available soon.

• Roguelike Alien

http://bit.ly/myrogue

Top down adventure with randomly generated scenarios. Virtual thumbstick for touch controlled devices. Keyboard support. Polished art from @KenneyWings. Shadow management for Mobs. Full game already published in the store: http://bit.ly/roguealien

• Chili Zombies
http://bit.ly/myChiliZombies
Side scrolling shooting. Gamepad implementation. Ready to use mouse, keyboard and touch.

• The Falling
http://bit.ly/myFalling
Game that I reserved to explain in around 45 mins to students. Many of the videos in my channel (https://www.youtube.com/user/kanedarkon/videos ) explain the details of this one.

• Doodle Bombs
http://bit.ly/myDBombs
Platformer with my own kind of twist. Perfect to explain the bullet behavior.

• Falling Xmas
http://bit.ly/myFallingXmas
Xmas themed platformer.

• Flappy In the Storm
http://bit.ly/myFlappyStorm
Riding the Flappy Bird wave. Using a C2 template I polished and completed a full game.

• Pumpkin Escape
http://bit.ly/myPEscape
My take on Doodle Jump. Infinite jumper with a few twists like falling zombies.

• Santa Vs Zombies
http://bit.ly/mySantaVSZombies
My kind of Xmas. Infinite Runner.

• S G Runner
http://bit.ly/mySGRunner
One of my first infinite runners. Not the best, but simple to modify.

• S G Storm
http://bit.ly/mySGStorm
Copter like game.

• Super G
http://bit.ly/mySG
My first Infinite Jumper. It has inclinometer support.

• Tainted Love
http://bit.ly/myTainted   (Yes, I do like to create stupid links that put weird ideas in your head. I have issues, I know :)
Valentine’s themed platformer.

• Super G Invaders
http://bit.ly/mySGInvaders
Not the best graphics, but it was my take on Space Invaders.

 

You may see that some games are much more polished than others, and that is because I was learning about C2 at the same time I was publishing the games. So, you should be able to find games for all tastes and expertise levels.

 

Let me know what you think of them.

Power BI August Roundup


We are a little bit late with this roundup (just like the Power Query update). August was a great month for Power BI and Excel updates. We have lots of great content to share with you. To start off, we received tons of comments on our blog posts. Here's our favorite from the latest Power Query update:

We love it too! Thank you all for your thoughts. For our August Roundup, we’ve gathered a number of great articles for you to read at your leisure including content on data visualization and Power BI demos:

 

August Product Updates

08/19/14 - Scheduled Data Refresh Update: New Data Sources

09/02/14 - 7 new updates in Power Query

August Power BI Articles

08/13/14 - Data Visualization for the 2014 World Cup results using Excel and Power BI: Marc Reguera updates his analysis on the World Cup history using information from the 2014 World Cup. Looks like there's a new world order in soccer

08/14/14 - Power BI is changing the way health services are provided: Tom Lawry gives us several examples on how Power BI is being used to change the way health services are provided around the world

08/25/14 - Visualizing the Primetime Emmy History: with the Emmys happening in August, I wanted to get more insights on the history of these awards. This is the result

08/26/14 - Best practices for building hybrid business intelligence environments:  Joseph D'Antoni and Stacia Misner illustrate in their white paper the power of hybrid solutions using Power BI

08/26/14 - Power BI Data Management Gateway 1.2 Changes the Game: John P. White explains why he thinks the new capability of the data management gateway that allows Power BI to connect to on-premises data sources is so important

08/26/14 - Waterfall chart with Power Pivot: Philipp Lenz shows us a very creative way to build waterfall charts using Power Pivot

 

Hope these articles are useful to you. As always, don't forget to send our way any interesting posts or articles you find about Excel and Power BI! We are always looking to share great Power BI content with our community.

Reach us @MSPowerBI with the hashtag #PowerBIroundup

All About Load Test Planning (Part 5-Load Profile Additional Considerations)


In the previous post, I showed you how to come up with the profiles to use in a test as well as the numbers to plug into the profile. I also showed two fairly simple examples of load profiles that you might generate. In this post, I will show you some more examples of profiles, as well as some of the gotchas from these profiles. All of these are taken from real engagements I have performed, although the data and information is completely sanitized.

Example 1: Too Many Use Cases

This customer had defined eight different use cases to use for the load profile, but they provided FIVE sets of numbers to use for the loads on each use case. The five different sets of numbers represented five different business cycles in their industry, and they felt that it was important to see if the system could handle the load expected in each of the cycles:

Use Case     Profile 1   Profile 2   Profile 3   Profile 4   Profile 5
Read Only    100         30          50          50          50
Active       0           0           20          20          20
Generate     120         50          60          60          60
Regenerate   0           150         20          20          20
Sign Off     0           0           50          200         50
Archive      0           0           25          100         300
Modify       2000        5000        4000        2000        2000
No Change    2000        1500        4000        2000        2000

As we looked at the table, and we started adding up all of the different load tests we would need to execute, we realized that we would not have enough time to complete every one of the desired tests. When I looked at the numbers, I noticed that there wasn’t too much difference between the load in Profile 3 and other profiles except for the last two use cases. I suggested that we build a new profile that used the highest count from each use case and run that. If it passed our criteria, then we knew that all of the individual profiles would pass. We could do this because we were testing specifically to see “If the System can handle the expected peak load.” Below was our final profile. The system could handle this load, so we could easily assume that the system could handle any of the loads specified in the profiles above.

Use Case     Final Profile
Read Only    100
Active       20
Generate     120
Regenerate   150
Sign Off     200
Archive      300
Modify       5000
No Change    4000

 

Example 2: Too Fast

I was brought into an engagement that was already in progress to help a customer who was trying to figure out why the system was so slow when we pushed the load to the “expected daily amount.” The system was taking as long as 120 seconds for some requests to respond and the maximum allowed time was 60 seconds. They said that they were used to seeing faster times when the system was not under load. I started asking them about the load profile and I learned two things that they had not done properly.

  1. They were using the wrong type of load pattern to drive load. They had chosen the “Based on number of tests” pattern when they should have been using the “Based on user pace” pattern. By selecting “Based on number of tests”, they were pushing the load harder than they should have (explanation below).
  2. They were using the wrong numbers for the amount of work that an actual user would be expected to perform.

Because of these two items, the workload they were driving was about six times higher than the expected peak load. No wonder the system was so slow. I showed them how to rework the numbers and we switched the test profile to user pace. When we ran the tests again, the system behaved exactly as it should.

Comparing “Number of Tests” to “User Pace”

The reason that using “Based on the number of tests” (or “based on the number of virtual users”) is NOT good when trying to drive a specific load is that Visual Studio will not throttle the speed of the tests. When a test iteration completes in either of these modes, Visual Studio waits for the amount of time defined by the “think time between test iterations” setting and then executes the next test it is assigned. Now, even if you assume that you know how long a given iteration of a test should take and you use that number to work backwards to a proper pace, you still may not get the right load. Consider this:

  • A given web test takes 2 minutes to complete.
  • You want to have that web test execute 12,000 times in an hour.
  • If you work it backwards, you would see that you could set the test to use 1,000 vUsers and set a “think time between test iterations” of 3 minutes.

This will give you the user pace you want….. Until you fire up those 1,000 users and realize one of two things that could cause the pace to be wrong:

  • the load slows down the test so that it takes 3 minutes. Now your pace is not 12,000/hour, but 10,000/hour.
  • the test is being run on a faster system (or something else causes the test to run faster, including performance tuning) and the time for an iteration is 1 minute. Your pace is now 15,000/hour.

If you set the model to “Based on User Pace”, Visual Studio will ignore the “think time between test iterations” setting and will create the pacing on the fly. In this case, you set 1,000 vUsers and tell each one to do 12 iterations/hour. Visual Studio will target a total time of 5 minutes for each iteration, including the think time. If the iteration finishes in less than five minutes, it will wait the right amount of time. If the iteration takes longer than 5 minutes, Visual Studio will throw a warning and run the next iteration with no think time between iterations.
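
To make the arithmetic concrete, here is the same calculation spelled out (the numbers are the hypothetical ones used in this example):

# User-pace arithmetic for the example above
$targetTestsPerHour = 12000
$vUsers             = 1000
$pacePerUserPerHour = $targetTestsPerHour / $vUsers      # 12 iterations per user per hour
$minutesPerIteration = 60 / $pacePerUserPerHour          # Visual Studio paces to 5 minutes,
                                                         # think time included, regardless of
                                                         # how long one iteration actually takes
"$pacePerUserPerHour iterations/user/hour = one iteration every $minutesPerIteration minutes"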

Example 3: Need to use multiple scenarios

Sometimes when you look at the rate that one test needs to execute compared to another test, you may find that you cannot have both tests in the same scenario. For instance, suppose one test needs to run once per hour and another needs to run 120 times per hour, but each iteration of the 120/hour test takes 2 minutes to complete. A single user cannot run it 120 times in an hour (that would be 240 minutes of work), so you decide to decrease the rate to 30/user/hour and increase the total number of users to 4. Now the once-per-hour test in that scenario is running at four times the rate it should. For situations like this, I simply move the tests into two separate scenarios.

You may also find that you have too many tests in a scenario that has “Based on User Pace” to allow a user to complete them all. When you specify the User Pace model, Visual Studio will expect that a single vUser will execute EVERY test in the scenario at the pace defined. Let’s go back to the school test from the previous post. If you look at the scenario for Students, you will see that there are 75 vUsers. Each vUser will have to complete 29 test iterations in an hour to stay on track. Visual Studio does not create separate users for each webtest. Therefore you need to make sure that there is enough time for all of the tests to complete. If not, split them up into separate scenarios.

(Screenshot: user pace scenarios)

Example 4: Don’t Count It Twice

This one bites a lot of people. Let’s say I am testing my ecommerce site and I need to drive load as follows:

Use Case      Qty to execute
Browse Site   10,000
Add To Cart   3,000
Checkout      2,000

So you create your three tests and set the pacing up for each. However, you need to remember that *usually* in order to checkout, you have to already have something in the cart, and to add something to the cart, you have to have browsed. If you use the quantities above, you will end up with 15,000 browse requests to the site and 5,000 Add to Cart.

Bottom Line, if a test you execute contains requests that fulfill more than one of your target load numbers, account for that in the final mix.
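
For the e-commerce example above, one hedged way to rebalance the mix is to subtract the downstream totals from the upstream ones, since (per the assumption stated above) every checkout iteration also browses and adds to the cart, and every add-to-cart iteration also browses:

# Hypothetical mix correction for the example above
$targetBrowse   = 10000
$targetAddCart  = 3000
$targetCheckout = 2000

$checkoutTests  = $targetCheckout                      # 2,000 full browse + add + checkout iterations
$addCartTests   = $targetAddCart  - $targetCheckout    # 1,000 browse + add-to-cart only iterations
$browseTests    = $targetBrowse   - $targetAddCart     # 7,000 browse-only iterations

"Browse-only: $browseTests, Add-to-cart-only: $addCartTests, Checkout: $checkoutTests"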

Example 5: Multiple Acceptance Criteria for the Same Item

This is in response to a comment left on my previous post about Scenarios and Use Cases. In this situation, I may have a requirement for the response time for generating a report. Let’s assume that the requirements are:

  • Generation of the report must be < 2 seconds for <500 rows
  • Generation of the report must be < 7 seconds for <10,000 rows

First, I would need to get more info from the business partners.

  • Is the user type the primary reason for the big size difference? (a sales clerk checks the sales he/she has performed today vs. a store manager checking all of the sales by the entire staff?).
    • I would add a new use case to the manager scenario and a separate use case in the sales scenario of the test plan and move forward as normal.
  • Is a parameter passed in, or the query being executed the primary reason (same as the first example, but the same person runs both reports)
    • I would ask the business partner what the likelihood of either happening is and then I would devise a set of data to feed into the test that would return results close to each number. I would probably then create two different web tests, one for each query set and give them names that indicate the relative size of the response. Then I could easily see how long each one took.

It is also worth noting that you can have the same request show up multiple times in a webtest and change the way it gets reported by using the “Reporting Name” property on the request to display the relative size:

(Screenshot: the request's Reporting Name property)

Example 6: To Think or not To Think

I covered this topic in a separate post, but I am adding a pointer to it here because it applies directly to this topic, and if you have not read my other post, you should. The post (“To Think or not to Think”) is here.

New resources for performance problems


Good afternoon,

Several articles have recently been published on the global Dynamics AX support team blog: http://blogs.msdn.com/b/axsupport

They contain very thorough and important information for analyzing and resolving performance problems with Dynamics AX.

"Managing general performance issues in Microsoft Dynamics AX": http://blogs.msdn.com/b/axsupport/archive/2014/09/11/managing-general-performance-issues-in-microsoft-dynamics-ax.aspx

"AX Performance Troubleshooting Checklist Part 1A [Introduction and SQL Configuration]": http://blogs.msdn.com/b/axsupport/archive/2014/09/05/ax-performance-troubleshooting-checklist-part-1a-introduction-and-sql-configuration.aspx

"AX Performance Troubleshooting Checklist Part 1B [Application and AOS Configuration]": http://blogs.msdn.com/b/axsupport/archive/2014/09/05/ax-performance-troubleshooting-checklist-part-1b-application-and-aos-configuration.aspx

"AX Performance Troubleshooting Checklist Part 2": http://blogs.msdn.com/b/axsupport/archive/2014/09/08/ax-performance-troubleshooting-checklist-part-2.aspx

These articles are well worth keeping as favorites in your browser.

Bertrand

Working with Names and Name Based Attributes


I’d like to take a minute to discuss something that can be a real pain when deploying an identity management solution: names. As anyone who has deployed or managed a large scale IdM solution can attest to, names can be a real hassle. Proper casing, uniqueness, length limits and titles/preferred names all make for a real challenge sometimes. So, if we have decided to deploy an IdM solution (such as FIM) to programmatically handle our user management, can we, from a fully autonomous approach, overcome these hurdles without drawing the ire of our user base? The answer is, yes, we can…for the most part. It’s important to remember that this really is a “numbers game”. It’s easy to write logic to make everyone happy in an organization of 500 users. This may not, however, be the case in an organization with 500,000 users. As the number of our user base increases, so does the potential complexity of name/accountname logic. My personal feeling (and what I often convey to customers), is that, if out of an organization of 500,000 users, I still have to manually administer 100 users, that means I will never have to touch the other 499,900 ever again. To me, that is a win. The other thing I would urge you to ask yourself is, “is it worth it?”. By that I mean, if I can implement logic that handles 99.99% of all users, does it really make sense to spend hours (if not days) figuring out the logic to automate the management of a handful of people (.01%)?

 With that in mind, let’s start by talking about names. More specifically, let’s talk first, middle and last names. I’m a big fan of using these to build out other attributes (such as accountName, mailNickName, etc.). So before we even begin to look at those other attributes, let’s first get first, middle and last looking good. We are assuming this data is being fed from somewhere (such as an HR data feed). If this user data is coming from a database, it is very likely it might come in as all uppercase. When it comes to proper casing names, we have a few options on how to handle that. Option one is to use a set/workflow/MPR within the FIM portal. For example, you could create a “New User Attribute Builder” workflow that proper cases names, builds accountName, etc.. In this case, you might have a couple of activities that look something like:

In many cases, this might be fine. However, there is an issue here. What happens if I have a defined precedence that goes something like this: HR -> FIM -> AD? Under this scenario, if the data coming from HR is always authoritative over the data in FIM, my (now properly cased) names in FIM will be overwritten the next time a sync job runs. This could cause a cycle where a user's name is proper cased, the HR sync runs and exports to FIM, overwriting the names as all uppercase, the workflow proper cases them again, and so on endlessly. Some admins have overcome this by creating a custom attribute that essentially marks the user object as being “managed by FIM”. By doing so, after these values are set initially, they are not modified by HR (even though HR has precedence).

 Another method of addressing this is to do the conversion directly in the inbound user synchronization rule. This can be easily done by use of a function on the “source” tab of the inbound attribute flow, as shown:

This method, however, is not without fault. The downside here is that this evaluation will occur every time the sync rule runs. This could theoretically slow down imports and syncs. At the end of the day, the decision here must be made by you based on your own environment.
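
Outside of FIM's own functions, the underlying proper-casing idea is simple; here is a minimal sketch using .NET's TextInfo (the sample value is illustrative):

# Proper-case an all-uppercase value from an HR feed (sample value is illustrative).
# ToTitleCase leaves all-uppercase words alone, so lower the input first.
$textInfo = (Get-Culture).TextInfo
$raw      = "JOHN ADAM DOE"
$proper   = $textInfo.ToTitleCase($raw.ToLower())
$proper    # John Adam Doe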

 

However we do it, once we have arrived at the point where our first, middle and last names have been proper cased, we can then move on to building account names. The real trick here is to do so in a way that guarantees uniqueness across the organization. To use the example above, this may be relatively easy in an environment of 500 users, but what about 500,000? Please note that for the following scenarios, we are using a custom workflow activity to generate unique values. For smaller environments, an activity such as this may be sufficient:

Here, we are doing a simple first initial + last name. For user John Doe, the resulting AccountName would be

jdoe

With the addition of a “uniqueness key seed”, if jdoe is taken and another user (Jim Doe, for example) comes in, their AccountName would subsequently be jdoe2. For smaller environments, this may be a perfectly acceptable approach. For larger environments, however, this might possibly result in users with AccountName values of jdoe47. Likewise, this also fails to address uniqueness in Active Directory. Fortunately, however, we do have the ability to do LDAP queries directly, as illustrated here:

In this case, we are querying LDAP to determine uniqueness (and not just within FIM). Also, you may notice not only the inclusion of MiddleName, but also the proper casing occurring here (rather than in the sync rule). For user JOHN ADAM DOE, the above three value expressions would result in the following three account names:

JOHN.DOE

JOHN.A.DOE

John.A.Doe

 

In any of these cases, by also using a uniqueness key seed, Jim Doe no longer becomes an issue. The seed would only be used in cases such as:

JOHN.DOE2

JOHN.A.DOE2

John.A.Doe2

 

Specifically, in the third example, a user with the same first name and middle initial (John Allen Doe and John Adam Doe, for example) would have to exist within the organization.

 

Even with this approach, there is still, however, a potential issue. In examples 2 & 3 above, what happens if the middleName attribute is not present? The resulting AccountName would be:

 

John..Doe

 

This can be overcome with the addition of an “IsPresent” check. For example:

Since the entire Value Expression is not visible in the above image, here they are in full:

[//Target/FirstName] + "." + IIF(IsPresent([//Target/MiddleName]), Left([//Target/MiddleName],1), "") + IIF(IsPresent([//Target/MiddleName]), ".", "") + [//Target/LastName]

 [//Target/FirstName] + "." + IIF(IsPresent([//Target/MiddleName]), Left([//Target/MiddleName],1), "") + IIF(IsPresent([//Target/MiddleName]), ".", "") + [//Target/LastName]+[//UniquenessKey]

 

By doing so, user John A. Doe would receive an Account Name of John.A.Doe, while user John Doe would simply be John.Doe (and not John..Doe). You may also notice the use of “Left” in the above examples. “Left” is a function we can make use of to take a certain number of characters from the start of a value. In the example of:

Left([//Target/FirstName],1)

We would start at the beginning and take the first character (producing the first initial). Technically, there is no limit on the number of characters we can take (up to the full length of the value). For example, if FirstName were “Bartholomew”:

Left([//Target/FirstName],4)

 

Would return: Bart

 

There are also functions for “Right” (which counts backwards from the end) and “Mid” (which starts in the middle).

It is also worth noting at this point that this same logic can be used when building values such as DisplayName. In terms of Active Directory, DisplayName must be unique per container, but not forest wide. Also, depending on how your organization handles email addresses, it may be useful to recycle the bits above (since we’ve already determined AccountName to be unique). The activity here may be as simple as:

Finally, there are a few other considerations when it comes to handling names with FIM. The attribute sAMAccountName in Active Directory, for example, has a maximum length of 20 characters. For compliance, we can easily use the function “Trim” (or even “Left”) to grab the first 20 characters, but this may be confusing for users whose names are far longer than 20 characters. Likewise, it may also be worth considering titles (such as “Dr.”) when handling names. Let’s say we’d like our DisplayName to be in the following format:

“LastName, FirstName Middle Initial” (i.e. Doe, John A.)

 

How do we handle it if Mr. Doe is a doctor? The cleanest solution, in my opinion, is to create a custom attribute in FIM to hold this title. Then, as shown above, we could use an “IIF(IsPresent” statement for the new attribute.

“LastName, Title (if present) FirstName Middle Initial” (i.e. Doe, Dr. John A.)

 

If the title attribute were not present, it would not be included (and neither would an additional whitespace).
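
To pull several of these ideas together, here is a hedged PowerShell sketch (outside FIM, using the ActiveDirectory module) of the truncate-and-uniquify approach for sAMAccountName mentioned above; the function name and the Get-ADUser filter are illustrative, while the 20-character cap and the uniqueness seed mirror the examples in this post:

Import-Module ActiveDirectory

function New-UniqueSamAccountName {
    param([string]$First, [string]$Last)

    $base = "$First.$Last"
    if ($base.Length -gt 20) { $base = $base.Substring(0, 20) }      # AD's sAMAccountName limit

    $candidate = $base
    $seed = 2
    while (Get-ADUser -Filter "sAMAccountName -eq '$candidate'") {
        $suffix = [string]$seed
        # Re-trim so base + uniqueness seed still fits in 20 characters
        $candidate = $base.Substring(0, [Math]::Min($base.Length, 20 - $suffix.Length)) + $suffix
        $seed++
    }
    return $candidate
}

# Example: New-UniqueSamAccountName -First 'John' -Last 'Doe'  ->  John.Doe, or John.Doe2 if taken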

 
