ROPC (Resource Owner Password Credentials Grant) Flow


Overview

This blog post is intended for Exchange administrators who use PowerShell scripts to consume the Outlook REST API v1.0 endpoint (https://outlook.office365.com/api/v1.0), authenticating with user credentials.

With the deprecation of Basic Authentication support for the REST API v1.0, a transition to the Microsoft Graph API or to the REST API v2.0 endpoint is required.

One way to acquire a token to consume RESTful services without user interaction is the ROPC flow. The Resource Owner Password Credentials grant type is suitable in cases where the resource owner has a trust relationship with the client, such as the device operating system or a highly privileged application.

 

How it works

Azure Active Directory (Azure AD) supports the resource owner password credentials (ROPC) grant, which allows an application to sign in the user by directly handling their password. The ROPC flow requires a high degree of trust and user exposure, and developers should only use this flow when the other, more secure, flows can't be used.

 

Important

  • The Azure AD v2.0 endpoint only supports ROPC for Azure AD tenants, not personal accounts. This means that you must use a tenant-specific endpoint (https://login.microsoftonline.com/{tenantId_or_name}) or the organizations endpoint.
  • Personal accounts that are invited to an Azure AD tenant can’t use ROPC.
  • Accounts that don’t have passwords can’t sign in through ROPC. For this scenario, we recommend that you use a different flow for your app instead.
  • If users need to use multi-factor authentication (MFA) to log in to the application, they will be blocked instead.

 

Protocol Diagram
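
In outline, the flow is a single round trip: the client sends the user's credentials directly to the Azure AD token endpoint and receives tokens in the response. A sketch of the wire format, with placeholder values in braces:

POST /{tenant}/oauth2/v2.0/token HTTP/1.1
Host: login.microsoftonline.com
Content-Type: application/x-www-form-urlencoded

grant_type=password&client_id={client_id}&client_secret={client_secret}&scope=https%3A%2F%2Foutlook.office.com%2Fmail.read%20offline_access&username={user}&password={password}

A successful response is a JSON document containing token_type, scope, expires_in, access_token, and, because offline_access was requested, refresh_token.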

 

Example

Get a message collection from the entire mailbox of the signed-in user with PowerShell and REST v2.0.

Create an app registration

 

1. In the Azure Portal, click Azure Active Directory in the navigation menu on the left, then click App registrations, then click New application registration on the right.

2. Fill in the Name and the Sign-on URL fields and click Create.

3. Click Settings.

4. On the Settings page, click Required permissions.

5. On the Required permissions page, click Add.

6. On the Add API access page, click Select an API.

7. On the Select an API page, select Office 365 Exchange Online and click Select.

8. On the Enable Access page, select the checkboxes for the appropriate access, then click Select.

9. Create a new client secret for the application: on the Settings page, click Keys. To generate a new key (client secret), type in a Description, select the validity period in the Expires column, and click Save.

10. Make a copy of the key value, as you won't be able to retrieve it after you navigate away from this page.

* Application Permissions: Your client application needs to access the web API directly as itself (no user context). This type of permission requires administrator consent and is also not available for native client applications.

** Delegated Permissions: Your client application needs to access the web API as the signed-in user, but with access limited by the selected permission. This type of permission can be granted by a user unless the permission requires administrator consent.

 

The PowerShell Script

Acquiring an Access Token

# [System.Web.HttpUtility] lives in System.Web, which is not loaded by default
Add-Type -AssemblyName System.Web

$client_id = [System.Web.HttpUtility]::UrlEncode("<<<< client id >>>>")
$client_secret = [System.Web.HttpUtility]::UrlEncode("<<<< client secret >>>>")
$tenant = [System.Web.HttpUtility]::UrlEncode("<<<< tenant >>>>")
$user = [System.Web.HttpUtility]::UrlEncode("<<<< user >>>>")
$password = [System.Web.HttpUtility]::UrlEncode("<<<< password >>>>")
$mailbox = [System.Web.HttpUtility]::UrlEncode("<<<< mailbox >>>>")

$AuthUri = "https://login.microsoftonline.com/" + $tenant + "/oauth2/v2.0/token"

$AuthBody =
    "grant_type=password" + "&" +
    "client_id=" + $client_id + "&" +
    "client_secret=" + $client_secret + "&" +
    "username=" + $user + "&" +
    "scope=" + [System.Web.HttpUtility]::UrlEncode("https://outlook.office.com/mail.read") + "%20offline_access" + "&" +
    "password=" + $password

$Authorization =
    Invoke-RestMethod   -Method Post `
                        -ContentType application/x-www-form-urlencoded `
                        -Uri $AuthUri `
                        -Body $AuthBody

  • The AuthUri is the OAuth v2.0 token endpoint.
  • The grant type must be password for the ROPC flow.
  • The client id is the newly created application id.
  • The client secret is the key created in our web application.
  • The scope is as defined in the API access.
  • The user and password are the credentials of the user consuming the API.
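
Access tokens are short-lived. Because the scope above includes offline_access, the response also carries a refresh_token; a sketch of redeeming it for a fresh access token (standard OAuth 2.0 refresh_token grant; variable names follow the script above):

$RefreshBody =
    "grant_type=refresh_token" + "&" +
    "client_id=" + $client_id + "&" +
    "client_secret=" + $client_secret + "&" +
    "scope=" + [System.Web.HttpUtility]::UrlEncode("https://outlook.office.com/mail.read") + "%20offline_access" + "&" +
    "refresh_token=" + $Authorization.refresh_token

$Authorization =
    Invoke-RestMethod   -Method Post `
                        -ContentType application/x-www-form-urlencoded `
                        -Uri $AuthUri `
                        -Body $RefreshBody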

 

Query the REST API using the access token

$requestHeaders = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"

# For optimal performance when using the new Outlook REST endpoint,
# add an x-AnchorMailbox header to every request and set it to the user's email address.
$requestHeaders.Add('X-AnchorMailbox', $mailbox)

# Each message in the response contains multiple properties, including the Body property.
# The message body can be either text or HTML. If the body is HTML, by default,
# any potentially unsafe HTML (for example, JavaScript) embedded in the Body property
# is removed before the body content is returned in a REST response.
# To get the entire, original HTML content, include the following HTTP request header.
$requestHeaders.Add('Prefer', 'outlook.allow-unsafe-html')

$requestHeaders.Add('Authorization', "Bearer " + $Authorization.access_token)

# Get messages
# You can get a message collection or an individual message from a mailbox folder.
# https://docs.microsoft.com/en-us/previous-versions/office/office-365-api/api/version-2.0/mail-rest-operations#GetMessages

$requestUri = "https://outlook.office.com/api/v2.0/users/" + $mailbox + "/messages"

$request =
    Invoke-RestMethod   -Headers $requestHeaders `
                        -Uri $requestUri `
                        -Method Get

$request.value
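
The v2.0 messages endpoint returns results a page at a time. When more items are available, the response includes an @odata.nextLink property; a sketch of following it to collect the whole mailbox, reusing the headers from above:

$messages = @()
$nextUri = $requestUri
do {
    $page = Invoke-RestMethod -Headers $requestHeaders -Uri $nextUri -Method Get
    $messages += $page.value
    $nextUri = $page.'@odata.nextLink'   # absent on the last page
} while ($nextUri)
$messages.Count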

 

Attachments

Download the PowerShell script here

 

References

Azure Active Directory v2.0 and the OAuth 2.0 resource owner password credential

Resource Owner Password Credentials Grant

Use the Outlook REST API (version 2.0)

Transition to Microsoft Graph API

Outlook REST v2.0 GetMessages

 

Change Log

Date         Author                Type       Description
2019-01-02   Pedro Tomás e Silva   Original

Why am I getting mojibake when I try to create a shell link?



A customer couldn't get the IShellLink interface to work. They tried to set the shortcut target to a path, but it came out as Chinese mojibake.

Here's a reduction of their code to its simplest form.

HRESULT CreateLink()
{
    HRESULT hr;
    IShellLinkA* psl;

    hr = CoCreateInstance(CLSID_ShellLink, NULL, CLSCTX_INPROC_SERVER,
                          IID_IShellLink, (LPVOID*)&psl);
    if (SUCCEEDED(hr)) {
        IPersistFile* ppf;

        psl->SetPath("C:\\Windows"); // this comes out as mojibake

        hr = psl->QueryInterface(IID_IPersistFile, (LPVOID*)&ppf);
        if (SUCCEEDED(hr)) {
            hr = ppf->Save(L"C:\\Test\\Test.lnk", TRUE);
            ppf->Release();
        }
        psl->Release();
    }
    return hr;
}



(You can see that this customer used to be a C programmer, because all variable declarations are at the start of blocks. Also, because they aren't using RAII.)



The problem is hidden in the call to CoCreateInstance:

hr = CoCreateInstance(CLSID_ShellLink, NULL, CLSCTX_INPROC_SERVER,
                      IID_IShellLink, (LPVOID*)&psl);
//                    --------------  -------------

Observe that the requested interface is IID_IShellLink, but the result is placed into a pointer to IShellLinkA. This mismatch should raise a warning flag.



It appears that the program is being compiled with Unicode as the default character set, which means that IID_IShellLink is really IID_IShellLinkW. Consequently, the requested interface is IShellLinkW, and the result is placed into a pointer to IShellLinkA. As a result of this mismatch, the call to psl->SetPath thinks that it's calling IShellLinkA::SetPath, but in reality it is calling IShellLinkW::SetPath. (The IShellLinkA and IShellLinkW interfaces have the same methods in the same order, except that one uses ANSI strings and the other uses Unicode strings.)

That is where the mojibake is coming from. An ANSI string is passed where a Unicode string is expected.



Mismatches like this can be avoided by using the IID_PPV_ARGS macro. This macro looks at the type of the pointer you pass it and autogenerates the matching REFIID, as well as casting the pointer to void**.

hr = CoCreateInstance(CLSID_ShellLink, NULL, CLSCTX_INPROC_SERVER,
                      IID_PPV_ARGS(&psl));


While they're at it, the customer should consider abandoning the ANSI version altogether and just using the Unicode one.
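
Put together, a sketch of the corrected function, requesting the interface with IID_PPV_ARGS and using Unicode strings throughout (error handling kept minimal, as in the original):

HRESULT CreateLink()
{
    IShellLinkW* psl;
    HRESULT hr = CoCreateInstance(CLSID_ShellLink, NULL, CLSCTX_INPROC_SERVER,
                                  IID_PPV_ARGS(&psl)); // deduces IID_IShellLinkW
    if (SUCCEEDED(hr)) {
        psl->SetPath(L"C:\\Windows"); // Unicode path, no mojibake

        IPersistFile* ppf;
        hr = psl->QueryInterface(IID_PPV_ARGS(&ppf));
        if (SUCCEEDED(hr)) {
            hr = ppf->Save(L"C:\\Test\\Test.lnk", TRUE);
            ppf->Release();
        }
        psl->Release();
    }
    return hr;
}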

NEW EBOOK: Deploy Agisoft PhotoScan on Azure with Avere vFXT for Azure or BeeGFS


This 30-page eBook provides step-by-step guidance for installing Agisoft PhotoScan photogrammetry software backed by either Avere vFXT storage or the BeeGFS parallel file system.

We show you how to set up PhotoScan on Azure Virtual Machines (VMs). High-performance storage accelerates processing time, and the results of the author's benchmark tests are included. This environment can be scaled up and down as needed and supports terabytes of storage without sacrificing performance.

Download the 30-page eBook on Azure.com:

Table of contents (ToC):

  1. Introduction to PhotoScan on Azure
  2. Prerequisites
  3. Architecture with Avere vFXT storage
  4. Architecture with BeeGFS storage
  5. How the templates work
  6. Deploy the solution
  7. Benchmark results
  8. Download a sample dataset
  9. Learn more
  10. Appendix
    • BeeGFS parameter files
    • PhotoScan parameter files

 

This eBook was authored by AzureCAT Senior Program Manager Paulo Marques da Costa and edited by Nanette Ray.

Download the eBook here.

 

Azure CAT Guidance

"Hands-on solutions, with our heads in the Cloud!"

Experiencing Data Access Issue in Azure and OMS portal for Log Analytics – 01/04 – Investigating

Update: Friday, 04 January 2019 11:21 UTC

We continue to investigate issues within Log Analytics. Some customers continue to experience data access issues. Initial findings indicate that the problem began at 01/04 ~10:00 UTC.
  • Work Around: None
  • Next Update: Before 01/04 14:30 UTC

-Naresh


A trick for keeping an object alive in a C++ lambda while still being able to use the this keyword to refer to it



You may want to capture your this pointer into a C++ lambda, but that captures the raw pointer. If you need to extend the object's lifetime, you will need to capture a strong reference. For plain C++ code, this would be a std::shared_ptr. For COM objects, this is usually some sort of smart pointer class like ATL::CComPtr, Microsoft::WRL::ComPtr, or winrt::com_ptr.



// std::shared_ptr
// (requires that ThisClass derive from std::enable_shared_from_this<ThisClass>)
auto callback = [self = shared_from_this()]() {
    self->DoSomething(self->m_value);
    self->DoSomethingElse();
};

// WRL::ComPtr
auto callback = [self = Microsoft::WRL::ComPtr<ThisClass>(this)]() {
    self->DoSomething(self->m_value);
    self->DoSomethingElse();
};

// ATL::CComPtr
auto callback = [self = ATL::CComPtr<ThisClass>(this)]() {
    self->DoSomething(self->m_value);
    self->DoSomethingElse();
};

// winrt::com_ptr
template<typename T>
auto to_com_ptr(T* p) noexcept
{
    winrt::com_ptr<T> ptr;
    ptr.copy_from(p);
    return ptr;
}

auto callback = [self = to_com_ptr(this)] {
    self->DoSomething(self->m_value);
    self->DoSomethingElse();
};



A common variation of the capture-a-strong-reference-to-yourself pattern is to capture both a strong reference and a raw this. The strong reference keeps the this alive, and you use the this for convenient access to members.



// std::shared_ptr
auto callback = [lifetime = shared_from_this(), this]() {
    DoSomething(m_value); // was self->DoSomething(self->m_value);
    DoSomethingElse();    // was self->DoSomethingElse();
};

// WRL::ComPtr
auto callback = [lifetime = Microsoft::WRL::ComPtr<ThisClass>(this), this]() {
    DoSomething(m_value);
    DoSomethingElse();
};

// ATL::CComPtr
auto callback = [lifetime = ATL::CComPtr<ThisClass>(this), this]() {
    DoSomething(m_value);
    DoSomethingElse();
};

// winrt::com_ptr
auto callback = [lifetime = to_com_ptr(this), this]() {
    DoSomething(m_value);
    DoSomethingElse();
};



I like to give the captured strong reference a name like lifetime to emphasize that its purpose is to extend the lifetime of the this pointer. Otherwise, somebody might be tempted to "optimize" out the seemingly-unused variable.
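
For the std::shared_ptr flavor, remember that shared_from_this() only works if the class derives from std::enable_shared_from_this and the object is already owned by a shared_ptr. A minimal self-contained sketch (class and member names are illustrative):

#include <memory>

struct Widget : std::enable_shared_from_this<Widget>
{
    int m_value = 42;
    void DoSomething(int) {}
    void DoSomethingElse() {}

    auto MakeCallback()
    {
        // "lifetime" keeps the object alive as long as the lambda exists;
        // the raw "this" capture gives convenient member access.
        return [lifetime = shared_from_this(), this] {
            DoSomething(m_value);
            DoSomethingElse();
        };
    }
};

int main()
{
    auto widget = std::make_shared<Widget>();
    auto callback = widget->MakeCallback();
    widget.reset(); // the Widget stays alive via the captured lifetime
    callback();     // still safe to call
}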

Check out 2019’s first Friday Five!


Getting Started with Azure Functions

Jaliya Udagedara is originally from Sri Lanka and currently based in New Zealand. He has been a Microsoft MVP since January 2014, initially in the Visual C# category, then .NET, and now Developer Technologies. His passion is everything related to .NET. Jaliya is also a TechNet Wiki Ninja and a blog author at the TNWiki Ninjas' official blog. Follow him on Twitter @JaliyaUdagedara

 

Custom Vision on the Raspberry Pi (ONNX & Windows IoT)

Henk Boelman works as a Cloud Solutions Architect in the Netherlands. He started out as a software developer in the late '90s and later moved on to the role of architect. He now guides organizations in their cloud adventure, with a strong focus on cloud native software development. During these years, Henk has built and designed numerous web-based platforms for small and large companies. He loves to share his knowledge on topics such as DevOps, Azure and Cognitive Services by providing training courses and he is a regular speaker at user groups and conferences. In June 2018 he received a Microsoft MVP award in the AI category. Follow him on Twitter @hboelman

Adventures of a Cloud Operator: Monitoring Azure Stack with SCOM

Daniel Apps is a Cloud and Datacenter MVP focusing on Azure Stack, System Center and Windows Server Software Defined. He is a Solutions Architect at Vigilant.IT with over 20 years' experience in the industry. In recent years, Daniel has spent most of his time helping clients master Hyper-V, S2D, SDN and System Center, with a more recent shift to Azure Stack and Azure. He has spoken at Microsoft Ignite Australia and Experts Live and is a regular at local user groups. Follow him on Twitter @daniel_apps.


Windows 10 Multi-Sessions as the RDSH of the future (and running Edge as a published App!)

Freek Berson is an Infrastructure specialist at Wortell, a system integrator company based in the Netherlands. There he focuses on End User Computing and related technologies, mostly on the Microsoft platform. He is also a managing consultant at rdsgurus.com. He maintains his personal blog at themicrosoftplatform.net, where he writes articles related to Remote Desktop Services, Azure and other Microsoft technologies. An MVP since 2011, Freek is also an active moderator on TechNet Forum and a contributor to the Microsoft TechNet Wiki. He speaks at conferences including BriForum, E2EVC and ExpertsLive. Join his RDS Group on LinkedIn here. Follow him on Twitter @fberson.


Machine Learning DotNet for Clustering Model: Getting Started

Syed Shanu is a Microsoft MVP, a two-time C# MVP, and a two-time Code Project MVP. Syed is also an author, blogger and speaker. He's from Madurai, India, and works as a Technical Lead in South Korea. With more than 10 years of experience with Microsoft technologies, Syed is active in the community and always happy to share his knowledge on topics related to ASP.NET, MVC, ASP.NET Core, Web API, SQL Server, UWP and Azure, among others. He has written more than 70 articles on various technologies. He's also a several-time TechNet Guru Gold winner. You can see his contributions to MSDN and TechNet Wiki here. Follow him on Twitter @syedshanu3.

NEW WHITEPAPER: Microsoft Azure Government cloud for mainframe applications


Planning an application migration is the ideal time to add value and agility to even well-established mainframe workloads.

In this quick guide, Larry Mead of AzureCAT shows how United States government agencies and their partners can use Azure Government for mainframe applications—and migration may not be as difficult as you think. Azure Government delivers the advantages of a mainframe in a more cost-efficient and agile environment. In addition, Azure Government earned a Provisional Authority to Operate (P-ATO) for FedRAMP High Impact.

 

Download the whitepaper on Azure.com:

 

Customer example architecture:

 

Table of Contents (ToC):

  1. Why consider mainframe migration?
  2. Types of mainframes
  3. Types of mainframe applications
  4. Online applications on Azure Government
  5. Batch applications on Azure Government
  6. Scale and throughput for Azure Government
  7. Summary
  8. Learn more

 

Authored by AzureCAT Program Manager, Larry Mead. Edited by Nanette Ray.

Download the whitepaper here.

 

Azure CAT Guidance

"Hands-on solutions, with our heads in the Cloud!"



Drawing Many Shapes with Combination of Small Basic Shapes


Small Basic Shapes has only four types of shapes: rectangle, ellipse, triangle, and line. To draw more kinds of shapes, we can combine the four basic shapes, but some calculation is needed for each case. So I started writing sample programs to draw many kinds of shapes, and I have finished the first five. I have also written a document about this, "Drawing Shapes A-Z Notebook".

This version of the document contains the following shapes.

  • Connected Circles (JLD998)
  • Round Triangle and Round Hat (TBS366)
  • Leaf (SWQ334)
  • Trapezoid (RSQ369)
  • Parallelogram (QWF833)
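
As a taste of the technique, here is a minimal sketch (not one of the five programs above) that draws a stadium shape - a bar with rounded ends - by combining one rectangle with two circles; the sizes and positions are illustrative:

' A 200x60 bar with rounded ends, built from three basic shapes
GraphicsWindow.BrushColor = "Green"
bar = Shapes.AddRectangle(200, 60)
capLeft = Shapes.AddEllipse(60, 60)
capRight = Shapes.AddEllipse(60, 60)
Shapes.Move(bar, 100, 100)       ' rectangle body from x=100 to x=300
Shapes.Move(capLeft, 70, 100)    ' circle centered on the left edge
Shapes.Move(capRight, 270, 100)  ' circle centered on the right edge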

Enjoy Small Basic shapes!  Thanks.

Announcement: the Visual Studio Support Team blog is closing


Hello, this is the Visual Studio Support Team.

As of January 2019, this blog is closing as part of the renewal of our company's systems.

Going forward, we will publish the same kind of information on the following dedicated forum:

Visual Studio Support Team Forum

Thank you very much to all of the customers who have read this blog.

We will continue to share information that helps developers, and we appreciate your continued support.

Webinar 1/8: How to effectively tell a story with your data and Power BI with Ruth Pozuelo Martinez


You have millions of rows of data stored in your data warehouse, accessible and available to business users and report users, but how do you effectively tell a story using all that data? How do you unlock the secrets and the potential your data holds?
In this session, Ruth will go through tips and tricks on how to effectively visualize your data, the do's and don'ts, and how to present your data so questions can be asked and answered in the same report. Empower your business with data using Power BI.

Where: https://www.youtube.com/watch?v=E8Mz18i_po4
When: 1/8/2019 10AM PST

Ruth Pozuelo Martinez

About the Presenter

Ruth Pozuelo Martinez is the MD and owner of Curbal AB, a BI consultancy based in Sweden. Ruth has a Mechanical Engineering degree from the University of Oviedo (Spain) and an Aeronautical Engineering degree from the University of Wales.
Ruth is also a Microsoft Data Platform MVP, mainly for her contributions to the Power BI community.
She publishes weekly videos on her YouTube channel (http://aka.ms/Curbal), where she has more than 17k followers (at the time of writing). She also contributes to her own company blog at curbal.com as well as the Microsoft Power BI community.
Ruth is a Power BI User Group leader and arranges around 4-5 meetups every year in Stockholm.
Besides that, she also speaks at other meetups; the latest were the IoT meetup in Stockholm and Microsoft TechDays.

Manage release permissions


Last week, a customer reported that they were not able to manage the folder permissions of their release definitions: they had denied the "manage permissions" permission, and as a result the UI to manage security was not showing up, even for project collection administrators. We fixed the UI bug in our latest code so that it always shows the security dialog, and Nishu Bansal wrote a PowerShell script that uses the security REST APIs to manage the security. I am pasting the script here so that you can learn how to programmatically manage RM security.


$accountName = "<myaccount>"
$projectName = "myproject"
$personalAccessToken = "<my token>"
$user = "aseemb"
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $user, $personalAccessToken)))

function getProjectId() {
    $projects = ((Invoke-RestMethod -Method Get -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -ContentType "application/json" -Uri https://$accountName.visualstudio.com/_apis/projects?api-version=4.1).value | Select-Object id, name)
    $projects | % {
        if ($_.name -ieq $projectName) {
            return $_.id
        }
    }
}

function getACEObject([string] $descriptor, [int] $allow, [int] $deny) {
    $aceObject = New-Object PSObject
    $aceObject | Add-Member -type NoteProperty -name descriptor -Value $descriptor
    $aceObject | Add-Member -type NoteProperty -name allow -Value $allow
    $aceObject | Add-Member -type NoteProperty -name deny -Value $deny
    return $aceObject
}

function Set-ACE([string] $folderPath) {
    $projectId = getProjectId
    $token = ("{0}/{1}" -f $projectId, $folderPath)
    $result = ((Invoke-RestMethod -Method Get -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -ContentType "application/json" -Uri "https://$accountName.vsrm.visualstudio.com/_apis/AccessControlLists/c788c23e-1b46-4162-8f5e-d7585343b5de?token=$token&api-version=4.1&includeExtendedInfo=true").value)
    $ace = ($result | Select-Object acesDictionary).acesDictionary

    Write-Output ("Existing ACLs")
    foreach ($info in $ace.PSObject.Properties) {
        $aceObject = getACEObject -descriptor $info.Value.descriptor -allow $info.Value.allow -deny $info.Value.deny
        Write-Output ($aceObject)
    }

    $aces = @()
    foreach ($info in $ace.PSObject.Properties) {
        $descriptor = $info.Value.descriptor
        # To enable the Allow bit for the Administer release permission instead, use:
        # $allowValue = $info.Value.allow -bor 512
        $allowValue = $info.Value.allow
        $denyValue = ($info.Value.deny -band (-bnot (1 -shl 9)))  # Clear the Deny bit for the Administer release permission
        $aceObject = getACEObject -descriptor $descriptor -allow $allowValue -deny $denyValue
        Write-Output ("Setting new ACE")
        Write-Output ($aceObject)
        $aces += $aceObject
    }

    if ($aces.count -gt 0) {
        Write-Output ("Setting new ACLs")
        Write-Output ($token)
        $aclObject = New-Object PSObject
        $aclObject | Add-Member -type NoteProperty -name accessControlEntries -Value $aces
        $aclObject | Add-Member -type NoteProperty -name merge -Value $false
        $aclObject | Add-Member -type NoteProperty -name token -Value $token

        $request = $aclObject | ConvertTo-Json
        $result = ((Invoke-RestMethod -Method POST -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -ContentType "application/json" -Uri "https://dev.azure.com/$accountName/_apis/accesscontrolentries/c788c23e-1b46-4162-8f5e-d7585343b5de?api-version=5.0-preview.1" -Body $request).value)
        Write-Output ("Output")
        Write-Output ($result | ConvertTo-Json)
    }
}

Set-ACE -folderPath "/"
Enjoy !!
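
A note on the hard-coded GUID c788c23e-1b46-4162-8f5e-d7585343b5de in the URIs above: it is the ID of the ReleaseManagement security namespace. To confirm it, or to find the namespace ID for another service, you can list all security namespaces; a sketch reusing the variables from the script (the api-version may vary):

$namespaces = (Invoke-RestMethod -Method Get `
    -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} `
    -ContentType "application/json" `
    -Uri "https://dev.azure.com/$accountName/_apis/securitynamespaces?api-version=5.0").value
$namespaces | Select-Object namespaceId, name | Sort-Object name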

The GetRegionData function fails if the buffer is allocated on the stack. Is it allergic to stack memory or something?



If you pass a NULL buffer to the GetRegionData function, the return value tells you the required size of the buffer in bytes. You can then allocate the necessary memory and call GetRegionData a second time.



DWORD bytesRequired = GetRegionData(hrgn, 0, NULL);
RGNDATA* data = (RGNDATA*)malloc(bytesRequired);
data->rdh.dwSize = sizeof(data->rdh);
DWORD bytesUsed = GetRegionData(hrgn, bytesRequired, data);


This version of the code works just fine. We call the GetRegionData function to obtain the number of bytes required, then allocate that many bytes, and then call GetRegionData again to get the bytes.



However, this version doesn't work:



struct REGIONSTUFF
{
    ...
    char buffer[USUALLY_ENOUGH];
    ...
};

REGIONSTUFF stuff;
DWORD bytesRequired = GetRegionData(hrgn, 0, NULL);
RGNDATA* data = (RGNDATA*)(bytesRequired > sizeof(stuff.buffer) ?
                           malloc(bytesRequired) : stuff.buffer);
data->rdh.dwSize = sizeof(data->rdh);
DWORD bytesUsed = GetRegionData(hrgn, bytesRequired, data);



The idea here is that we preallocate a stack buffer that profiling tells us is usually big enough to hold the desired data. If the required size fits in our preallocated stack buffer, then we use it. Otherwise, we allocate the buffer from the heap. (Related.)



This version works fine in the case where the number of bytes required is larger than our preallocated stack buffer, so that the actual buffer is on the heap.

But this version fails (returns zero) if we decide to use the preallocated stack buffer.

Is GetRegionData allergic to stack memory?



No. That's not the problem.

My psychic powers told me that the ... at the start of struct REGIONSTUFF had a total size that was not a multiple of four. The buffer member therefore was at an address that was misaligned for a RGNDATA, causing the code to run afoul of one of the basic ground rules for programming:

  • Pointers are properly aligned.

And indeed, it turns out that the members at the start of the structure did indeed have a total size that was not a multiple of four. Let's say it went like this:



struct REGIONSTUFF
{
    HRGN hrgn;
    char name[15];
    char buffer[USUALLY_ENOUGH];
};


To fix this, you need to align the buffer the same way as a RGNDATA. One way to do this is with a union.


struct REGIONSTUFF
{
    HRGN hrgn;
    char name[15];
    union {
        char buffer[USUALLY_ENOUGH];
        RGNDATA data;
    } u;
};

REGIONSTUFF stuff;
DWORD bytesRequired = GetRegionData(hrgn, 0, NULL);
RGNDATA* data = (RGNDATA*)(bytesRequired > sizeof(stuff.u.buffer) ?
                           malloc(bytesRequired) : stuff.u.buffer);
data->rdh.dwSize = sizeof(data->rdh);
DWORD bytesUsed = GetRegionData(hrgn, bytesRequired, data);



Another way is to use an alignment annotation. The appropriate annotation varies depending on which compiler you are using.


// Microsoft Visual C++
__declspec(align(__alignof(RGNDATA)))
char buffer[USUALLY_ENOUGH];

// gcc
char buffer[USUALLY_ENOUGH]
__attribute__((aligned(__alignof__(RGNDATA))));

// C++11
alignas(RGNDATA)
char buffer[USUALLY_ENOUGH];
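
If you would rather have the compiler catch this class of bug than debug it at run time, you can assert the buffer's position directly; a sketch (C++11, applied to the unannotated structure):

#include <cstddef> // offsetof

// Fails to compile for the original structure: buffer lands at
// offset sizeof(HRGN) + 15, which is not a multiple of alignof(RGNDATA).
static_assert(offsetof(REGIONSTUFF, buffer) % alignof(RGNDATA) == 0,
              "REGIONSTUFF::buffer is misaligned for RGNDATA");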

Setting up a Docker Containers Infrastructure with Azure DevOps


I frequently write code that is meant for public consumption. It might not be perfect code, but I make an effort to make it available for others to peruse either way.

The general distinction I use when choosing a repository for a given solution is pushing it to GitHub if it is public, while using Azure DevOps if it's intended for internal usage. (For home at least - for business there are other factors and other solutions.) If I go with a private repo, it's also a natural fit to pipe it from Azure DevOps into Azure Container Registry (ACR) if I'm doing containers. This is nice and dandy for most purposes, but I sometimes want "hybrids". I want the code to be public, but I want to easily deploy it to my own Azure services at the same time. I want to provide public container images, but ACR only supports private registries. And I certainly do not want to maintain parallel setups if I can avoid it.

An example scenario would be doing code samples related to Azure AD. The purpose of the code is of course that everyone can test it as easily as possible, but at the same time I obviously have some parameters that I want to keep out of the code while still being able to test things myself. Currently my AAD samples are one big Visual Studio solution with a bunch of projects inside - great for browsing on GitHub. Less scalable for my own QA purposes if I want to actually deploy them to places other than localhost 🙂

The beauty of Azure DevOps is that while it is one "product" there are several independent modules inside. Sure, you can use everything from a-z, but you can also be selective and only choose what you like. In my case this would be looking closer at the Pipelines feature.

I am aware of things like GitHub Actions and direct integration with Docker Hub, so it's not that I'm confused about that part. It's just that I like features like the boards in Azure DevOps, as well as being able to easily deploy to my AKS clusters, so I thought I'd take a crack at combining some of these things.

The high-level flow would be something like this:
- Write code in Visual Studio. Add the necessary Docker config, Helm charts, etc.
- Push to a GitHub repo.
- Pull said GitHub repo into Azure DevOps.
- Build the code in Azure DevOps, and push images to Docker Hub while pushing to Azure Container Registry in parallel.
- Provided the Docker image was built to support it, I can supply a config file, a Kubernetes secret, or something similar and push it to an AKS cluster for testing and demo purposes.

Easy enough, I'd assume. I didn't easily locate any guides on this setup though, so I'm putting it into writing just in case I'm not the only one wondering about this 🙂 Not rocket science to figure out, but still nice to have screenshots of it.

I ran through the wizard in Visual Studio to create a HelloDocker web app. Afterwards, I added Docker support for Linux containers. The default Dockerfile isn't perfect for Azure DevOps pipelines though, so I created one for that purpose called Dockerfile.CI:

FROM microsoft/dotnet:2.2-sdk AS build-env
WORKDIR /app

# Copy csproj and restore
COPY *.csproj ./
RUN dotnet restore

# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out

# Build runtime image
FROM microsoft/dotnet:2.2-aspnetcore-runtime
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "HelloDocker.dll"]
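
To sanity-check the file locally before wiring up a pipeline, you can build and run it with the Docker CLI; the tag is illustrative:

docker build -f Dockerfile.CI -t hellodocker:ci .
docker run --rm -p 8080:80 hellodocker:ci
# the aspnetcore-runtime image listens on port 80 by default,
# so the app should answer on http://localhost:8080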

The next step is pushing it to GitHub - I'll assume you've got that part covered as well. I have also made sure I have a Docker Hub id and the credentials for it ready.

So, let's move into Azure DevOps and create a project:

Then you will want to navigate to the Project Settings, and Service connections for Pipelines:

First let's add a GitHub repo - you have two choices:
Use External Git:

Use GitHub:

External Git will let you add any random repo on GitHub, whereas the GitHub connection requires you to have permissions to the GitHub account. (The benefits are that you can browse the repo instead of typing in paths manually, you can report back to GitHub that releases should be bundled up, etc. so provided it is your repo you should consider using a GitHub connection.)

That means you have to login and consent for the GitHub connection:

Since we're already in the service connection let's add a Docker registry as well:

Then we will move on to creating a pipeline.

If you chose an External Git repo choose that as a source:

Or GitHub as a source if that's what you set up in the previous step - that requires you to choose the repository you want to work with:

There are a number of templates to choose from, but to make it easy you can opt for the Docker container template:

I chose to do builds both for ACR and Docker hub in the same pipeline by adding two more Docker tasks. (For simple customization needs this might not be required, but options are always great.)

The pipeline then needs a task that builds the image from Dockerfile.CI and a task that pushes the image to the registry; the sketch below shows the idea.
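
In YAML form, a rough equivalent of such a build-and-push step might look like this; the service connection name (DockerHub) and the repository are assumptions:

steps:
- task: Docker@2
  displayName: Build and push to Docker Hub
  inputs:
    command: buildAndPush
    containerRegistry: DockerHub        # service connection created earlier
    repository: mydockerid/hellodocker  # illustrative Docker Hub repository
    Dockerfile: '**/Dockerfile.CI'
    tags: $(Build.BuildId)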

Kick off the build, and hopefully you will get something along these lines:

Which means you should also have an image available for everyone:

And there you have it - pull from Docker Hub into Azure Web Apps, replace a number of files and push a customized image into ACR only to pull it into AKS afterwards. Well, you catch my drift 🙂


The PowerShell-Docs repo is moving


On January 16, 2019 at 5:00 PM PST, the PowerShell-Docs repositories are moving from the PowerShell organization to the MicrosoftDocs organization in GitHub.

The tools we use to build the documentation are designed to work in the MicrosoftDocs org. Moving
the repository lets us build the foundation for future improvements in our documentation experience.

Impact of the move

During the move there may be some downtime. The affected repositories will be inaccessible during
the move process. Also, the documentation processes will be paused. After the move, we need to test
access permissions and automation scripts.

After these tasks are complete, access and operations will return to normal. GitHub automatically
redirects requests to the old repo URL to the new location.

For more information about transferring repositories in GitHub,
see About repository transfers.

  • If the transferred repository has any forks, then those forks will remain associated with the
    repository after the transfer is complete.
  • All Git information about commits, including contributions, is preserved.
  • All of the issues and pull requests remain intact when transferring a repository.
  • All links to the previous repository location are automatically redirected to the new location.

When you use git clone, git fetch, or git push on a transferred repository, these commands will
redirect to the new repository location or URL.

However, to avoid confusion, we strongly recommend updating any existing local clones to point to
the new repository URL. You can do this by using git remote on the command line:

git remote set-url origin new_url

For more information, see Changing a remote's URL.
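
For example, for the main repository (an HTTPS clone; the repository name stays the same, only the organization changes):

git remote set-url origin https://github.com/MicrosoftDocs/PowerShell-Docs.git
git remote -v   # verify the new URL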

Which repositories are being moved?

The following repositories are being transferred:

  • PowerShell/PowerShell-Docs
  • PowerShell/powerShell-Docs.cs-cz
  • PowerShell/powerShell-Docs.de-de
  • PowerShell/powerShell-Docs.es-es
  • PowerShell/powerShell-Docs.fr-fr
  • PowerShell/powerShell-Docs.hu-hu
  • PowerShell/powerShell-Docs.it-it
  • PowerShell/powerShell-Docs.ja-jp
  • PowerShell/powerShell-Docs.ko-kr
  • PowerShell/powerShell-Docs.nl-nl
  • PowerShell/powerShell-Docs.pl-pl
  • PowerShell/powerShell-Docs.pt-br
  • PowerShell/powerShell-Docs.pt-pt
  • PowerShell/powerShell-Docs.ru-ru
  • PowerShell/powerShell-Docs.sv-se
  • PowerShell/powerShell-Docs.tr-tr
  • PowerShell/powerShell-Docs.zh-cn
  • PowerShell/powerShell-Docs.zh-tw

Call to action

If you have a fork that you cloned, change your remote configuration to point to the new upstream URL.

Help us make the documentation better.

  • Submit issues when you find a problem in the docs.
  • Suggest fixes to documentation by submitting changes through the PR process.

 

Sean Wheeler
Senior Content Developer for PowerShell
https://github.com/sdwheeler

Goodbye


Microsoft is retiring this blogging platform, so the NDIS blog will be removed soon. If you have any articles that you need to reference, please save them locally.

GitHub Free now with unlimited private repos


As of today, GitHub's new licensing terms for individuals and small teams of up to three people are in effect: not only public repositories but also private repositories are now free of charge.

https://github.com/pricing

At the same time, the brand for large companies is being consolidated and unified as "Enterprise".

Going forward, you will encounter these tiers:

  • GitHub Free
  • GitHub Pro
  • GitHub Team
  • GitHub Enterprise

Don't forget that GitHub users can now use free hosted build servers (Linux, Mac, Windows) through Azure Pipelines.

Buri

Introducing Azure IoT Tools for Visual Studio Code


Azure IoT Tools for VS Code is an extension pack for Visual Studio Code that gives you everything you need for Azure IoT development with a one-click installation. Microsoft Azure IoT support for Visual Studio Code is provided through a rich set of extensions that make it easy to discover and interact with the Azure IoT Hub instances that power your IoT Edge and device applications.

Installation

By installing this extension pack, you install all of the extensions it contains. Some of these extensions have a dependency on the Azure Account extension to provide a single Azure login and subscription filtering experience.

You can easily uninstall individual extensions if you are not interested in using them, without affecting the other extensions provided by this pack. You can uninstall all of the extensions at once by uninstalling the Azure IoT Tools extension pack.

Set up your Azure IoT hub in VS Code

Setting up your Azure IoT hub in VS Code is the first thing to do after installation. Once it is set up, you will see the device list and can interact with your IoT hub and devices.

  1. In the Explorer of VS Code, click "Azure IoT Hub Devices" in the bottom left corner.
  2. Click "Select IoT Hub" in the context menu.
  3. If you have not signed in to Azure, a pop-up will appear to let you sign in.
  4. After signing in, your Azure subscription list will be shown; select an Azure subscription.
  5. Your IoT Hub list will be shown; select an IoT Hub.
  6. The device list will be shown.

You can access almost all Azure IoT development functionalities provided by these extensions through the Command Palette. Simply press F1, then type in IoT to find the available commands. Specifically, if you are interested in IoT device application development, you can visit the IoT DevKit website for more samples and tutorials. If you want to learn more about Azure IoT Edge development, you can always find tutorials on Docs covering how to use these commands to create a new project, debug modules, and deploy to your Azure IoT Edge devices.

Suggestions and Feedback

This project is open-sourced on GitHub. If you have any feature requests or encounter any issues during your daily usage, don't hesitate to create an issue on our GitHub repository. We are all ears.

PIX and playback adapter selection


This is probably my favorite feature of the release, announced here: a drop-down to select which GPU to play back on.

So you can go with WARP, and, more importantly, if you have something like a Surface Book with multiple GPUs, you can target the one you specifically want.

Enjoy!

 
