Hi everyone,
We are migrating the content of the blog site over to http://blogs.aaddevsup.xyz/. Please bookmark this new link.
This blog post is intended for Exchange administrators who use PowerShell scripts to consume the Outlook REST API v1.0 endpoint (https://outlook.office365.com/api/v1.0), authenticating with user credentials.
With Basic Authentication against the REST API v1.0 being deprecated, a transition is required to the Microsoft Graph API or to the REST API v2.0 endpoint.
One way to acquire a token to consume RESTful services without user interaction is the ROPC flow. The Resource Owner Password Credentials grant type is suitable in cases where the resource owner has a trust relationship with the client, such as the device operating system or a highly privileged application.
Azure Active Directory (Azure AD) supports the resource owner password credential (ROPC) grant, which allows an application to sign in the user by directly handling their password. The ROPC flow requires a high degree of trust and user exposure, and developers should only use this flow when the other, more secure, flows can't be used.
1. In the Azure Portal click on Azure Active Directory in the navigation menu on the left then click on App registrations then click on New application registration on the right
2. Fill in the Name and the Sign-on URL fields and click Create
3. Click on Settings
4. On the Settings page click on Required permissions
5. On the Required permissions page click Add
6. On the Add API access page click Select an API
7. On the Select an API page select Office 365 Exchange Online and click Select
8. On the Enable Access page, click the checkboxes for the appropriate access then click Select
9. Create a new client secret for the application: on the Settings page, click on Keys
To generate a new Key (client secret) type in a Description, select the validity period in the Expires column and click Save
10. Make a copy of the key value as you won't be able to retrieve it if you navigate away from this page
* Application Permissions: Your client application needs to access the web API directly as itself (no user context). This type of permission requires administrator consent and is also not available for native client applications.
** Delegated Permissions: Your client application needs to access the web API as the signed-in user, but with access limited by the selected permission. This type of permission can be granted by a user unless the permission requires administrator consent.
Add-Type -AssemblyName System.Web   # UrlEncode lives in System.Web, which isn't loaded by default

$client_id     = [System.Web.HttpUtility]::UrlEncode("<<<< client id >>>>")
$client_secret = [System.Web.HttpUtility]::UrlEncode("<<<< client secret >>>>")
$tenant        = [System.Web.HttpUtility]::UrlEncode("<<<< tenant >>>>")
$user          = [System.Web.HttpUtility]::UrlEncode("<<<< user >>>>")
$password      = [System.Web.HttpUtility]::UrlEncode("<<<< password >>>>")
$mailbox       = [System.Web.HttpUtility]::UrlEncode("<<<< mailbox >>>>")

$AuthUri  = "https://login.microsoftonline.com/" + $tenant + "/oauth2/v2.0/token"
$AuthBody = "grant_type=password" +
    "&client_id=" + $client_id +
    "&client_secret=" + $client_secret +
    "&username=" + $user +
    "&scope=" + [System.Web.HttpUtility]::UrlEncode("https://outlook.office.com/mail.read") + "%20offline_access" +
    "&password=" + $password

$Authorization = Invoke-RestMethod -Method Post `
    -ContentType application/x-www-form-urlencoded `
    -Uri $AuthUri `
    -Body $AuthBody
The AuthUri is the OAuth 2.0 token endpoint.
The grant type must be password for the ROPC flow.
The client id is the application id of the newly registered application.
The client secret is the key created in our web application.
The scope is as defined in the API access.
The user and password are the credentials of the user consuming the API.
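Invoke-RestMethod deserializes the JSON token response into an object. A successful Azure AD v2.0 token response carries the standard OAuth2 fields, which you can inspect before making the mail request (a quick sketch):

# Inspect the deserialized token response (standard OAuth2 fields).
$Authorization | Format-List token_type, scope, expires_in

# The access_token property is what we pass as the Bearer token below.
$Authorization.access_token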
$requestHeaders = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"

# For optimal performance when using the new Outlook REST endpoint,
# add an X-AnchorMailbox header for every request and set it to the user's email address.
$requestHeaders.Add('X-AnchorMailbox', $mailbox)

# Each message in the response contains multiple properties, including the Body property.
# The message body can be either text or HTML. If the body is HTML, by default,
# any potentially unsafe HTML (for example, JavaScript) embedded in the Body property
# is removed before the body content is returned in a REST response.
# To get the entire, original HTML content, include the following HTTP request header.
$requestHeaders.Add('Prefer', 'outlook.allow-unsafe-html')

$requestHeaders.Add('Authorization', "Bearer " + $Authorization.access_token)

# Get messages
# You can get a message collection or an individual message from a mailbox folder.
# https://docs.microsoft.com/en-us/previous-versions/office/office-365-api/api/version-2.0/mail-rest-operations#GetMessages
$requestUri = "https://outlook.office.com/api/v2.0/users/" + $mailbox + "/messages"

$request = Invoke-RestMethod -Headers $requestHeaders `
    -Uri $requestUri `
    -Method Get

$request.value
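Each entry in $request.value is a message object. For a quick look, you can project a couple of well-known properties of the v2.0 Message resource:

# Show the subject and received time of the first five messages returned.
$request.value | Select-Object -First 5 Subject, ReceivedDateTime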
Download the PowerShell script here
Azure Active Directory v2.0 and the OAuth 2.0 resource owner password credential
Resource Owner Password Credentials Grant
Use the Outlook REST API (version 2.0)
Transition to Microsoft Graph API
Date | Author | Type | Description
2019-01-02 | Pedro Tomás e Silva | Original |
A customer couldn't get the IShellLink interface to work. They tried to set the shortcut target to a path, but it came out as Chinese mojibake. Here's a reduction of their code to its simplest form.
HRESULT CreateLink()
{
  HRESULT hr;
  IShellLinkA* psl;

  hr = CoCreateInstance(CLSID_ShellLink, NULL, CLSCTX_INPROC_SERVER,
                        IID_IShellLink, (LPVOID*)&psl);
  if (SUCCEEDED(hr)) {
    IPersistFile* ppf;

    psl->SetPath("C:\\Windows"); // this comes out as mojibake

    hr = psl->QueryInterface(IID_IPersistFile, (LPVOID*)&ppf);
    if (SUCCEEDED(hr)) {
      hr = ppf->Save(L"C:\\Test\\Test.lnk", TRUE);
      ppf->Release();
    }
    psl->Release();
  }
  return hr;
}
(You can see that this customer used to be a C programmer,
because all variable declarations are at the start of blocks.
Also, because they aren't using RAII.)
The problem is hidden in the call to CoCreateInstance:
hr = CoCreateInstance(CLSID_ShellLink, NULL, CLSCTX_INPROC_SERVER,
IID_IShellLink, (LPVOID*)&psl);
// -------------- -------------
Observe that the requested interface is IID_IShellLink, but the result is placed into a pointer to IShellLinkA. This mismatch should raise a warning flag.
It appears that the program is being compiled with Unicode as the default character set, which means that IID_IShellLink is really IID_IShellLinkW. Consequently, the requested interface is IShellLinkW, and the result is placed into a pointer to IShellLinkA.
As a result of this mismatch, the call to psl->SetPath thinks that it's calling IShellLinkA::SetPath, but in reality it is calling IShellLinkW::SetPath. (The IShellLinkA and IShellLinkW interfaces have the same methods in the same order, except that one uses ANSI strings and the other uses Unicode strings.)
That is where the mojibake is coming from: an ANSI string is passed where a Unicode string is expected.
Mismatches like this can be avoided by using the IID_PPV_ARGS macro. This macro looks at the type of the pointer you pass it and autogenerates the matching REFIID, as well as casting the pointer to void**.
hr = CoCreateInstance(CLSID_ShellLink, NULL, CLSCTX_INPROC_SERVER,
IID_PPV_ARGS(&psl));
While they're at it, the customer should consider
abandoning the ANSI version altogether and just
using the Unicode one.
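For completeness, here's a sketch of the routine with both fixes applied: the Unicode interface throughout, and IID_PPV_ARGS keeping each requested IID matched to its pointer.

// Sketch of the corrected routine (Unicode interface + IID_PPV_ARGS).
HRESULT CreateLink()
{
  IShellLinkW* psl;
  HRESULT hr = CoCreateInstance(CLSID_ShellLink, NULL, CLSCTX_INPROC_SERVER,
                                IID_PPV_ARGS(&psl));
  if (SUCCEEDED(hr)) {
    psl->SetPath(L"C:\\Windows"); // Unicode string for IShellLinkW

    IPersistFile* ppf;
    hr = psl->QueryInterface(IID_PPV_ARGS(&ppf));
    if (SUCCEEDED(hr)) {
      hr = ppf->Save(L"C:\\Test\\Test.lnk", TRUE);
      ppf->Release();
    }
    psl->Release();
  }
  return hr;
}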
This 30-page eBook provides step-by-step guidance for installing Agisoft PhotoScan photogrammetry software backed by either Avere vFXT storage or the BeeGFS parallel file system.
We show you how to set up PhotoScan on Azure Virtual Machines (VMs). High performance storage accelerates processing time, and the results of the author's benchmark tests are included. This environment can be scaled up and down as needed and supports terabytes of storage without sacrificing performance.
Download the 30-page eBook on Azure.com:
Table of contents (ToC):
This eBook was authored by AzureCAT Senior Program Manager, Paulo Marques da Costa. It was edited by Nanette Ray.
Azure CAT Guidance
"Hands-on solutions, with our heads in the Cloud!"
-Naresh
You may want to capture your this pointer into a C++ lambda, but that captures the raw pointer. If you need to extend the object's lifetime, you will need to capture a strong reference. For plain C++ code, this would be a std::shared_ptr. For COM objects, this is usually some sort of smart pointer class like ATL::CComPtr, Microsoft::WRL::ComPtr, or winrt::com_ptr.
// std::shared_ptr (the class must derive from std::enable_shared_from_this)
auto callback = [self = shared_from_this()]() {
  self->DoSomething(self->m_value);
  self->DoSomethingElse();
};

// WRL::ComPtr
auto callback = [self = Microsoft::WRL::ComPtr<ThisClass>(this)]() {
  self->DoSomething(self->m_value);
  self->DoSomethingElse();
};

// ATL::CComPtr
auto callback = [self = ATL::CComPtr<ThisClass>(this)]() {
  self->DoSomething(self->m_value);
  self->DoSomethingElse();
};

// winrt::com_ptr
template<typename T>
auto to_com_ptr(T* p) noexcept
{
  winrt::com_ptr<T> ptr;
  ptr.copy_from(p);
  return ptr;
}

auto callback = [self = to_com_ptr(this)] {
  self->DoSomething(self->m_value);
  self->DoSomethingElse();
};
A common pattern for the "capture a strong reference to yourself" trick is to capture both a strong reference and a raw this. The strong reference keeps the this alive, and you use the this for convenient access to members.
// std::shared_ptr
auto callback = [lifetime = shared_from_this(), this]() {
  DoSomething(m_value); // was self->DoSomething(self->m_value);
  DoSomethingElse();    // was self->DoSomethingElse();
};

// WRL::ComPtr
auto callback = [lifetime = Microsoft::WRL::ComPtr<ThisClass>(this), this]() {
  DoSomething(m_value);
  DoSomethingElse();
};

// ATL::CComPtr
auto callback = [lifetime = ATL::CComPtr<ThisClass>(this), this]() {
  DoSomething(m_value);
  DoSomethingElse();
};

// winrt::com_ptr
auto callback = [lifetime = to_com_ptr(this), this]() {
  DoSomething(m_value);
  DoSomethingElse();
};
I like to give the captured strong reference a name like lifetime to emphasize that its purpose is to extend the lifetime of the this pointer. Otherwise, somebody might be tempted to "optimize" out the seemingly-unused variable.
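To see why the strong capture matters, here is a minimal self-contained sketch (the Widget class and its members are hypothetical):

// Minimal sketch: the captured strong reference keeps the object alive
// even after the last external shared_ptr goes away.
#include <functional>
#include <memory>

struct Widget : std::enable_shared_from_this<Widget>
{
  int m_value = 42;
  void DoSomething(int) { /* ... */ }

  std::function<void()> MakeCallback()
  {
    // "lifetime" extends the object's lifetime; "this" gives
    // convenient access to members inside the lambda body.
    return [lifetime = shared_from_this(), this] {
      DoSomething(m_value);
    };
  }
};

int main()
{
  std::function<void()> cb;
  {
    auto w = std::make_shared<Widget>();
    cb = w->MakeCallback();
  }      // w is released here...
  cb();  // ...but the captured strong reference keeps the Widget alive.
}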
Custom Vision on the Raspberry Pi (ONNX & Windows IoT)
Henk Boelman works as a Cloud Solutions Architect in the Netherlands. He started out as a software developer in the late '90s and later moved on to the role of architect. He now guides organizations in their cloud adventure, with a strong focus on cloud native software development. During these years, Henk has built and designed numerous web-based platforms for small and large companies. He loves to share his knowledge on topics such as DevOps, Azure and Cognitive Services by providing training courses and he is a regular speaker at user groups and conferences. In June 2018 he received a Microsoft MVP award in the AI category. Follow him on Twitter @hboelman.
Windows 10 Multi-Sessions as the RDSH of the future (and running Edge as a published App!)
Freek Berson is an Infrastructure specialist at Wortell, a system integrator company based in the Netherlands. Here he focuses on End User Computing and related technologies, mostly on the Microsoft platform. He is also a managing consultant at rdsgurus.com. He maintains his personal blog at themicrosoftplatform.net where he writes articles related to Remote Desktop Services, Azure and other Microsoft technologies. An MVP since 2011, Freek is also an active moderator on TechNet Forum and contributor to Microsoft TechNet Wiki. He speaks at conferences including BriForum, E2EVC and ExpertsLive. Join his RDS Group on LinkedIn here. Follow him on Twitter @fberson.
Machine Learning DotNet for Clustering Model: Getting Started
Syed Shanu is a Microsoft MVP, a two-time C# MVP and two-time Code Project MVP. Syed is also an author, blogger and speaker. He's from Madurai, India, and works as a Technical Lead in South Korea. With more than 10 years of experience with Microsoft technologies, Syed is an active person in the community and always happy to share his knowledge on topics related to ASP.NET, MVC, ASP.NET Core, Web API, SQL Server, UWP, Azure, among others. He has written more than 70 articles on various technologies. He's also a several-time TechNet Guru Gold Winner. You can see his contributions to MSDN and TechNet Wiki here. Follow him on Twitter @syedshanu3.
Planning an application migration is the ideal time to add value and agility to even well-established mainframe workloads.
In this quick guide, Larry Mead of AzureCAT shows how United States government agencies and their partners can use Azure Government for mainframe applications—and migration may not be as difficult as you think. Azure Government delivers the advantages of a mainframe in a more cost-efficient and agile environment. In addition, Azure Government earned a Provisional Authority to Operate (P-ATO) for FedRAMP High Impact.
Download the whitepaper on Azure.com:
Customer example architecture:
Table of Contents (ToC):
Authored by AzureCAT Program Manager, Larry Mead. Edited by Nanette Ray.
Azure CAT Guidance
"Hands-on solutions, with our heads in the Cloud!"
Python 2.7 has been added to the public preview of Python on Azure App Service (Linux). With this recent addition, developers can enjoy the productivity benefits and easy scaling features of Azure App Service using Python 2.7, 3.6, or 3.7. More details on the public preview of Python support on Azure App Service (Linux) are available here: https://azure.microsoft.com/en-us/blog/native-python-support-on-azure-app-service-on-linux-new-public-preview/
Small Basic Shapes has only four types of shapes: rectangle, ellipse, triangle and line. To draw more types of shapes, we can combine the basic four, but some calculation is needed for each case. So I started to write sample programs to draw many kinds of shapes, and I have finished the first five. I also wrote a document, "Drawing Shapes A-Z Notebook", about that.
This version of the document contains the following shapes.
Connected Circles (JLD998)
Round Triangle (TBS366)
Leaf (SWQ334)
Trapezoid (RSQ369)
Parallelogram (QWF833)
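As a taste of the approach - building new shapes out of the basic four - a parallelogram can be stitched together from two triangles. A rough sketch in Small Basic (the coordinates are illustrative):

' A parallelogram assembled from two triangles (sketch).
' Vertices: (0,50), (20,0), (120,0), (100,50).
t1 = Shapes.AddTriangle(0, 50, 20, 0, 120, 0)
t2 = Shapes.AddTriangle(0, 50, 120, 0, 100, 50)
Shapes.Move(t1, 100, 100)
Shapes.Move(t2, 100, 100)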
Enjoy Small Basic shapes! Thanks.
Hello, this is the Visual Studio support team.
As of January 2019, this blog is closing down as part of the renewal of our systems.
Going forward, we will publish the same kind of information through the dedicated forums below.
Thank you very much to the many readers who have followed this blog.
We will keep sharing information to help developers, and we appreciate your continued support.
You have millions of rows of data stored in your data warehouse, and you have it accessible and available for business users and report users, but how do you effectively tell a story using all that data? How do you unlock the secrets and the potential your data holds?
In this session Ruth will go through tips & tricks on how to effectively visualize your data, what the do's and don'ts are, and how to present your data so questions can be asked and answered in the same report. Empower your business with data using Power BI.
Where: https://www.youtube.com/watch?v=E8Mz18i_po4
When: 1/8/2019 10AM PST
About the Presenter
Ruth Pozuelo Martinez is the MD and owner of Curbal AB, a BI consulting company based in Sweden. Ruth has a Mechanical Engineering degree from the University of Oviedo (Spain) and an Aeronautical Engineering degree from the University of Wales.
Ruth is also a Microsoft Data Platform MVP, mainly for her contributions to the Power BI community.
She publishes weekly videos on her YouTube channel (http://aka.ms/Curbal), where she has more than 17k followers (at the time of writing). She also contributes to her own company blog, curbal.com, as well as the Microsoft Power BI community.
Ruth is the Power BI User Group leader and arranges around 4-5 meetups every year in Stockholm.
Besides that, she also speaks at other meetups; the most recent were the IoT meetup in Stockholm and Microsoft TechDays.
Last week a customer reported that they were unable to manage the folder permissions of release definitions: they had denied the "manage permissions" permission, so the UI to manage security was not showing up even for project collection administrators. We fixed the UI bug in our latest code so that it always shows the security dialog, and Nishu Bansal wrote a PowerShell script that uses the security REST APIs to manage security. I am pasting the script here so that you can learn how to programmatically manage RM security.
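The snippet below is the core of that script, and it assumes some setup that isn't shown: a PAT-based authorization header, the security token for the folder being edited, the existing ACL fetched into $ace, and a small getACEObject helper. A minimal sketch of that assumed setup (the variable and helper names come from the snippet; the values are placeholders):

# Assumed setup for the snippet below (sketch; values are placeholders).
$accountname = "<<account name>>"
$pat = "<<personal access token>>"
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes((":{0}" -f $pat)))

# Security token identifying the release definition folder whose ACL we edit.
$token = "<<security token for the folder>>"

# Shapes an access control entry the way the accesscontrolentries API expects.
function getACEObject($descriptor, $allow, $deny) {
    $aceObject = New-Object PSObject
    $aceObject | Add-Member -type NoteProperty -name descriptor -Value $descriptor
    $aceObject | Add-Member -type NoteProperty -name allow -Value $allow
    $aceObject | Add-Member -type NoteProperty -name deny -Value $deny
    return $aceObject
}

# $ace is assumed to hold the accessControlEntries of the existing ACL,
# fetched beforehand from the Release security namespace.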
Write-Output ("Existing ACLs")
foreach($info in $ace.PSObject.Properties) {
$aceObject = getACEObject -descriptor $info.Value.descriptor -allow $info.Value.allow -deny $info.Value.deny
Write-Output ($aceObject)
}
$aces = @()
foreach($info in $ace.PSObject.Properties) {
$descriptor=$info.Value.descriptor
# if ($descriptor -match '-0-0-0-0-1') {
#$allowValue = $info.Value.allow -bor 512 # Enabling Allow bit for Release Administer permissions
$allowValue = $info.Value.allow
$denyValue = ($info.Value.deny -band (-bnot (1 -shl 9))) # Disabling Deny bit for Release Administer permissions
$aceObject = getACEObject -descriptor $descriptor -allow $allowValue -deny $denyValue
Write-Output ("Setting new ACE")
Write-Output ($aceObject)
$aces += $aceObject
# }
}
if ($aces.count -gt 0 ) {
Write-Output ("Setting new ACLs")
Write-Output ($token)
$aclObject = New-Object PSObject
$aclObject | Add-Member -type NoteProperty -name accessControlEntries -Value $aces
$aclObject | Add-Member -type NoteProperty -name merge -Value $false
$aclObject | Add-Member -type NoteProperty -name token -Value $token
$request = $aclObject | ConvertTo-Json
$result=((Invoke-RestMethod -Method POST -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -ContentType "application/json" -Uri "https://dev.azure.com/$accountname/_apis/accesscontrolentries/c788c23e-1b46-4162-8f5e-d7585343b5de?api-version=5.0-preview.1" -Body $request).value)
Write-Output ("Output")
Write-Output ($result | ConvertTo-Json)
}
If you pass a NULL buffer to the GetRegionData function, the return value tells you the required size of the buffer in bytes. You can then allocate the necessary memory and call GetRegionData a second time.
DWORD bytesRequired = GetRegionData(hrgn, 0, NULL);
RGNDATA* data = (RGNDATA*)malloc(bytesRequired);
data->rdh.dwSize = sizeof(data->rdh);
DWORD bytesUsed = GetRegionData(hrgn, bytesRequired, data);
This version of the code works just fine. We call the GetRegionData function to obtain the number of bytes required, then allocate that many bytes, and then call GetRegionData again to get the bytes.
However, this version doesn't work:
struct REGIONSTUFF
{
    ...
    char buffer[USUALLY_ENOUGH];
    ...
};

REGIONSTUFF stuff;
DWORD bytesRequired = GetRegionData(hrgn, 0, NULL);
RGNDATA* data = (RGNDATA*)(bytesRequired > sizeof(stuff.buffer) ?
    malloc(bytesRequired) : stuff.buffer);
data->rdh.dwSize = sizeof(data->rdh);
DWORD bytesUsed = GetRegionData(hrgn, bytesRequired, data);
The idea here is that we preallocate a stack buffer that
profiling tells us is usually big enough to hold the desired data.
If the required size fits in our preallocated stack buffer,
then we use it.
Otherwise, we allocate the buffer from the heap.
(Related.)
This version works fine in the case where the number of bytes required is larger than our preallocated stack buffer, so that the actual buffer is on the heap. But this version fails (returns zero) if we decide to use the preallocated stack buffer. Is GetRegionData allergic to stack memory?
No. That's not the problem.
My psychic powers told me that the ... at the start of struct REGIONSTUFF had a total size that was not a multiple of four. The buffer member therefore was at an address that was misaligned for a RGNDATA, causing the code to run afoul of one of the basic ground rules for programming: a pointer must satisfy the alignment requirements of the type it points to.
And indeed, it turns out that the members at the start of the structure did indeed have a total size that was not a multiple of four. Let's say it went like this:
struct REGIONSTUFF
{
    HRGN hrgn;
    char name[15];
    char buffer[USUALLY_ENOUGH];
};
To fix this, you need to align the buffer the same way as a RGNDATA. One way to do this is with a union.
struct REGIONSTUFF
{
    HRGN hrgn;
    char name[15];
    union {
        char buffer[USUALLY_ENOUGH];
        RGNDATA data;
    } u;
};

REGIONSTUFF stuff;
DWORD bytesRequired = GetRegionData(hrgn, 0, NULL);
RGNDATA* data = (RGNDATA*)(bytesRequired > sizeof(stuff.u.buffer) ?
    malloc(bytesRequired) : stuff.u.buffer);
data->rdh.dwSize = sizeof(data->rdh);
DWORD bytesUsed = GetRegionData(hrgn, bytesRequired, data);
Another way is to use an alignment annotation.
The appropriate annotation varies depending on which
compiler you are using.
// Microsoft Visual C++
__declspec(align(__alignof(RGNDATA)))
char buffer[USUALLY_ENOUGH];

// gcc
char buffer[USUALLY_ENOUGH]
__attribute__((aligned(__alignof__(RGNDATA))));

// C++11
alignas(RGNDATA)
char buffer[USUALLY_ENOUGH];
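If you go the union route, a compile-time check can also guard against someone later rearranging the structure. A sketch, assuming the union version of REGIONSTUFF above:

#include <cstddef>

// Compile-time guard (C++11): the union's offset within the structure
// must be a multiple of RGNDATA's alignment, or the cast is invalid.
static_assert(offsetof(REGIONSTUFF, u) % alignof(RGNDATA) == 0,
              "u must be aligned for RGNDATA");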
I frequently write code that is meant for public consumption. It might not be perfect code, but I make an effort to make it available for others to peruse either way.
The general distinction I use when it comes to choosing a repository for a given solution is pushing it to GitHub if it is public, while using Azure DevOps if it's intended for internal usage. (For home at least - for business there are other factors and other solutions.) If I go with a private repo it's also a natural fit to pipe it from Azure DevOps into Azure Container Registry (ACR) if I'm doing containers. This is nice and dandy for most purposes, but I sometimes want "hybrids". I want the code to be public, but I want to easily deploy it to my own Azure services at the same time. I want to provide public container images, but ACR only supports private registries. And I certainly do not want to maintain parallel setups if I can avoid it.
An example scenario would be doing code samples related to Azure AD. The purpose of the code is of course that everyone can test it as easily as possible, but at the same time I obviously have some parameters that I want to keep out of the code while still being able to test things myself. Currently my AAD samples are one big Visual Studio solution with a bunch of projects inside - great for browsing on GitHub. Less scalable for my own QA purposes if I want to actually deploy them to places other than localhost.
The beauty of Azure DevOps is that while it is one "product" there are several independent modules inside. Sure, you can use everything from a-z, but you can also be selective and only choose what you like. In my case this would be looking closer at the Pipelines feature.
I am aware of things like GitHub Actions and direct integration with Docker Hub, so it's not that I'm confused about that part. It's just that I like features like the boards in Azure DevOps, as well as being able to easily deploy to my AKS clusters, so I thought I'd take a crack at combining some of these things.
The high level flow would be something like this:
- Write code in Visual Studio. Add the necessary Docker config, Helm charts, etc.
- Push to a GitHub repo.
- Pull said GitHub repo into Azure DevOps.
- Build the code in Azure DevOps, and push images to Docker Hub while in parallel pushing to Azure Container Registry.
- Provided the Docker image was built to support it I can supply a config file, a Kubernetes secret, or something similar and push it to an AKS cluster for testing and demo purposes.
Easy enough I'd assume. I didn't easily locate any guides on this setup though, so I'm putting it into writing just in case I'm not the only one wondering about this. Not rocket science to figure out, but still nice to have screenshots of it.
I ran through the wizard in Visual Studio to create a HelloDocker web app. Afterwards I added Docker support for Linux containers. The default Dockerfile isn't perfect for Azure DevOps pipelines though, so I created one for that purpose called Dockerfile.CI:
FROM microsoft/dotnet:2.2-sdk AS build-env
WORKDIR /app

# Copy csproj and restore
COPY *.csproj ./
RUN dotnet restore

# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out

# Build runtime image
FROM microsoft/dotnet:2.2-aspnetcore-runtime
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "HelloDocker.dll"]
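Before wiring this into a pipeline you can sanity-check the file locally (the image name and tag below are just examples):

# Build with the CI Dockerfile and run the container locally.
docker build -f Dockerfile.CI -t hellodocker:local .
docker run --rm -p 8080:80 hellodocker:local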
Next step is pushing it to GitHub - I'll assume you've got that part covered as well. I have also made sure I have a Docker Hub id, and the credentials for that ready.
So, let's move into Azure DevOps and create a project:
Then you will want to navigate to the Project Settings, and Service connections for Pipelines:
First let's add a GitHub repo - you have two choices:
Use External Git:
External Git will let you add any random repo on GitHub, whereas the GitHub connection requires you to have permissions to the GitHub account. (The benefits are that you can browse the repo instead of typing in paths manually, you can report back to GitHub that releases should be bundled up, etc. so provided it is your repo you should consider using a GitHub connection.)
That means you have to log in and consent for the GitHub connection:
Since we're already in the service connection let's add a Docker registry as well:
Then we will move on to creating a pipeline.
If you chose an External Git repo choose that as a source:
Or GitHub as a source if that's what you set up in the previous step - that requires you to choose the repository you want to work with:
There are a number of templates to choose from, but to make it easy you can opt for the Docker container template:
I chose to do builds both for ACR and Docker hub in the same pipeline by adding two more Docker tasks. (For simple customization needs this might not be required, but options are always great.)
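If you prefer expressing this as a YAML pipeline rather than the classic editor, the equivalent would look roughly like the sketch below; the service connection names (MyDockerHub, MyACR) and repository names are assumptions:

# Rough YAML sketch of the same build (service connection names are assumptions).
trigger:
- master

pool:
  vmImage: 'ubuntu-16.04'

steps:
- task: Docker@2
  displayName: Build and push to Docker Hub
  inputs:
    containerRegistry: 'MyDockerHub'
    repository: 'myuser/hellodocker'
    command: 'buildAndPush'
    Dockerfile: 'Dockerfile.CI'
    tags: '$(Build.BuildId)'

- task: Docker@2
  displayName: Build and push to ACR
  inputs:
    containerRegistry: 'MyACR'
    repository: 'hellodocker'
    command: 'buildAndPush'
    Dockerfile: 'Dockerfile.CI'
    tags: '$(Build.BuildId)'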
The build task looks like this:
Kick off the build, and hopefully you will get something along these lines:
Which means you should also have an image available for everyone:
And there you have it - pull from Docker Hub into Azure Web Apps, replace a handful of files and push a customized image into ACR, only to pull it into AKS afterwards. Well, you catch my drift.
On January 16, 2019 at 5:00PM PDT, the PowerShell-Docs repositories are moving from the PowerShell
organization to the MicrosoftDocs organization in GitHub.
The tools we use to build the documentation are designed to work in the MicrosoftDocs org. Moving
the repository lets us build the foundation for future improvements in our documentation experience.
During the move there may be some downtime. The affected repositories will be inaccessible during
the move process. Also, the documentation processes will be paused. After the move, we need to test
access permissions and automation scripts.
After these tasks are complete, access and operations will return to normal. GitHub automatically
redirects requests to the old repo URL to the new location.
For more information about transferring repositories in GitHub,
see About repository transfers.
When you use git clone, git fetch, or git push on a transferred repository, these commands will
redirect to the new repository location or URL.
However, to avoid confusion, we strongly recommend updating any existing local clones to point to
the new repository URL. You can do this by using git remote on the command line:
git remote set-url origin new_url
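Afterwards, you can confirm that the remote now points at the new location:

# List the configured remotes and their URLs.
git remote -v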
For more information, see Changing a remote's URL.
The following repositories are being transferred:
If you have a fork that you cloned, change your remote configuration to point to the new upstream URL.
Help us make the documentation better.
Sean Wheeler
Senior Content Developer for PowerShell
https://github.com/sdwheeler
Microsoft is retiring this blogging platform, so the NDIS blog will be removed soon. If you have any articles that you need to reference, please save them locally.
As of today, GitHub's new licensing terms for individuals and small teams of up to three people are in effect: not only public repositories but also private repos are now free.
At the same time, the brand for large companies is being consolidated and unified as "Enterprise".
Going forward, you will encounter these tiers:
Don't forget that GitHub users now have the option to use free hosted build servers (Linux, Mac, Windows) through Azure Pipelines.
Buri
Azure IoT Tools for VS Code is an extension pack for Visual Studio Code that gives you everything you need for Azure IoT development with one-click installation. Microsoft Azure IoT support for Visual Studio Code is provided through a rich set of extensions that make it easy to discover and interact with the Azure IoT Hub instances that power your IoT Edge and device applications. The Azure IoT Tools pack provides the following benefits:
By installing this extension pack you will install all of the extensions listed above. Some of these extensions have a dependency on the Azure Account extension to provide a single Azure login and subscription filtering experience.
You can easily uninstall individual extensions if you are not interested in using them, without affecting other extensions provided by this pack. You can uninstall all of the extensions by uninstalling the Azure IoT Tools extension pack.
Setting up your Azure IoT hub in VS Code is the first thing to do after installation. Once set up, you will see the device list and can interact with your IoT hub and devices.
You can access almost all Azure IoT development functionality provided by these extensions through the Command Palette. Simply press F1, then type in IoT to find the available commands. Specifically, if you are interested in IoT device application development, you can visit the IoT DevKit website for more samples and tutorials. If you want to learn more about Azure IoT Edge development, you can always find tutorials on Docs covering how to use these commands to create new projects, debug modules, and deploy to your Azure IoT Edge devices.
This project is open-sourced on GitHub. If you have any feature requests or encounter any issues during your daily usage, don't hesitate to create an issue on our GitHub repository. We are all ears.
This is probably my favorite feature of the release, announced here: a drop-down to select which GPU to play back on.
So you can go with WARP, and more importantly, if you have something like a Surface Book with multiple GPUs, you can target the one you specifically want.
Enjoy!