
Role of Web Based Technologies In Maximizing Code Sharing In Your Applications


In this post, Premier Developer Consultant Wael Kdouh shares alternatives for reducing web complexity and increasing code sharing. He does a side-by-side comparison of Cordova/Ionic, NativeScript, and React Native to help you explore some of your options to reduce complexity.


I was recently approached by a colleague of mine with the following question: "What are the available alternatives to reduce the current complexity of developing for multiple channels?". In a nutshell he was looking for ways to maximize the amount of client code shared among iOS, Android and Web channels while still being able to use native mobile features if/when needed. So here is the answer I shared with him which I thought would be beneficial for the wider community (not to mention that we love open sourcing everything here at Microsoft these days including our internal discussion threads).

The first question I typically start with is “What are they trying to build?” For example, if they need to squeeze out every bit of performance, then native will always come out on top.

The second question I ask is “What is their developers’ skillset?” In this case they are coming from native development, so they may be open to either C#/XAML or JavaScript (or TypeScript)/CSS/HTML.

The fact that he mentioned web presence together with the requirement for a consolidated code base makes the web sound like a viable option (we still need to address the two aforementioned questions). There are a lot of options on the web today (this could be a good thing or a bad thing depending on how you look at it).

Read more of Wael’s post here to explore these options.


Postmortem: Global VSTS CI/CD outage due to service bus failure – 13 April 2018


Customer Impact: 

On 13 April 2018, we had an incident which impacted CI/CD workflows in all data centers.  It was caused by the global Service Bus instance that we use to orchestrate CI/CD workflows becoming unavailable due to authentication errors.  Users reported that their CI/CD pipelines were stuck at various stages, including releases which were not kicked off after the builds completed.

The incident lasted for four hours on 13 April from 1:15 UTC to 5:23 UTC.   

Due to missing telemetry, it is not entirely clear how many customers were impacted by this issue.  One important aspect of the alerting work called out in our next steps below is to enable our telemetry to more accurately identify the impact of Service Bus failures on our specific CI/CD workflows.  We know that overall, up to 1,081 accounts were impacted by these Service Bus errors during this timeframe. Additionally, we had only 703 releases queued from build during the impact window, compared to 6,359 during the same time the day before.

What Happened: 

First, some background on how we utilize Service Bus. We use it in two different ways:

  1. Communication within a scale unit*. 
  2. Communication between scale units. 

To handle the first one, each scale unit has what we refer to as "Regional Service Bus Namespace".  These are independent from scale-unit to scale-unit.  To handle the second one, we have a "Global Service Bus Namespace" which all scale units need to know how to read from and write to.  This means that all scale units must know about their regional namespace and the shared global namespace and the keys that are required to access them. We rotate these secrets often, commonly with each deployment.   

In this scheme, one of the scale units is special.  We'll refer to it as SU1.  SU1 is the "master" of the Global Service Bus Namespace.  It is the scale unit that is responsible for the rotation of the keys.  When the Global Service Bus Namespace secret gets rotated, we must tell the other scale units about the new key. The other difference for SU1 is that the global and regional namespace are the same. 

With this sprint's rollout, we deployed changes that made it easier to automatically rotate these keys.  That new code made some assumptions about the naming scheme that we used to refer to the regional and global namespaces and expected them to be different.  Given that SU1 used the same namespace for both global and regional namespaces, our code attempted to rotate the secrets for the Global Service Bus Namespace on SU1, but actually rotated the secrets for the Regional Service Bus Namespace on SU1 instead and stored them as the Global Secrets.  By itself, this does not cause a problem.  The problem was caused when we broadcasted the new secrets to the other scale units and told them that it was the global key.  At that point, communication between scale units was entirely broken. 

To date, we have treated Service Bus as a non-critical dependency because, like any dependency, it can have problems. So, we’ve designed the system to continue functioning even when there is a problem.  However, through this incident we realized that one of the critical workloads that relies on Service Bus is our Build system which sends out a notification to Release Management when a build completes.  Release Management watches for those events and queues releases to run for those that have triggers configured.  With Service Bus down, none of the “build completed” events were delivered to Release Management and thus releases which were to be triggered by a build completion did not run. 

Because we think about Service Bus as a non-critical dependency, our alerting for Service Bus is configured to send "email only" alerts if something is wrong with it.  Unfortunately, this means that while our alerts did fire when Service Bus went down, they did not fire in a way that alerted people to immediately respond to the issue.  It was only after customer escalations that we realized the true impact of Service Bus going down. 

Next Steps: 

There are several areas that we need to improve to make sure this doesn't happen again: 

  1. Fix the Service Bus secret rotation code to correct the problem that triggered this incident. 
  2. Follow the standardized naming convention for Service Bus Namespaces across all scale units, including the special SU1 scale unit. 
  3. Configure our alerts around Service Bus to reflect the critical nature of the workload that Service Bus handles. 
  4. While investigating this issue we found that our Service Bus troubleshooting guide needs improvements.  We have a set of updates planned there. 
  5. Investigate if there is a way to make Release triggers resilient to Service Bus being down. 

We apologize for the impact this had on our users. This incident has highlighted areas of improvement needed in our telemetry, detection, and secret rotation. We are committed to delivering the improvements needed to avoid related issues in the future.

Sincerely,
Taylor Lafrinere
Group Engineering Manager, VSTS Continuous Integration

Finding Error Code Definitions with Err.exe


Have you ever had trouble quickly finding the definition of an NTSTATUS value or a Win32 error code returned by GetLastError()?

 

For those situations, this post introduces Err.exe, a handy tool for looking up error code definitions.

 

    1. How to obtain the tool

 

    1. Err.exe can be downloaded from the following site:

 

Microsoft Exchange Server Error Code Look-up

< https://www.microsoft.com/en-us/download/details.aspx?id=985 >

 

    2. The downloaded Err.EXE is a self-extracting package, so when you run it you will be asked for an extraction folder.

    3. Specify a destination folder and extract it, and you will see the following files. This Err.exe is the tool we want.

 

2004/04/01  18:18         1,698,816  Err.exe
2004/04/01  18:26           505,344  Error Code Lookup Tool.doc
2004/04/01  17:43            13,372  eula.txt

 

 

    2. Usage

 

To use it, simply open a command prompt and run "err.exe <error code>".

 

As an example, let's run it with the error code 0xe000023c.

 

> Err.exe 0xe000023c


# for hex 0xe000023c / decimal -536870340 :


  ERROR_NOT_AN_INSTALLED_OEM_INF                                setupapi.h


# 1 matches found for "0xe000023c"

 

As shown above, we can see that the code is ERROR_NOT_AN_INSTALLED_OEM_INF, defined in setupapi.h.

 

If you search for e000023c in the WDK/SDK header folder C:\Program Files (x86)\Windows Kits\10\Include\{version number}, you will not find anything. However, if you search for ERROR_NOT_AN_INSTALLED_OEM_INF, which Err.exe found as shown above, you will see that it is defined in setupapi.h as follows.

 

#define ERROR_NOT_AN_INSTALLED_OEM_INF           (APPLICATION_ERROR_MASK|ERROR_SEVERITY_ERROR|0x23C)

 

Even when a definition is built from masks like this, the tool lets you track down the error code's definition.
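As a quick illustration of how such a masked value is composed, here is a minimal C# sketch (an illustration only, assuming the standard winerror.h values APPLICATION_ERROR_MASK = 0x20000000 and ERROR_SEVERITY_ERROR = 0xC0000000; please confirm against your own SDK headers):

using System;

class ErrorCodeMask
{
    // Assumed winerror.h values; verify against your SDK headers.
    const uint APPLICATION_ERROR_MASK = 0x20000000;
    const uint ERROR_SEVERITY_ERROR   = 0xC0000000;

    static void Main()
    {
        // Recombine the parts of the definition shown above.
        uint code = APPLICATION_ERROR_MASK | ERROR_SEVERITY_ERROR | 0x23C;
        Console.WriteLine($"0x{code:X8}");   // prints 0xE000023C
    }
}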

 

For more detailed usage and limitations, please refer to the following blog post.

 

    • Reference

重箱の隅のデバッグ(2) – エラーの意味を探る

< https://blogs.msdn.microsoft.com/japan_platform_sdkwindows_sdk_support_team_blog/2012/05/17/2/ >

 

We hope this information is useful to those developing drivers and applications.

 

WDK Support Team, Tsuda

 

Women in Technology ~ Tips for Overcoming Crises in Women's Careers ~


Hello, everyone.

At de:code 2018, we are promoting the success of women engineers through technology so that the power of women can further expand growth opportunities for Japanese society. This year, as one of the special sessions, we will hold Women In Technology, a session for women engineers and anyone interested in the career development of women engineers.

 

In this session, three women speakers who work in front-line management roles in Microsoft Corporation's developer division, Azure product division, and HoloLens / Windows Mixed Reality product division, and who also appear in the keynote (Julia Liuson, Julia White, and Lorraine Bardeen), will speak candidly about the biggest crisis in their careers so far and how they overcame it. These are stories that can only be shared in a setting like this, told through short speeches and a Q&A with the audience.

 

Speakers

Moderator

In addition, after the session, the "Life with Tech Women Lounge" in the Expo hall, a networking space for women engineers, will host talk time where these three speakers, together with women speakers from the breakout sessions and women engineers working at Microsoft Japan, will take turns answering questions in a casual setting. We welcome not only women engineers but also anyone interested in the career development of women engineers.

■ Intended audience: women engineers and anyone interested in the career development of women engineers (men welcome)

Attendees of the Women In Technology session will receive an exclusive giveaway.

--------------------------------------------------------------------------

  • The official website is here
  • Early-bird registration is here
    • Early-bird discount deadline: Tuesday, April 24, 2018
  • Session information is here
  • SNS

--------------------------------------------------------------------------

Dynamics 365 – Prospect to Cash Integration


Introduction

Disconnected Sales (front office) and Operations (back office) systems are a problem that has existed ever since business applications existed. Organizations often acquire these solutions from different software vendors and try to stitch them together with an integration project, which becomes increasingly difficult to maintain as time goes on. In this blog I will discuss how Microsoft is providing an integrated experience with Dynamics 365, connecting the Customer Engagement platform (CRM / front office) to the Finance & Operations (ERP / back office) system.

Prospect to Cash Pre-Requisites

The following are required for you to run through the rest of the blog.

  • Microsoft Dynamics 365 for Finance and Operations (F&O), Enterprise edition July 2017 update with Platform update 8.
  • Dynamics 365 Sales (CE), Enterprise Edition, version 8.2 or above.
  • Prospect to Cash solution installed in CE; you can install the solution from AppSource here.
  • Refresh the entity list in F&O by navigating to Data Management --> Framework Parameters --> Entity Settings and hitting the "Refresh Entity List" button as below.

Setting up your connections

Before we can synchronize data between the CE and F&O databases, we need to set up connections that map between these two environments. Navigate to the Business Platform Admin Center.

Click on the Data Integration tab and then Connection Sets as below and create a New Connection Set.

Select one connection for CE and another for F&O as below. It is also important to map the Organizations section: for Dynamics 365 for Finance and Operations (F&O) this is the legal entity in which you would like the synchronization to happen, and for Dynamics 365 for Sales (CE) this is the business unit mapping.

  • Business Unit in CE: Settings --> Security --> Business Units.
  • Legal Entity in F&O: Organization Administration --> Organizations --> Legal entities

Setting up your Projects

Now that we have the connections made between CE and F&O, it is time for us to set up synchronizations between them. Before we create the projects, review the integrations that are supported out of the box.

Accounts (CE) to Customer (F&O)

Create a New Project in the Business Platform Admin Center. Click on the Projects tab and continue creating a new project. Select the Account (Sales to Fin and Ops) - Direct template.

In the next steps, select the Connection Set and Organizations connections we created in the earlier step

Click Next to create the project. Once the project is created, a few things need to be checked before we run it.

  • Click on the "Refresh Entities" icon to make sure the mapping between CE and FO tasks are correct.
  • The Accounts synchronization templates expects the customer group that is in the mapping exists in F&O. By default a value of 10 is being sent, so either change the value (On the Tasks mapping) to a value that you are expecting or make sure the value exists in F&O (Accounts Receivables --> Setup --> Customer groups)

Now that the project is created and mapping is done, let's run the Accounts Synchronization Project by clicking on the "Run Project" Icon.

The execution history shows the status of the execution and any errors.

As you can see there were 4 upserts and 6 errors. You may review the errors by clicking on the row; in my case duplicate accounts caused the 6 errors. Now let's take a look at how a successfully synced account looks in both systems.

Contacts (CE) to Contact (F&O)

Now that we have Accounts synchronized, let's synchronize Contacts.

Note: You may also synchronize Contacts from CE as Customers in F&O through the template "Contacts to Customer (Sales to Fin and Ops) - Direct". In this example let's synchronize using "Contacts (Sales to Fin and Ops) - Direct".

Run the project and review errors if any. Otherwise let's take a look at a Synchronized Contact.

 

Products (F&O) to Products & Pricelists (CE)

Before you synchronize any sales documents, synchronize the products & pricelists from F&O to CE. There are quite a few variables in play before the synchronization can run. Let's review them.

  • Only Released Products from F&O are synchronized.
  • A one-time job to populate the distinct product table needs to be run (Product Information Management --> Periodic Tasks --> Populate distinct product table).
  • If you would like the products that are integrated from F&O to be automatically published in CE, check Settings --> Administration --> System Settings --> Sales tab and make sure "Create products in active state" is set to Yes.
  • Unit of measure is crucial in the product synchronization. These values have to match, or products that reference that unit will fail. An example is below:
    • F&O : Organization administration --> Setup --> Unit
    • CE : Settings --> Product Catalog --> Unit Group --> Click on the unit group --> Units

Run the project and review errors if any. If the products are not set to publish automatically, then go into Products and publish them so they can be used in sales documents such as opportunities, orders, etc.

Quotes (CE) to Quotations (F&O)

Before we can synchronize Quotes to F&O, check these following settings.

  • Go to Settings > Administration > System settings > Sales, and make sure that the following settings are used:
    • The Use system pricing calculation option is set to Yes.
    • The Discount calculation method field is set to Line item
  • Only Active Quotes are used for synchronization.

Run the project and review errors if any. Let's review a synchronized Quote.

Orders (CE) to Sales Orders (F&O)

  • Before we can synchronize Orders to F&O, note that the settings mentioned in the Quotes section above are applicable for Orders as well.
  • Only Active Orders are used for synchronization.

Run the project and review errors if any. Let's review a synchronized Order. Notice that the Processing Status on the Order in CE is Active

Sales Orders (F&O) to Orders (CE)

  • Once the Sales Order is processed in F&O, the details of the Sales Order are sent back to CE. In this example we will create an invoice for this Sales Order in F&O. This sets the status of the Sales Order from Open order to Invoiced in F&O. Running this job will update the status of the Order to Invoiced in CE.
  • Go to Sales and marketing > Periodic tasks > Calculate sales totals, and set the job to run as a batch job. Set the Calculate totals for sales orders option to Yes. This step is important, because only sales orders where sales totals are calculated will be synchronized to Sales. The frequency of the batch job should be aligned with the frequency of sales order synchronization.
  • Go to Settings > Security > Teams, select the relevant team, select Manage Roles, and select a role that has the desired permissions, such as System Administrator

Run the project and review errors if any. Let's review the synchronized Order Status from F&O in CE. Notice the Processing Status on the order in CE is Invoiced

 

Invoice (F&O) to Invoices (CE)

The prerequisites for invoices to sync to CE are already taken care of in the Quotes and Orders sections above, so you should be able to run the project without any additional setup.

Run the project and review errors if any. Let's review the synchronized invoice in CE.

 

 

Scheduling, Templates & Monitoring Jobs

Now that we have gone through the solution templates / projects manually, let's look at scheduling, saving templates and monitoring the jobs.

Scheduling 

Navigate to the Business Platform Admin Center. Click on the Data Integration tab, then click on one of the projects we set up, and then the Scheduling tab. You can run as frequently as every minute and be alerted if there are errors or warnings.

Example of the alert email below.

 

Templates

If you would like to save your settings on an existing project, you can save the project's task mappings, connection sets, etc. into a template. See below.

The saved templates can be viewed on the Templates tab.

Monitoring Jobs

Even though the admin center provides status for the jobs, there will be times when you need to know more about what is happening. Here are the areas that you need to check to see what is happening with your import/export job.

CE: Settings --> System Jobs

F&O: System Administration --> Workspaces --> Data Management. Click on the specific job you are interested in and look for the Execution Details and the "View Execution Logs" for more details.

 

I hope you enjoyed reading my blog on Dynamics 365 - Prospect to Cash Integration.

 

 

 

For Customers Using the Japan-Specific Payment Method "Cutoff Day"


The following issue is currently being reported.

 

Because whether the symptom occurs depends on your environment's version, the hotfixes already applied, and the terms of payment in use, please consider applying the workaround below in addition to the hotfix if you observe symptoms like the following. Note that even when using the workaround, you still need to apply the hotfix that matches your Dynamics AX version.

 

===============

Symptom overview:

===============

When the payment method "Cutoff day" is used, the due date is not calculated correctly.

If terms of payment with due day "28" / cutoff day "31" are used in February of a non-leap year, the due date is calculated one month too early.

If terms of payment with due day "30" / cutoff day "31" are used, the due date is calculated one month too early when the result falls in an even-numbered month.

 

Example 1 of the symptom:
Terms of payment = cutoff on the 31st, payment on the 28th of the following month
Invoice date = February 10, 2019
Calculated cutoff date = February 28, 2019
(Expected due date = March 28, 2019)


Example 2 of the symptom:
Terms of payment = cutoff on the 31st, payment on the 30th of the following month
Invoice date = September 10, 2018
Calculated cutoff date = September 30, 2018
(Expected due date = October 30, 2018)
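For illustration only (this is not the Dynamics AX implementation), the following minimal C# sketch shows the expected calculation from the two examples above, assuming the invoice date falls on or before the cutoff day and that a day value of 31 means "last day of the month":

using System;

static class CutoffDayExpectation
{
    // Expected behaviour only, not AX code: cutoff day in the invoice month,
    // then the due day in the following month, each capped at the month end.
    public static DateTime ExpectedDueDate(DateTime invoiceDate, int cutoffDay, int dueDay)
    {
        int lastDay = DateTime.DaysInMonth(invoiceDate.Year, invoiceDate.Month);
        var cutoff = new DateTime(invoiceDate.Year, invoiceDate.Month, Math.Min(cutoffDay, lastDay));

        var next = cutoff.AddMonths(1);
        int lastDayNext = DateTime.DaysInMonth(next.Year, next.Month);
        return new DateTime(next.Year, next.Month, Math.Min(dueDay, lastDayNext));
    }
}

// ExpectedDueDate(new DateTime(2019, 2, 10), 31, 28) -> 2019-03-28 (example 1)
// ExpectedDueDate(new DateTime(2018, 9, 10), 31, 30) -> 2018-10-30 (example 2)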


===============

Conditions for occurrence (all conditions must apply):

===============

- An environment that uses the Japan country-specific features (the legal entity's address is JPN)

- A Dynamics AX 2012 R3 environment to which KB4099053 has not been applied

(for Dynamics AX 2012 R2 environments, KB4131290 has not been applied)

- The due day is set to a date that can be the last day of a month (28, 29, 30, or 31) and the cutoff day is set to "31"

Example of a configuration in which the symptom does not occur:

Even with the payment method "Cutoff day", the symptom does not occur when the cutoff day is set to something other than the end of the month, for example due day "10" / cutoff day "25".

 

===============

Resolution overview

===============

If you are using a Dynamics AX 2012 R3 environment, apply KB4099053 and then configure the workaround described below.

(For Dynamics AX 2012 R2 environments, apply KB4131290.)

Product and version: AX 2012 R3

KB 4099053 Japan/JPN: Due date calculation is incorrect when the payment method is cutoff day

https://fix.lcs.dynamics.com/Issue/Details?kb=4099053&bugId=3935105

Product and version: AX 2012 R2

KB 4131290 Japan/JPN: Due date calculation is incorrect when using payment method = Cutoff day on Microsoft Dynamics AX 2012 R2

https://fix.lcs.dynamics.com/Issue/Details?kb=4131290&bugId=3940874

 

--------------------------------------

Configuration change required after applying the KB (workaround)

--------------------------------------

Even after applying the hotfix above, when the payment method "Cutoff day" is used with terms of payment of due day = 31 / cutoff day = 31, the end-of-month determination does not work correctly and the following symptom occurs.

 

Symptom that occurs after applying the KB:

When the Japan-specific payment method "Cutoff day" is used with due day = 31 / cutoff day = 31 and the calculated cutoff date falls in a month without 31 days (February, April, June, ...), the due date is calculated one month late.

 

Example 3 of the symptom:
Terms of payment = cutoff on the 31st, payment at the end of the following month (payment method = Cutoff day)
Invoice date = May 1, 2018
Calculated cutoff date = July 31, 2018
(Expected cutoff date = June 30, 2018)


Interim workaround:

As described above, even after applying the hotfix, the due date is not calculated correctly when the Japan-specific payment method "Cutoff day" is used with due day = 31 / cutoff day = 31.

By using the payment method "Current month" instead, the due date for due day = 31 / cutoff day = 31 can be calculated correctly.

 

Change the terms of payment setup from payment method "Cutoff day" to "Current month" to work around the issue.

Terms of payment = cutoff on the 31st, payment at the end of the following month (payment method = Current month)
Invoice date = May 1, 2018
Calculated cutoff date = June 30, 2018


Note that although the cutoff-date calculation is also used by the Japan-specific consolidated monthly invoicing, this configuration change has no impact on consolidated monthly invoices.

 

How to find the Global Admin for your Azure AD tenant


The smooth working of a bot requires proper configuration in Azure AD. Sometimes users themselves don't have permission to modify settings in AAD; only a Global Admin has that right. The question then becomes: how do I find my Global Admin? For now it's not feasible via the Azure Portal, but we can use PowerShell commands to achieve this goal.

Here we'll use the Azure Active Directory PowerShell for Graph module. It can be downloaded and installed from the PowerShell Gallery: www.powershellgallery.com. The gallery uses the PowerShellGet module. The PowerShellGet module requires PowerShell 3.0 or newer and requires one of the following operating systems:

Windows 10 / Windows 8.1 Pro / Windows 8.1 Enterprise / Windows 7 SP1 / Windows Server 2016 TP5 / Windows Server 2012 R2 / Windows Server 2008 R2 SP1

PowerShellGet also requires .NET Framework 4.5 or above. You can install .NET Framework 4.5 or above from here. For more information, please refer to this link. For more detailed info on installation of the AzureAD cmdlets please see: Azure Active Directory Powershell for Graph.

1. Launch Windows Powershell console as Administrator;

2. If you have never installed the Azure AD module in PowerShell, please type "Install-Module AzureAD";

3. You'll receive the following warning:

4. Type Y to continue; or you can type A to accept all so you'll not be asked again;

5. Once the installation finishes, type the command "Connect-AzureAD" and a pop-up window for user sign-in will show up. Please log on with your Azure account;

6. Once you are logged in, some basic information regarding this AAD tenant will be shown in the PowerShell console:

7. Then execute the command "Get-AzureADDirectoryRole", and a list of available roles under this AAD tenant will be shown;

8. From the list, copy the ObjectId for the role "Company Administrator" into Notepad; (Note: Global Administrator = Company Administrator in this context)

9. Then execute the following command to get the Admin account info (please replace the highlighted part with the ObjectId you've copied in Step 8):

Get-AzureADDirectoryRoleMember -ObjectId "Put-ObjectId-here"

10. You'll get some necessary information from the displayed result regarding your Global Admin.
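If you prefer to do the same lookup programmatically, the directory roles and their members are also exposed through the Microsoft Graph REST API. The following is a rough C# sketch (not part of the AzureAD module); it assumes you have already acquired an access token with sufficient directory read permission, which is outside the scope of this snippet:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class FindGlobalAdmins
{
    static async Task Main()
    {
        // Assumption: a Graph access token obtained elsewhere (e.g. from an environment variable).
        string accessToken = Environment.GetEnvironmentVariable("GRAPH_TOKEN");

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        // 1. List the activated directory roles (the equivalent of Get-AzureADDirectoryRole).
        string roles = await http.GetStringAsync("https://graph.microsoft.com/v1.0/directoryRoles");
        Console.WriteLine(roles); // locate the id of the "Company Administrator" role in this output

        // 2. List the members of that role (the equivalent of Get-AzureADDirectoryRoleMember).
        string roleId = "Put-ObjectId-here";
        string members = await http.GetStringAsync($"https://graph.microsoft.com/v1.0/directoryRoles/{roleId}/members");
        Console.WriteLine(members);
    }
}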

I hope this article is useful for you.

Jin W. from Microsoft France IIS/ASP.NET/Azure Bot team

You might also be interested in:

Administrator roles in Azure Active Directory

https://docs.microsoft.com/en-us/azure/active-directory/active-directory-assign-admin-roles-azure-portal

Azure Active Directory PowerShell 2.0

https://docs.microsoft.com/en-us/powershell/module/azuread/?view=azureadps-2.0

Introducing Mastermind: Powerful and Simple Deployment, Scaling, Load Balancing and Testing of Applications in the Cloud


Guest post by University College London IXN Project Group  bsc-comp214p-team22- Project Mastermind


Mastermind is an extensible CLI application that allows easy deployment, updating, scaling and testing of applications using only a Git repository as a parameter. All application-specific configuration and deployment options are specified in a Kubernetes-compatible mastermind.yaml file within the Git repository itself, which describes how the application should be deployed, scaled, exposed to the internet and provisioned with database backends and testing suites.

Mastermind


Mastermind was designed to solve one of the fundamental problems that DevOps Engineers and System Administrators face on a daily basis, which is the complexity of publishing and updating applications to staging, testing and production environments in a standards-compliant way that causes minimal disruption and adheres to modern best practices, such as the following:

· Speed of delivery: it shouldn’t take long to publish or update an application. This should take seconds, not hours.

· No downtime: we need to be able to update live applications without causing user disruption. We shouldn’t have to bring down our entire application in order to perform an upgrade.

· Fault tolerance: we want our applications to remain available in the case of failures in other systems. We don’t want a single failing server to bring down our entire application.

· Horizontal scalability: we want to be able to enable our application to handle more load by spinning up multiple copies of it, and scaling it in this way automatically, instead of needing to purchase increasingly expensive servers. Even with the best single server, a high-traffic application would crash.

· Load balancing: Once we have multiple copies of our application, we need a way to spread traffic over all of them, instead of just one. For this, we need a gateway to effectively proxy traffic to different containers of the application based on their current load and other parameters.

· Principle of least privilege: We want our applications to only have the minimum access necessary to do their jobs. We don’t want a single compromised application to be able to take over our network of servers.

· Secret sharing: We don’t want to have to hard-code secrets, such as database passwords, directly in the source code of our applications. We need a way to store them in a central, safe place and make them available to the applications as they need them.

Previously, these luxuries required tooling available only to the biggest and most technically adept companies in the world, the likes of Google and Microsoft, or required hundreds of man-hours of work to build the necessary infrastructure.


Prior to Mastermind, no software package or toolkit, to my knowledge, enabled more than half of the goals above to be realized out of the box. Even when some of the ideas above were possible, they took a long time to deploy because of the extensive configuration required on the part of the developers.

Project Mastermind

Mastermind solves all of these issues and democratizes access to high-availability, fault-tolerant application distribution in public clouds and, by extension, private and on-premises clouds. This allows lone developers, freelancers and small companies to enjoy the same luxuries in making their applications available to the world.

The key feature of Mastermind is its ease of use: it is made available as a POSIX-compliant, system-wide command-line utility, think "ls" or "cat":

it takes a mere 3 commands in a terminal to install it on any operating system,

2 commands to prepare to use a cloud provider like Azure,

1 command to deploy an entire Mastermind cluster,

1 command to deploy an application to said cluster,

and 1 command to revert any of the aforementioned.

This will all become apparent because we will proceed to explain how to use it below. For complete documentation, you can visit the public GitHub page for Mastermind: https://github.com/bsc-comp214p-team22/bsc-comp214p-mastermind.

Once Mastermind is installed

We have to instruct it which service it should use as the underlying infrastructure. We can run

“mastermind config set provider AZURE” for Microsoft Azure.

Mastermind will intelligently inherit settings and credentials from the "gcloud" and "az" utilities on the system, so there is no further configuration needed to connect to each service. It also includes a pluggable provider interface, meaning that other public cloud, private cloud or on-premises services can be added to it easily.

Once we have configured a provider

We need to initialize a Mastermind cluster on the service provider we chose. To do that, we need only run “mastermind cluster create”. This is one command, but the difficulty it abstracts away is enormous. Mastermind deploys a fully functional Kubernetes cluster, meaning that in the background it will provision compute resources such as VMs, generate CA and TLS certificates, generate configuration files, generate encryption keys, deploy an etcd cluster to store configuration data, bootstrap a control node and worker nodes, configure Kubernetes for remote access, provision pod network routes, among other things.

To grasp the difficulty of setting up a full Kubernetes cluster you can read “Kubernetes the Hard Way” at https://github.com/kelseyhightower/kubernetes-the-hard-way. But for users of Mastermind, this all becomes one command.

Now that we have our cluster up and running, deploying an app on Mastermind is equally simple.

We need a Git repository with a valid Dockerfile and a “mastermind.yaml” file. The former is a file used to build a Docker image from the code in the Git repository, and the latter is a Kubernetes deployment file with the addition that a user can substitute the Docker image name in the deployment file with the keyword “THIS”. Mastermind will proceed to build a Docker image from the current repository, upload it to the provider’s container image repository, and substitute it in the deployment file when it’s time to push the application to the Cloud. Effectively this means that a developer using Mastermind can use the full power of the world-class Kubernetes framework, without needing to perform the usual configuration steps such as building a Docker image, publishing it to a container repository, etc. All of this is done for them in the command “mastermind app create <git-url-here>”. In addition, the developer can have multiple entries in their deployment file, enabling the ability to deploy databases alongside the application, and sharing configuration options, hostnames and other information.

When the developer makes a change to their application

All they have to do to update the live version of the application is: “mastermind app update <git-url-here>”. Mastermind will intelligently find the previous deployment and update it in a way that leads to no downtime: by bringing down one container of the live application, bringing up a single container of the updated application and repeating this process until only new versions of the applications are live. This leads to no downtime because there are multiple containers running per application, and the deletion of one simply load-balances traffic among the remaining.

Using VSCode and Mastermind

We have developed a dedicated VS Code Mastermind extension. This extension adds a Mastermind Update command that automatically updates an already-deployed application right from Visual Studio Code! Download it from the VS Code Extension Marketplace https://marketplace.visualstudio.com/items?itemName=bsc-comp214p-team22.mastermind-update and see the source code at https://github.com/bsc-comp214p-team22/bsc-comp214p-vscode

Mastermind can deploy a testing suite

Alongside any deployed application to perform Performance, Load and Functionality testing in a real Cloud environment, ending the guessing game of whether an application will hold up under traffic. This testing suite is a web application that can simulate any number of users performing any number of tests on an application remotely, without direct intervention in the application’s code. It plots graphs, shows tables of statistics and allows export via CSV and other formats.

Conclusion

No longer do developers have to jump through hoops in order to deploy, test, load balance and scale their applications. Most of the work is done for them. All they have to do is write the code, and run one command, quite literally.


OpenHack IoT & Data – May 28-30


As CSE (Commercial Software Engineering), we can support customers in implementing challenging cloud projects. At the end of May we are offering a three-day OpenHack on IoT & Data as a readiness measure. For all developers working on these topics, it is really a must.

May 28-30: OpenHack IoT & Data. At the OpenHacks we give the participants, i.e. you, challenges that you have to solve yourselves. This results in enormously high learning efficiency and a remarkable transfer of knowledge.

Furthermore, in this case you do not need a project of your own to work on, so you do not have to give us any project information. That can be an important point for some managers. Smile

Target participants: everyone who can code: developers, architects, data scientists, ...

So if you are actively working on IoT & Data, or want to, come yourself or send other developers. You can then really dig into the topic together with the software engineers from CSE.

Oh, and very important: bring your own laptop and "come prepared to hack!!" No marketing, no sales. Pure hacking!!

The goal, of course, is to motivate you to think about or start your own IoT & Data projects. As a little inspiration, here are a few projects created in cooperation with CSE: https://www.microsoft.com/developerblog/tag/IoT

Register at: http://www.aka.ms/zurichopenhack

I look forward to seeing you there!!

Microsoft Planner: Where did my New plan option go?


This one should only affect administrators, but the behavior will also help explain why other users may not see the New plan option.  So what does this look like?  When Planner initially loads, the New plan option can be seen – just above the Planner Hub option:

Planner screen while New plan is still visible

But as the page finishes loading a few calls will have been made to check tenant settings and group memberships – and the UI will be trimmed and New plan option removed:

Planner screen fully rendered and New plan trimmed

This was a recent change due to the work we are doing in Planner to support Guest access – we are trimming the UI so that people who are not allowed to create Groups do not see the New plan option in Planner.  The article at https://support.office.com/en-us/article/manage-who-can-create-office-365-groups-4c46c8cb-17d0-44b5-9776-005fced8e618 explains how to control Group creation and whether Guests can create groups too.  To control other users you can set a setting at the tenant level – EnableGroupCreation = False – and then create a security group populated by all the users who you are allowing to create Groups.  The GUID for this group is set against the property GroupCreationAllowedGroupId.

Where this comes unstuck is if you have configured this option but have not included the admins in the group allowed to create Groups.  Admins bypass this control and can always create Groups – but our checks don’t find them in the group and trim the UI.  In Planner there is no way to determine if the current user is an admin.

A couple of workarounds – firstly, if you are quick then you can click on New plan before it goes away and you will be able to create a plan (admins only – if the UI is being trimmed because you can’t create Groups then you will still not be able to create Groups).  The better solution is to add all of your admins to the group you are using to control group creation – then they will not get the trimmed UI.

For the inquisitive amongst you take a look at the F12 Developer tools in your browser of choice and you can see the calls that are getting the data to make the trimming decision.

GetCurrentTenantSettings returns the settings to know how EnableGroupCreation is set (False in this case) and also the group used to control who can create Groups (0ff9c27d-47f3-4d19-b39a-695c8e8ae9d1):

{"AllowGuestsToAccessGroups":true,"AllowGuestsToBeGroupOwner":false,"AllowToAddGuests":true,"ClassificationDescriptions":[],"ClassificationList":["Low","Medium","High"],"CustomBlockedWordsList":[],"DefaultClassification":"","EnableGroupCreation":false,"EnableMSStandardBlockedWords":false,"GroupCreationAllowedGroupId":"0ff9c27d-47f3-4d19-b39a-695c8e8ae9d1","GuestUsageGuidelinesUrl":"","PrefixSuffixNamingRequirement":"","UsageGuidelinesUrl":"Http://aka.ms/o365g"}

Once we have this GroupId we can check if I am a member using CheckCurrentUserToGroupMemberships – passing in the GroupId.  This will return an array of the Groups that were passed in of which I am a member (this time there was only one group – but the same call can be used to test many groups):

"{"@odata.context":"https://graph.microsoft.com/beta/$metadata#Collection(Edm.String)","value":[]}"

The empty array ([]) confirms that I am not a member so the UI gets trimmed!
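For comparison, the same kind of membership check can be reproduced outside Planner with the Microsoft Graph checkMemberGroups action. This is a rough C# sketch rather than Planner's internal call, and it assumes you already have a Graph access token for the signed-in user:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class GroupCreationCheck
{
    static async Task Main()
    {
        // Assumption: a Graph access token obtained elsewhere.
        string accessToken = Environment.GetEnvironmentVariable("GRAPH_TOKEN");
        string allowedGroupId = "0ff9c27d-47f3-4d19-b39a-695c8e8ae9d1"; // GroupCreationAllowedGroupId from above

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

        // checkMemberGroups returns the subset of the supplied group ids that the user belongs to.
        var body = new StringContent("{\"groupIds\":[\"" + allowedGroupId + "\"]}", Encoding.UTF8, "application/json");
        var response = await http.PostAsync("https://graph.microsoft.com/v1.0/me/checkMemberGroups", body);

        // An empty "value" array means the user is not in the group, which is when the UI gets trimmed.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}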

Configure Alerts for a Hung/Stuck SQL Agent Job step


SQL Server does a decent job of alerting us about SQL Agent job failures and successes.

But what if we want to know if a job is hung, has been running for a long time, or is stuck?

When my customer approached me with the problem below, I realized that the job history table in msdb does not have entries for jobs or job steps that are hung or in flight.

It only adds an entry to the Job History table after the Job has completed or Failed.

 

We had the following SQL Agent Job in our environment
1. We were using an SSIS package with a WMI Event Watcher Task to continuously watch for the arrival of new files. Each day the new file would arrive at 3 AM.

2. After this there were more steps in the job execution which would process the file, etc.

The Job had steps 1 to 8

The problem was that it was getting hung on the 2nd step, which was the SSIS package execution. This step was to process the file after the WMI Event Watcher Task determined that the file had arrived. It was getting hung intermittently with no specific pattern, even after the file had arrived.

Each time, the application user would raise a complaint that they did not receive a file for processing their daily report, which would then alert the SQL team. They would find the job stuck executing step 2.

To fix it they would check if the file had arrived and then jump to step 3.

We needed a proactive way to alert us if the Job step was stuck. Hence we used the below approach

Find your Step_id and Job_Id from

Use msdb
go
select * from dbo.sysjobsteps

 

We created a new SQL Agent job against the msdb database to alert us:

declare @run_date int
set @run_date = (select TOP 1 run_date from dbo.sysjobhistory where step_id = 2 and job_id='A2C37BA6-073B-4479-8C5B-AF6BE3CCA3D1' ORDER BY run_date DESC)
--Getting the latest run_date of the Job step
declare @return_value int = 0;

declare @d1 int = (SELECT YEAR(GETDATE()) * 10000 + MONTH(GETDATE()) * 100 + DAY(GETDATE()))
-- Getting today's date in the same format as the last run_date

declare @run_time int = (select TOP 1 run_time from dbo.sysjobhistory where step_id =2 and job_id='A2C37BA6-073B-4479-8C5B-AF6BE3CCA3D1' ORDER BY run_date DESC)
--Getting the latest run_time of the Job Step

declare @threshold_time int = 50000
--Declare a threshold time at which you want to be alerted. For example, the job step was to complete at 3 AM, but even at 5 AM it hasn't posted a success message.
--Here 5 AM is your threshold_time

if @d1 > @run_date and @run_time < @threshold_time SET @return_value = 1 ;
-- If the job history doesn't have a run_date record for today, and the latest run_time is less than the threshold_time,
--it means the job step hasn't run or is probably stuck, which means we should be alerted

IF (@return_value) > 0
begin
exec msdb.dbo.sp_send_dbmail
@profile_name = 'Default',
@recipients = 'me@example.com',
@subject = 'warning step 2 is stuck'
end



We now get alerted when our job step is stuck! 
Hope this helps if someone has the same issue.


If you have any more ideas on how to approach this please do share in the comment section below:)



Building your own educational visualisation learning platform with .Net Core & Azure


Guest blog by David Buchanan, Imperial College Microsoft Student Partner and UK finalist at the Imagine Cup 2018 with the Higher Education App.

About me

I’m a second year Mechanical Engineering student at Imperial College London. I entered the Imagine Cup this year as a solo team with my Higher Education App project. My primary interests within computing are cloud computing, web development and data engineering. I have experience in .Net Core (C#), JavaScript, HTML5, CSS3, SQL, (Apache) Gremlin and the Hadoop ecosystem. My LinkedIn address is www.linkedin.com/in/david-buchanan1, which you can follow to keep up with my progress on the App.

Introduction

My submission to the Imagine Cup this year was an educational App and website which offered an avenue for anyone looking to make their own interactive educational content.

The App is based around the concept of clearly mapping out topics and subtopics into a hierarchical structure, such that the overall layout of a subject becomes much clearer than in traditional formats like textbooks. It was designed from the start to be as accessible as possible to maximise the inclusiveness of the platform.

As such, it is compatible with practically all devices thanks to its browser based design, including mobile platforms, games consoles and single-board computers like the Raspberry Pi.


An example of a map for studying Maths

The Aim of the App

One of the key focuses in the App’s design was to make sure the learning environment was as efficient as possible. One of the biggest shortcomings of current VLE software is the inefficiency of the navigational experience, and the fragmented nature with which data is presented. Often when a user wishes to access several topics within a subject, current platforms require constant renavigation of the same index pages, or require the student to open various tabs to more easily access the various topics they wish to look at. These are highly inefficient and distracting processes which not only waste the time of the user, but also make learning an unnecessarily laborious process.

My App aims to not only improve the efficiency of the navigational experience, but also make it natural and intuitive by integrating both touch gestures as well as keyboard and controller based interfaces.


An example of the menu that comes up when you click on a node or its image/text

Further to the earlier point of maximising accessibility, the platform is designed such that users with motor or visual impairments should have as comfortable and efficient a learning experience as is possible, with all the accessibility features being automatically integrated, without the need for special consideration from the content creators. This is done by utilising the HTML5 speech synthesis API to verbally call out the text highlighted on screen while the user navigates using either their keyboard or the onscreen controls. Furthermore, as the App uses vector graphics, users with partial blindness can zoom in as much as they desire without either the text or graphics blurring. All controls involved with map creation and navigation are bound to appropriate keys within the users’ keyboard, which allows individuals with special input requirements to easily map their custom hardware or software solutions to all the controls.

Adoption of EdTech

The platform is incredibly relevant right now, as educational institutions increasingly look towards EdTech to make learning more accessible and relevant to students. Inevitably, young people are spending increasing amounts of time on online-connected devices such as mobiles or laptops, with truly cross-platform learning solutions lagging in terms of quality and innovation. Of course, there are several excellent learning platforms already available such as Khan Academy, Quizlet, and OneNote, and whilst initially it may appear that my platform is a competitor to these, I’d argue that it is a complement to these existing solutions, as it allows users to easily link to external resources, and also allows the community to rate these resources in terms of effectiveness and clarity. This is a crucial differentiator in my opinion, as existing platforms offer a wealth of invaluable knowledge and currently it is often difficult to identify which platforms shine in particular areas.


An example of a reading list for a node, which facilitates rating of online resources

Technology

The App utilises several key Azure technologies, like the App Service and Cosmos DB, to facilitate excellent performance and massive scalability at low cost. Using the Graph API for Cosmos DB has ensured the platform can easily act as an educational social network, facilitating rich communication and connection between users. The graph data structure also allows incredibly easy grouping of users, which allows universities or schools to easily form private groups in which they can securely share course resources. Naturally following on from this is the topic of data security and privacy, a topic of both great relevance and concern. To maximise protection of personal data, notably personally identifiable information like name, age, and email is stored in a separate Azure SQL database, which has the benefit of row-level security to minimise the possible impact of data breaches, and more importantly to prevent breaches ever happening in the first place (please note GDPR defines personal data as including username and user ID, and thus the graph database does still technically contain personal data).


The key Azure services used; more recently an Azure VM has been used to host a virtualised Hadoop instance for development, which will eventually migrate to a distributed HDInsight service

The front end of the App is written primarily in JavaScript which is used to manipulate an SVG canvas. By utilising CSS3 transform animations, the browser can more easily utilise available GPU resources thanks to the matrix multiplication based nature of the transformations using a homogeneous coordinate system, resulting in smooth performance across both mobile and desktop platforms [1]. The frontend also features in browser LaTeX rendering, which reduces the storage size required on the backend for formulae greatly, as well as offering a really easy way to upload and display libraries of formulae, which is especially useful for STEM subjects.


An example of the in-app LaTeX support

The backend is written entirely in .Net Core 2.0 (C#), which has allowed seamless integration and delivery with other Azure services. Something I plan on utilising in the future, when my analytics and messaging platform is more mature, is the Cosmos DB change feed, which again integrates effortlessly into a .Net Core codebase. This has the potential to work really nicely with SignalR for real-time notifications. Another great benefit of .Net Core is the simplicity of its string interpolation and dynamic datatypes, which is crucial in dynamically generating gremlin queries for the graph database and something I'll go into in detail later in this post.

Since the Imagine Cup final, I have added a highly scalable and extensible analytics platform based on Apache Hadoop and Apache Spark. Currently the platform collects data on time spent by users in each topic as well as the links they click on within each map and stores it in HDFS, offering a rich data set for users to analyse their learning patterns to make future learning as efficient as possible. This will eventually be integrated with the question/exam module to give feedback on which resources have been most useful in maximising exam results.

Why C# (and .Net Core) was a Great Fit

One of the biggest technical hurdles I encountered in the project was dynamically converting the deeply nested JSON used by JavaScript to generate and load the map on the front end into a series of gremlin queries that Cosmos DB could not only understand, but form a fully equivalent graph from. The difficulty came primarily from the fact that the JSON has no fixed structure and is very weakly typed, whereas C# is typically a strongly typed and static language. This project therefore made great use of the dynamic datatype features of C#, which have allowed the language to evolve since C# 4.0 from a statically typed language into a gradually typed language.

To give the exact context of the problem: when you make a hierarchical tree structure through JSON, the parent-child relationships are generally implied by the structure of the JSON. However, as the eventual goal is to turn the JSON into a series of easily executable gremlin queries to form the graph in Cosmos DB, the JSON is inevitably going to have to be flattened, and will inherently lose the structure that gave it those relationships. The way I prevented this problem was by associating a unique integer ID with each node within a map, as well as a property called ParentLocalID which was, as the name implies, the ID of its parent. All nodes on the map had to have a parent, bar the root node, which did not have the property.

To map out the overall process of saving the JSON to Cosmos DB in stages:

1. Parse the JSON into a format C# (and thus .Net Core) can understand

2. Flatten the JSON into an array or list (as the data is fixed in length it may be advantageous to use an array as its iteration performance is generally better) of individual nodes, each with the associated properties that contain all the educational information (each node has a minimum of 4 properties to define its position and globally unique ID, and no maximum, though typically a well filled out node will have around 10)

3. Generate the gremlin queries by using string interpolation of the properties we have just collected from the JSON, and store them in a dictionary to be executed

4. Iterate across the entire dictionary that has just been generated, and asynchronously execute the queries against the Cosmos DB

The first and second steps become much simpler than one might expect thanks to the ubiquitous Newtonsoft.Json package. It facilitates extremely simple parsing of JSON into a Newtonsoft.Json.Linq.JObject object, which makes manipulation of the children of the root member trivial. Because I have no idea what structure the inputted JSON will have, I cannot easily use the deserialization functionality of the Newtonsoft library, which requires some idea of the structure of the JSON in order to process it. The parsing function also has the bonus that LINQ functionality is enabled on the resulting data, making flattening far simpler. LINQ's Select and ToList enable you to then easily tailor the flattening process to your dataset. Alternatively, you could utilise recursive methods to possibly make this process more computationally efficient, which is something I will be exploring in the near future [2].
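As a rough illustration of steps 1 and 2 (a minimal sketch rather than the App's actual code, and assuming a child collection called "children", which is a placeholder name rather than the real schema), the parse-and-flatten stage can look something like this:

using System.Collections.Generic;
using System.Linq;
using Newtonsoft.Json.Linq;

static class MapFlattener
{
    public static List<JObject> Flatten(string json)
    {
        var root = JObject.Parse(json);   // step 1: parse the raw JSON into a JObject
        var nodes = new List<JObject>();
        Collect(root, nodes);             // step 2: flatten the tree into a list
        return nodes;
    }

    static void Collect(JObject node, List<JObject> nodes)
    {
        nodes.Add(node);
        // "children" is an assumed property name for the nested nodes.
        if (node["children"] is JArray children)
        {
            foreach (var child in children.OfType<JObject>())
                Collect(child, nodes);
        }
    }
}

Each flattened node still carries its id and ParentLocalID properties, which is what allows the parent-child relationships to be rebuilt in step three.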

The third step is made relatively straightforward again thanks to C#'s string interpolation functionality. By placing a $ sign before a string, you can simply input any variables by placing them in curly brackets. This is important because the query strings need to be dynamically generated with all the data we've collected in the last two steps. One of the key considerations is then minimising the number of gremlin queries required to add each node, link it to the map for easy graph navigation, and link it to its parent node to maintain the data structure. Also worth noting is that as users are going to save the same map multiple times as they update it, we don't want to generate duplicate nodes within the database, as having two nodes with the same unique ID defeats the purpose of the ID.

To prevent duplicate node generation, we can use a clever gremlin trick involving folding and coalescing. Typically adding a vertex (which represents a node in my app) in gremlin involves something along the lines of the following query (note that output.nodes[] is an array of objects of all the nodes generated from steps 1 and 2, which we are iterating by f++ from 0):

$"g.V().addV('Node').property('NodeID','{output.nodes[f].uniqueid}').property('Title','{output.nodes[f].name}').property('Description','{output.nodes[f].description}').property('Content','{output.nodes[f].content}').addE('PartofMap').to(g.V('{root.uniqueid}'))"

This generates all the nodes quite nicely each with four key properties, and links them all to the root, and thus the map, but if you perform the query twice it will simply duplicate both the vertex and edge added. Therefore we use our more refined query:

$"g.V().has('Node','NodeID','{output.nodes[f].uniqueid}').fold().coalesce(unfold(),addV('Node').property('NodeID','{output.nodes[f].uniqueid}').property('Title','{output.nodes[f].name}').property('Description','{output.nodes[f].description}').property('Content','{output.nodes[f].content}').addE('PartofMap').to(g.V('{root.uniqueid}')))"

What this does is make an initial check of whether the database already has a Node with the same uniqueid; if it does, then the rest of the query isn't performed, so no duplicate edge or vertex is generated. The massive benefit is you don't have to wait for query responses from the database for every single node to verify they already exist. The downside is that if a node already exists and it has been updated in a new save, you have to follow it up with a second query that updates the properties.

This is then followed by two simple queries to add the parent edge. The reason I have split it into two queries is that the first deletes all pre-existing parent links, as my App is going to have functionality such that the parent can be changed, and we only ever want one parent at any one time, so this methodology guarantees that is maintained. You may notice that each node has both an id and a uniqueid property; this is because the node ids are integer values that start at 0 for the root and increase by 1 each time, making them unique within the context of each map, while the uniqueids are UUIDs generated by a UUID v4 generation script, which virtually guarantees each node can be identified globally within the database (a collision is incredibly improbable but not impossible).

            //Initialises an empty dictionary within which we add a string to identify each query and the query itself
            Dictionary<string, string> gremlinQueries = new Dictionary<string, string> { };
            //A for loop to iterate over all nodes and generate queries to add to the dictionary
            for(int f = 0; f < numberofnodes - 1; f++) {
                //These queries add the node and the edge to the map if the node doesn't already exist, and then update the properties if it does already exist
                gremlinQueries.Add("SaveNode" + f.ToString(), $" g.V().has('Node','NodeID','{output.nodes[f].uniqueid}').fold().coalesce(unfold(),addV('Node').property('NodeID','{output.nodes[f].uniqueid}').property('Title','{output.nodes[f].name}').property('Description','{output.nodes[f].description}').property('Content','{output.nodes[f].content}').addE('PartofMap').to(g.V('{root.uniqueid}')))");
                gremlinQueries.Add("UpdateNodeProperties" + f.ToString(), $" g.V().has('Node','NodeID','{output.nodes[f].uniqueid}').property('NodeID','{output.nodes[f].uniqueid}').property('Title','{output.nodes[f].name}').property('Description','{output.nodes[f].description}').property('Content','{output.nodes[f].content}')");
                //Conditional to check the node isn't the root node, as the root has no parent node
                if (output.nodes[f].id != root.id) {
                    //Iterates within the other iteration across all other nodes to find the parent node and link it
                    for (int z = 0; z < numberofnodes - 1; z++)
                    {
                      //Adds the parent node query if there is a match between ParentLocalID and local node id
                      if(output.nodes[f].id == output.nodes[z].parentlocalid)
                        {
                            //Removes any previous parents linked to the node as each node should have only one parent
                            gremlinQueries.Add("RemoveDuplicateParentLinks" + f.ToString(), $" g.V().has('Node','NodeID','{output.nodes[f].uniqueid}').outE('ParentNode').drop()");
                            //Adds an edge between the parent node to the node being iterated across
                            gremlinQueries.Add("AddParentLink" + f.ToString(), $" g.V().has('Node','NodeID','{output.nodes[f].uniqueid}').addE('ParentNode').to(g.V().has('Node','NodeID','{output.nodes[z].uniqueid}'))");

                        }
                    }

                };
            };

            //Iterates across all members of the dictionary to allow execution of the queries against the database
            foreach (KeyValuePair<string, string> gremlinQuery in gremlinQueries)
            {
                Console.WriteLine($"Running {gremlinQuery.Key}: {gremlinQuery.Value}");

                // The CreateGremlinQuery method extensions allow you to execute Gremlin queries and iterate
                // results asynchronously
                IDocumentQuery<dynamic> query = client.CreateGremlinQuery<dynamic>(graph, gremlinQuery.Value);
                while (query.HasMoreResults)
                {
                    foreach (dynamic result in await query.ExecuteNextAsync())
                    {
                        //Writes to console the result of the query to show success
                        Console.WriteLine($"t {JsonConvert.SerializeObject(result)}");

                    }
                }

            }

Step four can be seen within the foreach loop, which iterates across the dictionary of queries and uses code from the Microsoft docs [3] linked in the references section. You can then package all of this into an async Task and have the JSON sent via the body of an HttpPost request.

Some key notes to make on this process and things I have learned from development:

· Use dynamic datatypes with caution. They are extremely useful with JSON when the datatype is not explicitly stated and the structure is unknown, but as they are evaluated at runtime it is important to put appropriate security checks in place to ensure the executed code doesn’t violate the security of the application in question.

· My methods aren’t necessarily the most algorithmically efficient way to perform this process, and refining them is something I’m going to work on; the purpose of this article is to give a good general idea of the process involved. Whilst ExpandoObjects are incredibly useful when you don’t know the model of the data you are processing in advance, they are highly memory intensive and not necessarily the best choice for scalability.

· It can be possible in certain circumstances to perform the first two steps regarding flattening the deeply nested JSON on the frontend rather than the backend and that may be worth looking into depending on context.

· From my testing on a free tier App Service and a 400 RU/s Cosmos DB instance, the code in question can process a map of around 50 nodes with around 10 properties each in under a second. This is a perfectly acceptable save time for a single-user use case, but it may not necessarily scale, and load tests are something I plan to follow up on before general release. Thankfully both Azure App Service and Azure Cosmos DB offer turnkey scalability, so that is always an easy contingency option.

· Whilst I have placed some emphasis on computational efficiency within this article, once the initial JSON has been sent to the server (typically an extremely quick transfer, with the JSON file size usually around 500 kB even with rich node data on over 100 nodes), no further user input is needed to ensure the save is completed, as the rest of the processing is performed asynchronously on the server side. This means that even if a save were to take 10 minutes to complete, the user could leave the webpage almost immediately after clicking save and there would be no issues.

Plans for the Future

Whilst the platform is still in a closed Alpha stage, I’m aiming for a full release by the end of 2018. The App is always going to be free by default, with premium membership options coming later down the line once the platform matures and the user-facing multi-tenant analytics module is completed. The main obstacle to a public release at the current time is the lack of a fully featured question/exam module. It’s also worth noting that the platform has been designed with the intention of eventually implementing a recommendation system through machine learning, which could offer subjects or topics you may be interested in based on what you have looked at previously, but I would like to finish the core feature set and release at least an open beta version of the site before I start working on this.

Closing Remarks

I’d like to thank everyone at Microsoft involved with the Imagine Cup for both their time and their invaluable advice. The finals were a brilliant experience and I’d definitely recommend that anyone with an interest in technology apply next year. I’d also like to congratulate the other finalists on their extremely polished and professional projects. Special congratulations go to the top two winners of the UK finals, from Manchester and Abertay, both of which showed a brilliant combination of presentation and use of modern technology. I wish them all the best in the world finals; they’ll no doubt do the UK proud.

1. http://wordsandbuttons.online/interactive_guide_to_homogeneous_coordinates.html

2. https://www.codeproject.com/Tips/986176/Deeply-Nested-JSON-Deserialization-Using-Recursion

3. https://azure.microsoft.com/en-gb/resources/samples/azure-cosmos-db-graph-dotnet-getting-started/

Top stories from the VSTS community – 2018.04.20


Here are top stories we found in our streams this week related to DevOps, VSTS, TFS and other interesting topics, listed in no specific order:

TOP STORIES

WHAT’S NEW

  • TFS 2018 Update 2 RC2
    Erin Dormier shares some key links on details about the latest Team Foundation Server 2018 Update 2 RC2, which introduces a platter of new features.
    clip_image001[4]
  • What’s new in VSTS Sprint 132 Update
    Alex Nichols introduces the Sprint 132 update of Visual Studio Team Services (VSTS). If you have multiple, dependent teams in your organization working on large products, check out the new build completion trigger.

VIDEOS

TIP: If you want to get your VSTS news in audio form then be sure to subscribe to RadioTFS.

Here are some ways to connect with us:

  • Add a comment below
  • Use the #VSTS hashtag if you have articles you would like to see included

Wolf in Sheep’s Clothing a cyber security tech-talk


Guest post by Mariam Elgabry, MRes Security and Cyber Crime at University College London and Microsoft Student Partner

 image.

The insider threat

As part of the Seminar Series run by the University College London Security and Crime Science Department, the Microsoft Student Partners invited a Microsoft Security expert to spook us a little with the reality of the “insider” threat in cyber space.

image

The common association we have when thinking of cyber security is predominantly focused on how to protect ourselves from external threat actors, investing heavily in heightening our “walls” against “outsiders”. Rarely do we think about the threats that already exist within our own “fences”. Our invited guest speaker explored how internal people, processes and technology can equally, if not more so, become a challenging threat to security given the opportunity.

image

Phil Winstantley, a cyber consultant at Microsoft who works to keep its customers safe and secure, has worked across many sectors, from the high-threat club of Defence and National Security through to National Critical Infrastructure and into the Finance and Media space. Outside of his day job, Phil is a Special Officer with the UK National Crime Agency (NCA), where he works on disrupting serious and organized crime.

Cyber Crime

He began by making us think about examples of different types of cyber thieves and their motives, making it clear that this would be an interactive talk and that we would quickly have to change our mindset: being vulnerable to a threat isn’t just possible but probable. We began to list incentives ranging from financial gain and personal data all the way up to national critical infrastructure and intelligence.

image

Phil outlined the main personas that often constitute the ideal internal threat actor: someone with privileged access, someone with third-party admittance, or a previous employee. We chuckled at his example of IT Support being the “perfect” insider threat, as it has both the opportunity and the excuse to access data (any data) that can in turn be used maliciously. “Black shadow” access may be the only data IT Support cannot get their hands on, as it is usually created by third parties in the form of Facebook groups or Twitter profiles; this lack of control, however, can also lead to the loss of admin monitoring. By far the most complex scenarios, Phil admitted, were the cases that involved an emotional drive, in other words the deep dark side of feelings, particularly revenge! A previous employee driven by revenge can cause huge damage very rapidly. Phil described these as the most challenging cases to fight against because they are non-technical, illogical and revolve around people, process or morale, which can be chaotic!

Paranoia

Despite the growing sense of paranoia in the room, Phil concluded his tech talk with a much brighter message. A set of statistics he displayed was quite surprising, revealing that most insider threat incidents originate from simple but stinging employee neglect. To move towards a less vulnerable structure, Phil and his current work with Microsoft focus on promoting awareness through education. They aim to make people and businesses more mindful of ways to decrease the “attack surface” of internal threat actors by shrinking their privilege space and limiting access to data and systems to a strictly need-to-know basis. He mentioned that the “just in time access” technique, which grants employees only momentary access to data outside their usual job description, is one way of preventing a large chunk of the common insider threats we face today.

Q&A

Opening up to a Q&A session, Phil was bombarded with career-oriented questions. He gave excellent advice on how to pursue a career in the cyber space. He spoke about how important it is to build your own profile through independent research and outreach, and to embrace “your inner geek”, as it is this quality that drives the best work forward and that employers are looking for.

Overall it was great to see the gender distribution of the room, with a larger number of girls, especially with International Women’s Day celebrated just a few days prior!

Resources

https://www.microsoft.com/en-gb/security/default.aspx

https://docs.microsoft.com/en-us/azure/security/azure-security-cyber-services

Running Docker Windows and Linux Containers Simultaneously


Many of you who are familiar with Docker for Windows know how you currently have to switch between running either Windows or Linux containers. In the following post, Premier Developer Consultant Randy Patterson teaches us how to combat this limitation and run Docker Windows and Linux containers simultaneously on the same host.


Starting with Docker for Windows version 18.03.0-ce-win59, Linux Containers on Windows (LCOW) is available as an experimental feature. Previously, you could get LCOW only on the Edge or Nightly Build channels. For people like me who need a stable version of Docker for Windows, this feature was not available until now.

Docker for Windows currently allows you to switch between running Windows or Linux containers, but not both. Linux containers were hosted in a Linux virtual machine, which was convenient for testing purposes but not for production. LCOW makes it possible to have an application that mixes Linux and Windows containers together on a single host.

What you Need

In order to use the new LCOW feature, you will need the latest version of Docker for Windows and have the Experimental Features enabled:

1. Docker for Windows version 18.03.0-ce-win59 or greater

    image

2. Experimental Features enabled

    a. Docker -> Settings –> Daemon

        clip_image004

Let's Get Started

With Docker for Windows started and Windows containers selected, you can now run either Windows or Linux Containers simultaneously. The new --platform=linux command line switch is used to pull or start Linux images on Windows.

docker pull --platform=linux ubuntu 

image

Now start the Linux container and a Windows Server Core container.

docker run --platform=linux -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"

docker run -d microsoft/windowsservercore ping -t 127.0.0.1

image

Both containers are running on a single host.

If you list your local image cache you’ll see a mixture of both Windows and Linux images. To determine which operating system an image requires, you can use docker inspect and filter on the “Os” property.

docker inspect --format '{{.Os}}' ubuntu

image
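If you want to check that property for every image in the local cache at once, a short PowerShell loop over the docker CLI works too. This is just a quick sketch, assuming the docker CLI is on your PATH:

docker images --format "{{.Repository}}:{{.Tag}}" | Where-Object { $_ -notmatch '<none>' } | ForEach-Object {
    # Query the same 'Os' property shown above for each tagged image
    $os = docker inspect --format '{{.Os}}' $_
    "{0,-45} {1}" -f $_, $os
}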

Conclusion

Running Windows and Linux containers simultaneously on the same host is an interesting new feature in Docker with lots of possibilities. However, this is an experimental feature and may have some issues. One known problem is that volumes are not stable, especially when mapping between Linux and Windows file systems. This can cause some containers that rely heavily on volumes to fail to load. Furthermore, tooling support is not yet complete. For example, Docker Compose and Kubernetes cannot yet mix Windows and Linux containers. Microsoft is currently tracking issues here and feature progress can be tracked at the GitHub site here.


FHIR Server in Azure (PaaS)


Fast Healthcare Interoperability Resources (FHIR) is a draft standard describing data formats and elements (known as "resources") and an application programming interface (API) for exchanging electronic health records. There are a number of implementations of libraries and servers already out there. In this blog post, I will show you how to run a FHIR server in Azure using PaaS services. Specifically, I will demonstrate how to run the frontend of a FHIR server in an Azure Web App and the backend using either Azure SQL Database or Azure Cosmos DB. You can find the templates and instructions in my https://github.com/hansenms/fhir-azure repository on GitHub.

There is a FHIR .NET API implementation. Based on that library, the company Firely has implemented a few different FHIR servers. They have an open source Spark server, which implements Draft Standard for Trial Use 2 (DSTU2), and a commercial Vonk server, which implements Standard for Trial Use 3 (STU3). I have created templates that allow easy deployment of:

  1. Firely Spark with Cosmos DB (MongoDb API) backend.
  2. Firely Vonk with Cosmos DB (MongoDb API) backend.
  3. Firely Vonk with Azure SQL Database as backend.

The first two of these configurations are not officially listed as supported in any of the documentation and should be considered for experimentation only. The third option (Vonk with SQL backend) could be considered for production deployments. Vonk is a commercial product and a license is needed; I have used a trial license for Vonk.

Deploying Firely Spark

The template for Firely Spark deploys a Web App and a Cosmos DB backend. It then sets all the appropriate app settings (including connection strings) on the web app and pulls the code for the server from a GitHub repository. It actually pulls the code from my dastu2/azure branch of the source code repository. I had to make a few code changes to make it work with Cosmos DB and in an Azure Web App. The key change was to force the MongoDb client to use TLS1.2 (in MongoDatabaseFactory.cs):

        private static MongoDatabase CreateMongoDatabase(string url)
        {
            var mongourl = new MongoUrl(url);

            //Switch to Tls12 only to be compatible with CosmosDB
            var settings = MongoClientSettings.FromUrl(mongourl);
            settings.SslSettings = new SslSettings();
            settings.SslSettings.EnabledSslProtocols = SslProtocols.Tls12;
            var client = new MongoClient(settings);

            return client.GetServer().GetDatabase(mongourl.DatabaseName);
        }

In addition to this I also switched off direct file system logging, which was causing some problems in a Web App. These are really just some hacks to make it run and some more thorough testing would probably identify additional issues. This is not production code.

On the page with the template for Firely Spark you should find buttons for deploying to Azure Commercial or Azure Government. It should deploy in 5 minutes or so. You may find that the deployment initially fails to deploy the code; it is a large code repository and it may time out during the build. If that happens, simply go to the deployment options for the Web App and redeploy. Once the code is deployed, hit the website and you should see the Spark FHIR server front page:

If you visit the https://FHIR-SERVER-URL/maintenance/initialize endpoint for the server, you should be able to populate it with some test data and then subsequently do a GET https://FHIR-SERVER-URL/fhir/Patient to see a list of patients, e.g. with Postman:
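If you prefer the command line to Postman, the same smoke test can be sketched in PowerShell. FHIR-SERVER-URL is a placeholder for your deployed web app’s hostname, and the last line assumes the standard FHIR Bundle shape for search results:

# Trigger the maintenance endpoint to load the test data
Invoke-WebRequest -Uri "https://FHIR-SERVER-URL/maintenance/initialize" -UseBasicParsing | Out-Null

# A FHIR search returns a Bundle; each entry carries a resource
$bundle = Invoke-RestMethod -Uri "https://FHIR-SERVER-URL/fhir/Patient" -Headers @{ Accept = "application/json" }
$bundle.entry.resource | Select-Object resourceType, id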

 

Deploying Firely Vonk

Vonk is a commercial server, but you can download binaries and a trial license. Because you need to register to get binaries and license, the templates do not deploy the server binaries themselves. However, they set up frontend and backend including all app settings for the web app. Since Cosmos DB is not officially supported, we will be using the template for Vonk with Azure SQL backend. Use the deploy buttons in the GitHub repository to deploy the infrastructure and once completed (should be 5 minutes or so), download the Vonk binaries and the trial license file.

The file with the binaries is called vonk_distribution.zip and you can add your trial license to the package by simply dragging and dropping it onto the zip file. After adding the license file, go to the Kudu console of the deployed web app and use "Zip Push Deploy" from the tools menu to deploy the application.

Once the application is deployed, you can check the front page of the server:

The Vonk server does not have an easy initialization endpoint like Spark, but you can upload some sample data. There are some detailed instructions in the repo on how to do that.

Preloading of data is disabled in the default configuration, so you have to go to the app settings of the web app and disable the exclusion of the preload command:

 

You do that by changing the "preload" to "preloadXXX" and restarting the Web App. Don't forget to set it back when you are done loading data.

You can find data at http://www.hl7.org/fhir/examples-json.zip. That is a pretty big file, and the upload may time out depending on the size of the resources you have deployed and the network speed, so I recommend chopping the file into smaller bits and uploading those. Here is how to do that with PowerShell:

#Get the data an unzip
Invoke-WebRequest -Uri http://www.hl7.org/fhir/examples-json.zip -OutFile examples-json.zip
New-Item -Type Directory example-json
Expand-Archive -OutputPath .\example-json -Path .\examples-json.zip -ShowProgress

#Then create new zip files with chunks:
$chunkSize = 100
$files = Get-ChildItem .\example-json\*.json
New-Item -Type Directory ".example-json-files"

$length = $files.Length
$fileCount = 0
for ($index = 0; $index -lt $length; $index += $chunkSize)
{
    $zipFileName = ".\example-json-files\example-json-" + $fileCount + ".zip"
    $filesToZip = $files[$index..($index+$chunkSize-1)] | Select-Object -ExpandProperty FullName
    Compress-Archive -Path $filesToZip -DestinationPath $zipFileName
    $fileCount++
}

#Send them to the FHIR server:
$zips = Get-ChildItem .\example-json-files\*.zip
foreach ($z in $zips)
{
    Write-Host "Processing file" $z.FullName
    Invoke-WebRequest -Method Post -Uri https://VONK-SERVER-URL/administration/preload -InFile $z.FullName -ContentType "application/octet-stream"
}

After this you have a Vonk server with preloaded data ready for testing.

Conclusions

This blog demonstrates how to run a FHIR server using PaaS services in Azure. The configuration with Web App frontend and Azure SQL Database backend could potentially be used for production use. The templates have been tested in Azure Government where the Web App and Azure SQL services are covered by the FedRAMP High platform ATO.

Visual Studio ‘Failed to verify module reference’ Authoring SCOM Management Pack, VSAE



Today I’m writing about an elusive problem with a management pack that I was authoring recently. I was writing a generic datasource for a registry discovery; that is, a discovery that will instantiate a class based on the existence of a registry key on the target machine, and generic because I will use a single RegistryProbe datasource for the discovery of multiple different class types. (This will save me some time down the road when using the VSAE Discovery template group in Visual Studio.) If the registry key exists, the specified class will be discovered. When I attempted to initiate a build (something I do incrementally after minor changes to verify that what I just changed is valid), I encountered an ambiguous error:

------ Build started: Project: WF.Fax, Configuration: Debug x86 ------
     Starting MP Build for WF.Fax.
     Starting Fragment Verification
     Resolving Project References
     Starting Merge Management Pack Fragments
     Starting Pre Processing
     Starting MP Verify

Service ModelClasses.mpx(10,9): warning TypeDefinitionInUnsealedMP: Unsealed management packs should not contain type definitions.  The element WF.Fax.RightFax.Server.Class of type ManagementPackClass found in an unsealed management pack.   (Path = WF.Fax.RightFax.Server.Class)

C:\Program Files (x86)\MSBuild\Microsoft\VSAC\Microsoft.SystemCenter.OperationsManager.targets(270,5): error : Failed to verify module reference [Type=ManagementPackElement=System.Discovery.ClassSnapshotDataMapper in ManagementPack:[Name=System.Library, KeyToken=31bf3856ad364e35, Version=7.5.8501.0], ID=Mapping] in the MemberModules list.

: Incorrect XPATH reference: ClassId.
    (Path = WF.Fax.RegistryProbe.Discovery.DS/Mapping)

Done building project "WF.Fax.mpproj" -- FAILED.


========== Build: 0 succeeded or up-to-date, 1 failed, 0 skipped ==========

There were a few key pieces of info here:

Failed to verify module reference [Type=ManagementPackElement=System.Discovery.ClassSnapshotDataMapper
- This tells me there is something wrong with how I’m using this module.

Incorrect XPATH reference: ClassId
-  This tells me there is something wrong with this parameter.

(Path = WF.Fax.RegistryProbe.Discovery.DS/Mapping)
-  This tells me where the problem is located: in the ‘Mapping’ section of the ‘WF.Fax.RegistryProbe.Discovery.DS’ module.

I once again visited the MSDN site where I found documentation for the reference module (https://msdn.microsoft.com/en-us/library/ee692953.aspx) and I copied the section of code from the Example into my Visual Studio 2017 SCOM 2016 MP project and BAM! I was able to build it successfully. The code from the example appeared to be absolutely identical to what I had first written.

Here’s what I first used (broken):

<ConditionDetection ID="Mapping" TypeID="System!System.Discovery.ClassSnapshotDataMapper">
     <!--
https://msdn.microsoft.com/en-us/library/ee692953.aspx-->
     <ClassId>$Config/ClassId$</ClassId>
     <InstanceSettings>$Config/InstanceSettings$</InstanceSettings>

</ConditionDetection>

Here’s the working code (builds successfully):

<ConditionDetection ID="Mapping" TypeID="System!System.Discovery.ClassSnapshotDataMapper">
     <!--
https://msdn.microsoft.com/en-us/library/ee692953.aspx-->
     <ClassId>$Config/ClassID$</ClassId>
     <InstanceSettings>$Config/InstanceSettings$</InstanceSettings>

</ConditionDetection>



Notice anything different about those two snippets? Well, I didn’t see a darn thing different, but one would build and the other would not. Why? Whyyyyy?!?! I was getting really frustrated because I needed to know. I decided to go toe to toe with the universe on this one and bust out some PowerShell skills to solve my problem: an individual character comparison of the two strings.

I’m not sharing the MP at this time because it’s for a customer and it’s actually not finished yet (at the time of this writing), but here’s the resulting script, available for download. My initial version was much uglier, but I decided to polish it up and add it to the Gallery so it might help others some day.

“This is an easy way to compare two strings. The script will compare individual characters and output the ASCII values, hash values, and comparison results for each. This is useful when two strings appear to be the same but may have different characters (encodings or ASCII values). The script contains a function of the same name so that this can be included easily into a profile or module.” (Example: ‘dot source’ the script in your PowerShell profile.)
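The downloadable script is the real thing; purely as illustration, a stripped-down sketch of the same idea could look like the function below (the function and property names here are mine, not necessarily those of the published script):

function Compare-StringChars {
    param([string]$First, [string]$Second)
    $max = [Math]::Max($First.Length, $Second.Length)
    for ($i = 0; $i -lt $max; $i++) {
        # Grab the character at this index from each string (or $null past the end)
        $a = if ($i -lt $First.Length)  { $First[$i] }  else { $null }
        $b = if ($i -lt $Second.Length) { $Second[$i] } else { $null }
        $aCode = if ($null -ne $a) { [int]$a } else { $null }
        $bCode = if ($null -ne $b) { [int]$b } else { $null }
        [pscustomobject]@{
            Index       = $i
            First       = $a
            FirstAscii  = $aCode
            Second      = $b
            SecondAscii = $bCode
            # -ceq is case-sensitive, so 'ClassId' vs 'ClassID' shows up as a mismatch
            Match       = ($a -ceq $b)
        }
    }
}

# Example: spot the differing character between the two XPath references
Compare-StringChars -First '$Config/ClassId$' -Second '$Config/ClassID$' | Where-Object { -not $_.Match }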

Here’s an example of how the script works:

VSBuildError-Compare_String1


In order to solve the mystery of why the universe was smiting me I created two “here” strings, one for each snippet of code and then I compared them with the script:

VSBuildError-Compare_String2

The Result:

VSBuildError-Compare_String_result1


Notice in my datasource Configuration: ClassID 

VSBuildError1-3

Here’s the problem:

ID” should be capitalized in “ClassID”. 

VSBuildError1-4

Mystery solved, I’m an idiot. I overlooked one single character.

This error should have tipped me off: Incorrect XPATH reference: ClassId

I went to a bit of trouble to track down the answer here, and in the end it was a tiny typo. This is not the first time that a tiny typo or hidden encoded character has caused me a major headache. Now I have another script to add to my toolset, so hopefully next time I can breeze through the problem in little time.

Migrating MongoDB databases from Mongo Lab to Cosmos DB


Note:

This is an update to blog post I did back in 2016 Azure DocumentDB and NightScout better together!

Ever since we started using NightScout more than four years ago to monitor our son’s glucose level remotely and better manage his Type 1 Diabetes condition, Microsoft Azure has been a core building block for hosting this open source solution, from Web Apps to Logic Apps to Notification Hub, except for the most important part, the backend data repository, for which we had used MongoLab since the early days. We always wanted to move to a more scalable option than the free sandbox offering from mlab, as it made no sense to avoid Azure and pay for the paid version of mlab just to avoid outages and performance issues. I heard from others in the community who opted to host MongoDB databases themselves, but ideally we wanted to use the subscription we already had access to for hosting not only the web app, mobile app, logic app and other pieces of the solution, but also the backend data!

Early on we had conversations about changing the NightScout code to port everything to Cosmos DB, but until now that required a lot of effort. With MongoDB API support in Cosmos DB, we can now host MongoDB databases as is while benefiting from all the scalability and redundancy capabilities the service provides!

To make this real we went through a couple of easy steps, and in a very short time our application was backed by and running on Cosmos DB. So long, MLab!

Here are the simple steps we followed to make this happen:

  1. On our Azure subscription, we provisioned a new Cosmos DB account by selecting “Database as a service for MongoDB”:
  2. From the Azure Portal, we got the connection string for the newly created database (the key has been changed, so the key below is not valid):
  3. Using Studio 3T for MongoDB, we established a connection to both the mlab and Cosmos DB instances:
  4. Next we provisioned an empty database in Cosmos DB and, magically, with a copy and paste action migrated all the collections from MLab over to Cosmos DB:
  5. After a couple of minutes the operation was completed.
  6. The final step was to change the connection string on the NightScout Web App, and just like that our open source online glucose monitoring app was running live on Cosmos DB and Azure (a scripted sketch of this step is shown below).
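For reference, that last step can also be scripted instead of done through the portal. The sketch below uses the AzureRM PowerShell cmdlets; the resource group, web app name and the MONGO_CONNECTION setting name are assumptions about a typical NightScout deployment, so adjust them to match yours:

# Hypothetical names throughout - replace with your own resource group, web app and Cosmos DB values
$rg  = "nightscout-rg"
$app = "my-nightscout"

$webApp = Get-AzureRmWebApp -ResourceGroupName $rg -Name $app

# Copy the existing settings first, because Set-AzureRmWebApp replaces the whole collection
$settings = @{}
foreach ($s in $webApp.SiteConfig.AppSettings) { $settings[$s.Name] = $s.Value }

# Point the app at the Cosmos DB MongoDB API endpoint (connection string copied from the portal)
$settings["MONGO_CONNECTION"] = "mongodb://<account>:<key>@<account>.documents.azure.com:10255/nightscout?ssl=true"

Set-AzureRmWebApp -ResourceGroupName $rg -Name $app -AppSettings $settings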

What’s Next

With this scenario in place, and with Cosmos DB as the backend, we can now leverage other Azure offerings like Machine Learning, Event Grid, Azure Functions and Logic Apps to unlock the power of the data being gathered from the sensors, pumps and our artificial pancreas, and use more Azure power to better manage our son’s Type 1 Diabetes condition.

 

References:

https://news.microsoft.com/features/open-source-and-the-cloud-changing-the-lives-of-people-with-type-1-diabetes/

 

Using VSTS API with PowerShell to scaffold Team Projects


In this post, Senior Application Development Managers Art Garcia demonstrates how to navigate the VSTS REST APIs.


Visual Studio Team Services, or VSTS, has matured into one of the leading application lifecycle management tools available. It allows you not only to manage your work and team velocity, but it is also a great tool for build and deployment. If you have several projects, both current and future, the setup and administration can sometimes be a challenge. You want to make sure all projects have certain teams set up and some basic build and release artifacts. This makes transitioning from one project to another seamless and makes for a consistent experience.

So therein lies the problem: I have multiple projects I need to create and assign users and teams to. Fortunately, VSTS has a rich array of REST APIs to help, covering everything from creating a project to adding work items and almost everything in between. For this discussion we will cover adding a project, adding teams to the project and finally adding users to the teams. We will also cover adding account-level groups. So, let’s get started.

You will spend most of your time in the VSTS REST API documentation. That is where you will find the REST calls to manage VSTS. We will start with Projects and Teams.

clip_image002

Here you navigate to the Create a team project operation. As with many of the REST calls in VSTS, it’s not just one call to get what you are after. To create a project you will run the following POST operation.

POST https://{instance}/defaultcollection/_apis/projects?api-version={version}

The {instance} is your VSTS account, i.e. myaccount.visualstudio.com, and the version is the latest version of the API, which is found in the documentation for the API. You will need to add the request body for this operation, and here is where it gets interesting. The request is as follows:

image

The first few, name, description and source control type, are straightforward. The process template is the one that takes another call: you need the GUID of the process you want to use. Agile, Scrum, CMMI, or any custom process in your account has a unique identifier, and we need that GUID in the request. Fortunately, that’s an easy call to the API.

GET https://{instance}/DefaultCollection/_apis/process/processes?api-version={version}

image

This will give you a list of processes. You can filter them by name to find the process you are looking for. Here is some PowerShell to find the process.


$projectUri = "https://" + $VSTSMasterAcct + ".visualstudio.com/DefaultCollection/_apis/process/processes?api-version=1.0"

$returnValue = Invoke-RestMethod -Uri $projectUri -Method Get -ContentType "application/json" -Headers $authorization

$id = ($returnValue.value).Where( {$_.name -match "Agile" })

return $id.id


Now you just replace the process id in the request and you are ready to create your project. Below is a snapshot of my CreateVSTSProject PowerShell function. If you notice, on line 17 I call GetVSTSProcesses and pass in my $userParams, a small JSON file that contains the project name and a few more parameters my code needs to run.

clip_image004
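Since the snapshot above is an image, here is a minimal sketch of the core call such a function makes. It reuses the $VSTSMasterAcct and $authorization variables and the GetVSTSProcesses helper from the earlier snippets; the project name, description and api-version are placeholders, so check the api-version against the documentation for your account:

# A sketch of the heart of CreateVSTSProject - not the author's exact function
$processId = GetVSTSProcesses $userParams   # returns the GUID of the chosen process template

$body = @{
    name         = "MyNewProject"
    description  = "Scaffolded by script"
    capabilities = @{
        versioncontrol  = @{ sourceControlType = "Git" }
        processTemplate = @{ templateTypeId = $processId }
    }
} | ConvertTo-Json -Depth 5

$projectUri = "https://" + $VSTSMasterAcct + ".visualstudio.com/DefaultCollection/_apis/projects?api-version=2.0-preview"
Invoke-RestMethod -Uri $projectUri -Method Post -ContentType "application/json" -Headers $authorization -Body $body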

The parameter file looks like this. I reference this file throughout my scripts, so I can change things easily in one place.

clip_image006

I also check whether the project exists; if it does, the API will throw an error. I capture that error and return that the project exists.

OK, so now we have a valid project in VSTS; let’s add some teams and groups to it. To do this we will make a call to an API in the same Project and Team family. The Create a Team REST call is as follows, and the documentation can be found here.

POST https://{instance}.VisualStudio.com/DefaultCollection/_apis/projects/{project}/teams?api-version={version}

The {instance} is your VSTS account, i.e. myaccount.visualstudio.com, and the version is the latest version of the API, which is found in the documentation for the API. The project is the name of the project you want to add these teams to. The request for this is simple, just a name and description, as shown below.

image

Here is what the code looks like:

clip_image008
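In essence the call boils down to something like the sketch below (the project name, team name and api-version are placeholders, and the account/header variables follow the earlier snippets):

# A minimal sketch of creating a team in the new project
$teamBody = @{ name = "Platform Team"; description = "Core platform engineers" } | ConvertTo-Json

$teamUri = "https://" + $VSTSMasterAcct + ".visualstudio.com/DefaultCollection/_apis/projects/MyNewProject/teams?api-version=2.2"
Invoke-RestMethod -Uri $teamUri -Method Post -ContentType "application/json" -Headers $authorization -Body $teamBody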

This gives you teams in the project you just created. These teams are only visible in that project. If you want a team to span multiple projects, you would instead create a group at the account level, which requires the Graph REST API. A word of caution here: as of this writing, this API is in an alpha preview. Really, that just means it’s not completely baked yet; they are still working out the usage patterns and hardening the code. That said, I still encourage you to use it; just remember, if you find an issue, report it. That way we all benefit.

So, to add groups at the account level with the Graph API, you will run the following REST call.

POST https://{instance}/_apis/graph/groups?api-version={version}

The request for this call is very straightforward, just the displayName and the description, as seen below.

image

This call will either create a new account-level group and return the group information, or simply return the group information for an existing group. Either way, you get the group information. The code to add a VSTS account-level group is shown below.

clip_image010
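Stripped to its essentials, the group call can be sketched as below. The vssps host name and the preview api-version are assumptions about the alpha Graph API, so verify them against the current documentation:

# A sketch of creating (or retrieving) an account-level group via the Graph API
$groupBody = @{ displayName = "All Developers"; description = "Account-wide developer group" } | ConvertTo-Json

$groupUri = "https://" + $VSTSMasterAcct + ".vssps.visualstudio.com/_apis/graph/groups?api-version=4.0-preview.1"
$group = Invoke-RestMethod -Uri $groupUri -Method Post -ContentType "application/json" -Headers $authorization -Body $groupBody

# Keep the descriptor - it identifies the group when adding members later
$group.descriptor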

At the beginning of this blog I said I would create a project, add teams and account-level groups. The only part missing is adding users to those groups and teams. This is actually very easy: all that’s required is a simple Graph REST call. Again, as before, be forewarned that this API is in alpha preview.

So the code to add users to a group or team is as follows.

clip_image012

If you look closely, you will see that this call asks for the group descriptor. Remember when we discussed adding an account group? I mentioned that it would return the group information. One important part of that return is the group descriptor; you will need it to find the appropriate group to add users to. Once you have that, it’s a matter of adding the user’s email in the request, as shown in the code example.
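Put together, adding a member can be sketched like this (again, the vssps host, the groupDescriptors query parameter and the api-version are assumptions about the preview Graph API):

# A sketch of adding a user to the group found above, using its descriptor
$userBody = @{ principalName = "jane.doe@contoso.com" } | ConvertTo-Json

$userUri = "https://" + $VSTSMasterAcct + ".vssps.visualstudio.com/_apis/graph/users?groupDescriptors=" + $group.descriptor + "&api-version=4.0-preview.1"
Invoke-RestMethod -Uri $userUri -Method Post -ContentType "application/json" -Headers $authorization -Body $userBody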

That’s a lot of information to digest. Believe me, it took me a bit to wrap my head around the API and all it’s capable of. These APIs are a wonderful way to automate and standardize many VSTS functions and manual tasks. What I have shown is just the beginning. Now we have the project, the teams and the users in the teams or groups. What about a build script, a release definition, or maybe adding some standard work items? All are possible with the API.

I trust this has given you a small glimpse into what is possible with the API. The next entry in the series will add a build and release to the project you just created. Then we will tackle security: securing the teams and groups to allow or deny the operations that can be performed. That’s where it gets interesting. In my upcoming post, I will demonstrate using the VSTS REST APIs to secure Team Projects.

Get the source from this article on GitHub here.


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

AWS: Obtain BlendedCost billing data


There may be a case when you want to use the Get-CECostAndUsage cmdlet, and I thought it might be helpful to document some examples, since the AWS documentation doesn’t show any. Below is my attempt to come up with some examples that I found useful; maybe someone else will too.

 

Running the cmdlet without any parameters doesn’t reveal what we are looking for, so I checked the command reference: https://docs.aws.amazon.com/powershell/latest/reference/items/Get-CECostAndUsage.html. None of the parameters appear to be mandatory, yet this is the output I got without any parameters:

 

I guessed that the period is a DateInterval, so I went ahead and created one as a sample, but hit another required parameter:

I then made sure I provided Granularity, trying DAILY:

Valid values for Metric are BlendedCost, UnblendedCost, UsageQuantity, NormalizedUsageAmount and AmortizedCost. I tried one of them and hit another error:

Sounds good so far; I had to enable Cost Explorer access for my user. So I activated IAM user/role access to billing information (https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/grantaccess.html) to grant my IAM user access to billing info. And here is the result:

I was clearly heading somewhere. After expanding the ResultsByTime object, I eventually landed on the result I was looking for, which looked similar to the Cost Management Dashboard on the AWS console.

Here is from the AWS web console billing home:

 

So, here is what I added to my PS profile so I can see the cost for the current month right away:

 

<Code>

#Print cost details for the Account for current month
$currDate = Get-Date
$firstDay = Get-Date $currDate -Day 1 -Hour 0 -Minute 0 -Second 0
$lastDay = Get-Date $firstDay.AddMonths(1).AddSeconds(-1)
$firstDayFormat = Get-Date $firstDay -Format 'yyyy-MM-dd'
$lastDayFormat = Get-Date $lastDay -Format 'yyyy-MM-dd'

$interval = New-Object Amazon.CostExplorer.Model.DateInterval
$interval.Start = $firstDayFormat
$interval.End = $lastDayFormat

$costUsage = Get-CECostAndUsage -TimePeriod $interval -Granularity MONTHLY -Metric BlendedCost

$costUsage.ResultsByTime.Total["BlendedCost"]

</Code>
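If you want more detail than a single monthly number, the same cmdlet can group results. The sketch below assumes the -GroupBy parameter accepts a GroupDefinition with a DIMENSION/SERVICE key, mirroring the underlying Cost Explorer API, so treat it as a starting point rather than a recipe:

#Break the current month down by day and by service
$byService = New-Object Amazon.CostExplorer.Model.GroupDefinition
$byService.Type = "DIMENSION"
$byService.Key = "SERVICE"

$daily = Get-CECostAndUsage -TimePeriod $interval -Granularity DAILY -Metric UnblendedCost -GroupBy $byService

foreach ($day in $daily.ResultsByTime)
{
    foreach ($grp in $day.Groups)
    {
        "{0} {1,-40} {2}" -f $day.TimePeriod.Start, $grp.Keys[0], $grp.Metrics["UnblendedCost"].Amount
    }
}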
