
Free Azure account for school and university students – get yours now


Digital skills will be highly important for the careers of today's children and young people – studies such as "The Class of 2030" confirm this. To prepare for the challenges of the future and acquire important digital skills, school and university students have had new options since December 12: Azure for Students gives them the chance to secure numerous benefits for digital learning.

Great features for digital natives

After creating a free Azure account, school and university students receive a credit of 100 US dollars. In addition, they get free access to numerous popular products such as Visual Studio Code. Student status must be verified with an email address from the respective educational institution.

Microsoft Learn – build skills for the future now

As part of the Azure account, learners have access to a wide range of free exercises to expand their digital skills. How about building your own app with Visual Studio Code? Or creating an intelligent application that can hear, see, speak, and understand thanks to artificial intelligence (AI)? The free Azure account also lets you work with the latest open-source technologies, for example to build powerful cloud-based machine-learning applications. A complete list of all products and applications that can be used free of charge is available here.

And anyone who already knows their way around can join the Imagine Cup next year: in this tech competition, teams have the chance to bring their innovation to life and win up to 100,000 US dollars.

Take Microsoft Azure certification exams and test your knowledge

Aspiring Azure experts can put their knowledge to the test in the comprehensive Microsoft Azure certification exams. Learners can also use the Microsoft Official Practice Tests to prepare for the exams. A Microsoft certification demonstrates concentrated IT knowledge and know-how and improves students' chances on the job market. Azure certifications are available in numerous areas such as Microsoft Azure Architect Design or Cloud Data Science with Azure Machine Learning.

New options for educational institutions starting in 2019

Toward the end of 2019, there will also be new features for schools, universities, and other educational institutions. A central Azure Education Hub will soon be available to them, from which all software products and cloud services can be managed. It will also make it possible to give school and university students simple, free access to Microsoft Azure and the Microsoft Learn learning environments.


RichEdit Property Sets


RichEdit has many character-format properties, most of which are documented for ITextFont2 and CHARFORMAT2. Nevertheless, the OpenType specification defines many more character-format properties called OpenType features consisting of a 32-bit identifier (id) and a 32-bit value. For example, the Gabriola font has stylistic set 6, which displays “Gabriola is graceful” as

Variable fonts are the latest addition to the OpenType specification and the variable-font axis coordinates are also specified by an id-value pair. For example, the experimental Holofont font has three axes, ‘wght’, ‘wdth’, and ‘opsz’, the first two of which are illustrated in

You can try out variable fonts by checking out this site. You can see myriad articles and talks here. Variable fonts present a user-interface (UI) challenge. One technique is to use a slide bar to choose an axis coordinate. AI might provide good default values. If the traditional font drop-downs are used, you can be confronted with a zillion choices. Holofont has 9 weights × 5 widths × 6 optical sizes = 270 entries, which all appear in the current Word drop-down font list! And that's tiny compared to the continua of possible axis-coordinate values. To illustrate this quandary, here are the first few entries in the Holofont font drop-down list:

Narrow Thin

Narrow ExtraLight

Narrow Light

Narrow SemiLight

Narrow

Narrow SemiBold

Narrow Bold

Narrow ExtraBold

Narrow Black

SemiNarrow Thin

SemiNarrow ExtraLight

SemiNarrow Light

SemiNarrow SemiLight

SemiNarrow

SemiNarrow SemiBold

SemiNarrow Bold

SemiNarrow ExtraBold

SemiNarrow Black

Thin

ExtraLight

Light

SemiLight

Regular

SemiBold

Bold

ExtraBold

Black

SemiWide Thin

SemiWide ExtraLight

SemiWide Light

SemiWide SemiLight

SemiWide

SemiWide SemiBold

SemiWide Bold

SemiWide ExtraBold

SemiWide Black

Clearly such detailed font drop-down lists are impractical, so maybe we should use slide bars or drag selected text handles.

OpenType properties that are used in shaping complex scripts like Arabic are invoked automatically by DirectWrite and Uniscribe. But many other OpenType properties, including these examples, are discretionary and must be present in the backing store to work. In addition, it's desirable to be able to add other kinds of properties. The CHARFORMAT2::dwCookie allows a client to attach one 32-bit value to a text run, but there's a need to attach multiple properties, such as spelling, grammar, and other proofing-error annotations, along with other client properties.

To handle all these properties, the latest Office 365 RichEdit implements property sets as described in the remainder of this post. The D2D/DirectWrite RichEdit mode (but not the GDI/Uniscribe mode) displays the OpenType properties as illustrated in the figures above. The following, admittedly technical, discussion describes the property-set object model, the RTF and binary file format additions for property sets, how to display variable-font and other OpenType features using DirectWrite, and the OpenType variable-font (fvar) table.

Kinds of Properties

The kinds of RichEdit character format properties are summarized in the table

ID Range                  Usage
0..0xFFFF                 Properties not in property sets
0x10000..0x1FFFF          RichEdit temporary properties such as proofing errors
0x20000..0x2FFFF          Client temporary properties
0x30000..0x3FFFF          RichEdit persisted properties
0x40000..0x2020201F       Reserved; returns E_INVALIDARG if used
0x20202020..0x7E7E7E7E    OpenType features/axes (if 0x80808080 mask = 0; else invalid)
0x7E7E7E7F..0xFFFFFFFF    Reserved; returns E_INVALIDARG if used

There are no persisted client properties since they are client-specific and could be misinterpreted if read by a different client.

Property Set Object Model

The client APIs for setting and getting properties are ITextFont2::SetProperty (id, value) and ITextFont2::GetProperty (id, pvalue). The id's for these methods are OpenType feature tags, OpenType variable-font axis tags (see MakeTag() below), or annotation id's defined in the table at the end of the preceding section. Since OpenType tag characters belong to a limited set of ASCII characters in the U+0020..U+007E range, there's plenty of room in the 32-bit id space to define other properties. Common properties like font weight are already represented as CCharFormat::_wWeight and in principle don't need to be members of a property set. Since by default there are no properties in a property set, calling ITextFont2::SetProperty(id, tomDefault) deletes the property id if it exists. Note that id values < 0x10000 are reserved for other purposes, such as tomFontStretch (0x33E) to define a font's stretch value. These values are well below the first possible OpenType id 0x20202020 (4 spaces). The largest OpenType tag is 0x7E7E7E7E, which gives 94⁴ = 78,074,896 tags, although most of them will never be used or are used for other purposes such as ‘MATH’ for the math table. This leaves 256⁴ − 94⁴ = 4,294,967,296 − 78,074,896 = 4,216,892,400 id's for other purposes.

OpenType tags are constructed in the order given by the macro

#define MakeTag(a, b, c, d)   (((d)<<24) | ((c)<<16) | ((b)<<8) | a)

For example, the variable-font weight axis tag ‘wght’ has the value 0x74686777.

Internally it’s useful to mark OpenType feature tags with a bit (tomOpenTypeFeature—0x00800000) to distinguish them from variable-font axis tags. This bit cannot be confused with annotation id’s which have values of 0x3FFFF or less. The feature tags are defined by the DWRITE_FONT_FEATURE_TAG enum defined in dwrite.h. The variable-font axis tags are defined by the font’s fvar table discussed below and in principle can be any combination of ASCII letters. So, if a tag isn’t a feature tag, we assume that it’s a variable-font axis tag and let DirectWrite accept or reject it.

Property Set RTF

In RTF, property sets are encoded similarly to the {\colortbl…} for colors and have the form

{\*\propsets id value…; …}

Here the id and value are 32-bit values that are encoded for all properties in a property set. Each property set is terminated by a semicolon, and this format is repeated for all property sets used in the text. If an id starts with an ASCII letter and consists of four ASCII letters, it is written as a character string; for example, the id ‘wdth’ is written as such for the 32-bit id value 0x68746477. If any byte in the id isn't an ASCII letter, the id is written as a 32-bit integer. These choices make it easier to read property id's. A value with no fractional part is written as an integer; a value with a fractional part is written as a decimal fixed-point number, e.g., 123.545. Any other combination is invalid and ends reading of the RTF stream. The property-set table {\*\propsets …} is stored in the RTF header following {\fonttbl …} and {\colortbl …} (if they are present).

An example with two property sets containing variable-font id’s is

{\*\propsets wght 800 wdth 104;wght 400;}

This syntax is a slightly simplified version of the variable-font CSS syntax used in web applications.
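
To make the id-encoding rule concrete, here is a minimal sketch, not taken from RichEdit, of writing a single property id in this form; the function name and the FILE-based output are assumptions.

// Sketch: write one property id the way the \propsets rules above describe --
// as a 4-letter tag when every byte is an ASCII letter, otherwise as a 32-bit integer.
#include <ctype.h>
#include <stdio.h>

void WritePropertyId(FILE *stream, unsigned long id)
{
   char tag[5] = { (char)(id & 0xFF),         (char)((id >> 8) & 0xFF),
                   (char)((id >> 16) & 0xFF), (char)((id >> 24) & 0xFF), 0 };

   bool allLetters = isalpha((unsigned char)tag[0]) && isalpha((unsigned char)tag[1]) &&
                     isalpha((unsigned char)tag[2]) && isalpha((unsigned char)tag[3]);

   if (allLetters)
      fprintf(stream, "%s ", tag);    // e.g., 0x68746477 is written as "wdth"
   else
      fprintf(stream, "%lu ", id);    // fall back to a 32-bit integer
}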

In the RTF body, a reference to the Nth property set in the \propsets table is given by \psN (like \crN for choosing the Nth color in the \colortbl). Here N is 0-based, that is, \ps0 refers to the property set immediately following \propsets.

Property Set Binary Format

The property id-value pair is written in the binary format as opyidProperty (0x8A), optProperty (opt8Bytes) followed by the 32-bit id and value. CPropertySet is written as opyidPropertySet (0x89), optPropSet (optArray) followed by the set’s opyidProperty’s. The array of property sets CPropertySets is written as opyidPropertySets (0x88), optPropertySets (optArray) followed by the opyidPropertySet’s. These constants are defined in rebinary.h.

Rendering Variable-Fonts and OpenType Features

In addition to backing-store enhancements, the display routines need to pass active variable-font axis coordinates and OpenType features to DirectWrite. See OpenType Variable Fonts for information about the DirectWrite APIs for this. To create a font specified in part by axis coordinates, RichEdit gets an IDWriteFontFace5 (see dwrite_3.h) with the desired axis coordinates in place of the usual IDWriteFontFace. It does this by calling IDWriteFontFace::QueryInterface() to get an IDWriteFontFace5 interface, calling IDWriteFontFace5::GetFontResource() to get an IDWriteFontResource interface, releasing the IDWriteFontFace5 and calling IDWriteFontResource::CreateFontFace() to get a new IDWriteFontFace5 with the desired axis coordinates. Then it uses this IDWriteFontFace5 instead of the original IDWriteFontFace.
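
A minimal sketch of that call sequence follows; the specific axis values, the ComPtr usage, and the function name are illustrative assumptions, not RichEdit's actual code.

// Sketch: derive an IDWriteFontFace5 that bakes in explicit axis coordinates.
#include <windows.h>
#include <dwrite_3.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

HRESULT CreateVariableFace(IDWriteFontFace *pFace, ComPtr<IDWriteFontFace5> &newFace)
{
   ComPtr<IDWriteFontFace5> face5;
   HRESULT hr = pFace->QueryInterface(IID_PPV_ARGS(&face5));
   if (FAILED(hr))
      return hr;                                     // font face isn't variable-aware

   ComPtr<IDWriteFontResource> resource;
   hr = face5->GetFontResource(&resource);
   if (FAILED(hr))
      return hr;

   DWRITE_FONT_AXIS_VALUE axes[] =
   {
      { DWRITE_FONT_AXIS_TAG_WEIGHT, 800.0f },       // 'wght' (example value)
      { DWRITE_FONT_AXIS_TAG_WIDTH,  104.0f },       // 'wdth' (example value)
   };
   return resource->CreateFontFace(DWRITE_FONT_SIMULATIONS_NONE,
                                   axes, ARRAYSIZE(axes), &newFace);
}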

To pass OpenType features to DirectWrite, copy them into a std::vector<DWRITE_TYPOGRAPHIC_FEATURES> and pass them to IDWriteTextAnalyzer1::GetGlyphs() and IDWriteTextAnalyzer1::GetGlyphPlacements(). Some font features, such as Gabriola’s stylistic set 6 ‘ss06’ introduce glyphs with ascents and/or descents that exceed the standard typo ascents and descents as discussed in High Fonts and Math Fonts. To display such large glyphs with no clipping, the rendering software needs to calculate the line ascent and descent from the glyph ink, rather than from the usual font values. This is the approach used with the LineServices math handler.
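
For the feature path, a minimal sketch of the structures involved is shown below; the single-range setup and the names are assumptions for illustration.

// Sketch: request Gabriola's stylistic set 6 ('ss06') for one run of textLength characters.
#include <windows.h>
#include <dwrite_1.h>

void DescribeStylisticSet6(UINT32 textLength)
{
   DWRITE_FONT_FEATURE ss06 = { DWRITE_FONT_FEATURE_TAG_STYLISTIC_SET_6, 1 };
   DWRITE_TYPOGRAPHIC_FEATURES runFeatures = { &ss06, 1 };

   // These map to the features, featureRangeLengths and featureRanges parameters
   // of IDWriteTextAnalyzer::GetGlyphs() and GetGlyphPlacements().
   const DWRITE_TYPOGRAPHIC_FEATURES *featureList[]  = { &runFeatures };
   UINT32                             rangeLengths[] = { textLength };
   UINT32                             rangeCount     = 1;
   (void)featureList; (void)rangeLengths; (void)rangeCount;   // passed to the analyzer in real code
}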

OpenType Variable Font Axes

The variable font axes are defined in the OpenType fvar table, which has the header

struct FvarHeader             // Variable font fvar table header
{
   OTUint16 majorVersion;     // Major version of fvar table (1)
   OTUint16 minorVersion;     // Minor version of fvar table (0)
   OTUint16 axesArrayOffset;  // Byte offset from table start to first VariationAxisRecord
   OTUint16 reserved;         // Permanently reserved (2)
   OTUint16 axisCount;        // Count of VariationAxisRecord's
   OTUint16 axisSize;         // BYTE count of VariationAxisRecord (20 for this version)
   OTUint16 instanceCount;    // Count of InstanceRecord's
   OTUint16 instanceSize;     // BYTE count of InstanceRecord
};                            //  (axisCount*sizeof(DWORD) + (4 or 6))

Types that begin with OT, such as OTUint16 (2 bytes) and OTUint32/OTFixed (4 bytes), describe big-endian quantities that need byte reversal to work with our little-endian machine architecture. The header is followed by axisCount VariationAxisRecord’s defined by

struct VariationAxisRecord
{
   OTUint32 axisTag;          // Tag identifying axis design variation
   OTFixed  minValue;         // Minimum coordinate value (16.16 format)
   OTFixed  defaultValue;     // Default coordinate value
   OTFixed  maxValue;         // Maximum coordinate value
   OTUint16 flags;            // Axis qualifiers (hidden if 1)
   OTUint16 axisNameID;       // ID for 'name' table entry that provides axis display name
};

The axisTag’s have the same MakeTag() form as the regular OpenType tags. Since they are accessed via the OpenType fvar table, they are in a different namespace from the regular OpenType tags. We don’t know of any tag conflicts between the two name spaces, so it’s probably okay not to mark the axis tags differently. But internally we mark OpenType feature tags by setting the high bit of byte 2 (OR in tomOpenTypeFeature), since the tags consist of ASCII symbols in the range 0x20..0x7E. This marking avoids sending OpenType tags to the wrong DirectWrite APIs.

The VariationAxisRecord’s are followed, in turn, by the InstanceRecord’s defined by

struct InstanceRecord
{
   OTUint16 subfamilyNameID;       // ID for 'name' table entry giving subfamily name
   OTUint16 flags;                 // Reserved for future use (0)
   OTFixed coordinates[axisCount]; // instanceSize coordinates
   OTUint16 postScriptNameID;      // Optional. ID for 'name' table entry giving PostScript name
};

At some point, it might be worth dealing with the InstanceRecord’s, but it’s certainly easier to use axis coordinates than handle myriad localizable font names (see Holofont discussion in the introduction). RichEdit could export a facility for translating between the two, but probably such a facility should be delegated to the font picker. The localizable font names are designed to help end users recognize the nature of a variable font instance, but they aren’t efficient at the RichEdit level. They also aren’t usable for variable-font animations, since such animations vary axis coordinates continuously.
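
Since all the OT-prefixed fields above are big-endian, here is a minimal byte-swapping sketch; the helper names are assumptions rather than RichEdit's.

// Sketch: read big-endian OpenType fields into native little-endian values.
#include <stdint.h>

static uint16_t ReadOTUint16(const uint8_t *p)
{
   return (uint16_t)((p[0] << 8) | p[1]);
}

static uint32_t ReadOTUint32(const uint8_t *p)      // OTFixed uses the same byte order
{
   return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
          ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}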

4-28 Decimal Floating-Point Format

The OpenType “fvar” table described in the previous section defines the min, max, and default variable-font axis coordinate values using the OpenType 16.16 fixed-point format. The integer part of the value is given by shifting right 16 bits, i.e., dividing by 65536. If the fractional part is nonzero, store the value in a floating-point variable and divide by 65536. In applications, coordinates are easier to read when only the integer part is displayed for values with a zero fractional part. Since purely fractional coordinates (values < 1) aren't useful, a value whose absolute value is less than 65536 can be understood to be a plain integer without a fractional part.
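
As a small sketch of that rule (the helper name is an assumption), a 16.16 axis value read from fvar can be decoded like this:

// Sketch: decode an OpenType 16.16 fixed-point value after byte reversal.
#include <stdint.h>

static double DecodeFixed16_16(int32_t value)
{
   if ((value & 0xFFFF) == 0)       // no fractional part:
      return value >> 16;           //  treat it as a plain integer coordinate
   return value / 65536.0;          // otherwise keep the fraction
}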

The OpenType 16.16 format is a binary fixed-point format that may encounter roundoff when converted to decimal, e.g., 800.1 → 800.100006. This roundoff is ugly in RTF, CSS, and dialog boxes. So we need a decimal floating-point format that doesn’t have such roundoff. The IEEE 754-2008 decimal floating-point encoding defines decimal32 with 20 bits of precision, a sign bit and the large exponent range of 10¹⁹². OpenType variable-font axis coordinates need at most four decimal places. The sign bit is used for the slant (slnt) standard axis and can be used for custom axes.

If the value has no fractional part, we store it as a standard 2’s complement integer rather than in the high word of 16.16 for readability in RTF, CSS and dialog boxes. To convert it to the 16.16 format, multiply by 65536. But if the value has a fractional part, we use the following signed 4-28 decimal floating-point format

s (sign)    n (divide selector)    significand
bit 31      bits 30..28            bits 27..0

If the number is negative, the sign bit 31 is 1. Bits 0..27 are the significand. The decimal divide value n (bits 28..30) is defined by the following table:

n     divide significand by
000   (not floating point)
001   10
010   100
011   1000
100   10000
101   100000
110   1000000
111   (not floating point)

 

n must have at least one 0 bit to distinguish the format from a negative 2’s complement integer and at least one 1 bit to distinguish it from a positive integer.

This gives 28 bits of precision, with a maximum value of (2²⁸ − 1)/10 = 26843545.5 with one decimal place and a minimum value of 0.000001 with six decimal places. These limits are well beyond the values used for OpenType variable-font axis coordinates, which typically range between 1 and 1000. The 4-28 decimal floating-point format is easy to use and displays the original fixed-point values with no round-off error. To convert it to the 16.16 format, store the 28-bit significand field in a double variable, divide by the number corresponding to n, multiply by 65536 and round to the nearest integer. For the DWrite APIs, store the 28-bit significand field in a double, divide by the number corresponding to n and cast the result to a FLOAT.

In C, the 4-28 decimal floating-point format of the value x is recognized by the macro IsDecimalFloat(x) defined by

#define IsDecimalFloat(x)       IN_RANGE(1, (x >> 28) & 7, 6)

where IN_RANGE() is defined by

#define IN_RANGE(n1, b, n2)     ((unsigned)((b) - (n1)) <= (unsigned)((n2) - (n1)))

The divide factor in the n table is given by pow(10, (x >> 28) & 7), or (x >> 28) & 7 can be used as an index into a table of divisors.
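
Putting the pieces together, here is a minimal sketch (the function name is an assumption) of converting a stored value to the FLOAT that the DirectWrite axis APIs take, following the steps described above:

// Sketch: convert a stored axis value (integer or 4-28 decimal float) to a float.
// IsDecimalFloat() and IN_RANGE() are the macros defined above.
#include <math.h>

float AxisValueToFloat(long x)
{
   if (!IsDecimalFloat(x))                    // plain 2's complement integer
      return (float)x;

   double value = (double)(x & 0x0FFFFFFF);   // 28-bit significand
   value /= pow(10, (x >> 28) & 7);           // decimal divide selector n
   if (x < 0)                                 // sign bit 31
      value = -value;
   return (float)value;
}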

People First – The Frequently Overlooked Importance of Culture Change in DevOps Journeys


In this post, App Dev Manager Kristofer Olin explores the frequently overlooked importance of culture change in DevOps journeys.


Microsoft defines DevOps as the union of people, process, and products to enable continuous delivery of value to end users.  The underlying concepts of DevOps are not new by any means, but their recent application to software lifecycle management has proven to be transformational in the success, speed to market, and consistent quality of applications and new features. Improved employee satisfaction and inter-departmental cooperation toward organizational success are inherent benefits of DevOps. So why is it that some organizations, though they may invest great expense in tools, process development, and rigor, fail to obtain the transformation and benefits other organizations have so greatly experienced?

I was recently onsite with a customer struggling to pass mandated test cycles before fielding a suite of products. Our team was briefing their leadership on the fundamentals of DevOps, at the conclusion of which the senior representative in the room stated, "I hate to break it to you, but we're already doing this." And, yes, if you were to walk the halls, you would see large printed flowcharts and posters outlining their processes, containing all kinds of meaningful terminology like Value Stream Mapping, Agile, waste, constraints, etc. If that were the measure of success, one would say they'd made it; however, they'd been developing and testing for two-and-a-half years with no minimum viable product yet to show for it. This may be a drastic example, but it's reflective of a common reality: either the product is repeatedly rejected in test, or it is passed through due to business needs and then encounters repeated issues in production or doesn't fully meet the need.

The conversation continued after our presentation, during which it turned toward the definition presented above, and their senior representative was enlightened. People. People are first in the definition because they are what's most important in obtaining success on a DevOps journey. Without changing people's behaviors and attitudes, the culture of an organization cannot be transformed to enable DevOps success, which implies there will always be blockers and constraints that cannot be overcome. Consider Larman's Laws of Organizational Behavior:

  1. Organizations are implicitly optimized to avoid changing the status quo middle- and first-level manager and “specialist” positions & power structures.
  2. As a corollary to (1), any change initiative will be reduced to redefining or overloading the new terminology to mean basically the same as status quo.
  3. As a corollary to (1), any change initiative will be derided as “purist”, “theoretical”, “revolutionary”, "religion", and “needing pragmatic customization for local concerns” — which deflects from addressing weaknesses and manager/specialist status quo.
  4. As a corollary to (1), if after changing the change some managers and single-specialists are still displaced, they become “coaches/trainers” for the change, frequently reinforcing (2) and (3).
  5. Culture follows structure.

Do these points resonate with you and your organization? The most common cause of failure in a DevOps journey is not a lack of tools, process, or collective team knowledge. No, success is most deeply rooted in an organization's culture, which is driven by its structure; specifically, a culture of trust, open communication, and access. Too often, organizations invest in tooling, instantiate processes, and implement automation, but lack trust between teams or an organizational structure that leads to success. Teams continue to "throw products over the wall" without an empathetic understanding of the business needs and goals throughout the lifecycle, disconnected from one another and from the customer.

Microsoft's Developer Advisory Support has a proven track record of helping organizations realize success on their DevOps journeys. Our Application Development Managers and Development Consultants can insightfully assess an organization's needs; are willing to make hard recommendations that traverse an entire business' hierarchy; and are experienced in both coaching to successful culture change and with helping to implement processes and tools to support the value chain all the way to the customer. Let us help your organization find success in your DevOps journey.

SonarQube Hosted On Azure App Service


This post is provided by Premier Field Engineer, Nathan Vanderby, who simplifies the setup of a SonarQube server with one step using Azure App Services.


SonarQube is a tool that centralizes static code analysis and unit test coverage. It can be used across multiple languages and scales from a single project up to the enterprise.


There are various guides on how to set up SonarQube hosted in Azure, from hosting it on a virtual machine (here and here) to running a Docker image or a Linux container in an Azure App Service. There are also instructions for using IIS as a reverse proxy to allow SSL traffic for additional security.

There is another project out there that tries to simplify setting up a publicly accessible SonarQube server: SonarQube-AzureAppService. It uses the HttpPlatformHandler extension for IIS, which can similarly be used to host SonarQube on-premises with IIS directly.

SonarQube-AzureAppService

Project URL: https://github.com/vanderby/SonarQube-AzureAppService

This project simplifies the setup of a SonarQube server to one step. Simply click the Deploy to Azure link on the project homepage and follow the short walkthrough to have the resources deployed and configured. The initial start time for SonarQube may take up to 10 minutes on slower resources. That's all! Read below for more in-depth details on what's going on behind the scenes.

Deploying Azure Resources

The Deploy To Azure button on the GitHub repository uses the azuredeploy.json project file to deploy an ARM template. The ARM template defines a simple App Service plan, a web app, and the deployment of the code from the GitHub repository. The code is pushed to the repository folder in the web app. Within the project, a .deployment file defines a deployment script to execute.


On a side note, at the time of writing the minimum App Service Plan tier is Basic. The Free and Shared tiers throw an error on startup related to Java memory restrictions with the default SonarQube settings.

Deployment Script

The deployment script is Deploy-SonarQuveAzureAppService.ps1. This script copies the wwwroot folder from the repo, which contains the web.config and HttpPlatformHandlerStartup.ps1 files, to the web app wwwroot folder. I'll walk through these files later. It also downloads and extracts the latest SonarQube binaries. I've removed the logging and error handling lines for brevity in the code block below.

xcopy wwwroot ..\wwwroot /Y
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$global:progressPreference = 'SilentlyContinue'
$downloadSource = 'https://binaries.sonarsource.com/Distribution/sonarqube/'
$allDownloads = Invoke-WebRequest -Uri $downloadSource -UseBasicParsing
$zipFiles = $allDownloads[0].Links | Where-Object { $_.href.EndsWith('.zip') -and !($_.href.contains('alpha') -or $_.href.contains('RC')) }
$latestFile = $zipFiles[-1]
$downloadUri = $downloadSource + $latestFile.href
$outputFile = "..\wwwroot\$($latestFile.href)"
Invoke-WebRequest -Uri $downloadUri -OutFile $outputFile -UseBasicParsing
Expand-Archive -Path $outputFile -DestinationPath ..\wwwroot

Once the deployment is complete, your web app's wwwroot folder should contain a sonarqube folder and just a few other files. You could also upload the SonarQube binaries manually.


Runtime Files

This project was made possible due to how an app service hosts Java applications. It does this using the HttpPlatformHandler extension. This extension will start any executable defined in the web.config and forward any requests it receives to the port defined by the HTTP_PLATFORM_PORT environment variable. This environment variable is randomly set by the HttpPlatformHandler when it invokes the startup executable.

Web.Config

The web.config file shown below is very simple. It adds the HttpPlatformHandler extension to the handlers and then defines the handler's behavior. The handler is told to run PowerShell and execute the HttpPlatformHandlerStartup.ps1 script; we will go into the details of this script later. It also tells the handler to log stdout messages, not to retry the startup, and to wait 300 seconds before timing out on startup. We want a long startup timeout since SonarQube takes a while to start, especially the first time if you're using an in-memory database.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="httpplatformhandler" path="*" verb="*" modules="httpPlatformHandler" resourceType="Unspecified" requireAccess="Script" />
    </handlers>
    <httpPlatform stdoutLogEnabled="true" startupTimeLimit="300" startupRetryCount="0"
	  processPath="%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe" 
	  arguments="%home%\site\wwwroot\HttpPlatformHandlerStartup.ps1"> 
    </httpPlatform>
  </system.webServer>
</configuration>

HttpPlatformHandlerStartup.ps1

This script writes the HTTP_PLATFORM_PORT port to the sonar.properties file, updates wrapper.conf with the java.exe location, and runs StartSonar.bat. The most important step is setting the port, as HTTP_PLATFORM_PORT is randomized each time the application is started. I've removed the logging and error-handling lines for brevity in the code block below.

$port = $env:HTTP_PLATFORM_PORT
$propFile = Get-ChildItem 'sonar.properties' -Recurse    
$configContents = Get-Content -Path $propFile.FullName -Raw
$configContents -ireplace '#?sonar.web.port=.+', "sonar.web.port=$port" | Set-Content -Path $propFile.FullName
$wrapperConfig = Get-ChildItem 'wrapper.conf' -Recurse
$wrapperConfigContents = Get-Content -Path $wrapperConfig.FullName -Raw
$wrapperConfigContents -ireplace 'wrapper.java.command=java', 'wrapper.java.command=%JAVA_HOME%\bin\java' | Set-Content -Path $wrapperConfig.FullName
$startScript = Get-ChildItem 'StartSonar.bat' -Recurse
& $startScript[-1].FullName

On another side note, the script executes the last StartSonar.bat file it finds so that it runs the script from the x64 folder and not the x86 if present.

Hopefully this is helpful if you are looking at setting up a cloud-based SonarQube server.

Random internal Windows terminology: IDW, Razzle, and their forgotten partners IDS and Dazzle



In the Windows team, you'll see the term IDW. You don't see it much in the outside world, though.

Here's an ISO image called 6.0.5383.1.1.WindowsSDK_Vista_idw.DVD.Rel.img, so it does get out once in a while. (You can see the file name if you expand the Installation Instructions section.) It also appears on this Web page about performance tips when developing network drivers:



The Kernprof.exe tool is provided with the developer and IDW builds of Windows that extracts the needed information


The abbreviation IDW stands for Internal Developer Workstation. This is a term applied to builds stable enough to be self-hosted by the development team.



Razzle was the code name for Windows NT (or NT OS/2 as it was then known), and it is the name of the script that prepares the command-line environment for developing Windows. You open a fresh command prompt, then run the Razzle.cmd script, and it gets your machine ready to work on the Windows source code. It sets the environment variables used by the build tools, it adds the build tools to your PATH, it installs test signing certificates, and it lets you specify whether you want to build free or checked builds, optimized or unoptimized builds, all that stuff.



Razzle is still alive and well, but the term IDW is not used much anymore because we use Windows Insider rings nowadays to declare which builds are suitable for developer self-hosting. There was also an abbreviation IDS, which stood for Internal Developer Server, but that abbreviation died out a long time ago.



And while Razzle provided the software half of the Windows NT story, the hardware part originally came from a project called Dazzle. Dazzle was a single-board i860 computer.

Larry Osterman actually used a Dazzle, though "used" might be a rather generous term. It was basically a breadboard with a VGA port in the back. As Larry recalls, they had Reversi running on it, but the pieces were squares rather than circles.



Why squares and not circles?



Because GDI didn't support circles yet!

Q# – a Wish List for the New Year


In previous blog posts you have read about some of the ideas behind Q#, how it came into existence, and its development over the past year. You have read about quantum computing, quantum algorithms and what you can do with Q# today. With the end of the year approaching, there is only one more thing to cover: What is next?

This blog post is about our aspirations for the future and how you can help to accomplish them. It contains some of our visions going forward, and we would love to hear your thoughts in the comment section below.

Community

One of the most exciting things about Q# for us is the growing community around it. Being rooted in the principles of quantum mechanics, quantum computing tends to have an air of unapproachability to the "uninitiated". However, quantum computing builds on the notion of an idealized quantum system that behaves according to a handful of fairly easy-to-learn principles. With a little bit of acquired background in linear algebra, some persistence, and patience when wrapping your head around how measurements work, it is possible to get knee-deep into quantum algorithms reasonably quickly!

Of course, a couple of good blog posts on some of these principles can help. We strive to actively support you in the adventure of exploring quantum algorithms by providing materials that help you get started, like our growing set of quantum katas. Our arsenal of open source libraries provides a large variety of building blocks to use in your quest of harnessing the power of quantum. One of the main benefits of open source projects is being able to share your work with all the people brave enough to explore the possibilities that quantum has to offer. Share your progress and help others build on your achievements! Whether in kata or library form, we welcome contributions of any size to our repositories. Let us know how we can help to make contributing easier.

Exchange among developers is one of the most important aspects of software development. It is omnipresent and vital to building a sustainable environment around a particular toolchain and topic. Thankfully, modern technology has made that exchange a lot easier than when the first computer programmers started their careers. We intend to make full use of the power of the internet and give a voice and a platform for discussions on topics related to Q# and quantum computing to developers around the world. The Q# dev blog is part of this effort. Contact us or comment below if you have an idea for a blog post or would like to hear more about a specific topic related to Q#. Establishing good feedback channels is always a challenging endeavor and in particular for a small team like ours. We would like this place to become a source of knowledge and exchange, a place where you can find the latest news and voice your take on them.

Growth

This brings us back to our plans for Q#. We have built Q# to make quantum development easier and more accessible. Of course, a couple of other considerations also played into that decision. For instance, we anticipate the need to automate what is largely done by hand today, e.g., qubit layout and gate synthesis, which are often still done on a case-by-case basis for each program and targeted hardware. When is the last time you worried about how error correction works on the hardware your code gets executed on? With qubits being an extremely scarce resource, and the long-term ambition to use quantum computing to address the most computationally intensive tasks that cannot be tackled with current hardware, the optimization of large-scale quantum programs needs to be a priority. We chose to develop our own language in order to have full control and flexibility over what information is represented, how it is represented, and when it is used during compilation, so that we can support a modular and scalable software architecture for executing quantum programs. But that's a tale for another time. What is important is that these considerations are key factors in how we design and develop the language going forward.

A programming language is more than just a convenient set of tools for expressing an algorithm. It shapes the way that we think and reason about a problem, how we structure it and break it down into tasks when building a solution. A programming language can have a tremendous impact on our understanding of existing approaches, as well as how to adapt and combine them for our purposes. Particularly so when venturing into new territory.

Our goal is therefore to build a shared understanding of what it is we strive to accomplish, and to evolve Q# into the powerful language needed to drive progress in quantum programming. Our goal is to leverage the expertise of a community of language designers, compiler veterans, quantum physicists, algorithms and hardware experts, and a variety of software developers to shape a new kind of computing architecture. And we want you to be part of it.

Transparency

Since our 0.3 release at the beginning of November we have been eagerly working on not just the next release, but on defining and preparing the next steps in 2019. While we are in the middle of formulating our plans for the future, I want to give you a brief insight into some of our considerations.

As I am sure you have noticed, the support for data structures in Q# is minimal. While we do provide quite a few high-level language features for abstracting classical and quantum control flow, we intentionally omit some of the more object-oriented mechanisms such as classes. We anticipate remaining heavily focused on transformations that modify the quantum state, expressed as operations in Q#, as well as their characteristics and relations. However, basic bundling of data and manipulation of such bundles is an important aspect of many programs, and we want to provide suitable mechanisms to express these in a way that supports abstraction, is convenient, and is resistant to coding errors. User-defined types in the current setting have limited power beyond increased type safety. The "black box approach" to type parameterization currently restricts their usefulness; we do not provide a mechanism for dynamic reflection, and it is not possible to apply operators or other type-specific functionality to argument items whose type is resolved for each call individually. In that sense, these items are "black boxes" that can merely be passed around. We want to do as much of the heavy lifting as possible statically, in particular since debuggability of quantum devices is a huge challenge.

There are several mechanisms one might consider to alleviate the consequences of these decisions. On one hand, type constraints are a common mechanism used in several popular languages. In a sense, they can be seen as "specializations based on the properties of a type". One could also pursue the stricter path of specializing based on the concrete type itself, de facto adding a form of overloading that we currently explicitly prevent from being used. Either way, by clearly separating user-defined types from tuples in the type system we have made a first step towards extending their power.

If you are curious to hear more about possible ideas for Q#, their benefits and caveats, or want to share some thoughts of your own, comment below! Contribute to the discussion and post your speculations to the question: What makes a quantum programming language "quantum", i.e. what makes it particularly suited for quantum computing?

Join us

I hope you will join us in a new year of pushing the boundaries of computation by participating in our coding competitions, contributing to our open-source repositories, commenting on or writing blog posts, and sharing your ideas and experiences!

How about a new year's resolution of your own? Let us know what you expect to accomplish and how we can help you achieve your new year's resolution around quantum programming in Q#!

Bettina Heim, Senior SDE, Quantum Software and Application
@beheim
Bettina Heim is a quantum physicist and software engineer working in the Quantum Architectures and Computation Group at Microsoft Research. She is responsible for the Q# compiler and part of the Q# language design team. Prior to joining Microsoft she worked on quantum algorithms, adiabatic quantum computing, discrete optimization problems, and the simulation and benchmarking of quantum computing devices.

Azure Data Architecture Guide – Blog #8: Data warehousing


In our eighth blog in this series, we'll continue to explore the Azure Data Architecture Guide. The previous entries for this blog series are:

Like the previous post, we'll work from a technology implementation seen directly in our customer engagements. The example can help lead you to the ADAG content to make the right technology choices for your business.

Data warehousing

Here, data coming from multiple sources is stored in Azure Data Lake Store in its native format. Azure SQL Data Warehouse queries the data directly with a combination of external tables and schema-on-read capabilities through PolyBase. Use Azure Data Factory to store the data you need within your warehouse, and quickly analyze and visualize the combined data with Power BI.

Data warehousing

Highlighted services

 

Related ADAG articles

 

Please peruse ADAG to find a clear path for you to architect your data solution on Azure:

 

Azure CAT Guidance

"Hands-on solutions, with our heads in the Cloud!"

Image classification with Transfer Learning


According to Wikipedia, "transfer learning is a research field in machine learning that aims to transfer knowledge from one or more source tasks to one or more target tasks. It can be seen as the ability of a system to recognize and apply knowledge and skills, learned from previous tasks, to new tasks or domains sharing similarities."

As its title indicates, this post looks at this promising field through the topic of image classification.

We propose to take a closer look through an illustration. I would like to thank Xiangzhe Meng, currently finishing his end-of-studies internship at Microsoft France, for this contribution.

What is Transfer Learning?

In practice, Transfer Learning is a powerful approach that lets users quickly create deep-learning models by learning from neural networks pre-trained on large datasets.

In other words, we take a model that is already well trained and adapt it to our own problem. Our new model is essentially based on the features and concepts learned while training the base model.

With a convolutional neural network (CNN), we use the features learned from a sufficiently large dataset, for example ImageNet, which contains 1.2 million images in 1,000 categories, and remove the final classification layer, replacing it with a new dense layer that predicts the class labels of our new domain.

Why Transfer Learning?

Transfer Learning proves useful when we need to classify images into different categories but do not have enough data to train a deep neural network (DNN) from scratch.

Indeed, training DNNs requires a lot of data, all of it labeled. However, we do not always have that kind of data. If our problem is similar to one for which a neural network has already been trained, we can use Transfer Learning to adapt that network to our problem with a fraction of the labeled images otherwise required: we are now talking about dozens of images instead of thousands.

It is worth noting that, beyond image recognition, which is our illustration in the rest of this post, Transfer Learning is also used successfully to adapt existing neural-network models to translation, speech synthesis, and many other domains. Transfer Learning is notably used in cybersecurity, as shown in the session "Transfer Learning: Repurposing ML Algorithms from Different Domains to Cloud Defense" by Mark Russinovich, Azure CTO, at the most recent RSA Conference in April.

How does Transfer Learning work?

The figure below illustrates the principle of Transfer Learning. The student model is initialized by copying the first N-1 layers of the teacher. A new dense layer is added for classification; its size corresponds to the number of classes in the student task. The student model is then trained using its own dataset, while the first K layers are "frozen", that is, their weights are fixed and only the weights of the remaining N-K layers are updated.

The first K layers are "frozen" during training because the outputs of these layers already represent meaningful features for the student task. The student model can reuse these features directly, which can reduce both the training cost and the amount of data required.

Depending on the number of layers frozen (K) during the training process, Transfer Learning falls into the following three types:

  • Deep-layer Feature Extractor:

    N-1 layers are frozen while training the student model and only the final classification layer is updated. This is preferable when the student task is very similar to the teacher task and a minimal training cost is desired.

  • Mid-layer Feature Extractor:

    The first K layers are frozen, where K < N-1. In general, the mid-layer feature extractor performs better than the deep-layer feature extractor in scenarios where the student task differs more from the teacher task and more training data is available. By allowing more layers to be updated, the student optimizes better for its own task.

  • Full Model Fine-tuning:

    All layers are unfrozen and adjusted while training the student model (K = 0). This requires much more training data and is appropriate when the student task differs significantly from the teacher task.

In general, training with the teacher model's weights lets the student model converge faster and potentially achieve better performance than training from scratch.

Tooling

Today, more and more machine-learning platforms and deep-learning libraries recommend Transfer Learning to their users. Many of them provide detailed tutorials to guide users through the Transfer Learning process.

This is notably the case for the Microsoft Cognitive Toolkit (CNTK), an open-source deep-learning library already covered on this blog, which we will use in the rest of this post for our illustration and to build our Transfer Learning model.

The CNTK tutorial "Build your own image classifier using Transfer Learning" describes a flower-classification task and recommends the ResNet_18 model as the base model and Full Model Fine-tuning as the default configuration.

CNTK also provides control parameters to switch to Deep-layer Feature Extractor mode. However, Mid-layer Feature Extractor mode is not available (yet) at this stage.

In terms of tooling, one could also mention PyTorch and other libraries.

Our illustration

The goal of our illustration is to create an image-classification pipeline with Transfer Learning. We will build a Transfer Learning model to classify images of five kinds of fruit: apple, banana, grape, orange, strawberry.

To do so, we will follow the 8 steps below:

  1. Create folders following a specific structure.
  2. Obtain data (images) (optional).
  3. Download and inspect the base models.
  4. Define the general parameters and choose the type of Transfer Learning to use.
  5. Train the model.
  6. Evaluate the model with a single image.
  7. Evaluate the model with a group of images.
  8. Display the misclassified images.

Let's look at each step in detail.

The source code and the complete pipeline are available in a Jupyter Notebook here. You therefore have everything you need to reproduce what follows 😉

1. Create folders following a specific structure

To reuse the proposed image-classification pipeline, you must start by creating folders following a specific structure and placing your data (images) in the corresponding folders.

The approach proposed in the Jupyter Notebook creates these folders automatically. All that remains is to define the name of the dataset and of the different classes for the task.

Here is the folder structure used for this image-classification pipeline:

2. Obtain data (images) (optional)

This second step is optional.

If we want to try this image-classification pipeline without a specific target, we can simply define the total number of images to download and how the images are split across classes. Then we just run the download program, which automatically downloads images using the Google Images download library on GitHub, randomly splits them into two groups (training and test sets), and stores them in the corresponding folders created earlier.

Note: the search keywords we use to download images from Google are the class names defined in the previous part. In our case, the search keywords are therefore apple, banana, grape, orange, strawberry.

Here is an example run that downloads the images:

3. Download and inspect the base models

Before training our Transfer Learning model, we need to download the base model we intend to use and choose the layers to keep by inspecting the structure of the neural networks.

4. Define the general parameters and choose the type of Transfer Learning to use

Before running the training program, we need to:

  1. Define the general parameters related to the learning process, the properties of the input image, and the locations and characteristics of the model and the data.
  2. Choose the type of Transfer Learning.

As mentioned earlier, CNTK currently supports the Deep-layer Feature Extractor and Full Model Fine-tuning modes, so we must choose one of these two modes:

5. Train the model

For this illustration, we chose ResNet_18 as the base model.

This model will be adapted with Transfer Learning to classify fruit. It is a convolutional neural network (CNN) built using residual-network techniques. Convolutional neural networks stack convolution layers that transform an input image and distill it until they start to recognize composite features. With deeper convolution layers, they can even recognize complex features.

The residual network is a technique that came out of Microsoft Research (MSR). It passes the main signal of the input data through, so that the network ends up learning only on the residual portions that differ from one layer to the next. In practice, this makes it possible to train much deeper networks while avoiding the problems that hamper gradient descent on larger networks. These cells bypass the convolution layers and rejoin the main path later, before the ReLU.

6. Evaluate the model with a single image

We have two ways to evaluate our model. First, we evaluate it with a chosen image and show the prediction with the probabilities for each class, as below.

7. Evaluate the model with a group of images

After collecting all the test images, we can also evaluate our model with all of these images one by one, put the prediction and the per-class probabilities in a table, and compute the classification accuracy.

8. Display the misclassified images

After evaluating the models on the test set, we can display all the wrong predictions in order to find a potential reason for the misclassification of these images.

In our case, for example, the image below is misclassified because apples and oranges have fairly similar shapes and can have the same color (green). That may be one of the reasons why this image of an orange is recognized as an apple.

In conclusion

Transfer Learning is a powerful approach. However, it also has its limits.

For example, we retrained a model that had been trained on ImageNet images. This means it already knew images like ours and had a good idea of concepts ranging from low level (stripes, circles, etc.) to high level (the little dots on the surface of a strawberry, etc.).

Retraining such a model to classify fruit makes sense, but retraining it to detect vehicles in aerial images would be harder. We can still use Transfer Learning in such cases, but it is better to reuse only the earlier layers of the model, that is, the convolutional layers that learned more primitive concepts. We would then probably need a lot more training data.


2019 – a Developer’s view


2018 has been great for developers. With cloud automation, CI/CD has become the focus, and a major chunk of that work is done through scripts or code. How developers grow into their current roles defines their ability to adapt to an ever-changing world. Gone are the days when knowing only ASP.NET and ADO.NET would get you a job for a couple of years with no need to learn new technologies. Now it is just the polar opposite: a constant process of learning and unlearning keeps you relevant in the market. As a close observer of how enterprises are growing and thinking about modernization, here is my view of 2019 for a developer.

Web is the clear winner

The web is the way to go. It has been tested time and again against its many rivals. A web application is definitely less cumbersome when it comes to maintenance and deployment. Reach is key in the current market: to reach more people you need fewer dependencies on users' devices, and users should not be forced to install and update an app every time there is a change. Change is inevitable, so make it simple for end users. A web app is like a hotel room - you leave the room and come back to find it freshly made up. Freshness is key to success, which is why you see frequent UI changes that add features to applications.

API is inherent

With web-based applications, APIs also become key to collaboration. APIs open enormous avenues to extend beyond the web application itself. You may have a web-based application, but many customers also want mobile apps. An API then gives you the ability to keep the business process consistent: the UI changes, but the business rules remain the same. Today it is iOS and Android - tomorrow it might be something else. APIs give you breathing space to think more about enhancements and less about compatibility across platforms. This design success of APIs has also contributed to the popularity of the web.

It is Nano-services not always Microservices

Microservices are undoubtedly the winner, but they are not a silver bullet for every problem. Maybe you just need an API - at most Nano-services, where APIs can be deployed independently. A Microservices design is genuinely complex: every service holds its own database and is sandboxed. A typical Microservices architecture is meant to deal with a very large user base and unpredictable bursts of load. One of the solution-design mistakes many teams make today is converting their legacy applications to Microservices wholesale. If you have a single database and a fixed number of users, think about Nano-services.

Conceptually, microservices are defined by Martin Fowler: https://martinfowler.com/articles/microservices.html

Container

With every cloud vendor adopting containers, it is obvious that developers will think about how applications can be deployed and distributed. It is not just about the container itself; the emphasis is on how container orchestration fits into the game. Kubernetes seems to be the default for many application teams today, as well as for the major cloud vendors. Red Hat is also in the game with OpenShift. Let me be very clear: containers are a developer's game and are part of CI/CD. So if you are a developer, watch this space.

DevOps

Old wine in a new bottle - but it is good that it is now widely recognized. The importance of automation is no longer just a developer's headache; it is now part of the organization's charter. That translates into budget allocation and focus from day one.

ThoughtWorks Technology Radar

Keep watching the technology trends here: https://www.thoughtworks.com/radar/

Happy coding and welcome 2019.

Six Opinionated Tips to be a Better .NET Developer


App Dev Manager Isaac Levin shares his six “opinionated” tips for being a better .NET Developer.


I am humbled to be a part of the 2nd Annual C# Advent Calendar. Thank you to everyone who helped put it together and contributed. My blog post is hopefully a fun one, not to be taken too seriously. I have been a developer for over 10 years, and I am always looking for new ways to be more productive. May I present 6 tips that I have started to adopt and that I think have increased my developer skills 1000% (YMMV).

1. Make the CLI your Best Friend

I will shout it from the rooftops: I LOVE CLIs!!! I just find working in the command line a far more efficient experience than the IDE in certain scenarios. Being a .NET developer, it is obvious that I would gravitate towards the .NET CLI to kickstart my development experience.

Continue reading on Isaac’s blog

How do I save the results of a file search in Explorer? Not the query itself, but the results



Say you perform a file/folder search in Explorer and you get the results. How do you save the results? That is, save the list of files that were found.

This is not the same as saving the query, which you can do by going to the Search tab and selecting 💾 Save search.

To save the results, you can select all of them, say with the Select all button, and then shift-right-click on the selection and say Copy as path. This will put all the paths on the clipboard, and you can save them wherever you like.

It's not exactly the most obvious thing, but it's a neat trick once you know it.
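
Not part of the original tip, but if you prefer scripting, a rough equivalent is to walk the folder yourself and write the matching paths to a file. This is only a hedged sketch: the folder and pattern below are placeholders.

# Hypothetical sketch: save the full paths of files matching a pattern to a text file.
from pathlib import Path

root = Path(r"C:\Users\Public\Documents")  # placeholder: the folder you searched
pattern = "*.txt"                          # placeholder: the search pattern

with open("search-results.txt", "w", encoding="utf-8") as out:
    for path in root.rglob(pattern):       # recursive search of the folder tree
        out.write(str(path) + "\n")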

Build an interactive assistant using QnA Maker


Senior App Dev Manager Nayan Patel walks us through how to build an interactive assistant using QnA Maker.


I was recently engaged in a customer proof-of-concept scenario where they needed to turn their knowledge base articles, FAQs, and other company data into an interactive bot. For our scenario, we created a QnA Maker service that pulled HR information from a backend database so users could ask common questions in a conversational way instead of wasting time searching and scrolling through content. We leveraged the QnA Maker service, which was announced as generally available at Build 2018. The service essentially creates a question-answering endpoint on top of your existing data, whether it's a database, Word/Excel files, PDFs, or URLs.

Below are the steps to spin up a chatbot in just a few minutes using the QnA Maker service.

Go to the QnA Maker site and on the ‘Create a knowledge base’ tab, click ‘Create a QnA service’.

qna1

This will redirect you to your Azure portal to setup the service. Fill in all the required fields and hit ‘Create’.

qna2

Once the service is deployed, go back to the QnA Maker site and fill in the rest of the fields to connect your service and create the KB. Note that our newly created SPOQnAbot service is selected in the Azure QnA service dropdown. For this example, I have used the QnA Maker FAQ site as the source the knowledge base will extract Q&A pairs from.

qna3

The KB will extract content from the QnA Maker FAQ site and form Q&A pairs. You can also add your own question/answer pairs based on commonly asked user questions and ‘Save and retrain’ your bot. You can now publish your KB.

qna4

Save the KnowledgebaseId, EndpointHostname, and AuthKey values, as they will be used to set up your bot.

qna5
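
As an aside (not part of the original walkthrough), those same three values can be used to query the published knowledge base directly over HTTP. Below is a minimal, hedged sketch assuming the standard QnA Maker generateAnswer endpoint shape; the host, knowledge base ID, and key are placeholders.

# Hypothetical sketch: query a published QnA Maker knowledge base directly.
import requests

endpoint_hostname = "https://<your-qna-service>.azurewebsites.net/qnamaker"  # EndpointHostname
knowledgebase_id = "<KnowledgebaseId>"
endpoint_key = "<AuthKey>"

url = f"{endpoint_hostname}/knowledgebases/{knowledgebase_id}/generateAnswer"
headers = {"Authorization": f"EndpointKey {endpoint_key}",
           "Content-Type": "application/json"}

response = requests.post(url, headers=headers,
                         json={"question": "How do I create a knowledge base?"})
for answer in response.json().get("answers", []):
    print(answer["score"], answer["answer"])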

Go to the Azure portal and Create a new Web App Bot using the ‘Question and Answer’ Bot template as shown below.

qna6

Once deployed, go to ‘Application Settings’ and copy the KnowledgebaseId, EndpointHostname, and AuthKey values from the QnA Maker site into the QnAKnowledgebaseId, QnAEndpointHostname, and QnAAuthkey fields, as shown below.

qna7

When you deploy a web app bot, it is exposed through a “Web Chat” channel by default (it can also connect to Teams, Cortana, Slack, etc.). Go to Channels and hit “Get bot embed codes”:

qna8

In the configuration page, click on “Show” to reveal one of the secret keys (either will do). Copy the contents of the “Embed code” text box to somewhere else, and replace “YOUR_SECRET_HERE” with the secret key:

qna9

You can now embed the iframe code in your web page and start interacting with your new bot.

The chatbot can be retrained over time based on user interactions. You can also add multiple QnA Maker services for different departments (HR KB, Finance KB etc.) in conjunction with a LUIS app to identify the user intent and route the incoming question to the appropriate knowledge base.

qna10

36 Best Business Books that Changed Microsoft Leaders’ Lives


I’ve overhauled a book list of the 36 Best Business Books that Changed Leaders’ Lives.

Some of the most effective leaders I know regularly draw from books for insights, inspiration, and new ideas to change their game.

Business is an interesting game, by the way:

“Business is a game, played for fantastic stakes, and you’re in competition with experts. If you want to win, you have to learn to be a master of the game.” — Sidney Sheldon

There is no shortage of books to read or new ideas to learn.

Instead, the challenge is finding the best business books that are worth reading – the books that you can use to accelerate and enhance your business effectiveness.

To find the best business books for building business effectiveness, I reached out to the most effective Microsoft leaders I know (past and present) and asked them a simple question …

“What are the top 3 books that changed your life in terms of business effectiveness?”

Here are the results …

Top 3 Business Books for Improving Business Effectiveness

The top three business books that showed up multiple times were:

1.  Blue Ocean Strategy

image

Blue Ocean Strategy is a beautiful strategy guide on how to make your competition irrelevant. 

Rather than swim with the sharks in the red ocean of competition, find the blue ocean and swim with the dolphins.  

Learn how to expand the market to non-consumers by reducing friction points and pivoting around what’s truly valued by customers. 

Learn how to differentiate to avoid commoditization.

2.  Good to Great

image

Collins and his team researched 1,435 companies to find 11 companies that made huge improvements in their performance over time. 

What did the 11 companies have in common? 

Discipline. 

They demonstrated discipline in people, thought, and action. 

This is the book that made the Hedgehog concept famous. 

The Hedgehog concept is the intersection of three circles:

  1. What can you be the best in the world at?
  2. What drives your economic engine?
  3. What are you deeply passionate about?

3.  The Five Dysfunctions of a Team

image

Normally I don’t like storybooks for business skills, but the fable format really works for this one. 

This book is about how to function as a unit by learning the five dysfunctions:

  1. Absence of trust
  2. Fear of conflict
  3. Lack of commitment
  4. Avoidance of accountability
  5. Inattention to results.

I’ve experienced great shifts in culture by leaders who learned the lessons from this corporate fable and applied them to their teams.

The 36 Best Business Books that Changed Leaders’ Lives

Here is the list of the best business books that changed leaders’ lives:

  1. All I Really Needed to Know I Learned in Kindergarten
  2. Authentic Leadership
  3. Blue Ocean Strategy
  4. Built to Last
  5. Execution: The Discipline of Getting Things Done
  6. Fierce Conversations
  7. First, Break All the Rules
  8. Fortune’s Formula: The Untold Story of the Scientific Betting System That Beat the Casinos and Wall Street
  9. Freakonomics
  10. Good to Great
  11. How To Win Friends and Influence People
  12. Human Competence: Engineering Worthy Performance
  13. Jack: Straight from the Gut
  14. Leadership on the Line
  15. Leadership and Self-Deception: Getting Out of the Box
  16. Made to Stick:  Why Some Ideas Survive and Others Die
  17. Memoirs of Hadrian
  18. Moneyball: The Art of Winning an Unfair Game
  19. Pasteur’s Quadrant: Basic Science and Technological Innovation
  20. Siblings Without Rivalry: How to Help Your Children Live Together So You Can Live Too
  21. The Soul of a New Machine
  22. Start-up Nation: The Story of Israel’s Economic Miracle
  23. The 7 Habits of Highly Effective People
  24. The One Minute Manager
  25. The Art of Happiness
  26. The Art of Innovation
  27. The Art of Leadership
  28. The Art of the Start 2.0: The Time-Tested, Battle-Hardened Guide for Anyone Starting Anything
  29. The Art of War
  30. The Crisis of Global Capitalism: Open Society Endangered
  31. The Five Dysfunctions of a Team
  32. The Innovator’s Dilemma
  33. Tribal Leadership: Leveraging Natural Groups to Build a Thriving Organization
  34. Unleashing the Idea Virus
  35. Wikinomics: How Mass Collaboration Changes Everything
  36. Winning with People

For mini-descriptions of each, please see 36 Best Business Books that Changed Leaders’ Lives.

Japan Cognitive Services Support Blog: Notice of Closure


Hello, this is the Cognitive Services support team.

As of January 2019, this blog will be discontinued as part of a renewal of our company's systems.

Going forward, the same kind of information will be published on the following dedicated forum:

Azure Cognitive Services Support Team Forum
<https://social.msdn.microsoft.com/Forums/ja-JP/home?forum=cognitivesupportteamja>

Thank you very much to the many customers who have read this blog since it launched in November 2017. We hope it has contributed, even in a small way, to our readers' business.

We look forward to your continued support.

Japan Azure IoT Support Blog: Notice of Closure


Hello, this is the Azure IoT support team.

As of January 2019, this blog will be discontinued as part of a renewal of our company's systems.

Going forward, the same kind of information will be published on the following dedicated forum:

Azure IoT Support Team Forum
<https://social.msdn.microsoft.com/Forums/ja-JP/home?forum=azureiotsupportteamja>

Thank you very much to the many customers who have read this blog since it launched in November 2017. We hope it has contributed, even in a small way, to our readers' business.

We look forward to your continued support.


Japan WDK Support Blog: Notice of Closure


Hello, this is the Windows Driver Kit support team.

As of January 2019, this blog will be discontinued as part of a renewal of our company's systems.

Going forward, the same kind of information will be published on the following dedicated forum:

Windows Driver Kit Support Team Forum
<https://social.msdn.microsoft.com/Forums/ja-JP/home?forum=wdksupportteamja>

Thank you very much to the many customers who have read this blog since it launched in February 2009. We hope it has contributed, even in a small way, to our readers' business.

We look forward to your continued support.

Don’t forget: std::pair does lexicographical ordering, so you don’t have to



A feature perhaps not as widely known as I thought is that the std::pair type performs lexicographical ordering, so you don't have to.



// Suppose we record versions as std::pair<int, int>
// where the first is the major version
// and the second is the minor version.

std::map<ComponentId, std::pair<int, int>> requiredVersions;

bool IsSupported(ComponentId id, std::pair<int, int> actualVersion)
{
    auto item = requiredVersions.find(id);
    if (item == requiredVersions.end()) {
        return true;
    }

    auto& requiredVersion = item->second;

    if (actualVersion.first > requiredVersion.first ||
        (actualVersion.first == requiredVersion.first &&
         actualVersion.second >= requiredVersion.second)) {
        return true;
    }

    return false;
}



First, we try to find the component in our list of required versions. If it's not found, then the component has no version requirements, and we say, "Sure, it's supported!" (This is just an example. Maybe you want to say that if it's not on the list, then it's not supported at all.)



Otherwise, we check the actual version number against the required version. If the major version is greater, or if the major version is equal but the minor version is greater or equal, then we decide that we have met the minimum requirements.



Writing the comparison of major and minor versions is easy to get wrong, so don't write the code that's easy to get wrong. Let the standard library do it.



bool IsSupported(ComponentId id, std::pair<int, int> actualVersion)
{
    auto item = requiredVersions.find(id);
    if (item == requiredVersions.end()) {
        return true;
    }

    auto& requiredVersion = item->second;

    return actualVersion >= requiredVersion;
}



Bonus chatter: I saw this mistake in some code that used the std::pair as the key in a map.



std::map<std::pair<int, int>, CoolThing> sortedThings;


The idea is that the cool things would be sorted by a sort key that behaved like major/minor. The code compared the keys manually, presumably because the author didn't think that std::pair supported the relational operators.



But of course std::pair supports the relational operators, because that's one of the prerequisites for being the key of a std::map. (Okay, technically, std::map requires only operator<, but once you have operator<, you can synthesize the rest.)

Why you should consider VS Code for your Kubernetes/Docker work


Premier Developer consultant Julien Oudot spotlights VS Code for Kubernetes and Docker workloads.


Visual Studio Code (VS Code) is sometimes considered a slimmer, minimalist version of Visual Studio. However, depending on the technology stack used, VS Code can really be the platform of choice to get the best features. Furthermore, its cross-platform support allows users to have the same experience on multiple platforms (Windows, Linux, and macOS).

With its modular architecture based on the concept of extensions, a lot can be done with Docker, Kubernetes, or Helm after installing only a few extensions. The features range from simple YAML editing that saves a lot of indentation headaches to more advanced scenarios, such as opening an interactive session inside a Docker container running in a remote Kubernetes cluster.

The tutorial below walks you through some of the convenient features provided by the Docker and Kubernetes VS Code extensions.

On a Linux-based OS, you need to be able to run docker commands from VS Code. To that end, the current user must be added to the docker group. Run the two following commands (a restart might be needed):

sudo groupadd docker
sudo usermod -aG docker $USER

1. Connect VS Code to Docker Hub account and Kubernetes cluster

To connect your Visual Studio Code to a Docker hub account, open Visual Studio Code and click on Settings at the bottom left.

kb1

Search for vsdocker and override the two settings by entering the two key-value pairs on the right-hand side:

"vsdocker.imageUser": "docker.io/<dockerHubUserName>",

"docker.defaultRegistryPath":"<dockerHubUserName>"

kb2

Then, click on the Docker extension and click on Docker Hub to authenticate using the Docker ID and password set up in the previous task

kb3

Finally, you also need to authenticate from the terminal running inside VS Code. Click at the bottom of the window, then Terminal and enter the command that will prompt you for your docker credentials.

docker login

kb4

Before we can deploy a basic application to a Kubernetes cluster, we just need to make sure that we are connected to the right cluster. Type the following command

kubectl config current-context

kb5

If you are not connected to any Kubernetes cluster and have an AKS cluster running in Azure, you will need to run the following command.

az aks get-credentials --resource-group <AKS-RESOURCE-GROUP> --name <AKS-CLUSTER-NAME>

2. Deploy a NodeJS application to Kubernetes from VS Code

Download a basic Node JS application from https://github.com/joudot/nodejs.

From VS Code, click on Explorer and then Open Folder.

kb6

Select the folder nodejs that was just downloaded and click OK. You can see the basic Node JS application provided, as well as a Dockerfile to build its image.

kb7

Open command palette by clicking on the Settings icon and then Command Palette.

kb8

Type Kubernetes Run and select it. It will build the Node JS image, push it to Docker Hub and deploy the application into your AKS cluster.

kb9

You can follow these steps at the bottom of the IDE.

Once complete, click on the Kubernetes extension, then expand the cluster and click Workloads - Deployments - nodejs. You will see what is deployed in your cluster. The view shows the YAML file that you can interpret, understand, and reuse for other Kubernetes API objects to be deployed. More generally, VS Code is a user-friendly solution for working with YAML files, since it helps with indentation, auto-completion, and coloring. There is also the integrated kubectl explain tool to annotate Kubernetes API objects and dynamically show documentation when hovering over YAML fields.

kb10

If you look under Services, you will see that the Node JS application is deployed but is not exposed through a service. To expose it, open the VS Code terminal and type

kubectl expose deployments/nodejs --port=80 --target-port=8080 --type=LoadBalancer

kb11

Note the target port of 8080: as specified in the Dockerfile, we expect traffic to come through port 8080 in the container.

Just like we did from the Linux terminal, we can follow the service creation from the VS Code terminal with the command

kubectl get svc

kb12

Once the service has been set up and the IP address has been created in Azure, you can refresh the cluster view and call the IP address from any browser.

kb13

kb14

3. Interact with deployed Pods and Containers from VS Code

Another convenient feature we can access from the Kubernetes clusters view is interacting with the pods and containers. Right-click on the nodejs pod and select Show Logs. It will show you the container logs. In our case, the Node JS application started successfully and is waiting for traffic on port 8080. Everything looks good!

kb15

kb16

You can also click Describe, which is equivalent to the kubectl describe pod command and will give you information on the pod status.

kb17

kb18

Finally, we can open an interactive session from within the container for troubleshooting purposes. Right-click on the nodejs pod and click Terminal.

kb19

You can type the ls or cat server.js commands to see what is inside the container file system.

kb20

Open the Command Palette one last time and type Create. You will see that VS Code can help you create Azure Container Registries, Helm Charts, or even Kubernetes clusters - all without leaving the IDE.

kb21

SHOpenRegStream does not mix with smart pointers



Some time ago, I noted that CoGetInterfaceAndReleaseStream does not mix with smart pointers, because it performs an IUnknown::Release of its interface parameter, which messes up all the bookkeeping, because smart pointers expect that they are the ones which will perform the Release.



The other half of the problem is functions like SHOpenRegStream and SHOpenRegStream2, which return a COM pointer directly rather than putting it into an output parameter. When you put them into a smart pointer, the default behavior of the smart pointer is to create a new reference, so it will call AddRef upon assignment or construction, and call Release upon replacement or destruction.



// Code in italics is wrong

Microsoft::WRL::ComPtr<IStream> stream;
stream = SHOpenRegStream(...);

Microsoft::WRL::ComPtr<IStream> stream(SHOpenRegStream(...));

ATL::CComPtr<IStream> stream;
stream = SHOpenRegStream(...);

ATL::CComPtr<IStream> stream(SHOpenRegStream(...));

_com_ptr_t<IStream> stream;
stream = SHOpenRegStream(...);

_com_ptr_t<IStream> stream(SHOpenRegStream(...));



All of these operations will take the raw pointer returned by SHOpenRegStream, save it in the smart pointer, and increment the reference count. When the smart pointer is destructed, the reference count will be decremented.



But the object started with a reference count of 1. After storing it in the smart pointer, the reference count is 2, even though there is only one object tracking it. When that object (in this case, the smart pointer) releases its reference, there is still one reference remaining, which nobody is tracking.



You have a memory leak.



The solution is to use the Attach method to say, "Here is an object that I would like you to adopt responsibility for." The smart pointer will take the object but will not increment the reference count, because you told it, "I want you to take responsibility for the reference count that I am giving you."



Microsoft::WRL::ComPtr<IStream> stream;
stream.Attach(SHOpenRegStream(...));

ATL::CComPtr<IStream> stream;
stream.Attach(SHOpenRegStream(...));

_com_ptr_t<IStream> stream;
stream.Attach(SHOpenRegStream(...));

_com_ptr_t<IStream> stream(SHOpenRegStream(...), false);



The _com_ptr_t class has a bonus constructor that takes a boolean parameter indicating whether the smart pointer should perform an AddRef on the pointer. In the case where you want to adopt an existing reference, you pass false.



This problem is basically the flip side of the CoGetInterfaceAndReleaseStream problem. Whereas that one results in an over-release, this one results in an under-release.



And the root cause of both of them is that they use a calling pattern that doesn't conform to COM recommendations.

Using Snowflake on Azure for Querying Azure Event Hubs Capture Avro Files


In this video, we look at how to use Snowflake on Azure to query Avro files generated by the Azure Event Hubs Capture feature. In our example, we create an Azure Blob Storage account, configure Azure Event Grid to send blob creation and deletion events to an Azure Event Hub and an Azure Storage Queue simultaneously, and then use Snowflake on Azure to parse and query the Avro files generated by the capture.

Video Walkthrough

Tip: Play the video full screen.

Table of Contents

00:00 Beginning of video
01:58 Create resource group, storage account for files, and storage queue
03:15 View storage account in Microsoft Storage Explorer
04:15 Create Azure Event Hubs Namespace and Event Hub
05:50 Create Event Grid subscription blob2queue using Azure CLI
06:45 Create Event Grid subscription blob2eventhub using Azure Portal
08:40 Uploading files to Azure Blob Storage using Azure CLI upload-batch
10:40 Viewing messages in the queue
12:10 Using Snowflake on Azure worksheet
13:28 Create Snowflake stage pointing to the Azure Blob Storage container
17:20 Querying number of records in Avro files
20:40 Decoding message body
23:00 Inserting decoded data into Snowflake table
24:50 Querying captured events using Snowflake JSON parsing capability
27:45 Comparing events captured in queue with ones captured in Event Hub
29:30 Grouping query to view events by minute
30:25 Deleting files and querying deletion events
34:20 Comparing total number of events in queue vs Event Hub after deleting all files

Create Storage Account For Files and Queue

# Create resource group
az group create -n avehc1 -l eastus2

# Create storage account for upload files and for queue
az storage account create -g avehc1 -n avmyfiles1 --sku Standard_LRS -l eastus2 --kind StorageV2

# Create container
az storage container create -n myfiles --account-name avmyfiles1

# Create queue for events
az storage queue create -n queue1 --account-name avmyfiles1

Create Azure Event Hub

# Create event hub namespace
az eventhubs namespace create -g avehc1 -n avehc1ns -l eastus2 --sku Standard

# Create event hub
az eventhubs eventhub create -g avehc1 -n avehc1 --namespace-name avehc1ns

# Create storage account and container for event hub capture files
az storage account create -g avehc1 -n avmycapture1 --sku Standard_LRS -l eastus2 --kind StorageV2
az storage container create -n mycapture --account-name avmycapture1

Create Event Grid Subscription

az eventgrid event-subscription create --resource-id /subscriptions/SUBSCRIPTION_ID/resourceGroups/avehc1/providers/Microsoft.Storage/storageAccounts/avmyfiles1 --name blob2queue --endpoint-type storagequeue --endpoint /subscriptions/SUBSCRIPTION_ID/resourceGroups/avehc1/providers/Microsoft.Storage/storageAccounts/avmyfiles1/queueservices/default/queues/queue1

Upload Batch of Files to Generate Events

az storage blob upload-batch --account-name avmyfiles1 --destination myfiles --source /mnt/c/Python36 --pattern "*.*"

Snowflake Queries

use database TEST_DB;

create or replace file format av_avro_format
  type = 'AVRO'
  compression = 'NONE';
show file formats;

-- Create Snowflake stage pointing to the container with the captured Avro files
create or replace stage aveventgrid_capture
  url='azure://avmycapture1.blob.core.windows.net/mycapture'
  credentials=(azure_sas_token='?st=xxxxxxxxxxxxxxxxxxxxxxx')
  file_format = av_avro_format;

-- List all Avro files
list @aveventgrid_capture;

-- Count records in all Avro files
select count(*) from @aveventgrid_capture;

-- Look at raw data in one Avro file
select * from @aveventgrid_capture/avehc1ns/avehc1/0/2018/12/27/01/38/26.avro;

-- Decode the body
select HEX_DECODE_STRING($1:Body) from @aveventgrid_capture/avehc1ns/avehc1/0/2018/12/27/01/38/26.avro;

-- Parse other fields of the Avro file
select HEX_DECODE_STRING($1:Body), TO_TIMESTAMP(REPLACE($1:EnqueuedTimeUtc,'""',''),'MM/DD/YYYY HH:MI:SS AM'), TO_NUMBER($1:Offset), $1:Properties, TO_NUMBER($1:SequenceNumber), $1:SystemProperties from @aveventgrid_capture/avehc1ns/avehc1/0/2018/12/27/01/38/26.avro;

-- Create table to store parsed Avro capture files
-- Create table to store parsed Avro capture files
create or replace table aveventgrid_capture (
  jsontext variant,
  eh_enqueued_time_utc timestamp_ntz,
  eh_offset int,
  eh_properties variant,
  eh_sequence_number int,
  eh_system_properties variant
);

-- Review the table which is initially empty
select * from aveventgrid_capture;

-- Load data from Avro files into the created Snowflake table
copy into aveventgrid_capture (jsontext, eh_enqueued_time_utc, eh_offset, eh_properties, eh_sequence_number, eh_system_properties)
from (
  select HEX_DECODE_STRING($1:Body), TO_TIMESTAMP(REPLACE($1:EnqueuedTimeUtc,'""',''),'MM/DD/YYYY HH:MI:SS AM'), TO_NUMBER($1:Offset), $1:Properties, TO_NUMBER($1:SequenceNumber), $1:SystemProperties
  from @aveventgrid_capture
);

-- Review how lateral flatten works to break up a JSON array into individual records
select value from aveventgrid_capture, lateral flatten ( input => jsontext );

-- Query event grid blob storage events by parsing JSON using Snowflake’s built-in functions
select
    value:eventType::string as eventType,
    value:eventTime::timestamp as eventTime,
    value:subject::string as subject,
    value:id::string as id
from aveventgrid_capture, lateral flatten ( input => jsontext )
where not value:eventType::string is null;

-- Look for specific event id
select
    value:eventType::string as eventType,
    value:eventTime::timestamp as eventTime,
    value:subject::string as subject,
    value:id::string as id
from aveventgrid_capture, lateral flatten ( input => jsontext )
where value:id::string = '50eac4d4-e01e-00b5-5584-9da1d9063d9d';

-- Group events by minute and event type
select
    date_trunc('MINUTE',value:eventTime::timestamp) as eventTimeWindow,
    value:eventType::string as eventType,
    count(*) as eventCount
from aveventgrid_capture, lateral flatten ( input => jsontext )
where not value:eventType::string is null
group by eventTimeWindow, eventType
order by eventTimeWindow desc;

image

Thank you!

Please leave feedback and questions below or on Twitter https://twitter.com/ArsenVlad
