Channel: MSDN Blogs

Solving SQL Connectivity issues: A new guided walk through just got published


We recently published a new document that provides a one-stop shop for solving the majority of connectivity issues you may run into when working with SQL Server. It can be accessed here:

Solving Connectivity errors to SQL Server

In addition to providing a quick checklist of items that you can go through, the doc provides step-by-step troubleshooting procedures for the following error messages:

  • A network-related or instance-specific error occurred while establishing a connection to SQL Server
  • No connection could be made because the target machine actively refused it
  • SQL Server does not exist or access denied
  • PivotTable Operation Failed: We cannot locate a server to load the workbook Data Model
  • Cannot generate SSPI context
  • Login failed for user
  • Timeout Expired
  • The timeout period elapsed prior to obtaining a connection from the pool

We hope you use this to guide your troubleshooting of connectivity issues, and we would love to get your feedback on this kind of documentation experience.

We also have the following troubleshooters available for Always On and Azure SQL Database connectivity issues:

Troubleshooting Always On Issues

Troubleshooting connectivity issues with Microsoft Azure SQL Database


Retail Pricing and Discount Data Checkout Cache


The performance of a business application can roughly be split into two areas: algorithms and persistence. That is especially true for pricing and discounts. So far we have focused extensively on the retail discount problem and its algorithms; today, we talk about a way to improve data access performance: the pricing and discount data checkout cache.

First, any time we have a cache, we have potential data inconsistency, and a good caching mechanism needs to minimize that inconsistency as much as possible. Secondly, we need to make sure the cache is functionally consistent. For example, during the checkout item scan, it is okay to see a stale price and discount, but it is not okay to see fresh and stale prices randomly. In addition, pricing and discount data is time-sensitive: a discount that is active now may not be active a second later, and vice versa.

Enter the pricing and discount data checkout cache. It works as follows (a sketch is provided after the list):

  1. When we scan the first item, load pricing and discount data that is active at any time during a small window, say from now until 10 minutes later. In other words, the cache has a cache-start-time and a cache-end-time.
  2. The discount engine still re-checks the effective time and filters out discounts that are not active at the given moment.
  3. When we scan items one by one,
    • If the current time is still in the cache time range,
      • If the item is already in the checkout cart, there is no need to read from the database.
      • If it is a new item, load the pricing and discount data for that item only, for the existing cache time range.
    • If the current time is outside the cache time range, drop the existing cache and start a new one, with a new cache time range.
  4. When we have new information that would change the pricing and discount data – for example, the customer scans his or her loyalty card – drop the existing cache and start a new one, with a new cache time range.
  5. Where do we store the pricing and discount data checkout cache?
    • If the pricing engine is embedded in the POS, in memory.
    • If the pricing engine is part of a store server, it may be persisted alongside the cart.
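To make the flow concrete, here is a minimal TypeScript sketch of the cache logic described above. The type and function names (CheckoutCache, loadActiveData, onItemScanned) are illustrative assumptions, not part of the actual Dynamics Retail pricing engine.

// Minimal sketch of a checkout-scoped pricing/discount data cache.
// All names are illustrative; the real pricing engine API differs.
interface PriceDiscountData { itemId: string; /* prices, discounts, ... */ }

interface CheckoutCache {
  startTime: Date;
  endTime: Date;
  dataByItem: Map<string, PriceDiscountData>;
}

const WINDOW_MS = 10 * 60 * 1000;   // cache validity window: 10 minutes
const CLOCK_SKEW_MS = 60 * 1000;    // start the window 1 minute early to tolerate clock skew

// Assumed data-access call that loads rows active between start and end.
declare function loadActiveData(itemId: string, start: Date, end: Date): PriceDiscountData;

function newCache(now: Date): CheckoutCache {
  return {
    startTime: new Date(now.getTime() - CLOCK_SKEW_MS),
    endTime: new Date(now.getTime() + WINDOW_MS),
    dataByItem: new Map(),
  };
}

// Called for every scanned item; returns the pricing/discount data for that item.
function onItemScanned(cache: CheckoutCache | null, itemId: string, now: Date): [CheckoutCache, PriceDiscountData] {
  // Outside the cache time range (or no cache yet): drop it and start a new one.
  if (cache === null || now < cache.startTime || now > cache.endTime) {
    cache = newCache(now);
  }
  let data = cache.dataByItem.get(itemId);
  if (data === undefined) {
    // New item: load only this item's data for the existing cache time range.
    data = loadActiveData(itemId, cache.startTime, cache.endTime);
    cache.dataByItem.set(itemId, data);
  }
  return [cache, data];
}

// New information that changes pricing (e.g. a loyalty card scan): drop the cache.
function onPricingContextChanged(now: Date): CheckoutCache {
  return newCache(now);
}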

The details matter for cache effectiveness.

  • If the pricing engine is part of a store server deployed on two or more machines, a lightweight POS may hit multiple instances of the store server during checkout, and there is no guarantee that two machines have exactly the same clock. We can make the cache start time a bit earlier, say one minute before now, to tolerate small differences. If two machines have a huge time difference, the cache would not be effective at all, but then we face a much bigger problem.
  • Data inconsistency is still possible. For example, when we scan the first item, a mix-and-match discount is not there yet; when we scan the second item, the mix and match has become available. We end up with partial mix-and-match data.
  • Cache cost is mostly in serialization and de-serialization. If we do not store the cache in memory, or along with the cart, we incur additional cost.
  • Total isolation of two checkouts: my cache is for my checkout and yours is for yours.

Note: in Dynamics Retail Solution, pricing and discount data checkout cache is available in AX6, but not in AX7.

Related: Retail Discount Concurrency – Best Deal Knapsack Problem

Related: Dynamic Programming for Retail Discount Knapsack Problem

Related: Discount Best Deal Algorithm Performance

How to Check Database Availability from the Application Tier


Reviewed by: Mike Weiner, Murshed Zaman

A fundamental part of ensuring application resiliency to failures is being able to tell if the application database(s) are available at any given point in time. Synthetic monitoring is the method often used to implement an overall application health check, which includes a database availability check. A synthetic application transaction, if implemented properly, will test the functionality, availability, and performance of all components of the application stack. The topic of this post, however, is relatively narrow: we are focused on checking database availability specifically, leaving the detection of functional and performance issues out of scope.

Customers who are new to implementing synthetic monitoring may choose to check database availability simply by attempting to open a connection to the database, on the assumption that the database is available if the connection can be opened successfully. However, this is not a fully reliable method – there are many scenarios where a connection can be opened successfully, yet be unusable for the application workload, rendering the database effectively unavailable. For example, the SQL Server instance may be severely resource constrained, required database objects and/or permissions may be missing, etc.

An improvement over simply opening a connection is actually executing a query against the database. However, a common pitfall with this approach is that a read (SELECT) query is used. This may initially sound like a good idea – after all, we do not want to change the state of the database just because we are running a synthetic transaction to check database availability. However, a read query does not detect a large class of availability issues; specifically, it does not tell us whether the database is writeable. A database can be readable, but not writeable for many reasons, including being out of disk space, having incorrectly connected to a read-only replica, using a storage subsystem that went offline but appears to be online due to reads from cache, etc. In all those cases, a read query would succeed, yet the database would be at least partially unavailable.

Therefore, a robust synthetic transaction to check database availability must include both a read and a write. To ensure that the storage subsystem is available, the write must not be cached, and must be written through to storage. As a DBMS implementing ACID properties, SQL Server guarantees that any write transaction is durable, i.e. that the data is fully persisted (written through) to storage when the transaction is committed. There is, however, an important exception to this rule. Starting with SQL Server 2014 (and applicable to Azure SQL Database as well), there is an option to enable delayed transaction durability, either at the transaction level or at the database level. Delayed durability can improve transaction throughput by not flushing the transaction log to disk as part of every commit; transactions are hardened to the log eventually, in batches. This option effectively trades off data durability for performance, and may be useful in contexts where a durability guarantee is not required, e.g. when processing transient data that is available elsewhere in case of a crash.

This means that, in the context of a database availability check, we need to ensure that the transaction actually completes a write to the storage subsystem, whether or not delayed durability is enabled. SQL Server provides exactly that functionality in the form of the sys.sp_flush_log stored procedure.

As an example that puts it all together, below is sample code to implement a database availability check.

First, as a one-time operation, we create a helper table named AvailabilityCheck (constrained to have at most one row), and a stored procedure named spCheckDbAvailability.

CREATE TABLE dbo.AvailabilityCheck
(
    AvailabilityIndicator bit NOT NULL CONSTRAINT DF_AvailabilityCheck_AvailabilityIndicator DEFAULT (1),
    CONSTRAINT PK_AvailabilityCheck PRIMARY KEY (AvailabilityIndicator),
    CONSTRAINT CK_AvailabilityCheck_AvailabilityIndicator CHECK (AvailabilityIndicator = 1)
);
GO

CREATE PROCEDURE dbo.spCheckDbAvailability
AS
SET XACT_ABORT, NOCOUNT ON;

BEGIN TRANSACTION;

INSERT INTO dbo.AvailabilityCheck (AvailabilityIndicator)
DEFAULT VALUES;

EXEC sys.sp_flush_log;

SELECT AvailabilityIndicator
FROM dbo.AvailabilityCheck;

ROLLBACK;

To check the availability of the database, the application executes the spCheckDbAvailability stored procedure. This starts a transaction, inserts a row into the AvailabilityCheck table, flushes the data to the transaction log to ensure that the write is persisted to disk even if delayed durability is enabled, explicitly reads the inserted row, and then rolls back the transaction, to avoid accumulating unnecessary synthetic transaction data in the database. The database is available if the stored procedure completes successfully, and returns a single row with the value 1 in the single column.

Note that an execution of the sp_flush_log procedure is scoped to the entire database. Executing this stored procedure will flush log buffers for all sessions that are currently writing to the database and have uncommitted transactions, or are running with delayed durability enabled and have committed transactions not yet flushed to storage. The assumption here is that the availability check is executed relatively infrequently, e.g. every 30-60 seconds, so the potential performance impact from an occasional extra log flush is minimal.
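For illustration, here is one possible application-tier probe, sketched in TypeScript with the Node.js mssql package. The connection settings, timeouts, and credential handling are placeholder assumptions; any data access stack that can execute the procedure and inspect the result will do.

// Sketch of an application-tier availability probe using the Node.js "mssql" package.
// Connection settings and timeouts are placeholders; adjust for your environment.
import * as sql from "mssql";

const config: sql.config = {
  server: "myserver.example.com",   // placeholder
  database: "MyAppDb",              // placeholder
  user: "monitor_user",             // placeholder
  password: process.env.MONITOR_PASSWORD ?? "",
  options: { encrypt: true },
  requestTimeout: 5000,             // fail fast: a probe should not hang
  connectionTimeout: 5000,
};

async function isDatabaseAvailable(): Promise<boolean> {
  let pool: sql.ConnectionPool | undefined;
  try {
    pool = await new sql.ConnectionPool(config).connect();
    const result = await pool.request().execute("dbo.spCheckDbAvailability");
    // Available only if the procedure succeeded and returned the expected single row.
    const rows = result.recordset;
    return rows.length === 1 && rows[0].AvailabilityIndicator === true;
  } catch {
    return false;   // any connection or execution error counts as "unavailable"
  } finally {
    await pool?.close();
  }
}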

As a test, we created a new database, and placed its data and log files on a removable USB drive (not a good idea for anything other than a test). For the initial test, we created the table and the stored procedure as they appear in the code above, but with the call to sp_flush_log commented out. Then we pulled out the USB drive, and executed the stored procedure. It completed successfully and returned 1, even though the storage subsystem was actually offline.

For the next test (after plugging the drive back in and making the database available), we altered the procedure to include the sp_flush_log call, pulled out the drive, and executed the procedure. As expected, it failed right away with the following errors:

Msg 9001, Level 21, State 4, Procedure sp_flush_log, Line 1 [Batch Start Line 26]
The log for database 'DB1' is not available. Check the event log for related error messages. Resolve any errors and restart the database.
Msg 9001, Level 21, State 5, Line 27
The log for database 'DB1' is not available. Check the event log for related error messages. Resolve any errors and restart the database.
Msg 3314, Level 21, State 3, Line 27
During undoing of a logged operation in database 'DB1', an error occurred at log record ID (34:389:6). Typically, the specific failure is logged previously as an error in the Windows Event Log service. Restore the database or file from a backup, or repair the database.
Msg 3314, Level 21, State 5, Line 27
During undoing of a logged operation in database 'DB1', an error occurred at log record ID (34:389:5). Typically, the specific failure is logged previously as an error in the Windows Event Log service. Restore the database or file from a backup, or repair the database.
Msg 596, Level 21, State 1, Line 26
Cannot continue the execution because the session is in the kill state.
Msg 0, Level 20, State 0, Line 26
A severe error occurred on the current command. The results, if any, should be discarded.

To summarize, we described several commonly used ways to implement a database availability check from the application tier, and showed why some of these approaches are not fully reliable. We then described a more comprehensive check, and provided sample implementation code.

Dynamics Retail Discount Concepts: Discountable Item Group


In Dynamics Retail, we allow the retailer to configure whether a newly scanned product is aggregated into an existing sales line when the cart already contains that product. For example, when you scan two of the same keyboard, you may see one line of quantity two, or two lines, each of quantity one. For discounts, it makes no functional difference. Technically, however, in the combinatorial and knapsack algorithms it can make a huge difference: in general, one line with a larger quantity performs far better.

Enter DiscountableItemGroup, the discountable item group. The item group index is the index into a fixed array of discountable item groups.

The first and obvious responsibility is to hold similar products together, aggregating quantity for discount calculation. By similar, as of this writing, we mean the same product Id (or variant Id if it is a variant), the same price, the same unit of measure, and so on.

The second, less obvious but important responsibility is to allocate discount details back to the sales lines after the discount engine figures out the best deal. Let's walk through some examples (a sketch of the allocation follows them). Say the cart contains four keyboards and the engine has applied two discount applications: a mix and match of three with 20% off, and a simple discount of one with $5 off.

Example 1: Discountable item group with one sales line of quantity four

We will split the sales line into two: one with quantity three for the mix and match of 20% off, and the other with quantity one for the simple discount of $5 off.

Example 2: Discountable item group with two sales lines, of quantity one and three

A perfect match: the line with quantity three gets the mix and match of 20% off, and the line with quantity one gets the simple discount of $5 off.

Example 3: Discountable item group with four sales lines, each of quantity one

Three sales lines will get the mix and match of 20% off, and the remaining one will get the simple discount of $5 off.

Example 4: Discountable item group with two sales lines, each of quantity two

We will assign one of the sales lines (quantity two) to the mix and match of 20% off. Then we will split the other sales line into two lines, each of quantity one: one for the mix and match and one for the simple discount.
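Here is a rough TypeScript sketch of the allocation idea illustrated by the examples above. All names are made up for illustration; the real engine also has to deal with compounding, priorities, rounding, and more.

// Illustrative sketch of allocating discount applications to sales lines inside one
// discountable item group. Names are invented; the real engine is more involved.
interface SalesLine { quantity: number; discountOffer?: string; }
interface DiscountApplication { offer: string; quantity: number; }   // e.g. { offer: "MixMatch20", quantity: 3 }

// Splits/assigns lines so that each discount application covers exactly its quantity.
function allocate(lines: SalesLine[], applications: DiscountApplication[]): SalesLine[] {
  const result: SalesLine[] = [];
  const pending = lines.map(l => ({ ...l }));          // work on copies
  for (const app of applications) {
    let remaining = app.quantity;
    while (remaining > 0) {
      const line = pending.find(l => l.quantity > 0 && l.discountOffer === undefined);
      if (!line) { throw new Error("not enough undiscounted quantity"); }
      if (line.quantity <= remaining) {
        line.discountOffer = app.offer;                // whole line covered by this offer
        remaining -= line.quantity;
      } else {
        line.quantity -= remaining;                    // split: carve off the covered part
        result.push({ quantity: remaining, discountOffer: app.offer });
        remaining = 0;
      }
    }
  }
  return [...result, ...pending];
}

// Example 1 above: one line of quantity four, mix and match of 3 plus simple discount of 1.
console.log(allocate(
  [{ quantity: 4 }],
  [{ offer: "MixMatch20", quantity: 3 }, { offer: "Simple5", quantity: 1 }]));
// -> a quantity-3 line for MixMatch20 and a quantity-1 line for Simple5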

We have not touched compounding and priority yet. They make the allocation more complicated, especially compounding.

Note: in these blog posts I mostly say product, while in the Dynamics Retail pricing engine we often use the term item. They are mostly the same thing.

Related: Retail Discount Concurrency – Best Deal Knapsack Problem

Visual Studio Code – Tips and Tricks


Visual Studio Code is a very cool cross-platform code editor that lets you develop on Linux, macOS, or Windows without installing a large IDE. Among the features that set Visual Studio Code apart: the editor not only has a small footprint on your machine, but very helpful development features such as Git integration, IntelliSense, and debugging for Node.js are included out of the box.

 

If you want to get the most out of Visual Studio Code, also take a look at this Tips and Tricks page, where many features are explained with the help of nice GIFs. For example:

 

Cycle through errors with f8 or shift+f8

errors and warnings

 

Of course, you can also tailor the tool to your development needs through its numerous extensions.

Chat Bots — Designing Intents and Entities for your NLP Models


The key to a bot understanding humans is its ability to understand human intentions, extract the relevant information from those intentions, and of course take the relevant action on that information.
NLP (natural language processing) is the science of extracting the intention of text and the relevant information it contains. The reason so many bot platforms are popping up like mushrooms is the advent of NLP-as-a-service platforms. Connecting to channels and developing bots was never the hard part; the missing link was an NLP platform that scales and is easy to work with, because you don't want to have to learn NLP just to build a simple bot!
Some popular NLP as a service platforms are
1. LUIS.ai — By Microsoft (BTW I work for MS)
2. Wit.ai — By Facebook
3. Api.ai — By Google
4. Watson — By IBM

An ideal bot platform offers
1. An NLP service that you can train yourself.
2. An SDK to support and handle conversations and their metadata.
3. A platform to host the bot code.
4. A platform to connect the bot logic with multiple channels.

While NLP-as-a-service platforms help developers build NLP capabilities in as little time as possible, developers at times find themselves at their wits' end trying to understand the basic jargon of NLP and to train their NLP service to the best of its ability.
Each NLP service bootstraps with its own corpora for language and domain; the corpora give the models the ability to understand the language, grammar, and terminology of a certain domain, and you should choose the most suitable domain when you deploy the NLP service.
In this article, I point out some best practices for training your NLP-as-a-service models.

Intents. Simply put, intents are the intentions of the end user; the user conveys these intentions to your bot. You can put your intents into two main categories:
1. Casual intents
2. Business intents

1. Casual intents. I also call them "small talk" intents. These intents open or close a conversation. Greetings like "hi", "hello", "Hola", "Ciao", or "bye" are the opening or closing statements of a conversation. These intents should direct your bot to respond with a small-talk reply like "Hello, what can I do for you today?" or "Bye, thanks for talking to me."
Casual intents also include affirmative and negative intents for utterances like "Ok", "yes please", "No not this one but the first one", or "Nope".
Having general affirmative and negative intents helps you handle all such utterances and interpret them in the context of the conversation the bot just had with the client.
For example, if the bot has just asked the end user a question, you should expect either an affirmative or a negative intent; if anything else comes back, the bot can ask the same question again (see the sketch after the two intent categories). Your affirmative and negative intents should be able to handle most such utterances.

2. Business intents. These are the intents that map directly to the business of the bot. For example, if it is a movie information bot, then a client utterance like "When was Schindler's List released?" is a business intent that intends to find out the release year of Schindler's List, and you should label it accordingly with an understandable name like "GetReleaseYearByTitle".
Ideally, you should spend most of your design effort on business intents, because the rest of the small talk, like saying hello or affirming choices, is taken care of by the general casual intents.
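As a sketch of the context-aware handling of affirmative and negative intents mentioned above, something along these lines is usually enough. The intent names and the dialog-state shape here are invented for illustration and are not any particular SDK's API.

// Sketch: routing affirmative/negative intents in the context of the last bot question.
type Intent = "Affirmative" | "Negative" | "None" | string;

interface DialogState { pendingQuestion?: string; }   // question the bot is waiting on, if any

function handleIntent(intent: Intent, state: DialogState): string {
  if (state.pendingQuestion) {
    if (intent === "Affirmative") { state.pendingQuestion = undefined; return "Great, let's do that."; }
    if (intent === "Negative")    { state.pendingQuestion = undefined; return "Okay, we'll skip it."; }
    // Anything else while a question is pending: ask the same question again.
    return `Sorry, I didn't catch that. ${state.pendingQuestion}`;
  }
  return "What can I do for you today?";
}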

Entities
Business intents carry metadata about the intent, called "entities". Let's take an example for the intent "GetReleaseYearByTitle", with the sample utterance "When was Schindler's list released?"
Here "Schindler's list" is the title of the movie for which the user intends to find out the release year. The process of finding entities can be understood as part-of-speech (POS) tagging. As a user of NLP as a service you don't need to get into the technicalities of how POS tagging works, but if you do want to, here is a nice paper on it: http://nlp.stanford.edu/software/tagger.shtml

Whenever you design your intents, the entities must also be identified and labelled accordingly. You can also have general entities labelled for use across intents, such as metrics (including quantity, count, and volume) and dates, and most NLP-as-a-service offerings let you tag entities of such general types without any big hassle.

Some entities may be labelled as composite entities, that is, entities that have more than one entity (component entities) inside them. It does not matter much if your NLP service lacks this feature, as long as you have simple entity labelling. You must define the component entities before labelling composite entities.
For example: "Find me a pair of Size 8 Red Adidas Sport shoes."
Intent: SearchProduct
Entities:
  • Composite entity: ProductDetail
    • Component entities:
      • Size: 8
      • Brand: Adidas
      • Color: Red
      • Category: Sport Shoes
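Expressed as a data structure, the example above might be parsed into something like the following. The shape is a generic illustration, not the exact JSON returned by any particular NLP service.

// The "SearchProduct" example above as a generic parsed result.
interface ComponentEntity { type: string; value: string; }
interface CompositeEntity { type: string; children: ComponentEntity[]; }

interface ParsedUtterance {
  query: string;
  intent: string;
  entities: CompositeEntity[];
}

const parsed: ParsedUtterance = {
  query: "Find me a pair of Size 8 Red Adidas Sport shoes",
  intent: "SearchProduct",
  entities: [{
    type: "ProductDetail",
    children: [
      { type: "Size", value: "8" },
      { type: "Brand", value: "Adidas" },
      { type: "Color", value: "Red" },
      { type: "Category", value: "Sport Shoes" },
    ],
  }],
};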

Training for Intents and Entities

Ideally, you should train the NLP service with a real corpus. If you have chat messages with your clients from Facebook, Skype, or whatever channel you work with, those messages/utterances can help train the intents; otherwise you can train an intent with your own "manufactured" utterances. For example, training for the intent "GetReleaseYearByTitle" can use utterances like

“what was the release year of movie Pulp fiction”
“in which year Pulp fiction was released”
“when did pulp fiction came” — Bad English I know 🙂
“When was Pulp fiction released”

Training with manufactured utterances helps bootstrap the system, but you must re-train the NLP service once it starts receiving real utterances, and the re-training should continue until the error rate drops. The more varied the utterances you receive from real conversations, the better you can train your NLP service for intents. A minimum of 5, or optimally 10, utterances per intent is good enough to bootstrap the system.
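A hypothetical bootstrap payload for the utterances above could look like this; every NLP service has its own import format, so treat the shape as illustrative only.

// Manufactured utterances for bootstrapping the "GetReleaseYearByTitle" intent,
// expressed as a generic training payload.
const trainingExamples = [
  { text: "what was the release year of movie Pulp fiction", intent: "GetReleaseYearByTitle", entities: [{ type: "Title", value: "Pulp fiction" }] },
  { text: "in which year Pulp fiction was released",          intent: "GetReleaseYearByTitle", entities: [{ type: "Title", value: "Pulp fiction" }] },
  { text: "when did pulp fiction came",                       intent: "GetReleaseYearByTitle", entities: [{ type: "Title", value: "pulp fiction" }] },
  { text: "When was Pulp fiction released",                   intent: "GetReleaseYearByTitle", entities: [{ type: "Title", value: "Pulp fiction" }] },
];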

The NLP services go through a routine of supervised, unsupervised, and again supervised learning phases. Each supervised learning phase serves as a feedback loop through which course correction is done for the NLP models.

The user trains the system with utterances; this is supervised learning. The NLP service then learns on its own on the basis of that supervised learning, and about 10% of what the service has learnt during unsupervised learning is presented back to the user to confirm whether what it has learnt is correct. The user affirms or negates the unsupervised learning results and re-trains the model. This process keeps going, and eventually the user will see fewer and fewer questions from the NLP service, with more and more confidence attached to the questions it does ask the user to affirm.
Key Takeaways –

  • Identify Intents in advance — differentiate between general/casual and business intents.
  • Identify Entities — differentiate between metric related and noun related entities.
  • If possible train Intents with original corpus of conversations, otherwise train with manufactured utterances. Minimum 5 utterances, optimum 10 utterances.
  • Train, Converse, re-train — feedback loop must continue in order to train your NLP models.

Interesting reads –

  • http://www.nlpca.com/DCweb/settingintent.html
  • https://research.google.com/pubs/NaturalLanguageProcessing.html
  • http://nlp.stanford.edu/software/tagger.shtml
  • https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/45678.pdf

Similar article at Medium – https://medium.com/@brijrajsingh/chat-bots-designing-intents-and-entities-for-your-nlp-models-35c385b7730d#.dj39gyhtl

Making the connection between HTML and UI Automation Patterns


This post discusses how your HTML UI can automatically support various UI Automation patterns, and how you can verify the results by using the Inspect SDK tool.

 

Introduction

The Narrator screen reader uses the Windows UI Automation (UIA) API to access UI presented by your app. So Narrator will access UIA properties relating to elements shown in your UI, and this means it can announce helpful information such as an element’s Name and ControlType. For example, your customer might hear “Save, Button” as they move the keyboard focus over to a button whose purpose is to save something.

Narrator can also access information about the UIA patterns supported by an element. These patterns describe the behaviors of the element, and allow Narrator to programmatically control the element in response to your customer’s actions. For example, your customer might move Narrator to an element which supports the UIA ExpandCollapse pattern, and Narrator will inform your customer whether the element is currently expanded or collapsed. If your customer is using a keyboard, they might choose to change the expanded state through the regular keyboard action. If your customer has no keyboard, and is interacting with the device through touch, they might make a touch gesture which results in Narrator programmatically changing the expanded state through the UIA ExpandCollapse pattern.

 

I’ve recently seen a few devs get told that their UI doesn’t support all the UIA patterns that it should. So this then raises a few questions:

1. Who says my UI should support those patterns?

2. How could this have been avoided?

3. How do I fix this?

4. How could I have learnt about the problem myself when I first built the UI?

 

Below is my take on all this.

 

Who says my UI should support those patterns?

When you design your UI, you don’t think of some big array of pixels on the screen, where some pixels show different colors from others. And you don’t think of shapes and lines and text characters which happen to appear near each other on the screen. So you don’t say to yourself, “I’m going to put a small box here, and nearby I’ll put some text, and if my customer clicks or taps somewhere on all that, something great will happen”. Instead you say, “I’m going to put a really helpful checkbox here”.

Everything in your UI has meaning. By creating UI elements with meaning, you already have an expectation on how that element will behave, and very importantly, so does your customer. Your customer will have expectations around how the element will react to mouse, touch and keyboard input, and how a screen reader will interact with the element. To provide a predictable experience to your customer is essential.

So what’s going to set your customer’s expectations around an element’s behavior when Narrator reaches the element? Those expectations will be set through the element’s UIA ControlType property. If Narrator announces that it’s reached an element with a ControlType of CheckBox, (and doesn’t announce that the checkbox is disabled,) then your customer will expect that they can toggle it. Similarly, if they reach an element with a ControlType of ComboBox, they’ll expect that they can expand it to reveal its dropdown list.

By the way, when I refer to ControlTypes of CheckBox and ComboBox, the actual value of the ControlType properties are UIA_CheckBoxControlTypeId and UIA_ComboBoxControlTypeId respectively.

And from a Narrator perspective, if an element can be programmatically toggled, then it must support the UIA Toggle pattern. If an element can be programmatically expanded, then it must support the UIA ExpandCollapse pattern. A full list of patterns can be found at UI Automation Control Patterns Overview. (Technically, I should be referring to patterns as “control patterns” throughout this, but I’m just so used to using the term patterns…)

 

The first part of the answer to the question of “Who says my UI should support those patterns?”, is really around what you would expect an intuitive experience to be. Many sighted devs who look at some UI, would intuitively know how they expect the UI to behave, based on their past experiences with UI. If they see something that looks like a checkbox, they’ll expect that it can be toggled. If they see something that looks like a combobox, they’ll expect that it can be expanded and collapsed. Your customers who use screen readers have the same expectations in response to learning of the element’s ControlType. In order to meet your customer’s expectations, the elements will need to support the relevant UIA patterns, such as the Toggle pattern and ExpandCollapse pattern.

Some more prescriptive details can be found at Control Pattern Mapping for UI Automation Clients. A table at that page lists UIA ControlTypes, along with the UIA patterns that each ControlType is expected to always support, and the UIA patterns which may be supported in some situations.

I’d recommend taking a look at that table, as I do think it’s pretty interesting. For example, the Button ControlType is not required to support the UIA Invoke pattern. That might come as a surprise, because all buttons can be invoked right? Well, no, not all.

As I type this in Word 2016, I consider the buttons that exist on the Word ribbon. If I point the Inspect SDK tool to the “Decrease Font Size” button, Inspect tells me that the button supports the UIA Invoke pattern. But if I point Inspect to the “Bold” button, Inspect tells me that button doesn’t support the Invoke pattern, but it does support the Toggle pattern. This all makes sense given the purpose of the buttons. The “Bold” button has a current state which the “Decrease Font Size” button doesn’t.

 


Figure 1: The Inspect SDK tool reporting that the “Bold” button on the Word 2016 ribbon supports the UIA Toggle pattern.

 

We’ll talk more about the Inspect SDK tool a little later, and highlight how useful it can be when verifying that the UIA patterns are really supported when an element claims that it supports them.

Note: The table shown at the MSDN page referenced above is created with all sorts of useful HTML such as <th>, <tr> and <td>, which gives the UI meaning. With all that great information, Narrator can help your customer efficiently interact with the data contained in the table. More details on how your customer can interact with tables in your UI is at How a table at MSDN became accessible.

Ok, so you now feel that it’s fair enough that your UI should support some particular UIA pattern, and the bug assigned to you is justified. Your next thought is, “Why do I have to be dealing with this? Shouldn’t the UI platform have supplied my element with the required UIA pattern support by default?”

 

How could this have been avoided?

You’re asking the right question. Avoiding accessibility bugs in your UI is a far better thing than having to spend time investigating and fixing bugs.

In many situations, the bugs can be avoided by using standard controls which are provided by the UI framework.

 

For example, say my HTML is to contain a combobox with a dropdown. A standard way to define such UI is through use of the “select” tag. When the minimal HTML shown below is loaded up in Edge, a combobox is shown visually, and I can use the mouse, touch or the keyboard to expand and collapse the combobox.

 

<label for="birdBox">Birds</label>

<select id="birdBox">
    <option>Towhee</option>
    <option>Steller's Jay</option>
    <option>Chickadee</option>
</select>

 

But what’s more, I can point the Inspect SDK tool at the UI, and learn that the UIA ExpandCollapse pattern is supported by default.

 


Figure 2: The Inspect SDK tool reporting that a combobox defined through use of the “select” tag supports the UIA ExpandCollapse pattern by default.

 

Inspect also shows me that Edge is exposing all sorts of other helpful information about the element through UIA. We’ll take a look at some of those later.

 

SO: The message here is that if at all possible, use a standard control for your UI. By doing this, in many cases, the UI framework will provide the UI with a lot of accessibility by default. If you’re presenting an interactable element that shows all sorts of fun custom visuals, go crazy with styling a standard control rather than building a fully custom control.

 

How do I fix this?

Ok, so say by now, you’ve found that for some of your bugs you could replace some div, (that you’d previously marked up to behave in a particular way,) with a standard control, and that resolved the bugs. But in some other cases, you really feel that you needed to stick with something like a div for your control. The next question is, how can you convey the full meaning of that UI to your customer?

Let’s look at an example where you’ve decided to stick with using a div to define an element whose purpose is a checkbox. If Narrator’s going to be able to inform your customer that they’ve encountered a checkbox, and is to be able to programmatically toggle the state of that checkbox through the UIA Toggle pattern, then the element better have a UIA ControlType of CheckBox.

In order to achieve this, you can add an HTML role to the div. Your next question is, how do you know what role you should use, such that the element gets exposed through UIA as having a ControlType of CheckBox. This is where you can visit Mapping ARIA Roles, States, and Properties to UI Automation, and find the role you’re after. And not too surprisingly, the role you need in this case is checkbox.

So let’s say I have the following HTML hosted in Edge, and let’s ignore the fact that this doesn’t look like a checkbox at all. (If you were doing this for real, presumably you’d have all sorts of styling to make your UI look like a checkbox.)

 

<div role="checkbox">Birdwatching required</div>

 

If I point the Inspect SDK tool to that element, I’m told that the element has a ControlType of CheckBox, and that it supports the UIA Toggle pattern, and that its current toggle state is “off”. Wow! All that, just by setting the role!

 


Figure 3: The Inspect SDK tool reporting that the UIA Toggle pattern is supported by a div, and its current toggled state is “off”.

 

But let’s pause for a moment to think about this. Yes, we did get some very important changes to the programmatic accessibility of the div by adding a role of checkbox. But that’s only the start of the work that’s required to make the element fully accessible. You’ll need to react to the element being programmatically checked or unchecked through the UIA Toggle pattern, and always expose its current state through the aria-checked attribute.

And then going beyond programmatic accessibility, you need to make sure the element is fully keyboard accessible. So your customer can tab to it, and your sighted customer is informed through visual feedback that the element has keyboard focus, (including through appropriate system colors when a high contrast theme is active), and they can change the toggle state of the element through the keyboard.

 

Important: Simply assigning a role to an element, or applying some aria attribute, does not impact how the element responds to regular keyboard input. To have some custom UI respond to keyboard input in a similar way to how a standard control responds, you will need to add the necessary JavaScript.
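As a rough idea of that extra work, the TypeScript/DOM sketch below makes the div focusable, toggles it with the Space key, and keeps aria-checked in sync. It is deliberately simplified and leaves out focus visuals, high contrast support, and labeling.

// Minimal sketch: making a div with role="checkbox" respond to keyboard input
// and keep aria-checked in sync. Real UI also needs focus visuals, labels, etc.
const checkbox = document.querySelector<HTMLDivElement>('div[role="checkbox"]')!;

checkbox.tabIndex = 0;                                  // make it keyboard focusable
checkbox.setAttribute("aria-checked", "false");         // expose the initial state

function toggle(): void {
  const checked = checkbox.getAttribute("aria-checked") === "true";
  checkbox.setAttribute("aria-checked", String(!checked));
  // ...update the visual rendering of the checkbox here...
}

checkbox.addEventListener("click", toggle);             // pointer input; assistive technologies typically raise a click too
checkbox.addEventListener("keydown", (e: KeyboardEvent) => {
  if (e.key === " " || e.key === "Spacebar") {          // Space toggles, as for a standard checkbox
    e.preventDefault();                                 // don't scroll the page
    toggle();
  }
});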

 

So while it is sometimes necessary to set a role on some custom UI as part of making the UI accessible, the additional work required to make the UI fully accessible can be far from trivial. So please do consider first whether you really need that custom UI. If some standard control can provide all or most of the accessibility that your customer needs – go for it!

 

How could I have learnt about the problem myself when I first built the UI?

After careful consideration and some changes to use standard controls or roles, you’ve fixed your bugs. But you’re left thinking, how could you have determined that these bugs were lurking in your UI when you were first building it? It’s time consuming for multiple people to have been involved with logging and discussing bugs, and to be code-reviewing fixes. It would have been great if the bug had never been checked-in in the first place.

This is where I find the Inspect SDK tool so very, very helpful.

Say I’ve built my combobox UI in HTML, using the “select” tag. I believe that by doing this, my customer will be informed that the ControlType of the element is ComboBox, and that they can determine its current expanded state, and can use Narrator to programmatically change the expanded state. But now I want to verify that.

So I point Inspect to the element, and first check out the UIA properties exposed by Edge for that UI.

For this discussion, the first property I verify is that the ControlType is indeed ComboBox, and then I check that the IsExpandCollapsePatternAvailable is true. So far so good.

But an IsExpandCollapsePatternAvailable property of true only means that the element claims to support the UIA ExpandCollapse pattern, and doesn’t necessarily mean that the element really supports it. So what I can do next is check the UIA properties that are exposed through the ExpandCollapse pattern. (Many UIA patterns contain properties which are relevant only to that pattern, and are not available on elements which don’t support the pattern.)

In the case of the ExpandCollapse pattern, it only has one property; the ExpandCollapseState property. So I can use Inspect to verify that that property matches the visual state of the UI.

 


Figure 4: Inspect reporting the current expanded state of my combobox.

 

And this is where a powerful, but lesser known feature of Inspect becomes interesting. Not only can I use Inspect to verify the properties associated with a particular pattern, I can also have Inspect programmatically call methods available through that pattern. In the case of the ExpandCollapse pattern, it has the methods of Expand() and Collapse().

To call those methods, I point Inspect to the combobox element, and then go to Inspect’s Action menu. That menu is dynamically populated to include the methods available off the patterns that the element claims to support. All I then need to do is pick the methods shown in the menu to call the Expand() and Collapse() methods. As I do this, I make sure that the combobox UI shown in Edge responds as expected, and that the ExpandCollapseState property reported by Inspect gets updated to reflect the new state.

 


Figure 5: Inspect’s Action menu allowing me to programmatically expand my combobox.

 

Once I’ve done this, I can examine lots of other UIA properties being exposed for the combobox, including:

 

1. The Name of the combobox, which tells my customer what the combobox relates to.

2. Its BoundingRectangle, which is leveraged by my customers who find Narrator’s multi-modal output of audio and visual highlight helpful. And is also leveraged by my customers using Narrator with touch, if they want to learn about the spatial relationship of the UI shown visually. And is also leveraged by my customers using Magnifier, when the entire element is to be shown in the magnified view

3. The current value of the combobox as exposed through the UIA Value pattern. The Value property should be the same as the currently selected item in the combobox’s visual UI.

4. Loads of other helpful UIA properties.

 


Figure 6: My combobox expanded in response to the ExpandCollapse pattern’s Expand() method being called. Inspect shows the current expanded state of the combobox, as exposed through the ExpandCollapse pattern and the current value exposed through the Value pattern.

 

One way or another, whether I use the “select” tag or instead build custom UI, I need to get to a point where Inspect reports everything I’d expect relating to the programmatic accessibility of the combobox.

 

A note on UWP XAML and WPF

An HTML dev doesn’t directly add markup describing an element’s UIA pattern support, but UWP XAML and WPF devs have more direct control over such support. That’s because there’s a fairly close match between patterns that can be added to custom AutomationPeers and the desired UIA patterns.

For example, you might build a custom AutomationPeer that implements the XAML Windows.UI.Xaml.Automation.Provider.IExpandCollapseProvider interface, in order for the UIA ExpandCollapse pattern to be supported.

The “Adding support for other UIA patterns to a custom AutomationPeer” section at More tips on building accessible Windows apps includes snippets showing how to add support for a bunch of UIA patterns by updating a custom AutomationPeer.

Important: By default you’ll want to use standard UWP XAML or WPF controls which already fully support the UIA patterns that your customer needs. Only add pattern support through custom AutomationPeers when absolutely necessary.

 

Summary

When a bug comes your way which says that your HTML doesn’t support some required UIA pattern, consider the following:

1. Consider whether it is appropriate for the UI to support that pattern. For example, if the element can be invoked, toggled, or expanded through input mechanisms such as touch, mouse or keyboard, then it will need to support the relevant UIA patterns so that the same functionality can be accessed programmatically.

2. If practical, replace the custom UI which exhibits the bug with a standard control which fully supports the required UIA pattern by default.

3. If use of a standard control is not practical, set a role on the element such that Edge will add partial support for the UIA pattern. You will then need to enhance the element’s functionality to react to programmatic calls into the pattern’s methods, and update as required the exposed properties accessed through the pattern.

4. Once you believe the bug is fixed, use the Inspect SDK tool to verify that all properties and methods contained in the pattern are fully functional.

 

As always, thanks for considering how all your customers can efficiently access all the great functionality in your app!

Guy

OS and Data disk encryption of Azure IaaS Windows VMs


VM disk encryption is achieved using BitLocker encryption for Windows VMs and DM-Crypt for Linux VMs. It leverages the Azure KeyVault service to store your encryption keys, which adds an additional level of security. The process is quite straightforward, and encryption can be done for new as well as existing VMs. For Windows IaaS VMs we can encrypt both the OS and data disks, while for Linux IaaS VMs the data disk can be encrypted. You will have to use the ARM deployment model to leverage this feature, since it does not work with the classic model. There are ARM templates available in GitHub that can help you with the encryption of existing and new VMs.

In this blog, I am going to focus on the end-to-end procedure for Windows VM OS and data disk encryption with KeyVault, by leveraging these templates.

Prerequisites:

1) Create an Azure KeyVault

This can now be done from the ARM portal itself, where KeyVault is available in preview mode. You can create a new KeyVault by providing basic information like the KeyVault name, resource group name, location, etc. In addition, by default your user account will have access under "Access Policies". You can edit the "Advanced Access Policy" and enable all three options given there.

security2

Alternatively, you can use the ARM template available at this location to create a KeyVault:

https://github.com/Azure/azure-quickstart-templates/tree/master/101-key-vault-create

This template will create a KeyVault for you with all three advanced access policies, including the volume encryption policy that we need for disk encryption.

2) Create an application in Azure AD with permission to access KeyVault

This is a very important step, since you will be using this application's ID and key during VM encryption.

  1. Select the organization's Active Directory from the classic portal and select the Applications tab.

security3

2. Click on Add from the bottom menu to add a new application, and select the first option, i.e. "Add an application my organization is developing".

security4

3. Provide the name of the application in the next step.

security5

4. Add the Sign-on URL and App URI. You can enter any value here in URI format; it does not need to be an existing application. The only requirement is that it is unique within the organization.

security6

5. Now click on Configure, and copy the client ID of the application.

security7

6. Next we need the application key. This can be generated from the portal under the Keys section. Select a duration of 1 year from the drop-down. Once you save the configuration, a key will be displayed that you can copy.

security8

7. The next step is to provide this application access to the KeyVault. This can be done from an Azure PowerShell window using the following command:

Set-AzureRmKeyVaultAccessPolicy -VaultName $keyVaultName -ServicePrincipalName $aadClientID -PermissionsToKeys 'WrapKey' -PermissionsToSecrets 'Set' -ResourceGroupName $rgname

You have to set the variables $keyVaultName, $aadClientID, and $rgname to the values of your KeyVault name, the client ID of the application that we got at step 5 above, and the resource group name.

Now you have the client ID and the key that you will need during the ARM template execution. Let's proceed with the VM disk encryption.

Create a new encrypted VM using ARM template and KeyVault

Deploy the following template available in GitHub to encrypt a new VM : https://github.com/Azure/azure-quickstart-templates/tree/master/201-encrypt-create-new-vm-gallery-image

Click on the “deploy to Azure” option in the page to deploy the ARM template directly in Azure

security9

Provide the mandatory parameter values like VM name, admin username, password, storage account name, virtual network, subnet, KeyVault name, and KeyVault resource group, along with the client ID and client secret of the Azure AD application that we created earlier. Additionally, there are two optional parameters for a key encryption key and its URL; they are not mandatory and we are not using them in this example. You can then agree to the terms and conditions and click on Purchase, and the deployment of the encrypted VM will start.


Encrypt an existing VM using ARM template and keyvault

Deploy the following template available in GitHub to encrypt an existing VM: https://github.com/Azure/azure-quickstart-templates/tree/master/201-encrypt-running-windows-vm

Deploy the template to Azure and provide the details of the VM that you want to encrypt. 'Volume type' can be OS, Data, or All (the default value), depending on which disks you want to encrypt.

security11

Check encryption status

Now that you have enabled encryption, you might want to verify the status. There are a few ways to check this.

The easiest way is to check from the Azure portal: navigate to the Disks information of the VM, which will show the OS disk status as Encrypted.

security12

You can check the data disk encryption status by using the Azure PowerShell command Get-AzureRmVmDiskEncryptionStatus, providing the resource group name and VM name as parameters:
Get-AzureRmVmDiskEncryptionStatus -ResourceGroupName $rgname -VMName $Vmname

security13

You will see the OSVolumeEncrypted and DataVolumesEncrypted status as True if encryption is enabled.

You can also check the status of disk encryption from within the VM using the 'manage-bde' command, providing the drive letter as a parameter. Sample output is given below.

security14

It can also be seen from the GUI of the server: the encrypted drives will have a lock icon associated with them.

security15

 Now you know how to enable disk encryption for protection of data at rest in Azure!

 

 

Ref: https://azure.microsoft.com/en-us/documentation/articles/best-practices-network-security/
https://azure.microsoft.com/en-in/documentation/articles/azure-security-disk-encryption/

 

 


Ormiston College STEM Day


There is no doubt that PBL (Problem or Project Based Learning) and STEM (Science, Technology, Engineering and Mathematics) have captured the current zeitgeist of contemporary learning around the globe.  Many schools in Australia have realised that they need to do something quickly to enable their students to have the skills and dispositions to survive and thrive when they leave their formal education.

One school that is leading the way in Australia is Ormiston College, also a Microsoft World Showcase School.  Annette MacArthur and Tamara Sullivan, along with Rowena Taylor, were our hosts for a very special learning opportunity on 3 November, 2016.  

“You always get so much value from visiting school and speaking with other educators. I was so impressed that the children ran the sessions themselves. The sessions on PBL and planning STEM units was fantastic. Congratulations to Ormiston College!”


STEM interactive learning in progress at Ormiston College.

Attendees could access a comprehensive OneNote Notebook filled with STEM and PBL resources during the day, as well as having an opportunity to share a Surface Pro device in a Class Notebook set up.  This is an ideal way to facilitate STEM learning; via a powerful touch device with front and rear cameras and a responsive pen for sketching, inking and easy recording of ideas.

“Amazing student led STEM presentations, amazing resources and friendly presenters. What a great day!”

The innovative teachers and leaders of the school facilitated the first session.  After being welcomed by Headmaster, Brett Webster, Annette and Rowena demonstrated how to facilitate STEM and PBL training for teachers, via a fun and simple hydraulics activity.  Having teachers engage in the types of activities that the students need to do, rather than listen to an expert preaching the rhetoric, is a great example of modelling for the teachers.

In this session, the Ormiston approach was outlined and the concept of a cross-disciplinary approach to delivering STEM learning was identified.  A deep dive into PBL was also an essential component of this day, with Annette and Rowena clarifying their approach to PBL for the attendees.

“It was great to see what Ormiston are doing with STEM, they have fantastic facilities and it was a really great day.”

After morning tea, the students took over.  Teachers could choose three out of four student-led workshops. These included:

  • 3D printing and robotic hand
  • Sound sensor and pet water bowl
  • Ozobots and OC disaster
  • maKey maKey and interactive stories

A feedback survey completed by the attendees showed that the sound sensor with the pet water bowl, and 3D printing with the robotic hand were equal favourite sessions.


3D printing and robotics.

Having students run these sessions and teach the teachers is such a powerful activity that we recommend it to all schools.  Empowering students not only to demonstrate, but also to have the confidence to address adults and offer expert advice, is surely a goal of all schools, and we saw excellent examples of this on display at Ormiston College.

“An interesting day for many schools who are on the STEM journey, allowing us to see the work in progress at Ormiston College and contextualise it to our own Schools. Came away with more questions than answers, but that is a good thing – as these questions are more focused around what we need to do to get our STEM program working across the School.”

After these very successful rotations, teachers had the opportunity to work with one concept in more detail so they could implement it at their school.  Working in table groups, delegates quickly started sharing and collaborating on ideas, based around what they had seen on the day and with support from their table partners.


STEM in classroom learning

“I walked away feeling that I had the confidence and knowledge to start the process of planning within my school. The sessions showed us that innovation is possible.”

So where does Microsoft fit in here?  

OneNote is the ultimate STEM tool, as it offers so many ways to collect and display data and results, including multimedia.  

Through the Class Notebook, teachers can keep a close eye on the progress of their students and provide rich and dynamic feedback.  

With the Surface Pro and pen, students have the ultimate device for learning and collaborating.

You can access Microsoft’s free STEM and PBL notebooks from this Docs.com collection.  

Written by Matt Jorgensen. Matt Jorgensen is part of the Microsoft Australia Teacher Ambassador and Microsoft Innovative Educator Expert programmes

Azure Logic Apps – Post to Twitter


The Azure Development Community uses a Logic App to post to Twitter when new blog posts are created.  This post takes a look at how this was implemented.

Overview

The final logic app is pretty straightforward and consists of a single trigger and action as shown below:

la1

Creating the Logic App

Starting with an empty designer, an RSS trigger is added that detects when a new post is published:

la2

Configuring the trigger is pretty straightforward as only the URL and interval are required:

la3

The next step is to add an action to post to Twitter:

la4

The initial setup requires signing in to Twitter to authorize the Logic App to have access to post to Twitter:

la7

After authorization has been granted, the text in the tweet is specified by selecting content from the previous step (the blog entry). The image below shows both Feed title and Primary feed link being selected, and illustrates the additional content that is available.

la5

That’s it!

Exploring the Logic App

The following looks into different aspects of running Logic App.

Health

Viewing a Logic App in the portal immediately provides some really helpful information.  In the Essentials section, we have basic information as well as a summary of the definition (1 trigger and 1 action), the current status, and a summary of the last day of activity.

la8

There are also several charts.  The first is a summary of the runs (i.e., when the trigger condition was met) and includes the status, start time, and duration.  In the example below it looks like the logic app takes about half a second to complete.

la9

An interesting thing to note is that after the initial setup was working (1/20/2017, 2:14 PM), the next tweet failed (1/21/2017).  The chart provides a great way to troubleshoot the error.  First the failure is selected, which provides a view of the failed run:

la14

By selecting the failure (clicking the red !), we can see that the issue is that Twitter responded with an Unauthorized code.  See the Twitter section below for more information.

la15

Another useful chart is the Trigger History showing the app is running every 10 minutes:

la12

And there is the Billable executions in the past month chart, showing the daily activity of both billable triggers and billable actions.  Note: In the image below a single day is being hovered over to illustrate the dynamic chart.

la13

Twitter

The failed runs are interesting, as the configuration did not change, yet the later runs failed with an unauthorized response from Twitter. To investigate the error, the authorization was reviewed by logging into the Twitter account and going to the Settings.

la10

In the Apps tab the applications with access to the twitter account are shown.

la11

Alerts

In general the tweeting Logic App can be ignored, except when there is a failure sending the tweet.  To be notified of such a failure, an alert was created.  There are several places to add an alert, and one is when viewing the Billable executions chart.

la16

There are many metrics that could be selected as illustrated in the image below.

la17

For the purpose of being notified of failures, an alert was created where Runs Failed was greater than 0, as shown below:

la18

Pricing

Determining the cost of running the Logic App is straightforward. From the Pricing page, we can get an understanding of how the pricing model works for Logic Apps; note that pricing varies per region.

The key is determining the number of executions. As we saw in the Billable executions in the past month report, we average around 144 executions per day. Roughly speaking, that is mostly the first step checking for new blog posts every 10 minutes (6 per hour * 24 hours = 144). If we round up to 150 and take a 30-day month, we can use 4,500 as a monthly total of executions, which puts us within the $0.0008 / action billing range for a total of $3.60 per month.
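If you want to plug in your own numbers, the arithmetic is easy to script. Below is a minimal C# sketch of that back-of-the-envelope estimate; the execution count and the $0.0008 rate are the assumptions from the paragraph above, not a live pricing lookup.

using System;

class LogicAppCostEstimate
{
    static void Main()
    {
        // Assumptions taken from the paragraph above, not current Azure pricing.
        const int executionsPerDay = 150;          // ~144 trigger checks per day, rounded up
        const int daysPerMonth = 30;
        const decimal pricePerExecution = 0.0008m; // USD per billable execution

        int monthlyExecutions = executionsPerDay * daysPerMonth;
        decimal monthlyCost = monthlyExecutions * pricePerExecution;

        // Prints: 4500 executions -> $3.60 per month
        Console.WriteLine($"{monthlyExecutions} executions -> ${monthlyCost:0.00} per month");
    }
}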

SQL Server on Linux: An LLDB Debugging Tale


You are aware of our statements indicating “Microsoft Loves Linux.”  Over the last couple of years the open source activities at Microsoft have accelerated all around me.  Recently I blogged about the design of the debugger bridge and the use of LLDB.  In this post I want to highlight my recent debugging session into LLDB and show you how we are contributing to LLDB.

Scenario

We captured a core dump on a 2TB RAM system.  When loading it with LLDB the core dump would open but we could not symbolize stacks or see specific memory that was expected to be present in the dump. Opening the dump with gdb allowed us to see the memory region and dumping the PT_LOAD headers with readelf showed the memory region was present in the dump.

Debugging Steps

I started with LLDB logging.  LLDB has logging capabilities to assist in understanding what is transpiring.

Command Example: log enable -gv -- lldb dyld module platform

You can use the command “log list” to see the channels (gdb-remote, lldb, …) and supported logging categories.  Using the -O command line parameter when you start LLDB tells LLDB to execute the command before the files are loaded.   This lets you log information while loading the core dump.

Launch: lldb -c core.sqlservr.8917 -f ./bin/sqlservr.dbg -O "log enable -gv -- lldb dyld platform"

  • LLDB loads the target and associated shared libraries (I.E. DLLs) during the target create action.  
  • In the log you can see LLDB is setting the path to that of the specified executable and using that path to look for shared libraries (.so files.)
  • The rendezvous logic is where LLDB attempts to find information in the dump overlapping the shared modules.  When I stepped through the logic the DYLD is the key.  DYLD (dynamic loading) is where LLDB is using the core dump information (named memory regions such as [vdso]) to locate the dynamic object (list maps) and other information in an attempt to obtain the shared module details.

(lldb) log enable -gv -- lldb dyld platform
(lldb) target create "./bin/sqlservr.dbg" --core "core.sqlservr.8917"
DYLDRendezvous::DYLDRendezvous exe module executable path set: ‘./bin/sqlservr.dbg’
DynamicLoaderPOSIXDYLD::DidAttach() pid 8917

DynamicLoaderPOSIXDYLD::ResolveExecutableModule – got executable by pid 8917: ./bin/sqlservr.dbg
DynamicLoaderPOSIXDYLD::DidAttach pid 8917 executable ‘./bin/sqlservr.dbg’, load_offset 0x7f5e4bd16000
DynamicLoaderPOSIXDYLD::DidAttach pid 8917 added executable ‘./bin/sqlservr.dbg’ to module load list

You can see LLDB reading over the sections by name, but LLDB terminates the search early and can’t find the symbol sections.  Without the symbol sections, variable and stack decoding is limited.


SectionLoadList::SetSectionLoadAddress (section = 0x5642dc4a62e0 (./bin/sqlservr.dbg..bss), load_addr = 0x00007f5e4bfb5e80) module = 0x5642dc4919b0
DYLDRendezvous::Resolve address size: 8, padding 4
ResolveRendezvousAddress info_location = 0x7f5e4bfb4dc0
ResolveRendezvousAddress reading pointer (8 bytes) from 0x7f5e4bfb4dc0
ResolveRendezvousAddress FAILED – could not read from the info location: core file does not contain 0x7f5e4bfb4dc0
DYLDRendezvous::Resolve cursor = 0xffffffffffffffff
DynamicLoaderPOSIXDYLD::DidAttach() pid 8917 rendezvous could not yet resolve: adding breakpoint to catch future rendezvous setup
DynamicLoaderPOSIXDYLD::ProbeEntry pid 8917 GetEntryPoint() returned address 0x7f5e4bd2d83c, setting entry breakpoint
DynamicLoaderPOSIXDYLD::DidAttach told the target about the modules that loaded:
— [module] ./bin/sqlservr.dbg (pid 8917) Core file ‘/media/rdorr/Scratch1/Temp/CoreWontLoad/core.sqlservr.8917’ (x86_64) was loaded.

Why is the read failing?

Taking a closer look at the log and stepping through the LLDB code, I found the read failure occurring because the memory address could not be located in the tracked VM map.  The VM map is built from the PT_LOAD entries in the core dump file.   A PT_LOAD entry contains the address range, permissions and offset in the dump where the memory region is stored.

I turned to readelf -h to dump the ELF header associated with the core dump file.

ELF Header:
  Magic:   7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
  Class:                             ELF64
  Data:                              2’s complement, little endian
  Version:                           1 (current)
  OS/ABI:                            UNIX – System V
  ABI Version:                       0
  Type:                              CORE (Core file)
  Machine:                           Advanced Micro Devices X86-64
  Version:                           0x1
  Entry point address:               0x0
  Start of program headers:          64 (bytes into file)
  Start of section headers:          16880770624 (bytes into file)
  Flags:                             0x0
  Size of this header:               64 (bytes)
  Size of program headers:           56 (bytes)
  Number of program headers:         65535 (106714)  
  Size of section headers:           64 (bytes)
  Number of section headers:         0 (106716)  
  Section header string table index: 65535 (106715)
 

Interestingly, I found the number of program headers printed as 65535 (106714).  What this tells us is that the ELF header has a special marker (0xFFFF = 65535).  When this marker is placed in the program and section header count members, it indicates that section_header[0] holds the actual count values.  The ELF header fields are limited to a 16-bit value, a max of 0xFFFF, and on the 2TB system the number of memory regions exceeded 65535 when the dump was captured.
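To make the marker handling concrete, here is a minimal C# sketch (an illustration, not the actual LLDB fix) that reads an ELF64 header and, when e_phnum holds the PN_XNUM marker (0xffff), pulls the real program header count from sh_info of section header 0, which is where readelf also looks:

using System;
using System.IO;

class ElfProgramHeaderCount
{
    static void Main(string[] args)
    {
        using (var reader = new BinaryReader(File.OpenRead(args[0])))
        {
            reader.BaseStream.Seek(40, SeekOrigin.Begin);   // e_shoff: file offset of the section headers
            ulong shoff = reader.ReadUInt64();

            reader.BaseStream.Seek(56, SeekOrigin.Begin);   // e_phnum: 16-bit program header count
            ushort phnum = reader.ReadUInt16();

            uint actualCount = phnum;
            if (phnum == 0xffff && shoff != 0)              // PN_XNUM: the real count lives elsewhere
            {
                // sh_info sits at offset 44 inside the 64-byte Elf64_Shdr for section 0.
                reader.BaseStream.Seek((long)shoff + 44, SeekOrigin.Begin);
                actualCount = reader.ReadUInt32();
            }

            Console.WriteLine("Program headers: " + actualCount);  // 106714 for the dump above
        }
    }
}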

You can use readelf -l or -S to see the program/section header information as well as dump the PT_LOAD entries.  Using this technique I was able to confirm that a PT_LOAD entry existed for the region on which the read was failing.  I could also deduce that the PT_LOAD entry for this region was beyond the 65535th entry in the core dump.

 LOAD           0x00000003ee293e28 0x00007f5e4bfb1000 0x0000000000000000
                0x0000000000004000 0x0000000000004000  R      1

ResolveRendezvousAddress FAILED – could not read from the info location: core file does not contain 0x7f5e4bfb4dc0


LOAD           0x00000003ee297e28 0x00007f5e4bfb5000 0x0000000000000000  

                0x0000000000001000 0x0000000000001000  RW     1

So why is LLDB indicating the entry is not part of its VM map when gdb and readelf confirm the PT_LOAD’s existence?

(gdb) x/100x 0x7f5e4bfb4dc0
0x7f5e4bfb4dc0  0x00000001      0x00000000      0x0000019b      0x00000000

With a bit more stepping I found that LLDB’s load-core logic loops over the program headers and builds up the VM map.   The loop is controlled by the ELF header value, in this case 65535.  LLDB didn’t have logic to detect the 0xFFFF signature and read section_header[0], at the ‘Start of section headers’ offset, in order to obtain the actual counts.  The gdb and readelf utilities read the proper value (106714) but LLDB used 65535.  Thus, LLDB was not reading all of the available PT_LOAD headers, and the region of memory we needed appears after the 65535th entry.

In parallel I made changes to an internal utility (future blog to come) and my esteemed colleague Eugene, who did the majority of the heavy lifting for dbgbridge, made changes to the LLDB code.  Once Eugene allowed LLDB to read all program and section headers we were able to load and properly debug the large core dump.  The change is now under review so it can be added back to the LLDB code base.

Along with this contribution we have uncovered a few other issues, provided possible fixes, and shared these with the LLDB community.  The lldb-dev community has been very responsive to engage and help address issues.  This is really a neat experience and I can truly see our use of LLDB driving more advancements for everyone.

Bob Dorr – Principal Software Engineer SQL Server

Key Points for Deploying and Configuring SQL Server on Microsoft Azure Virtual Machines, Part 7


Microsoft Japan Data Platform Tech Sales Team
Shimizu

Hello everyone. In Part 6 we covered enabling the AlwaysOn availability group and the key points for configuring it. In this final installment we cover the key points for configuring the availability group listener. In Part 5 we built a cluster using three virtual machines (two DB servers and one file server), so that configuration is assumed throughout this article.


What is an availability group listener?

An availability group listener is a virtual network name (VNN) that clients can connect to in order to reach the databases on the primary or secondary replica of an Always On availability group. With a listener, clients can connect to an availability replica without knowing the physical SQL Server instance name, and there is no need to change the connection string to reach the current primary replica. For details about availability group listeners, see:

 

Availability Group Listeners, Client Connectivity, and Application Failover (SQL Server)

https://msdn.microsoft.com/ja-jp/library/hh213417.aspx

 

Availability group listeners on Azure virtual machines

To configure an availability group listener for an AlwaysOn availability group on Azure virtual machines, you need a load balancer, a backend pool, a health probe, and a load balancing rule on the Azure side. The procedure for configuring the listener itself also differs slightly from an on-premises environment.

 

image

 

Because this article focuses on the procedure and key points for configuring the availability group listener, please see the following article for details such as why the load balancer, backend pool, health probe, and load balancing rule are required:

 

About the listener when building an AlwaysOn AG configuration on Azure (Azure 上に AlwaysOn AG 構成を構築する際のリスナーについて)

 

Creating the load balancer

First, open the Azure portal to create the load balancer. Click "+" in the upper left to open "New", then select "Networking" > "Load Balancer"; the following creation screen appears.

image_thumb[4]

Configure each item as follows and create the load balancer.

Item: Setting
Name: Name of the load balancer to create
Type: "Internal"
Virtual network: The same virtual network as the Azure virtual machines
Subnet: The same subnet as the Azure virtual machines
IP address assignment: "Static"
Private IP address: An IP address in the same virtual network as the Azure virtual machines that is not already in use
Subscription: The same subscription as the Azure virtual machines
Resource group: The same resource group as the Azure virtual machines
Location: The same location (region) as the Azure virtual machines

 

Add a backend pool, a health probe, and a load balancing rule to the load balancer you created.

 

Backend pool

In the Azure portal, click the load balancer you created to display the following screen, then click "Backend pools" > "Add".

image

The Add backend pool screen appears; click "Add a virtual machine".

image

Configure each item as follows and add the virtual machines.

Item: Setting
Name: Name of the backend pool to add
Availability set: The availability set that contains the Azure virtual machines
Virtual machines: The Azure virtual machines that make up the AlwaysOn availability group
(each Azure virtual machine must belong to the same availability set)

 

Health probe

Click the load balancer you created to display the following screen, then click "Health probes" > "Add".

image

The Add health probe screen appears.

image

Configure each item as follows and create the health probe.

Item: Setting
Name: Name of the health probe to add
Protocol: "TCP"
Port: Any free port
(this port must be open in each virtual machine's firewall)
Interval: The default value (5) here
Unhealthy threshold: The default value (2) here

Load balancing rule

Click the load balancer you created to display the following screen, then click "Load balancing rules" > "Add".

image

The Add load balancing rule screen appears.

image

Configure each item as follows and add the load balancing rule.

Item: Setting
Name: Name of the load balancing rule to add
Frontend IP address: Confirm that the IP address specified when the load balancer was created is selected
Protocol: "TCP"
Port: 1433
(specify the port clients will use to connect to SQL Server)
Backend port: Disabled
(because Floating IP (Direct Server Return), described below, will be enabled)
Session persistence: The default value ("None") here
Idle timeout: The default value (4) here
Floating IP (Direct Server Return): "Enabled"

 

Creating the availability group listener

Log on to the DB server where the primary replica SQL Server is running, using an administrator account. Start Failover Cluster Manager, expand the WSFC cluster used by the AlwaysOn availability group, and select Networks. Note the cluster network name that is displayed ("Cluster Network 1" in the example below); this name is used for the $ClusterNetworkName variable in the PowerShell script shown later.

image

Next, click "Roles". When the AlwaysOn availability group ("AG1" below) appears, right-click it and select "Add Resource" > "Client Access Point".

image

The Add Client Access Point screen appears; in "Name", enter the name of the availability group listener (the listener name specified here is the network name that client applications will use to connect to databases in the SQL Server availability group). Click "Next"; a confirmation screen appears; click "Next" again.

image

Finally, the summary screen appears; click "Finish". After the client access point is created it will be offline; do not bring it online at this point. Once the client access point has been created, click the "Resources" tab at the bottom of the screen and expand the client access point you created. Right-click the IP address resource and select "Properties".

image

When the IP address resource properties appear, note the "Name" (it is used for the $IPResourceName variable in the PowerShell script shown later). Then, under "IP Address", click "Static IP Address" and set the IP address (specify the same IP address as the load balancer). Confirm that "Enable NetBIOS for this address" is checked, and click "OK".

image

After setting the IP address, run the following PowerShell script with administrator privileges.

 

$ClusterNetworkName = "<cluster network name>" # the cluster network name noted earlier
$IPResourceName = "<IP address resource name>" # the name of the IP address resource
$ILBIP = "xxx.xxx.xxx.xxx" # the IP address specified when the load balancer was created
[int]$ProbePort = <port specified for the health probe>

 

Import-Module FailoverClusters

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$ILBIP";"ProbePort"=$ProbePort;"SubnetMask"="xxx.xxx.xxx.xxx";"Network"="$ClusterNetworkName";"EnableDhcp"=0}

 

After the script completes, open the properties of the client access point you created ("AG1-LSNR1" here) in Failover Cluster Manager, confirm that it has a dependency on the IP address resource, and bring the client access point online. Once the client access point is online, open the properties of the availability group ("AG1" here) and add a dependency on the client access point ("AG1-LSNR1" here).

image
Start SQL Server Management Studio (SSMS) and connect to the SQL Server Database Engine with a login that has administrator privileges. Expand "AlwaysOn High Availability" > "Availability Groups" > "<the availability group you created>" > "Availability Group Listeners". Right-click the listener you created and select "Properties". When the listener properties appear, enter the port specified in the load balancing rule in "Port" and click "OK".

image image

 

Connecting to SQL Server through the availability group listener

Start SSMS, specify the availability group listener name as the server to connect to, and connect with a login that has administrator privileges.

image

After connecting, open a query window and run the following query to confirm that the SQL Server you are actually connected to is the current primary replica.

 

SELECT @@SERVERNAME;

 

Perform a manual failover of the AlwaysOn availability group and run the query above again. Confirm that the SQL Server you are now connected to is the new primary replica.
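Applications connect through the listener in the same way. Below is a minimal C# sketch (not part of the original walkthrough) that connects to the listener configured above and runs the same query; the listener name AG1-LSNR1, port 1433, and Windows authentication are assumptions taken from this article, and MultiSubnetFailover=True is the setting generally recommended for availability group listeners so that reconnects after a failover complete quickly.

using System;
using System.Data.SqlClient;

class ListenerConnectionTest
{
    static void Main()
    {
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "AG1-LSNR1,1433",   // listener name and port from the steps above (assumed)
            InitialCatalog = "master",       // example database
            IntegratedSecurity = true,       // Windows authentication
            MultiSubnetFailover = true       // recommended for availability group listeners
        };

        using (var connection = new SqlConnection(builder.ConnectionString))
        using (var command = new SqlCommand("SELECT @@SERVERNAME;", connection))
        {
            connection.Open();
            // Should print the name of the current primary replica.
            Console.WriteLine(command.ExecuteScalar());
        }
    }
}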

 

In this series we covered the key points for deploying and configuring SQL Server on Microsoft Azure virtual machines. We plan to cover a different topic starting with the next post.

 

Related articles

Key Points for Deploying and Configuring SQL Server on Microsoft Azure Virtual Machines, Part 1

Key Points for Deploying and Configuring SQL Server on Microsoft Azure Virtual Machines, Part 2

Key Points for Deploying and Configuring SQL Server on Microsoft Azure Virtual Machines, Part 3

Key Points for Deploying and Configuring SQL Server on Microsoft Azure Virtual Machines, Part 4

Key Points for Deploying and Configuring SQL Server on Microsoft Azure Virtual Machines, Part 5

Key Points for Deploying and Configuring SQL Server on Microsoft Azure Virtual Machines, Part 6

About the listener when building an AlwaysOn AG configuration on Azure

 

Series: Key Points for Deploying and Configuring SQL Server on Microsoft Azure Virtual Machines

https://blogs.msdn.microsoft.com/dataplatjp/iaassql/

From one tech to another – Experiential learning is key


igor


Igor Izotov

Cloud Solution Architect
Microsoft Australia

 

“Where are you from?” That’s the question I’m often asked when people meet me. While my name offers a clue, it’s my accent that leaves people scratching their head. Despite leaving my homeland only five years ago, the way I speak has evolved into a melting pot of dialects.

One person recently suggested I was from South Africa; an hour later the same day I was asked whether my accent was Eastern European. Some say it’s my slight Northern twang that confuses people the most.

The truth is I am Russian, and when I arrived in Australia five or so years ago, my English was enough to get by. Sure, I knew how to order a beer or give directions to a taxi driver, but I certainly wasn’t fluent.

My first role on Australian soil was with Capgemini; it was also my first English-speaking environment. Being such a global organisation, many of those around me spoke in a wonderful variety of accents. Come to think of it, I should probably thank my then-boss who, originally from Newcastle upon Tyne, has influenced the way I speak today.

This brings me to my point about learning through experience. Not only does it offer the steepest learning curve (for my English it took a couple of months to go from ‘enough to get by’ to ‘okay’ and another few to get to ‘proficient, although a little culturally mixed-up’), but it also helps internalise knowledge, ensuring it stays in your head.

Here’s another example of why learning through experience is a powerful way to upskill. When I joined Microsoft not long ago, after nearly 4 years as an Amazon Cloud Architect, I used the same technique to come to terms with the Azure platform.

Instead of going through endless PowerPoints and numerous training videos I spent most of my evenings with my head stuck firmly in my laptop, coming up with problems and building stuff in Azure, spinning up VMs locally and in AWS to simulate Hybrid scenarios. And just like studying a new language, it was this hands-on experiential learning that helped me translate my existing knowledge and experience into my new Azure environment.

The ‘formal’ result of my tinkering – from having no prior Microsoft certifications to MCSE in under 3 months. And, being a geek, I couldn’t have wished for a better experience, this ‘work’ felt more like play.

On a more serious note, with innovation evolving at such a staggering pace, there is the very real risk of falling behind and becoming a laggard. It’s vital to stay ahead of the wave – not just on the wave – which is why it’s critical to find the approach to learning new things that works for you.

Getting certified and maintaining your certifications can be a good motivator for some and a good structured approach for others. But if I’m allowed to give you one word of advice: never do certifications for certifications’ sake, and don’t just study for the exam. Upskill yourself: tinker, try, fail and try again, and experiment until you know more than enough to pass the exam.

Azure is growing: 85% of the Fortune 500 already trust the Microsoft cloud, and the demand for Azure-skilled professionals is accelerating fast. Today, Microsoft is offering a special initiative to encourage partners to develop their Azure skills at a discounted rate.

There are three excellent offers that combine free access to our library of flexible online courses, as well as discounts on our industry-standard Certified Professional exams and Linux certification offered through the Linux Foundation.

This is a great opportunity to jump in and experience the power of the Azure platform for yourself, while developing skills that will only become more and more valuable as the Microsoft Cloud adoption rate accelerates.

And while you may not come away from the experience with a mixed-up accent like mine, it will certainly help you stay ahead of the wave. Time to roll up your sleeves, unleash your inner geek and have tons of fun along the way.

azure-training

The Dangers of Automating Office from Multiple Threads


Hello, this is Nakamura from the Office Development Support team.

This post covers what to watch out for when building an application that automates Office from multiple threads.

Because Office runs under the STA (Single-Threaded Apartment) model, calls from multiple threads can fail with errors, particularly while Office is executing a heavy operation, or when a large number of calls are made from multiple threads, for example in a loop.

This behavior is explained in the public documentation below, but to reach more developers, and to present it in a more approachable way with diagrams, we are covering it again in this article.

Title: Support for threads in Office
URL: https://msdn.microsoft.com/ja-jp/library/8sesy69e.aspx

 

Table of contents
1. What is STA?
2. Errors you can expect when calling from multiple threads
3. Workarounds

 

1. What is STA?

Process threading models are broadly divided into STA and MTA (Multi-Threaded Apartment); Office uses STA. (Applications driven mainly by user operations through a GUI often adopt STA, because keeping processing consistent is easier to implement than with MTA.)

In the STA model used by Office, COM object calls from other processes are all queued centrally as window messages. Inside the Office process, COM call requests are taken off the window message queue and the COM object's work is executed on the appropriate thread.

 

Figure 1. How STA works

Figure 1. How STA works

 

To put it more plainly with an analogy, think of the process as an apartment building and each thread as a room in that building.

An STA application is like a locked apartment building with a concierge. When another process (a visitor) comes to the building with business for the resident of a room (a COM object call), it first states its business to the concierge. All contact with the residents (the COM objects) goes through this concierge; a visitor can never go directly to a room.

And here is the point this article wants to make: there is only one concierge, so requests are relayed to the residents one at a time. When there are many visitors, a request may not be accepted right away, or the visitor may be told to come back later. That is exactly what is happening when the errors described in the next section occur.

Reference:
For details about STA, see the following public documentation.

Title: [OLE] Overview and mechanics of OLE threading models
URL: https://support.microsoft.com/ja-jp/kb/150777

 

2. Errors you can expect when calling from multiple threads

When multiple threads issue COM calls to Office, which supports the STA model, at the same time, a COMException such as 0x8001010A or 0x80010001 may occur. (These error codes are generic, so they can also be caused by other problems.)

Scenarios in which calls from multiple threads end in a COMException include, for example:

  •  Office is displaying a modal dialog and cannot accept other requests.
  •  Office is busy because an earlier operation is taking a long time, or because a large number of requests have been issued from multiple threads, for example in a loop.

As the figure in section 1 shows, the Office process queues COM calls, so issuing a COM call while Office is busy does not necessarily produce a COMException immediately. (Depending on Office's state, a COMException can also occur immediately.) However, when the window message queue used to coordinate with Office's internal threads exceeds its capacity (a maximum of 10,000 messages by default), the COM call request cannot be accepted and a COMException results.

To return to the apartment building analogy from section 1: if the concierge is busy with a high-priority visitor, or the waiting line is too long, new visitors can simply be turned away.

 

3. Workarounds

There are not many workarounds to consider for the COMException caused by this behavior.

One is to change the application making the COM calls so that it does not call Office COM objects from multiple threads. It is fine for the application itself to be multithreaded, but do not make COM calls into Office in the background; make them sequentially from a single dedicated thread.

The other is to implement retry logic that waits a while and tries again whenever a COMException occurs (a minimal sketch follows below).
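As one possible shape for that retry logic, here is a minimal C# sketch (an illustration, not official guidance). The two HRESULT constants are the error codes mentioned above (RPC_E_CALL_REJECTED and RPC_E_SERVERCALL_RETRYLATER); the attempt count and delay are arbitrary assumptions.

using System;
using System.Runtime.InteropServices;
using System.Threading;

static class OfficeCallRetry
{
    const uint RPC_E_CALL_REJECTED         = 0x80010001; // the call was rejected by the callee
    const uint RPC_E_SERVERCALL_RETRYLATER = 0x8001010A; // the application is busy

    // Runs 'call' (for example a lambda that wraps one Office COM call) and retries
    // with a delay when Office reports that it is busy.
    public static void Invoke(Action call, int maxAttempts = 5, int delayMs = 500)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                call();
                return;
            }
            catch (COMException ex) when (
                (uint)ex.ErrorCode == RPC_E_CALL_REJECTED ||
                (uint)ex.ErrorCode == RPC_E_SERVERCALL_RETRYLATER)
            {
                if (attempt >= maxAttempts) throw;  // give up and let the caller decide
                Thread.Sleep(delayMs);              // wait a little before retrying
            }
        }
    }
}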

However, as explained in a previous post, Office opens multiple files in a single process, so you cannot completely prevent other requests, such as user operations, from arriving while your program is doing its work. If the Office application is busy because of a user operation, a COM call from your application will likewise not be accepted. And in the modal dialog example, when the dialog gets closed is up to the user, so it is hard to judge how long you should keep retrying.

What this article asks is, first, that you take the STA processing model into account so that no problems occur when looking only at the calls made by your own program. Beyond that, for errors that still occur in real operation because of user actions or interactions with other programs, consider operational workarounds or warning the user with a message dialog.

 

Reference:
The fact that Office is an STA-model application, as described in this article, is also one of the reasons that server-side automation of Office is not supported. It is mentioned in the following document, provided here for reference.

Title: Considerations for the server-side Automation of Office
URL: https://support.microsoft.com/ja-jp/kb/257757
Relevant section: Reentrancy and scalability

 

That is all for this post.

 

The content of this information (including attachments and links) is current as of the date of writing and is subject to change without notice.

New Features in Microsoft Dynamics 365: New Applications: The App Designer


Hello everyone.

In the previous article we introduced the new applications in Dynamics 365. If you have not read it yet, please take a look.

New Features in Microsoft Dynamics 365: Overview
New Features in Microsoft Dynamics 365: New Applications: Overview

This time, let's look at applications in more detail:

– App URLs
– The app designer

App URLs

Users can go directly to a specific application.
You access an app by specifying its appid in the Dynamics 365 URL.
The AppID is automatically assigned as a unique GUID.
You can also configure an easy-to-read name prefix to use instead of the GUID.

Now let's look at the URL of the Sales application.

1. Sign in to Dynamics 365.

2. Click [Dynamics 365] > [Sales].

image

3. Check the URL. The value of appid is the GUID of the Sales application.

image

The app designer

The app designer is a tool for configuring everything from basic information such as the application's name, prefix, and icon,
to the components shown in the application, such as the site map, dashboards, and entities.

Now let's open the Sales application in the app designer.

1. Click [Settings] > [Customizations] > [Customize the System].

2. Click [Apps].

image

3. Double-click [Sales].

image

4. The app designer appears.

image

App properties

1. Click [Properties] in the right pane.

image

2. You can review the configured name, description, icon, and prefix.

image

Site map

1. Click the site map.

image

2. The site map designer appears. Here you can review and change the structure of the app's site map.
The site map designer will be covered in a separate article.

image

Dashboards

1. Click Dashboards.

image

2. The dashboards enabled for the app are listed. The right pane also shows the list of dashboards.

image

3. In the right pane, select which dashboards are enabled.

image

Business process flows

1. Click Business process flows.

image

2. The business process flows enabled for the app are listed. The right pane also shows the list.

image

3. In the right pane, select which business process flows are enabled.

image

Entities

1. The entity view area shows the forms, views, and charts of the entities included in the app.

image

2. Click Forms.

image

3. The entity's forms appear in the right pane; select which forms to enable.
If none are selected, all forms are enabled.

image

4. Next, select Views.

image

5. The entity's views appear in the right pane; select which views to enable.
If none are selected, all views are enabled.

image

6. Next, select Charts.

image

7. The entity's charts appear in the right pane; select which charts to enable.
If none are selected, all charts are enabled.

image

Summary

Next time we will introduce the site map designer.

– Takaya Kawano, Premier Field Engineering

* The content of this information (including attachments and links) is current as of the date of writing and is subject to change without notice.


Common error ‘Microsoft.Bot.Builder.Internals.Fibers.InvalidNeedException’


Bot Error

Using the Microsoft Bot Framework streamlines the development of bots and makes the process of integrating a bot with common platforms and channels easier.
Here is a good example of building a Bot, hosting it on a Xamarin phone App and making use of the Microsoft cognitive API LUIS to provide a more human experience.
One of the common issues developers run into is getting the following error from the Bot:
Exception thrown: ‘Microsoft.Bot.Builder.Internals.Fibers.InvalidNeedException’ in mscorlib.dll
invalid need: expected Wait, have Done
Exception type: “Microsoft.Bot.Builder.Internals.Fibers.InvalidNeedException”

This error can be annoying because it doesn’t go away even if you restart or redeploy your Bot Web API project.
Before we talk about how you can fix it, I would like to briefly mention how to avoid it.

Avoiding the error

Whenever you open a conversation and program your Bot to send response messages like this for example:

 await context.PostAsync("I'm sorry I don't understand. Can you clarify please?");

You need to make sure you also call context.Wait or context.Done afterwards, like this:

 context.Done(true);
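For context, here is a minimal sketch of a Bot Builder SDK v3 dialog (reusing the ShoppingDialog name that appears in the fix below) showing where the Wait/Done calls belong; the conversation logic itself is only an assumption for illustration.

using System;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;

[Serializable]
public class ShoppingDialog : IDialog<object>
{
    public Task StartAsync(IDialogContext context)
    {
        // Tell the dialog stack which method handles the next incoming message.
        context.Wait(MessageReceivedAsync);
        return Task.CompletedTask;
    }

    private async Task MessageReceivedAsync(IDialogContext context, IAwaitable<IMessageActivity> result)
    {
        var activity = await result;
        await context.PostAsync("I'm sorry I don't understand. Can you clarify please?");

        // Always leave the dialog in a defined state: either wait for the next
        // message, or end the dialog with context.Done(true). Doing neither is
        // what produces the InvalidNeedException.
        context.Wait(MessageReceivedAsync);
    }
}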

Fixing the error

Because your conversation hasn’t been ended (it is sort of in a hung state), it cannot receive more messages (activities) from the user. So in order to resolve this you need to access the conversation state store and flush it. The way I achieved this is as follows:

 StateClient stateClient = activity.GetStateClient();
 await stateClient.BotState.DeleteStateForUserAsync(activity.ChannelId, activity.From.Id);

As you can see above, this results in deleting the conversation state, so be aware of that.
The code listed above can be placed inside the Post method implementation of your Messages controller. We can provide better visibility to the user and handle the error as follows:

 case ActivityTypes.Message:
     try
     {
         await Conversation.SendAsync(activity, () => new ShoppingDialog());
     }
     catch (Exception Ex)
     {
         if (Ex.GetType() == typeof(InvalidNeedException))
         {
             ConnectorClient connector = new ConnectorClient(new Uri(activity.ServiceUrl));
             Activity reply = activity.CreateReply("Sorry, I'm having some difficulties here. I have to reboot myself. Lets start over.");
             await connector.Conversations.ReplyToActivityAsync(reply);
             StateClient stateClient = activity.GetStateClient();
             await stateClient.BotState.DeleteStateForUserAsync(activity.ChannelId, activity.From.Id);
         }
     }
     break;

Additional resources

Make business recommendations based on business intelligence with Dynamics 365


Applies to: Dynamics 365

 

The fall 2016 release of Dynamics 365 introduced a powerful new feature called business recommendations, which enables a business analyst or system customizer to guide users to optimal data based on intelligence they have about their business.

 

Recommendation action in the Components tab

 

Business recommendations work like the Show Error action by adding an indicator next to a form field. When the user clicks the indicator, they see a bubble with a recommendation that tells them how to fill out the field based on other data on the form. You can associate a Portable Business Logic (PBL) action with the business recommendation. That PBL action can automate setting the value to that or other fields, making the experience much more streamlined for the user.

 

Drag-and-drop designer for business rules and recommendations

 

In Dynamics 365, you can use a new drag-and-drop designer to create business rules and recommendations.

Example: Product Selection

A great example is product selection. Suppose the user is running an Insurance Sales business process and is at the stage where they are discussing with their customer what insurance products they want to buy. Business analysts and customizers can check the performance and profile of past deals and establish, for example, that:

  • Married customers who buy auto insurance also usually buy home or renter’s insurance.
  • Married customers with children who buy auto insurance are usually willing to buy extra personal injury protection for their children.
  • Single, young customers who buy auto insurance are usually willing to buy extra liability insurance due to their inexperience with driving.

To set up the last example:

  1. Open the form editor for the Opportunity form.
  2. Add a new business rule.
  3. Set the branch condition for the business rule to:

IF Marital Status = single
AND Age <= 25
AND Insurance Type = Auto

  4. In the designer window, drag in a Recommendation component with the text “Young, single drivers are still learning to drive and are afraid of accidents. They are usually willing to pay more for extra liability insurance, so make sure to offer it.”
  5. For the associated action, drag in two Set Value components with the following values:
Set Add Liability Insurance? to value Yes

Set Liability Insurance Amount to value $2,500
  6. Save and activate the rule. Whenever the condition is met, the business recommendation will prompt the sales rep to take advantage of this type of opportunity. Even better—the fields will automatically be filled out for them in the form.

Programmability

Business recommendations are exposed through the Client API just like error messages. To add a business recommendation to a field, add JavaScript like the following to your form's web resources:

First, create a Notification object:


var myNotification = {
    messages: ['Recommendation Text'],
    notificationLevel: "RECOMMENDATION",
    uniqueId: "unique_id", // Pick a unique id; you will need it to clear the notification later, if necessary
    actions: [{
        message: "Recommendation Action Text",
        actions: [function () {
            // This action will execute as the button on the bubble is pressed
        }]
    }]
}

Next, get a control and add the notification, as such:



Xrm.Page.ui.controls.getByName("control_name").addNotification(myNotification);

The addNotification function returns true or false depending on whether the notification was successfully applied. To clear the bubble, call the following:


Xrm.Page.ui.controls.getByName("control_name").clearNotification("unique_id");

 


 

 

Carlos Mendonça | LinkedIn

Program Manager

Microsoft Dynamics 365 team

 

 

Upgrading the Surface Studio drives to a SATA SSD 2TB and a M.2 NVMe SSD 1TB


Disclaimer #1: Do it at your own risk

Upgrading the drives of a Surface Studio might not be supported. (I’ll update the post if I find out). In any case, do this procedure at your own risk.

Also, I might or might not be able to provide support if you try to follow the same procedure that I followed, as this is not part of my job at the .NET team but a fun project I did over the weekend.

But if you have any question, post it at the end of my blog post and I’ll try to answer, ok?

Disclaimer #2: It is not so long! 😉

Really, the actual process is not as long as this blog post makes it appear…

I wanted to have a very detailed blog post about it as it seems that not many people have done it. But, for instance, the process of doing it was a lot shorter than writing this blog post… Winking smile

The project for the weekend!: Upgrade my Surface Studio with 2 SSD drives

I got my new and impressive Surface Studio a few weeks ago. It really is a beast while being also a beautiful machine:

– 6th Gen Intel Core i7

– 32 Gb memory Ram

– NVIDIA GeForce GTX 980M 4GB GPU GDDR5 memory

– Rapid hybrid drive: 2TB HDD with 128GB SSD cache

Here’s my Surface Studio: Smile

image

However, that “Rapid hybrid drive” is something I don’t really like. To be clear, it is NOT a “solid-state hybrid hard drive” where everything lives within a single drive (which I wouldn’t like either).

The Surface Studio “Rapid hybrid drive” is more a “dual-drive hybrid system” based on a regular 2TB SATA 7200 rpm hard drive which is using a 128GB SSD M.2 as cache, implemented with the “Intel Rapid Storage Technology (Intel RST) RAID Driver”.

Sure, Intel RST improves the performance of a regular hard drive by using a smaller SSD as cache, hence the name “hybrid”, but I wanted real SSDs, “as big as possible”, for this machine, which is great in every aspect except for its drive specifications.

Here you can see the original drives that I removed from my Surface studio:

image

It is a regular 2TB SEAGATE hard drive (7,200 rpm) plus a TOSHIBA 128GB M.2 SSD working as a hybrid drive thanks to the mentioned “Intel RST”. However, the Surface team is using a special version of the Intel RST drivers which allows the use of the full 128GB M.2 SSD, and with Microsoft’s special drivers you don’t have any tool to enable/disable or investigate how that “hybrid drive” is put together.

And these are the new SSDs I installed successfully! Big and super fast SSDs that don’t need any “hybrid approach” at all! Smile

image

A SAMSUNG 850 PRO SATA 2TB SSD plus a SAMSUNG 960 PRO NVMe M.2 1TB SSD. Super cool SSDs, I can tell.. Winking smile

Gotchas if using the Intel Rapid Storage Technology (Intel RST)

In my upgraded configuration with the 2 SSDs I’m not using Intel RST. But I wanted to mention the following issue, just so anyone is aware of it.

In the Surface Studio original configuration, If you install Intel’s drivers for Intel RST from the INTEL web site here (wait, DO NOT do it just yet), you can see what’s going on under the covers, BUT, AND THIS IS AN IMPORTANT WARNING for anyone who is not going to remove the original hard drive configuration. DO NOT INSTALL INTEL’s RST drivers unless you are certain that you want to uninstall/remove the original configuration. If you do, you won’t be able to set it up again with the 128GB M.2 SSD as cache because currently (Jan. 2017) the regular Intel RST drivers only allow you to use 64GB of SSD, as shown in the Intel RST management screenshot below which I took from my Surface Studio (even when I had a larger M.2 SSD installed):

image

So if you intend to keep your Microsoft hybrid drive, I recommend not installing Intel’s drivers (even just to see the RST configuration) because it probably updates different drivers. You would need to re-install everything with a Surface Recovery Image. So, my advice: do not install the Intel drivers unless you are going to get rid of the hybrid drive, as I did.

SOFTWARE PREPARATION STEPS

Disable BitLocker in the Surface Studio original hard drive

IMPORTANT: The original hard drive of the Surface Studio might be encrypted with BitLocker. If you’d like to access your current hard drive in the future (for example, by setting it up in an external USB 3.0 hard drive case), you’ll first need to make sure that it is not encrypted.

Make sure that your drive is configured like in the following screenshot. If it is not, disable BitLocker from that same screen.

image

 

Create your Recovery Drive with the SURFACE STUDIO Windows 10 image

It is better if you create your USB recovery drive before tearing the Surface Studio down; otherwise you’ll need a different computer for creating the recovery drive.
I also think it is better to generate that USB drive from the same SURFACE STUDIO and to test that the generated USB actually works, and that you are able to boot the SURFACE STUDIO from that USB drive, before opening the SURFACE STUDIO.

So, here are the steps to create a Recovery Drive with the SURFACE STUDIO Windows 10 Image.

A. First of all, download the Surface Studio RECOVERY IMAGE from here:
https://www.microsoft.com/surface/en-us/support/warranty-service-and-recovery/downloadablerecoveryimage

A.1 Register your Surface Studio under your Microsoft account or login if you already have that done.

Once you have the Surface Studio registered under your Microsoft Account, when you login in this page, you’ll be able to select the “Surface Studio”, like:

image

Then, you are able to press the “Continue” button

A.2 Download the actual Surface Studio Windows 10 image (.ZIP file)

image

A.3 Create your Recovery USB Drive with the Windows 10 Image files within it

You should now see all the steps on the download page, which are similar to the following:

After you download the Surface Studio Windows 10 IMAGE, you can then create the actual Recovery USB Drive.
Important
Creating a recovery drive will erase everything that’s stored on your USB drive. Make sure to move anything you want to keep to another storage device before using your USB to create a recovery drive.
Step 1:
Connect your USB drive to your Surface Studio (use a USB 3.0 drive if you can). The USB drive should be at least 16 GB.
Step 2:
In the search box on the taskbar, type recovery, and then select Create a recovery drive. You might be asked to enter an admin password or confirm your choice.
Step 3:
In the User Account Control dialog box, select Yes.
Step 4:
Make sure Back up system files to the recovery drive isn’t selected, and then select Next.
Step 5:
Select your USB drive, and then select Next > Create. A number of files need to be copied to the recovery drive, so this might take a while.
Step 6:
When it’s done, select Finish.
Step 7:
Go back to the recovery image .zip file that you downloaded and open it.
Step 8:
Drag the files from the recovery image folder to the USB recovery drive you created. Then choose to replace the files in the destination.

Now you have your recovery drive ready, including the Windows 10 image files with all the Surface Studio drivers, etc.

Test that the USB Recovery drive works OK

Before tearing down the Surface Studio, I’d recommend that you test the Recovery Drive you just created by simply trying to boot the Surface Studio from it (without re-installing Windows).
Test that it boots OK with these steps:

Step 1: Shut down your Surface.
Step 2: Insert the bootable USB drive into the USB port on your Surface.
Step 3: Press and hold the volume-down button on Surface.
Step 4: While holding down the volume-down button, press and release the power button.
Step 5: The Surface Studio should boot from the USB drive and prompt a few options regarding the language to install, with a “CHOOSE THE LANGUAGE” menu, like this:

At this point, you can exit or turn off the Surface Studio, as it looks like the USB drive will work.

Prepare your SATA SSD drive (Format it through an external USB case)

In my case, I mean to format the Samsung 850 PRO SATA SSD from any external USB 3.0 case.
This is a simple step that I advise taking. I didn’t do it, and therefore I had a few other issues when trying to install Windows from the recovery drive, which I explain at the end of the post.
But it looks like the Recovery Drive won’t work properly if the SSD drive doesn’t already have a partition created and formatted (an available drive).
Therefore, if you are able, set the SATA SSD into an external USB 3.0 case, connect it to any computer through any USB 3.0 port, create a regular volume and format it. Once you see it as a disk from Windows Explorer, it is ready.
About the M.2 socket SSD, you don’t need to do anything, as that drive will be setup after Windows 10 is installed by using the Intel RST drivers.

 

HARDWARE STEPS

Surface Studio Teardown and drives upgrade

I based my upgrade procedure on the following pages and videos. However, they do not provide all the details I ran into in my experience, especially in regards to the software/Windows installation. That’s why I thought it would be good to create a detailed blog post about it.

https://www.ifixit.com/Teardown/Microsoft+Surface+Studio+Teardown/74448
https://www.ifixit.com/Guide/Surface+Studio+2.5-Inch+Hard+Drive+Replacement/75605
https://www.ifixit.com/Guide/Surface+Studio+M.2+SSD+Replacement/75600

https://www.youtube.com/watch?v=smVoBtzeP1A

Step 0: Requirements

I couldn’t have done it if I didn’t have the following tools:

a. iFixIt set of tools with torxs screw drivers, etc.

image  image

I got it here last year for other “teardown” operations that I did:

https://www.ifixit.com/Store/Parts/Classic-Pro-Tech-Toolkit-/IF145-072

Specifically, you’ll need a Torx #8 screwdriver, a Torx #6 screwdriver and a #5 mounting post driver, as shown here:

image

 

b. Suction Cups (these might not be needed, but they help a bit)

image

c. Project tray and Magnetic project pad: These are not strictly needed but I find them very helpful in every complex teardown project I do. I don’t want to lose any torx.. Winking smile

image

Step 1: Position the Surface Studio

Tip the Surface Studio onto its back and inspect the base, hoping to find your way in..

image

Step 2: Remove the round rubber feet

At each corner you find a round rubber foot concealing a Torx screw. Remove all of them.

This step was NOT as easy as it looks… Do it carefully so you don’t scratch the base of your Surface Studio.

image

Step 3: Remove the 4 Torx screws

At each corner, remove the Torx screws with a #8 Torx screwdriver, as shown below.

image

Step 4: Remove the cover

As recommended, I used suction cups to help yank it free, although you might be able to do it with your bare hands with some more work.

You can find these buddies at iFixit or a lot cheaper at Amazon.

https://www.ifixit.com/Store/Tools/Heavy-Duty-Suction-Cups-Pair/IF145-023

https://www.amazon.com/Heavy-Duty-Suction-Screen-Repair/dp/B01M8N0HY3/ref=sr_1_6?ie=UTF8&qid=1485740741&sr=8-6&keywords=Suction%2BCups%2Bheavy&th=1

I have both, and apart from the external packaging they work the same way, so… it is up to you…

Here’s the picture when I did it.

image image

 

Here’s a photo of how you see it once opened:

image

This reminds me of a BMW engine. You don’t initially see anything because it is covered…  Let’s remove stuff so we can start seeing the guts… Winking smile

I can see the power supply on the left and two Delta-made exhaust fans. They’re sized quite differently: the bigger one is a dedicated fan for the GPU, and the second, smaller one is for the CPU.

Here’s an explanation (from ifixit) about all the torx screws that need to be removed:

image

Step 4: Remove the fans and the midframe

 

There’s a strict and slightly perilous order of operations here. First out are two fans, but they remain anchored by wires with hidden leads.

4.1. Remove the 6 torxs (#8) and leave it as shown below:

image

4.2 So, I first removed the below and smaller fan, then the upper and bigger fan.

The thing is that, as mentioned, they are anchored by wires with hidden leads which you won’t see until you remove the midframe.

So, as a temporal step, I put a small box under the fans to hold them up, as shown below:

image

4.3 That midframe will have to come out before we can proceed further. So, remove the eight Torx screws (also #8) and then extract the midframe as shown below.

image

Here’s another picture from ifixit, although I prefer not to have the fans holding their weight just by their cables..:

image

 

4.4 You can now see the two black cables connected to each fan, plus a third multi-color cable (it must be for a speaker) running from the midframe to a cable on the motherboard.

image

4.5 Disconnect first the multi-color cable from the black cable with a connector coming from the motherboard.

IMPORTANT: Notice that, at least in my case, that connector is held under that piece of metal. It is important to put it back the same way when you re-assemble the machine, so the cable doesn’t touch the fan.

image

For now, just take it out and disconnect the multi-color cable so you can pass the fans through the hole and completely remove the midframe from our way, as here:

image

As we lift away the midframe, it brings an attached speaker out with it.

So you should see the Surface Studio now like below:

image

4.6. Disconnect the fans’ cables. IMPORTANT: Those connectors are not super easy to disconnect, so be patient, because if you break the connectors that are attached to the motherboard, you’ll be in trouble. So take it easy and do it slowly.

image

Now, you should have the fans out of the way, as well as the midframe that we removed previously, too:

image

 

Step 5: Stare at the Surface Studio’s internals…

This is now super interesting… It is probably the best moment when tearing it down, except the moment when you test the computer after upgrading and it works.. of course.. Winking smile

image

5.1 You can see the M.2 SSD drive in the upper right corner, the regular hard drive on the left, and the places where the CPU (Intel i7) and the NVIDIA GPU sit, as highlighted below. I upgraded the 128GB M.2 NVMe SSD to a much faster and larger 1TB M.2 SSD, and replaced the regular hard drive with a 2TB SATA SSD.

image

 

Step 5: Remove the M.2 SSD

With the midframe removed, the M.2 SSD is now accessible

5.1 Remove the torx screw holding the M.2 SSD to the motherboard. In this case it is a torx screw #6, so you’ll need to change the torx screw driver tool:

image

5.2 Remove the actual M.2 SSD

Then, just remove it as in the picture below (which is not mine, which is why it looks slightly different; it appears to be a 64GB M.2 SSD), but you remove it the same way.
IMPORTANT: Pull the SSD straight back. Do not pull the SSD upward or you risk damaging the M.2 socket.

image

So far, so good. But removing the hard drive is not so simple, as it is placed under the heat sink (the thermal system).

Step 6: Remove the heat sink

Therefore, you need to extract the shiny heat sink. The heat sink offers quite a bit of cooling power in a tiny package.
Heat pipes coming off of each processor (CPU and GPU) flow out to exhaust radiators, each of which has a dedicated fan to blow all that hot air out of the system.

6.1 Remove the seven Torx screws (#8) and the two mounting posts highlighted below:

image

The 7 Torx screws are pretty similar to the previous ones, although 3 of them are removable and 4 of them will stay on the heat sink, just loose.

However, the other two “mounting posts” need a different tool.

6.2 Remove the two mounting posts

Once you use the right tool, it is really simple to remove those…

image image
image
image

 

6.3 Remove the actual heat sink

You just need to pull it out, carefully. It might be a little bit stuck because of the thermal compound used on the CPU and GPU, but it should be easy. Just pull carefully from the heat sink and you’ll get it out, as shown below:

image

You can see the thermal compound, somewhat dry, on top of the GPU and CPU and on the matching areas of the heat sink.

In my opinion, they put too much thermal compound on top of the GPU, but it looks like it doesn’t affect the motherboard. It must not be invasive… Smile

We’ll deal with that (cleaning the thermal compound) after removing the hard drive.

Here’s the other side of the thermal sink. Pretty cool!  Winking smile

image

Step 7: Remove the Hard Drive

As mentioned, there’s a standard SATA hard drive connector in here—and attached to it, a standard SATA hard drive.
It took a little work to get here, but you are indeed able to swap a fancy SSD here instead of the oldish hard drive. Smile

I actually cleaned the thermal compound before removing the hard drive, but I think it would have been a bit easier to get the hard drive out of the way first, so that’s how I recommend doing it below.

image

7.1. Remove the 3 remaining Torx screws so you can take the hard drive out.

image

 

7.2 Remove the hard drive and detach it from the cable’s connector

Remove the hard drive and disconnect the SATA/SATA power cable

image image

 

 

Here’s the hard drive detached from the Surface Studio:

image  image

 

Before going any further and putting in the new components, you might have noticed that you need to clean up the thermal compound…

Step 8: Clean the thermal compound from the CPU, GPU and thermal sink

You need to clean it up so it is ready for fresh thermal compound before the heat sink goes back in.

You can see it has a bunch of thermal compound all over it:

GPU
image
CPU (Intel i7)
image

 

For cleaning this up, I used the “ArctiClean Thermal Material Remover” and the “ArctiClean Thermal Material Purifier”, but plain 99% alcohol might be enough as well.

For actually cleaning it up, the best things to use are coffee filters.. Smile

image

image

You can see how clean they are..

The GPU was even reflecting the “W” of “Seahawks” from my T-shirt!!

I realized after taking the picture as I didn’t know why there was that W on the GPU now… Only in the picture reflection… Winking smile

image

Go Hawks! Smile

And now, the fun part about putting the new stuff in the Surface Studio!! Wooohooo!

Step 9: Install the new M.2 SSD and SATA SSD

So, here are the two new buddies that are going to improve my Surface Studio!

They really are “black belts”. We’re not talking about SAMSUNG EVO SSDs but SAMSUNG PRO SSDs. A lot faster.. Smile

image image

 

9.1 Remove the drive mounting brackets from the old hard drive and set them up on the new SATA SSD, like here:

image              image

 

9.2 Connect the SATA cable to the SATA SSD:

image

9.3 Install the new SATA SSD on the motherboard by attaching it with the 3 torx screws.

Remember: do NOT put the mounting posts back just yet, as they also have to hold the heat sink afterwards.

image

9.4 Set the M.2 SSD up into its socket and secure it with the torx screw (#6 size) that you should have, so it looks like the following:

image

 

Well, well, this is looking pretty good with my new SAMSUNG PRO SSDs! Super cool! Smile

image

 

 

Apply thermal compound to the GPU and CPU areas.

I used Arctic Silver 5 which is pretty popular.

image

I didn’t take a picture of the way I applied the thermal paste, but you can take a look at the many guides on the internet on how to apply it:

Techniques to apply thermal paste/compound: https://www.pugetsystems.com/labs/articles/Thermal-Paste-Application-Techniques-170/

 

Step 10: Set the Heat sink back

Do it carefully, as you are placing it onto the fresh thermal compound…

Then, set the torx screws back, so it looks like below:

image

 

Step 11: Connect the fans back

Now, we’re just doing “reverse steps”, easy-peasy!! Smile

Remember the order of the cables: the larger fan connects to the upper connector,

the smaller one to the lower connector.

In any case, the fans’ connectors are different sizes, so you cannot get it wrong.

image

 

Step 12: Position the midframe and connect its multicolor cable (And place it right!)

12.1 – Pass the fans through the hole and just position the midframe.

image

 

12.2 IMPORTANT: Connect the multicolor cable (speaker’s cable) and set the cable under the metal piece

CAUTION: After connecting the multi-color cable to its related connector and cable, remember to route the cable underneath the metal piece over the motherboard.

If you don’t do this, the cable could touch the fan, make noise and potentially even break the fan. I guess that’s why it was originally placed in that position.

a. Connect
image
b. Place it right!
image

 

 

Step 13: Set the fans back into the Surface Studio

Easy steps, just in reverse order.

image

 

Step 14: Put the cover back and the rubber feet

Again, easy steps, just in reverse order.

image

 

And there you go! We are done with the HARDWARE part! Smile

 

SOFTWARE STEPS

 

Step 15: Install Windows 10 through the Recovery Drive with the Surface Studio Image (Windows 10 plus drivers)

 

Boot from the USB Recovery Drive with the Image files

15.1: Insert the bootable USB drive into the USB port on your Surface.
15.2: Press and hold the volume-down button on Surface.
15.3: While holding down the volume-down button, press and release the power button.

You should see a screen like the following:

image

I selected “English (United States)”

15.4 Enter into the Troubleshoot option:

image

 

15.5: Go into the “Recover from a drive” option:

image

 

15.6: Select “Just Remove my files”

image

 

15.7: The Recovery Process will be ready to start. Hit the “Recover” button:

image

 

15.8: TPM change warning. Since everything is being installed from scratch, the TPM is also being cleared. Anything encrypted with the previous TPM won’t be accessible any more.

That’s why I recommended at the beginning of this procedure that you disable BitLocker on the original hard drive, just in case you want to access it in the future.

image

 

15.9 Windows setup steps:

Now you’ll get the typical Windows 10 setup steps, like:

image

image

 

Until you get Windows installed.

Successful Windows 10 installation with the new SSD drives

And finally, below you can see that my Surface Studio has the two SSDs installed and working, and all the drivers seem to be properly installed. Woohoo! Smile

image

 

 

Issues and Troubleshooting

“No available drive” when installing from the USB Recovery Drive with the Surface Studio Image

The process that I explained above was not exactly the experience that I had, but what it should have been..

The thing is that I installed a brand new SATA SSD, and when booting from the USB Recovery Drive it was not able to “see” any drive; it was stuck in a loop saying “No available drive”…

WORKAROUND

The way I solved it was by installing a regular Windows 10 from a regular Windows 10 setup boot USB drive, because the regular Windows setup is able to see the drives and format them at setup time. However, because that regular Windows 10 setup was not a SURFACE STUDIO image, it didn’t have many of the drivers installed, etc., so it was only a temporary installation to make sure that the computer was certainly “seeing” my new SSD drives.

Before starting the setup again with the Recovery Drive holding the original Surface Studio image, I also took advantage of that “temporary Windows 10 installation” and installed the “Intel Rapid Storage Technology (Intel RST)” drivers from the INTEL web site here, so I was able to see the M.2 SSD as well, as in the screenshots below.

I made sure that I was NOT using my M.2 SSD as cache, i.e. not using “Acceleration using SSD”; since my main drive is also an SSD it doesn’t make sense to use this mechanism, and I prefer to have clean SSDs installed.

image

But, using INTEL’s software I made sure that the SSD drives were both working properly:

image

image

 

Also, I didn’t want the “Link Power Management” to be enabled…

image

I made sure that both drives were working:

Drives_in_Computer_Management Drives_in_Disk_Management Drives_in_Windows_Explorer

 

After doing quite a few other tests, I started the installation process over with the original Recovery Drive with the SURFACE STUDIO image (with all the drivers), and it did finish successfully that second time. All in all, it was just a small issue that I solved pretty quickly.

All in all, everything is working great and super fast!

Enjoy! Smile

—-

Other interesting links and discussions:

How do you install Windows to the PCIe SSD?
https://www.ifixit.com/Answers/View/369055/How+do+you+install+Windows+to+the+PCIe+SSD

Other links:
https://www.ifixit.com/Device/microsoft_surface_studio
https://www.ifixit.com/Teardown/Microsoft+Surface+Studio+Teardown/74448
https://www.ifixit.com/Guide/Surface+Studio+M.2+SSD+Replacement/75600
https://www.ifixit.com/Guide/Surface+Studio+2.5-Inch+Hard+Drive+Replacement/75605
https://www.ifixit.com/Answers/View/356944/Can+I+upgrade+Surface+Studio+hard+drive+to+full+SSD

How to upgrade your Surface Studio and make it much faster (video)


http://www.theverge.com/2016/11/29/13775320/microsoft-surface-studio-ifixit-teardown

 

What .NET Developers Ought to Know to Start in 2017


 

This article is a translation of a post on the Scott Hanselman blog at Microsoft headquarters.
[Original article] What .NET Developers ought to know to start in 2017, published January 11, 2017

 

.NET Components

Quite a while ago I wrote a blog post titled "What .NET Developers ought to know". In hindsight, laying it out as a list of questions may have been a mistake, because recruiters and others ended up using it as a litmus test.

Because there is a huge amount of information about .NET, Jon Galloway and I put together a list of terms and resources intended to serve as a handy study guide and glossary.

Your first reaction when you start may be "That's too much; this is why I don't like .NET", but every platform presents a similar wall (building a vocabulary) when you first approach it. There is no language or computing ecosystem without three-letter acronyms. Don't overthink it; start slowly with the things you need to know. You decide how deep to go, and you don't need to know everything. Just keep in mind that whatever layer, label, or program you are working with right now, there is probably something underneath it that you don't know yet.

Underline the items you need to know. Once you understand those, take a look at the rest. Some people want to go deep; others don't. Think about whether you want to learn from the ground up or from the user's point of view, and have fun doing it your own way.

First, .NET and C#: you can learn them online at https://dot.net. You can learn F# at http://www.tryfsharp.org. Both sites let you write code in the browser without downloading anything.

Get .NET Core and Visual Studio Code from https://dot.net and start reading!

知っておくべきこと (必須)

  • .NET is made up of several major components. Let's start with the runtimes and the languages.
  • There are three major runtimes:
    • .NET Framework – lets you build mobile, desktop, and web applications that run on Windows PCs, devices, and servers.
    • .NET Core – provides a fast, modular platform for creating server applications that run on Windows, Linux, and Mac.
    • Mono for Xamarin – Xamarin brings .NET to iOS and Android, letting you reuse your skills and code while getting access to native APIs and performance. Mono is an open source .NET that was created before Microsoft acquired Xamarin. Mono supports the .NET Standard as another open source, flexible .NET runtime, and it is also used by the Unity game development environment.
  • The major programming languages are:
    • C# – a simple, powerful, type-safe, object-oriented programming language with the expressiveness and elegance of C-style languages. Anyone familiar with C or a similar language should have little trouble picking it up. Learn more in the C# Guide, or try it right in your browser at https://dot.net. (A minimal console example follows this list.)
    • F# – a cross-platform, functional-first programming language that also supports traditional object-oriented and imperative programming. Learn more in the F# Guide, or try it in your browser at http://www.tryfsharp.org.
    • Visual Basic – an easy-to-learn language for building a wide variety of applications that run on .NET. I myself started with VB many years ago.
  • Where to get them
  • After the runtimes and languages come the platforms and frameworks.
    • Framework – defines the APIs you can use. Examples are the .NET 4.6 Framework and the .NET Standard. You sometimes refer to them by name, but in code and configuration files you refer to them by TFM (described below).
    • Platform (in the .NET sense) – Windows, Linux, Mac, Android, iOS, and so on; counting bitness, there is also x86 Windows versus x64 Windows. Each Linux distribution now has its own platform as well.
  • TFM – Target Framework Moniker, a moniker (string) that refers to the target framework and version. Examples include net462 (.NET 4.6.2), net35 (.NET 3.5), and uap (Universal Windows Platform). See this blog post for more details. The TFM you choose determines which APIs are available and which frameworks your code runs on.
  • NuGet – NuGet is the package manager for .NET and other Microsoft development platforms. The NuGet client tools let you produce and consume packages. The NuGet Gallery is the central package repository used by all package authors and consumers.
  • Assembly – generally a DLL or EXE containing compiled code. Assemblies are the building blocks of .NET Full Framework applications and the fundamental unit of deployment, versioning, reuse, activation scoping, and security permissions. In .NET Core, the fundamental unit is the NuGet package, which contains assemblies plus additional metadata.
  • .NET Standard ("netstandard") – the .NET Standard simplifies references between binary-compatible frameworks, allowing a single framework to reference a combined set of others. The .NET Standard Library is a formal specification of .NET APIs that are available on all .NET runtimes.
  • The difference between .NET Framework and .NET Core – the .NET Framework is for Windows apps and Windows systems, while .NET Core is a smaller, cross-platform framework for server apps, console apps, and web applications, and is the core runtime on which other systems are built.

What you should know (optional)

  • CLR – the Common Language Runtime, the virtual machine component of Microsoft's .NET Framework that manages the execution of .NET programs. A just-in-time compiler turns the compiled code into machine instructions, which the computer's CPU then executes.
  • CoreCLR – the .NET runtime used by .NET Core.
  • Mono – the .NET runtime used by Xamarin and others.
  • CoreFX – the .NET class libraries used by .NET Core and, through source sharing, by Mono.
  • Roslyn – the C# and Visual Basic compilers, used by most .NET platforms and tools. Roslyn provides APIs for reading, writing, and analyzing source code.
  • GC – Garbage Collection. .NET provides automatic memory management for programs through the GC. It takes a lazy approach to memory management, preferring application throughput over the immediate collection of memory. To learn more about the .NET GC, see "Fundamentals of garbage collection."
  • "Managed code" – managed code is code whose execution is managed by a runtime such as the CLR.
  • IL – Intermediate Language, the product of compiling code written in a high-level .NET language. For example, if C# is apples, IL is applesauce, and the JIT and CLR are apple juice. 😉
  • JIT – the Just in Time compiler, which compiles IL so it is ready to run as native code.
  • Where .NET lives: the .NET Framework is in C:\Windows\Microsoft.NET and .NET Core is in C:\Program Files\dotnet. On a Mac it usually lives in /usr/local/share. .NET Core can also be bundled with an application and carried within that application's directory as a self-contained application.
  • Shared framework vs. self-contained apps – .NET Core can use a shared framework (shared by multiple apps on the same machine), or an application can be entirely self-contained. You may hear "xcopy-deployable / bin-deployable," which means the app is completely self-contained.
  • async and await – the async and await keywords generate IL that frees up a thread during long-running (awaited) function calls (such as a database query or a web service call). This frees up system resources, so you aren't consuming memory, threads, and so on while you're waiting. (A short sketch follows this list.)
  • Portable Class Libraries – "lowest common denominator" libraries that let you share code across platforms. PCLs are supported, but package authors should support netstandard instead. The .NET Platform Standard is the evolution of PCLs and provides binary portability across platforms.
  • .NET Core is composed of the following parts:
    • The .NET runtime – provides the type system, assembly loading, the garbage collector, native interop, and other basic services.
    • The framework libraries – provide primitive data types, app composition types, and fundamental utilities.
    • The SDK tools and language compilers – deliver the base developer experience and are included in the .NET Core SDK.
    • The 'dotnet' app host – used to launch .NET Core apps. It selects the runtime, hosts it, provides the assembly loading policy, and launches the app. The same host can also launch the SDK tools in much the same way.
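
As a rough illustration of the async and await entry above, the sketch below awaits a simulated long-running call. Task.Delay stands in for a database or web service call, and the class and method names are made up for the example.

// A minimal async/await sketch. While the awaited call is in flight, no thread is
// blocked inside FetchReportAsync; it is released back to the system until the task completes.
using System;
using System.Threading.Tasks;

public static class AsyncSketch
{
    // Stand-in for a long-running call (a database query, a web service call, ...).
    private static async Task<string> FetchReportAsync()
    {
        await Task.Delay(TimeSpan.FromSeconds(2)); // the waiting consumes no thread
        return "report data";
    }

    public static void Main()
    {
        Console.WriteLine("Requesting report...");
        // Blocking here only because a console Main cannot be async until C# 7.1.
        string report = FetchReportAsync().GetAwaiter().GetResult();
        Console.WriteLine($"Received: {report}");
    }
}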

Good to know

  • GAC – the Global Assembly Cache, where the .NET Framework on Windows stores shared libraries. You can list its contents with "gacutil /l".
  • Assembly loading and binding – complex apps can get assemblies loaded from disk in interesting ways.
  • Profiling (memory usage, GC, and so on) – there are lots of great tools for evaluating (profiling) C# and .NET code, and many of them are built into Visual Studio.
  • LINQ – Language Integrated Query, a higher-order way to query objects and databases in a declarative way. (See the short sketch after this list.)
  • CTS and CLS – the Common Type System and the Common Language Specification define, in an interoperable way, how objects are used and passed around so that they work everywhere .NET works. The CLS is a subset of the CTS.
  • .NET Native – one day you will be able to compile to native code rather than to intermediate language.
  • .NET roadmap – the outlook for .NET in 2017.
  • "Modern" C# 7 – C# gets new features every year. The latest version is C# 7, and it has a lot of great features worth checking out.
  • Reactive Extensions – Reactive Extensions (Rx) is a library for composing asynchronous and event-based programs using observable sequences and LINQ-style query operators. You can use LINQ-style operators over data streams to write elegant event-based programs that run cleanly and asynchronously.
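
To ground the LINQ entry above, here is a small sketch that queries an in-memory collection declaratively; the data is invented for the example.

// A small LINQ sketch over an in-memory collection.
using System;
using System.Linq;

public static class LinqSketch
{
    public static void Main()
    {
        int[] orderTotals = { 120, 15, 310, 42, 99 };

        // Declaratively ask for the totals over 100, largest first.
        var bigOrders = orderTotals
            .Where(total => total > 100)
            .OrderByDescending(total => total);

        Console.WriteLine(string.Join(", ", bigOrders)); // prints: 310, 120
    }
}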

Note: some of this text was taken from Wikipedia articles on each topic and trimmed for brevity (Creative Commons Attribution-ShareAlike 3.0 Unported), and some comes from the .NET documentation here. This post is a collection of links and text to blogs; some of the thoughts are my own, and many are not.


Acknowledgment: I would like to take this opportunity to thank Raygun. More than 40,000 developers already monitor their apps with Raygun, so please give it a try. It lets you find the root cause of errors, crashes, and performance issues in your software applications. Installation takes only a few minutes, so try it today!

About the author

Scott Hanselman is a former professor and former chief architect in finance, and now works as a speaker and consultant. He is also a father, a diabetic, and a Microsoft employee. Though he failed as a comedian, he also holds the titles of cornrow hairstylist and author.

 

APS Blocked Partition Switch

$
0
0

 

In SQL Server, when you perform a partition switch, a schema modification (Sch-M) lock is acquired briefly to do the operation. This can be blocked by read operations that hold a schema stability (Sch-S) lock on the table. APS works a little differently: much of the locking is controlled within the PDW code before the request is even sent to SQL Server. The end result is the same, though, and you cannot always use the same techniques you would in SQL Server to mitigate it (blocking detection, managed lock priority, and so on). Instead, you need to look for the process sitting in a queued state within PDW, which you can see easily in the sys.dm_pdw_lock_waits DMV.
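
For contrast, this is roughly what the SQL Server (box product) mitigation looks like: since SQL Server 2014 a partition switch can use managed lock priority so that the switch itself backs off, or kills its blockers, after a chosen wait. The statement below is only a sketch of that syntax, reusing the demo tables created later in this post; it is not available in APS.

-- SQL Server 2014+ only (not APS): let the switch wait at low priority for up to
-- 5 minutes, then kill the blocking sessions so the switch can proceed.
ALTER TABLE FactInternetSale_PartitionSwapTest
SWITCH PARTITION 26 TO FactInternetSale_PartitionSwapTest_AUX PARTITION 26
WITH (WAIT_AT_LOW_PRIORITY (MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = BLOCKERS));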

To demonstrate, I created a partitioned table from the FactInternetSales table in AdventureWorksPDW2012, as well as an empty partition-aligned table to swap partitions with:

CREATE TABLE  FactInternetSale_PartitionSwapTest
WITH
(
    CLUSTERED COLUMNSTORE INDEX,
    DISTRIBUTION = HASH(ProductKey),
    PARTITION
    (
        OrderDateKey RANGE RIGHT FOR VALUES
        (
        20000101,20010101,20020101,20030101,20040101,20050101,20060101,20070101,20080101,20090101,
        20100101,20110101,20120101,20130101,20140101,20150101,20160101,20170101,20180101,20190101,
        20200101,20210101,20220101,20230101,20240101,20250101,20260101,20270101,20280101,20290101
        )
    )
)
AS SELECT *
 FROM FactInternetSales;



 --create shadow table for partition swap

 CREATE TABLE [dbo].[FactInternetSale_PartitionSwapTest_AUX] (
    [ProductKey] int NOT NULL,
    [OrderDateKey] int NOT NULL,
    [DueDateKey] int NOT NULL,
    [ShipDateKey] int NOT NULL,
    [CustomerKey] int NOT NULL,
    [PromotionKey] int NOT NULL,
    [CurrencyKey] int NOT NULL,
    [SalesTerritoryKey] int NOT NULL,
    [SalesOrderNumber] nvarchar(20) COLLATE Latin1_General_100_CI_AS_KS_WS NOT NULL,
    [SalesOrderLineNumber] tinyint NOT NULL,
    [RevisionNumber] tinyint NOT NULL,
    [OrderQuantity] smallint NOT NULL,
    [UnitPrice] money NOT NULL,
    [ExtendedAmount] money NOT NULL,
    [UnitPriceDiscountPct] float NOT NULL,
    [DiscountAmount] float NOT NULL,
    [ProductStandardCost] money NOT NULL,
    [TotalProductCost] money NOT NULL,
    [SalesAmount] money NOT NULL,
    [TaxAmt] money NOT NULL,
    [Freight] money NOT NULL,
    [CarrierTrackingNumber] nvarchar(25) COLLATE Latin1_General_100_CI_AS_KS_WS NULL,
    [CustomerPONumber] nvarchar(25) COLLATE Latin1_General_100_CI_AS_KS_WS NULL
)
WITH (CLUSTERED COLUMNSTORE INDEX, DISTRIBUTION = HASH([ProductKey]),  PARTITION ([OrderDateKey] RANGE RIGHT FOR VALUES (20000101, 20010101, 20020101,
20030101, 20040101, 20050101, 20060101, 20070101, 20080101, 20090101, 20100101, 20110101, 20120101, 20130101, 20140101, 20150101, 20160101, 20170101,
 20180101, 20190101, 20200101, 20210101, 20220101, 20230101, 20240101, 20250101, 20260101, 20270101, 20280101, 20290101)));

To create the blocking situation, I ran the following in a second session to keep the SELECT (and its transaction) active:

BEGIN TRANSACTION
SELECT * FROM FactInternetSale_PartitionSwapTest

Then, in another session, run the ALTER TABLE statement. It will simply hang, waiting indefinitely:

 ALTER TABLE FactInternetSale_PartitionSwapTest SWITCH PARTITION 26 to FactInternetSale_PartitionSwapTest_AUX PARTITION 26

You can see that it is in a queued state by querying the sys.dm_pdw_waits DMV:

select * from sys.dm_pdw_waits where state='queued'

 

How can you tell what is causing it to be queued? You can join sys.dm_pdw_lock_waits back to itself (and to sys.dm_pdw_exec_requests) to get both the blocker and the waiter information. In this query I also filter on waiters whose command contains SWITCH, to capture only queued sessions that are performing partition switch operations:

select
Q.object_name ObjectName,
Q.session_id as QueuedSession,
Q.request_id QueuedQID,
datediff(ms,Q.request_time, getdate()) as WaitTimeMS,
QR.command as QueuedCommand,
B.session_id as BlockerSession,
B.request_id BlockerQID,
B.type as BlockerLockType ,
BR.Total_elapsed_time,
BR.Start_time,
BR.End_time,
BR.command as BlockerCommand
from sys.dm_pdw_lock_waits Q
inner join sys.dm_pdw_lock_waits B
       on Q.object_name=B.object_name
inner join sys.dm_pdw_exec_requests QR
       on Q.request_id = QR.request_id
inner join sys.dm_pdw_exec_requests BR
       on B.request_id = BR.request_id
where Q.State='Queued' and B.State='Granted'  and Q.Type='Exclusive' and QR.command like '%SWITCH%'

 

 

If you know what action you want to take, you can do it programmatically. In this case, I am going to automatically kill any session that has been running for more than 5 minutes and is blocking my partition switch:

CREATE TABLE #BlockedXLocks
WITH
	(DISTRIBUTION=ROUND_ROBIN,
	LOCATION=USER_DB)
AS
select
Q.object_name ObjectName,
Q.session_id as QueuedSession,
Q.request_id QueuedQID,
datediff(ms,Q.request_time, getdate()) as WaitTimeMS,
QR.command as QueuedCommand,
B.session_id as BlockerSession,
B.request_id BlockerQID,
B.type as BlockerLockType ,
BR.Total_elapsed_time,
BR.Start_time,
BR.End_time,
BR.command as BlockerCommand
from sys.dm_pdw_lock_waits Q
inner join sys.dm_pdw_lock_waits B
       on Q.object_name=B.object_name
inner join sys.dm_pdw_exec_requests QR
       on Q.request_id = QR.request_id
inner join sys.dm_pdw_exec_requests BR
       on B.request_id = BR.request_id
where Q.State='Queued' and B.State='Granted'  and Q.Type='Exclusive' and QR.command like '%SWITCH%'
--AND CRITERIA TO KILL - WAIT Time, etc)
--and BR.Total_elapsed_time > 300000 -- the select has been running for more than 5 min BUT this does NOT work in a case of a transaction
and datediff(mi, BR.Start_Time, getdate()) >=5

WHILE ((SELECT count(1) from #BlockedXLocks) > 0)
BEGIN
    DECLARE @BlockerSID nvarchar(10) = (SELECT top 1 BlockerSession FROM #BlockedXLocks)
	DECLARE @sql_code nvarchar(1000) = ( 'kill ''' + @BlockerSID + '''' )
	--print @sql_code
	EXEC sp_executeSQL @sql_code

	DELETE FROM #BlockedXLocks where BlockerSession = @BlockerSID

END

 

Now you have a way to keep long-running reports from blocking the load processes that are trying to switch new partitions into your fact table. Typically the threshold would be longer than 5 minutes, and you may want to filter on certain user accounts (non-service accounts, possibly), and so on. There are plenty of options to help you decide which sessions should be killed and which should not.
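
As a sketch of that account-based filter, you could join the blocker's session back to sys.dm_pdw_exec_sessions and exclude your load or service logins. The login name 'svc_loader' below is a placeholder; substitute your own accounts.

-- Sketch: list blockers of queued partition switches, ignoring the load service account.
select
    B.session_id   as BlockerSession,
    S.login_name   as BlockerLogin,
    BR.command     as BlockerCommand,
    datediff(mi, BR.start_time, getdate()) as BlockerMinutes
from sys.dm_pdw_lock_waits Q
inner join sys.dm_pdw_lock_waits B
       on Q.object_name = B.object_name
inner join sys.dm_pdw_exec_requests QR
       on Q.request_id = QR.request_id
inner join sys.dm_pdw_exec_requests BR
       on B.request_id = BR.request_id
inner join sys.dm_pdw_exec_sessions S
       on B.session_id = S.session_id
where Q.state = 'Queued'
  and B.state = 'Granted'
  and Q.type  = 'Exclusive'
  and QR.command like '%SWITCH%'
  and S.login_name not in ('svc_loader')   -- placeholder account name
  and datediff(mi, BR.start_time, getdate()) >= 5;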
