Often while working with custom actions, the site actions menu can get crowded and SharePoint tends to add a scrollbar to it, which doesn't look very appealing.
Various conic sections (hyperbolas, parabolas, ellipses) have interesting reflective properties. Parabolas are used for antennae and even car headlight and flashlight reflectors to focus a beam of light.
In the last post, https://blogs.msdn.microsoft.com/calvin_hsia/2018/02/28/reflect-laser-beams-off-multiple-mirrors, I showed some code to bounce a laser off various mirrors that the user can draw on the screen. I also mentioned that an elliptically shaped water tank will reflect waves and that a lithotripter uses ellipses to remove kidney stones. Let's enhance that code:
• Add an ellipse
• Allow multiple lasers, initially emanating from a single point, thus simulating a point source of light
• Add a context menu that allows the user to set the initial laser point. E.g., what happens if you put the lasers initially at a focus of the ellipse?
• Add an options dialog that allows the user to control the number of lasers
The code involves multiple files, one of which has 1300 lines and thus is hard to publish as a single blog entry, so I’ve put it at https://github.com/calvinhsia/ReflectCpp Just clone the repo locally, open the solution ReflectCPP.sln and run it.
Note how the code has multiple kinds of mirrors (currently two), all of which implement an interface IMirror, which has methods like Draw, IntersectingPoint, and Reflect.
Things to try:
• Slow it down, use 1 laser, and watch how it reflects off the ellipse walls.
• Use your mouse and draw a mirror or several and watch how it affects the pattern.
• Try to isolate the light into patterns stuck in just a portion of the ellipse without a mirror touching the ellipse (try setting the laser count to 1 first).
The name “ReflectCPP” implies it is the C++ version of the code, and that there’s another version. Next time! The patterns generated from the code are pretty remarkable. You can see the symmetry around the focal points (the little dots)
We are pleased to announce the release of the Microsoft OLE DB Driver for SQL Server, as we had previously announced! This new driver follows the same release model as all the other SQL Server drivers, which means that it’s maintained out-of-band with the SQL Server Database Engine lifecycle. You can download the new driver here.
The new Microsoft OLE DB Driver for SQL Server is the 3rd generation of OLE DB Drivers for SQL Server, introduces multi-subnet failover capabilities, and keeps up with the existing feature set of SQL Server Native Client (SNAC) 11, including the latest TLS 1.2 standards. As such, backwards compatibility with applications currently using SNAC 11 is maintained in this new release.
This new Microsoft OLE DB Driver for SQL Server (msoledbsql) supports connectivity to SQL Server (versions 2012 to 2017), Azure SQL Database and Azure SQL Data Warehouse. Also, keep in mind that SNAC OLE DB (sqlncli) and MDAC/WDAC OLE DB (sqloledb) continue to be deprecated. For more information, refer to the page OLE DB Driver for SQL Server.
To use the new driver in existing applications, you should plan to convert your connection strings from sqlncli<x> or sqloledb, to msoledbsql. For example, for a trusted connection using SQL Native Client (SNAC11), plan to convert from:
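The before/after connection strings aren't reproduced in this excerpt, but the change essentially amounts to swapping the provider name. As a hedged illustration (the server and database names below are placeholders):

// Illustrative only; myServerName and myDatabase are placeholders.
// Before: SQL Server Native Client 11
string oldConnectionString =
    "Provider=SQLNCLI11;Server=myServerName;Database=myDatabase;Trusted_Connection=yes;";
// After: Microsoft OLE DB Driver for SQL Server
string newConnectionString =
    "Provider=MSOLEDBSQL;Server=myServerName;Database=myDatabase;Trusted_Connection=yes;";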
In this video, in Spanish and English, we show an example of the impact of using (or not using) connection pooling, with a C# application.
As you probably know, connection pooling is a special connection cache that is enabled by default in ADO.NET, with a default maximum capacity of 100 concurrent connections per pool.
Using connection pooling, we see an improvement in the time spent on every connection attempt made to our Azure SQL Database.
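As a rough sketch of this kind of test (the server, database, and credentials below are placeholders), you can compare timings simply by toggling the Pooling keyword in the connection string:

// Minimal sketch: time repeated open/close cycles with and without pooling.
// Server, database, and credentials are placeholders.
using System;
using System.Data.SqlClient;
using System.Diagnostics;

class PoolingDemo
{
    static void Main()
    {
        string baseConnStr = "Server=tcp:myserver.database.windows.net,1433;" +
                             "Database=mydb;User ID=myuser;Password=mypassword;";

        Measure("Pooling enabled", baseConnStr + "Pooling=true;");
        Measure("Pooling disabled", baseConnStr + "Pooling=false;");
    }

    static void Measure(string label, string connStr)
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 50; i++)
        {
            using (var conn = new SqlConnection(connStr))
            {
                conn.Open(); // With pooling, most opens reuse an existing physical connection.
            }
        }
        sw.Stop();
        Console.WriteLine($"{label}: {sw.ElapsedMilliseconds} ms for 50 connections");
    }
}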
I came across this issue while working with a developer customer who was developing an Office Web Add-in (Outlook/OWA specific).
- They designed an Outlook Web add-in in which the user is asked to allow a dialog box to be displayed.
- When the user chooses Allow, an error message is thrown (in both IE and Edge).
- The error message states, "The security settings in your browser prevent us from creating a dialog box. Try a different browser, or configure your browser so that [URL] and the domain shown in your address bar are in the same security zone."
To overcome the issue, we tried adding the domain of the add-in to the list of trusted sites in Internet Explorer, and it worked. Just make sure it's a trusted add-in!
The post Math Accents discusses how accent usage in math zones differs from that in ordinary text, notably in the occurrence of multicharacter bases. Even with single character bases, the accents may vary in width while in ordinary text the accent widths are the same for all letters. The present post continues the discussion by describing the large number of accents available for math in Unicode and in Microsoft Office math zones and how they are represented in MathML, RTF, OMML, LaTeX, and UnicodeMath.
Unicode math accents
As noted in Section 3.10 Accent Operators of the UnicodeMath specification, the most common math accents are (along with their TeX names)
These and more accents are described in Section 2.6 Accented Characters and 3.2.7 Combining Marks in Unicode Technical Report #25, Unicode Support For Mathematics. More generally, the Unicode ranges U+0300..U+036F and U+20D0..U+20EF have these and other accents that can be used for math.
The Windows Character Map program shows that the Cambria Math font has all combining marks in the range 0300..036F as well as 20D0..20DF, 20E1, 20E5, 20E6, 20E8..20EA. The range 0300..036F used as math accents in Word is shown in the accompanying figure.
Except for the horizontal overstrikes and the double-character accents shown in red, all of these work as math accents in Microsoft Office apps, although many aren't used in math. In keeping with the Unicode Standard, UnicodeMath represents an accent by its Unicode character, placing the accent immediately after the base character. There's no need for double-character accents in Microsoft Office math, since the corresponding "single" character accents expand to fit their bases, as in a tilde spanning (a+b).
In UnicodeMath, this is given by (a+b)~, where ~ can be entered using the TeX control word tilde. This is simpler than TeX, which uses widetilde{a+b} for automatically sized tildes rather than tilde{a+b}.
The combining marks in the range 20D0..20EF that work as accent objects in Office math zones are shown in the accompanying figure. You can test accents that don't have TeX control words by inserting a math zone (type alt+=), then typing a non-hex letter followed by the Unicode value, alt+x, space. For example, alt+=, z, 36F, alt+x, space gives z with a combining latin small letter x (U+036F) above it.
Accents in MathML
MathML 1 was released as a W3C recommendation in April 1998 as the first XML language to be recommended by the W3C. At that time, Unicode was just starting to take hold as Microsoft Word 97 and Excel 97 had switched to Unicode. [La]TeX was developed before Unicode 1.0, so it relied on control words. Accordingly, it was common practice in 1998 to use control words or common spacing accents to represent accents instead of the Unicode combining marks even though many accents didn’t have a unified standardized representation. Unicode standardized virtually all math accents by using combining marks. One problem with using the combining marks in file formats is that they, well, combine! So, it may be difficult to see them as separate entities unless you insert a no-break space (U+00A0) or space (U+0020) in front of them. UnicodeMath allows a no-break space to appear between the base and accent since UnicodeMath is used as an input format as well as in files. Only programmers need to look at most file formats (HTML, MathML, OMML, RTF), so a reliable standard is more important for file formats than user-friendly presentation.
MathML 3’s operator dictionary defines most horizontal arrows with the “accent” property. In addition, it defines the following accents
02C6 ˆ modifier letter circumflex accent
02C7 ˇ caron
02C9 ˉ modifier letter macron
02CA ˊ modifier letter acute accent
02CB ˋ modifier letter grave accent
02CD ˍ modifier letter low macron
02D8 ˘ breve
02D9 ˙ dot above
02DA ˚ ring above
02DC ˜ small tilde
02DD ˝ double acute accent
02F7 ˷ modifier letter low tilde
0302 ̂ combining circumflex accent
0311 ̑ combining inverted breve
Presumably the operator dictionary should be extended to include more math combining marks and their equivalents, if they exist, with the spacing diacritics in the range U+02C6..U+02DD.
Here’s the MathML for the math object 𝑎̂.
<mml:mover accent="true">
  <mml:mi>a</mml:mi>
  <mml:mo>^</mml:mo>
</mml:mover>
Accents in OMML
“Office MathML” OMML is the XML used in Microsoft Office file formats to represent most math. It’s an XML version of the in-memory math object model which differs from MathML. The math accent object 𝑎̂ has the following OMML
The Rich Text Format (RTF) represents math zones essentially as OMML written in RTF syntax. Regular RTF uses the \uN notation for Unicode characters not in the current code page. The math accent object 𝑎̂ has the RTF
We are aware of issues within Application Insights and are actively investigating. Some customers may experience data gaps. The following data types are affected: Availability, Customer Event, Dependency, Exception, Metric, Page Load, Page View, Performance Counter, Request, Trace.
Work Around: None
Next Update: Before 03/31 07:30 UTC
We are working hard to resolve this issue and apologize for any inconvenience. -Mohini
As part of the Microsoft internal MOOC course "Big Compute: Uncovering and Landing Hyperscale Solutions in Azure", I was introduced to CycleCloud and learned how to set it up in my Azure subscription. I would like to blog about some of my HPC learning, plus the steps I followed to set one up.
What is HPC? High-performance computing (HPC) is a parallel processing technique for solving complex computational problems. HPC applications can scale to thousands of compute cores. We can run these workloads on premises by setting up clusters, burst to the cloud for extra capacity, or run them as a 100% cloud-native solution.
1) Azure Batch –> managed service, "cluster" as a service, running jobs; developers can write applications that submit jobs using the SDK; cloud native, HPC as a service, pay-as-you-go billing
2) CycleCloud –> acquired by Microsoft, "cluster" management software (aka orchestration software); supports hybrid clusters and multi-cloud; for managing and running clusters; one-time license; you have complete control of the cluster and nodes
3) Cray –> partnership with Cray, famous for weather forecasting services
Azure Batch doesn't need an intro as it has been around for quite some time, and setting up Batch is very easy. Tools like Batch Labs help us monitor and control Batch jobs effortlessly. The Batch SDK makes it easy to integrate with existing legacy applications so they can submit jobs, or to manage the entire Batch operation from a custom-developed application. End users need not log in to the Azure portal to submit jobs.
What is CycleCloud? CycleCloud provides a simple, secure, and scalable way to manage compute and storage resources for HPC and Big Compute/Data workloads in the cloud. CycleCloud enables users to create HPC environments in Azure. It supports everything from distributed jobs and parallel workloads to tightly coupled applications such as MPI jobs on InfiniBand/RDMA. By managing resource provisioning, configuration, and monitoring, CycleCloud allows users and IT staff to focus on business needs instead of infrastructure.
5) Edit the vms-params.json file to specify the rsaPublicKey parameter generated in Step 3. The cycleDownloadUri and cycleLicenseSas parameters have been pre-configured, but if you procure a license you need to update these two params as well. For now, I am leaving them as is.
7) After the deployment, you will find the above set of resources created in our resource group, say "cycle-rg". Select the CycleServer VM and copy its IP address to see if you can browse the CycleCloud setup page.
8) Please note that the installation uses a self-signed SSL certificate, which may show up with a warning in your browser. It is safe to ignore the warning and add an exception to reach the setup page (refer to the "Configure CycleCloud Server" section on this page). If you get the page below after all the setup, then we are ready to create a new cluster and submit jobs.
11) Now our Grid Engine cluster is ready for job submission. For security reasons, the CycleCloud VM (CycleServer) is behind a jump box/bastion host. To access CycleServer, we must first log onto the jump box and then ssh onto the CycleServer instance. To do this, we'll add a second host to jump through to the ssh command.
From the Azure portal, retrieve the admin box DNS name and construct the SSH command as in the screenshot. The idea is to "ssh -J" to our CycleServer through the CycleAdmin box; you cannot ssh directly into CycleServer, which is by design for security.
12) Once we get into CycleAdmin@CycleServer, switch to the root user and run the CycleCloud initialize command. You need to enter the username and password for that machine.
We could also enable the autoscaling feature from the CycleCloud cluster settings, so the Azure VMs come and go as jobs complete. We have submitted 100 jobs with our command, so it will request 100 cores. Based on the cluster core limit, it will decide whether to scale up that far. Say we have set 100 cores as the cluster scale limit; then we would see many other VMs also getting created to complete the tasks in parallel.
[cyclecloud@ip-0A000404 ~]$ qsub -t 1:100 -V -b y -cwd hostname
Once the job is completed, we can terminate the cluster and, as our last step, delete the resource group if we don't want to retain it. I know it's a bit of a learning curve and confusing to start with the first time, but once you are hands-on it is easy to set up whenever you need it and dispose of it after completing your jobs.
As you can see, this will establish a new service principal (an identity) with privileges on the entire subscription or a specific resource group. This may be too much access to grant, and moreover, the VSTS/TFS user may well not be the person responsible for access policy. In this blog, I will briefly describe how to create a service principal with no access and then granularly give it access (e.g., on a resource-by-resource basis).
In the example below, I will be creating a service principal in Azure Government and connecting it to a VSTS project. In the Azure portal, do the following 5 steps:
1. Create a new App Registration in Azure Active Directory:
2. Make a note of the "Application ID" (a.k.a. Service Principal ID):
3. Click the "Keys" pane and create a new key by giving it a name and duration and clicking save.
The key will only be visible right after you click save, so make a note of it:
4. Make a note of your "Subscription ID" and "Subscription Name":
5. Finally make a note of your Azure Active Directory Tenant ID:
After this you should have 5 pieces of information:
Subscription ID: SUBSCRIPTION GUID
Subscription Name: NAME OF SUBSCRIPTION
Service Principal ID: SERVICE PRINCIPAL GUID
Service Principal Key: SOME-LONG-KEY
Tenant ID: TENANT GUID
You can think of these as a set of user credentials, which currently have no privileges in your subscription. But you can now selectively add this "user" to resources where you want to grant privileges. Let's suppose we want to use this service principal to publish into one specific Web App and nothing else. Find the Web App and bring up the IAM (Identity and Access Management) pane:
Hit "+ Add" and search for the App Registration you created. In this example we are granting it contributor rights to a single Web App:
Now let's move to VSTS and add a Service connection using the Service Principal. Instead of using the dialog above, hit the "use the full version of the endpoint dialog." link, which will take you to this dialog:
After you have filled in all the details, make sure to hit "Verify connection" and you should get a green check mark. If you do not get a green check mark, it could be because one of the fields has been filled in incorrectly, but it will also happen if you have created a Service Principal without assigning it privileges to any resources yet. Please follow the steps above.
In the example above, we have named the connection "CICD-MAG" and this name will show up as a subscription when we set up deployments. For example, to deploy into a Web App:
And then choose the right "Subscription":
You should notice that you will only be able to pick a single "App service name", since we only granted privileges on a single Web App. You can of course use the Service Principal to grant access to any number of resources and even to entire resource groups. Unlike the simplified dialog discussed in the beginning, this gives you granular access.
And that's it, you now have the tools to delegate just enough control to specific credentials. Let me know if you have questions/comments/suggestions.
What is the higher-order bit in software development: individual productivity or feature team productivity? Five years ago, in The flow fallacy, I argued that responsive delivery of customer value was the goal, and that goal was best achieved by feature teams, not individuals. Thus, feature team productivity outweighs individual productivity.
While many readers agreed that feature team productivity prevails, many hyperventilated.
Some saw feature team productivity as an attack on the glorious state of flow. I delight in flow and agree about its importance, but flow shouldn't be allowed to isolate engineers completely from their feature teams.
Some saw feature team productivity as an excuse for clueless, coddled, corporate custodians (like me) to force engineers into open space. I despise open space, yet love team space—more on that seeming contradiction shortly.
Some questioned why feature teams are better positioned than individuals to deliver customer value. After all, can't teams be just as detached as individuals, and individuals be just as connected as teams? I failed to explain why feature teams are better positioned than individuals to deliver customer value—a major oversight. Correcting that oversight is the primary focus of this column.
Some attacked me and my ideas personally. The internet is a rough place. With a name like I.M. Wright, you reap what you sow.
Why are feature teams fundamentally better than individuals at delivering customer value, and why are team spaces fundamentally better than open spaces or individual offices? You've read this far—read just a little further. I promise this will be productive.
You talkin' to me?
It's true that entire feature teams can be detached from customers and miss critical issues in design reviews, code reviews, or usability studies. It's also true that a visionary individual alone can foresee customer needs and trends. However, the common case is that teams, collectively, spot issues and empathize with customers better than individuals do. Why? Probability.
Say you're part of a usability study (or design/code review). Being human, you'll miss some issues. Say you only spot 30% of the issues—not that great. Say five other people are part of the same study (or review), and each only spots 30% of the issues. What's the probability that there are issues everyone misses? The probability that you didn't see an issue is 70%. The probability that you and one other person missed it is 49% (70% x 70%). The probability that all six of you missed it is 12% (70% ^ 6). The group of you caught 88% of the issues—pretty good!
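In general terms (my notation, not the column's), if each of k reviewers independently misses a fraction m of the issues, then

P(\text{all } k \text{ reviewers miss a given issue}) = m^{k}, \qquad P(\text{the team catches it}) = 1 - m^{k}

With m = 0.7 and k = 6, that gives 1 - 0.7^6 ≈ 0.88, matching the 88% above.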
The odds improve if your group contains folks who excel at spotting issues, but even pedestrian teams are better together. Team members just need to discuss designs, review each other's work, and agree on direction to gain the benefits of teamwork. That means collaborating frequently and easily. Sitting near each other helps.
Eric Aside
Six team members is my favorite feature team size, as I derive in Span sanity—ideal feature teams. When I'm building teams, they may start smaller or grow larger than six, but over time I work to see that mature teams are that size.
So happy together
The worst work environment for feature teams is open space. Team members can't talk to each other without interrupting and being interrupted by other people, and they can't do focused work on their own without headphones and blinders. Lost in space—danger!
Isolated individual offices are better, since engineers can at least do focused work on their own. However, interactions with their feature team are more limited, as are the benefits of that interaction we just reviewed, not to mention the serendipitous ideas that arise organically from casual conversations among people working the same problems.
As I discuss in Collaboration cache—colocation, the best work environment for feature teams is team space. Think of it as an individual office for your feature team. Add one person from outside your feature team (are you listening admins?), and your team space transforms into open space—from best work environment to worst.
However, if only feature team members are in your team space, then collaboration is frequent and easy. Flow freaks might fume that you can't focus, but that's flawed foolishness. Remember, it's just you and your small feature team in the room. You can come up with your own focus hours and your own do-not-disturb rules. Plus, team spaces often have focus and code rooms for complete silence.
Many recently built or remodeled Microsoft buildings contain team rooms, but they are too big, holding 12 or more people. Remember, squeeze even one nonteam member into your team’s space and productivity is lost. Fortunately, facilities designers now recognize the issue, and realize how people get packed into every square inch, so they're designing new buildings and team spaces that are right-sized or resizable for feature teams.
In the meantime, you can create semiprivate team rooms by sitting your feature team members in offices across from each other in a hallway or by dividing up larger rooms with rolling whiteboards or partitions. These alternatives are better than sitting apart, but unfortunately you still get outsiders walking through your space—an imperfect compromise.
Nobody's perfect
Maybe you still think that individual productivity is king and feature team productivity is overrated. Maybe you still think I'm lame and my column is trash. You have a right to be wrong.
The customer is king, and while individual outliers like Steve Jobs exist, you're not Steve (and even Steve brainstormed ideas with his team). Rather than relying on yourself to understand every issue and design every interaction, engage your customers and your feature team.
Together, feature teams are better at identifying issues, generating ideas, and staying focused on our diverse customers. Yes, design by committee can go horribly wrong, but we're talking about peer collaboration, not bureaucracy.
Design individually (or as you prefer), but discuss and review your designs with your peers and your customers. Learn from each other. Share the same dedicated team space and delight in your innovation as our customers delight in our creations.
We are excited to announce the new version, 9.0.1113.10, of Voice of the Customer for Dynamics 365. This version of Voice of the Customer is compatible with Dynamics 365 version 8.2 and later.
This article describes the feature improvements and the fixes that are included in this update.
New and updated features
Design interactive surveys by using answer tag
You can now personalize a question based on the answer to the previous question by using an answer tag. The answer tag stores the response in a tag that can be replaced in subsequent questions and answers in the survey at run time. For more details, click here.
Validate Voice of the Customer solution and survey
You can now validate the status for your Voice of the Customer solution. Validation allows you to check whether:
Voice of the Customer configuration and provisioning is in proper state.
Survey lifecycle is working properly for the organization.
Survey responses are being received properly.
You can also validate a survey for any issues in its configuration.
Send multiple surveys by using single email
You can now add multiple survey snippets to an email and send them to your respondents. This means you can now send multiple surveys by using a single email.
Updates to survey import and clone functionalities
The following updates have been made to the survey import and clone functionalities:
The Find and replace text section is removed from the survey import page.
The translation file is now cloned along with all the translation strings.
Robust survey translation
The translation file is now validated for missing string translations and incorrect HTML format. You can also translate the invitation link text to a different locale to personalize it as per the respondent’s locale.
Implement General Data Protection Regulation in Voice of the Customer
With GDPR being implemented, customers can contact you with view, export, update, and delete requests of their data stored in Dynamics 365. You must take appropriate actions based on the customer's request. For more details, click here.
Click here to access the Voice of the Customer official documentation.
Resolved issues
In addition to the above-mentioned features, this update resolves the following issues:
Footer URLs are opened in the same tab as survey.
Unable to enter negative value as the answer to the numerical response question.
All answer options of a mandatory question are highlighted if an option is not selected by the respondent.
The Response column under the Question Responses section in a survey response allows only 100 characters.
We hope that your business will be more productive with the new Voice of the Customer solution. Try the new version of Voice of the Customer and provide your feedback.
This post describes how you can leverage the Windows UI Automation (UIA) API to help your customers interact with text shown in an app.
Apology up-front: When I uploaded this post to the blog site, the images did not get uploaded with the alt text that I'd set on them. So any images are followed by a title.
Introduction
Some time back, an organization which works with people with low vision asked me whether it would be practical for me to build a tool which provides customizable feedback to indicate where keyboard focus is, and where the text insertion point is. The result is Herbi HocusFocus at the Microsoft Store, and I described the app's journey in the following posts:
(Note that for the rest of this post, I'm going to refer to the "text insertion point" as the caret, given that in some apps, the caret can appear in read-only text, and as such, isn't actually a text insertion point.)
The organization got back to me recently to let me know that some of their clients would like a line shown underneath the sentence that they're reading, and wondered whether it would be practical for me to update my app to have that line shown. This seemed like the sort of feature where in some high-profile target apps, the Windows UI Automation (UIA) API could really help. So I spent a few hours adding a first version of the feature to the app, and uploaded it to the Microsoft Store. I knew there'd be certain constraints with what I'd built, but I was hoping it would work well enough to generate feedback, and help me prioritize my next steps.
Certainly the initial response has been encouraging, as I've been told that the update is receiving very positive reactions. From the organization:
"Awesome! I went and tried the feature right away and it's certainly a very great addition for people with cerebral visual impairment and people who suffer from acquired brain impairment/injury."
One handy aspect of the new feature is that the visuals shown for the underline were very straightforward for me to implement. While the other custom visuals shown by the app involve dealing with transparency, this new feature simply moves a window around. The window's positioned below the current line in the text, and is as wide as the line. It has a fixed short height, and has a background of the same customizable color as the caret highlight. Given how straightforward that was, it's really only the UIA interaction that's of interest here.
This has been a reminder for me of how a feature that's relatively straightforward to add using UIA, can have an important impact on people's lives. As such, I'd encourage all devs to consider how you can leverage UIA to add exciting features to your own apps which could help your customers in new ways.
So what are the constraints in my new feature?
Before describing how I leveraged UIA to add the new feature, it's worth calling out a few scenarios which won't be impacted by my recent work. Depending on the feedback I get, I can consider which of these constraints are most impactful to people using my app, and so consider what I can do about that.
The UIA Text pattern needs to be available
In order for my app to learn where the current line is, the target app needs to support the UIA Text pattern. The Text pattern provides a great deal of information about the text shown in an app, and I talked about the Text pattern in the series of posts starting at So how will you help people work with text? However, not all apps support the Text pattern. So if the target app doesn't support the Text pattern, then there'll be no underline shown in the app.
The Inspect SDK tool can report whether a text-related part of an app's UI supports the Text pattern. If the Text pattern is supported, then the element's IsTextPatternAvailable property is true.
Figure 1: The Inspect SDK tool reporting that the editable area in WordPad supports the UIA Text pattern.
The app only works with editable text
The app tracks changes to the caret position in an app, and so won't work in apps where there is no caret. This will typically be the case with apps that show read-only text, and provide no way to move the caret with the keyboard in order to select text.
Checking the UIA ControlType of the element
I wanted to ease into the new feature somewhat, so I could feel confident that things were working as required in a few well-known scenarios, before opening it up to work in as many places where it might work. As such, I checked the ControlType of the UIA element that claims to support the Text pattern, and decided to only show the line beneath the current line of text if the ControlType was Document or Edit. I expect I'll remove this constraint at some point.
You might be wondering what types of control other than Document or Edit would raise a UIA TextSelectionChanged event? Well, say an app presents read-only text, but provides a way to move the caret through the text with the keyboard, in order to select text. I've found one app which does this, and the ControlType of the element supporting the Text pattern is Pane.
A regrettable use of legacy technology
When I first added the feature of highlighting the caret position a few years ago, I made a poor choice. While all the focus-tracking action I took used only UIA, the caret tracking used a mix of UIA, and some legacy technology called winevents. I only included some use of a winevent because I was familiar with how that related to the caret position, and it seemed convenient for me at the time. I did it despite knowing how winevents are desktop-only technology, and so my feature wouldn't work if in the future it's downloaded through the Store to other platforms. And I did it despite UIA supporting a TextSelectionChanged event, which an app can raise whenever the caret moves. Well, I'm regretting doing that now.
It seems I've found a situation where an app raises the UIA TextSelectionChanged event, yet my event handler for the legacy winevent doesn't get called. So my app doesn't realize the caret has moved. So this means I need to ditch my use of the legacy winevent, and move to only use UIA for tracking the caret. This is probably something I would have done anyway at some point, but I now have a growing urgency. I wouldn't say this is my biggest regret in life, but I am kicking myself rather. I have little enough time to work on my app as it is, so to be adding to my workload simply because I chose to use legacy technology, really doesn't help. It's UIA-only for me now.
Using UIA to determine the bounding rectangle of the line of text where the caret is
So the goal with the new feature is to underline the line of text which currently contains the caret. That involves two steps. The first is to realize when the caret's moved, and the second is to get the bounding rectangle of the line of text where the caret is. My app already had code to react to the caret moving, and like I said above, that's not done by my app today in a way I'd recommend. Instead, I'd recommend that an app registers for notifications when the caret moves, by calling IUIAutomation::AddAutomationEventHandler(), passing in UIA_Text_TextSelectionChangedEventId. Your event handler will then get called as your customer moves the caret around the target app.
Note: Beware of what threads are being used in this sort of situation. Calling back into UIA from inside the event handler can cause unexpected delays, and I often avoid that by requesting when I register for the event, that certain data of interest relating to the element that raised the event, is to be cached with the event. Also, the WinForms app's UI typically won't be updatable from inside the event handler, so I may call BeginInvoke() off the app's main form in order to have the UI update made on the UI thread. Even having done that, I did once in a while find a COM interop exception thrown when trying to update the UI. I've not had a chance to figure out the cause of that yet, so I do have some exception handling in the code at the moment.
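To make that concrete, here is a minimal sketch of registering for the event with the UIAutomationClient COM interop. The class and member names are mine, not the app's, and error handling is omitted:

// Minimal sketch, assuming a reference to the UIAutomationClient interop assembly.
// Names here (CaretWatcher, OnTextSelectionChanged handling) are illustrative, not from the app.
using UIAutomationClient;

public class CaretWatcher : IUIAutomationEventHandler
{
    private readonly IUIAutomation _automation = new CUIAutomation();
    private const int UIA_Text_TextSelectionChangedEventId = 20014;

    public void Start()
    {
        // Listen for text selection (and so caret) changes anywhere on the desktop.
        IUIAutomationElement root = _automation.GetRootElement();
        _automation.AddAutomationEventHandler(
            UIA_Text_TextSelectionChangedEventId,
            root,
            TreeScope.TreeScope_Subtree,
            null, // A cache request could be supplied here to prefetch data of interest.
            this);
    }

    // Called by UIA on a background thread whenever the event is raised.
    public void HandleAutomationEvent(IUIAutomationElement sender, int eventId)
    {
        // Marshal back to the UI thread (e.g. via Control.BeginInvoke) before
        // touching any WinForms UI, as noted above.
    }
}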
Ok, so let's say I know the caret's moved, and I need to find the bounding rect of the line of text which now contains the caret. I could take action every time the caret moves, to get the UIA element containing the caret, and check if it supports the UIA Text pattern. If it does support the pattern, get the bounding rectangle associated with the line of text containing the caret. But in practice, I don't need to check all that every time the caret moves. Rather, I could check whenever keyboard focus moves, does the newly focused element support the Text pattern? If it doesn't, then as the caret moves around that element, I know the Text pattern won't be available, and don't need to make the check for the pattern.
The code I ended up with is as follows:
// Cache the UIA element that currently has keyboard focus, so we don't need to retrieve
// it every time the caret moves.
private IUIAutomationElement _elementFocused;
// Call this in response to every change in keyboard focus or caret position.
public void HighlightCurrentTextLineAsAppropriate(bool focusChanged)
{
// Are we currently highlighting the line containing the caret?
if (!checkBoxHighlightTextLine.Checked)
{
// No, so make sure the window used for highlighting is invisible.
_highlightForm._formTextLine.Visible = false;
return;
}
// Using a managed wrapper around the Windows UIA API, (rather than the .NET UIA API),
// I tend to hard-code UIA-related values picked up from UIAutomationClient.h.
int propertyIdControlType = 30003; // UIA_ControlTypePropertyId
int patternIdText = 10014; // UIA_TextPatternId
// Are we here in response to a focus change?
if (focusChanged)
{
// Hide the highlight until we know we can get the data we need.
_highlightForm._formTextLine.Visible = false;
// Create a cache request so that we access the data that we know we'll
// need with the fewest number of cross-proc calls as possible.
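The listing is truncated here. As a minimal sketch of the remaining steps, assuming the UIAutomationClient COM interop (the helper below and its names are mine, not the published code), the line rectangle can be obtained roughly like this:

// Illustrative sketch only, not the published code (assumes "using UIAutomationClient;").
// Given the element with focus, return the screen rectangle of the line containing the caret.
private System.Drawing.Rectangle GetCaretLineRect(IUIAutomationElement elementFocused)
{
    int patternIdText = 10014; // UIA_TextPatternId

    var textPattern = elementFocused.GetCurrentPattern(patternIdText)
        as IUIAutomationTextPattern;
    if (textPattern == null)
    {
        return System.Drawing.Rectangle.Empty;
    }

    // The caret sits at the (usually degenerate) end of the current selection.
    IUIAutomationTextRangeArray selection = textPattern.GetSelection();
    if (selection == null || selection.Length == 0)
    {
        return System.Drawing.Rectangle.Empty;
    }

    IUIAutomationTextRange caretRange = selection.GetElement(0);

    // Grow the degenerate range to cover the whole line containing the caret.
    caretRange.ExpandToEnclosingUnit(TextUnit.TextUnit_Line);

    // Each rectangle is returned as four doubles: left, top, width, height,
    // in screen coordinates.
    double[] rects = (double[])caretRange.GetBoundingRectangles();
    if (rects == null || rects.Length < 4)
    {
        return System.Drawing.Rectangle.Empty;
    }

    return new System.Drawing.Rectangle(
        (int)rects[0], (int)rects[1], (int)rects[2], (int)rects[3]);
}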
Perhaps one of the most exciting steps above is the call to ExpandToEnclosingUnit(). This is the place where I get to learn about the line that contains the caret. That function is really handy in other situations too. For example, if I want to learn of the bounding rectangle of a word or a paragraph, or of contiguous text which has the same formatting. That's pretty useful stuff in a variety of scenarios.
I should add that while the feature seems to hold up well enough in some apps, (including WordPad and NotePad,) it's not as reliable as I'd like it to be in other apps, (including Word 2016). I'll bet that's due to my use of the legacy technology I mentioned earlier. Ugh. I really need to find time to move to using only UIA, like I should have done in the first place.
Still, even with all the improvements I should look into, for a first version of the feature, it works well enough to generate the feedback that I need in order to make it as helpful as I can.
Figure 2: The line of text containing the caret being underlined in Word 2016.
Always keep in mind the accessibility of the app itself
Whenever I'm updating an app's UI, I need to consider the accessibility of the resulting UI. For this new feature, the only update to the UI is to add a check box at a specific place relative to existing UI. Because I'm adding a standard control that's provided by the WinForms framework, I know I'll get a great head start on accessibility. For example, the control will be fully usable via the keyboard, it'll be rendered using appropriate system colors when a high contrast theme is active, and the Narrator screen reader will be able to interact with the control. This is all great stuff, and in fact there were only two things I needed to check.
Focus order
As my customers tab through the UI, the order in which keyboard focus moves through the app must provide the most efficient means for my customers to complete their task. If keyboard focus were to bounce around the UI as the tab key is pressed, that would be at best a really irritating experience that no-one would want to have to deal with, or quite possibly make the app unusable in practice.
While my app's not web-based, the W3C web content accessibility guideline Focus Order sums up the principle nicely for web UI: "focusable components receive focus in an order that preserves meaning and operability". I want that to be true in any app I build, be the UI HTML, WinForms, WPF, UWP XAML or Win32.
Fortunately, Visual Studio makes it quick 'n' easy for me to make sure I'm delivering an intuitive tab order. All I do in Visual Studio is go to View, Tab Order, and then select each control in the UI, (using either the mouse or keyboard,) in the order I'd like keyboard focus to move through the UI.
The screenshot below shows all the controls in the app UI with an accompanying number shown by each control, indicating the control's position in the tab order.
Figure 3: The app form in Visual Studio's design mode, with the tab order shown by the controls in the form.
Note that when using the Tab Order feature, it is important to include the static Labels in the logical place in the order, even though keyboard focus doesn't move to the Labels as your customer tabs around. For some types of control, if an accessible name has not been set on the control, then WinForms might try to leverage an accessible name based on the text of a nearby Label. For example, with a TextBox or ComboBox which don't have a static text label built into the control. In those cases, having the associated Label precede the control in the tab order, can result in the control getting the helpful accessible name that your customer needs.
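When no suitable Label precedes a control in the tab order, another option (a standard WinForms property, not something specific to this app) is to set the accessible name explicitly; the control names below are placeholders:

// Hypothetical example: give controls an explicit accessible name when no
// static Label precedes them in the tab order.
private void ConfigureAccessibleNames()
{
    comboBoxHighlightColor.AccessibleName = "Highlight color";
    trackBarLineThickness.AccessibleName = "Underline thickness";
}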
Programmatic Order
Whenever I'm updating UI, I need to consider both the visual representation of the app, and the programmatic representation as exposed through UIA. Both of these representations must be high quality for my customers.
In some situations, the path my customers take when navigating through the UI will be based on the order in which the controls are exposed through the Control view of the UIA tree. The Control view is a view which contains all the controls of interest to my customers, including all interactable controls and static Labels conveying useful information. (So the view might not contain such things as controls used only for graphical layout which are not required to be exposed either visually or programmatically.)
Having added the new CheckBox to the app, I pointed the Inspect SDK tool at the UI to learn where the control was being exposed in the UIA hierarchy. It turned out that the CheckBox was being exposed through UIA as the first child element beneath the app window. So programmatically, it existed before all other elements in the UI. The screenshot below shows the CheckBox is being exposed before all the other elements, which are its siblings in the UIA tree.
Figure 4: Inspect reporting the UIA tree of the app, with the new CheckBox as the first child of the app window.
So say a customer using the Narrator screen reader encounters the app window. If they're not familiar with the app, they might choose to switch to use Narrator's Scan mode, to learn about the UI. By doing that, they may press the up and down arrows and move through the controls in the UI, including the controls which can't get keyboard focus. And the navigation path then taken through the controls must be a logical one based on the functionality in the app. Importantly, the path actually taken is impacted by the order of the elements as exposed through UIA. This means the first element they encounter will be the new CheckBox. Then they'll move to the static Label shown visually at the top of the app. And later, they'll move from the control shown visually before the new CheckBox, directly to a control following the new CheckBox. This is not the experience I want to deliver at all.
So to address this, I edit the designer.cs file for the app, and change the order in which the controls are added to the form. After I originally added the new CheckBox to the app, the related designer.cs code was as below. I've highlighted the line of interest in the code, which contains the new control called "checkBoxHighlightTextLine".
So I grabbed the line adding the new CheckBox to the form, and I moved it to be between the lines which add the controls logically before and after the CheckBox. I've highlighted the line of interest in the following resulting code.
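The designer.cs listings aren't reproduced here; the change is just a reordering of the Controls.Add calls. A hypothetical before/after, in which every control name other than checkBoxHighlightTextLine is a placeholder:

// Before: the new CheckBox is added first, so UIA exposes it as the first child.
this.Controls.Add(this.checkBoxHighlightTextLine);
this.Controls.Add(this.labelAppTitle);
this.Controls.Add(this.comboBoxHighlightColor);
this.Controls.Add(this.buttonClose);

// After: the Add call is moved so the CheckBox sits between the controls that
// logically precede and follow it, and the UIA tree order matches the UI.
this.Controls.Add(this.labelAppTitle);
this.Controls.Add(this.comboBoxHighlightColor);
this.Controls.Add(this.checkBoxHighlightTextLine);
this.Controls.Add(this.buttonClose);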
Having done that, if I now point Inspect at the UI, it reports that the CheckBox element is sandwiched between the other controls in the UIA tree in a manner that matches the meaning of the UI.
Figure 5: Inspect reporting the UIA tree of the app, with the new CheckBox exposed in the appropriate place in the UIA tree.
So what's next?
Over the next few weeks I hope to grab a few early hours here and there, and work on some of the points raised by people using the app. These include:
1. Customizable color, size and transparency for the underline. That should be quick to do, so I'll probably work on that first. And as always, I'll make sure the app's tab order and UIA hierarchy are intuitive after updating the UI.
2. I got the feedback, "her last wish is that the focus marking would work even better and would work in the Windows menu etc." This is going to be an interesting one, and I'll need to follow up to learn more about exactly which UI is of interest. For regular app menus, my focus highlight struggles, because the menu can appear on top of my focus highlight. I expect I can address that, by adding an event handler such that I can learn when my highlight isn't top-most, and then moving my highlight back on top when necessary. (And I'd need to make sure I can't get stuck in a loop with other UI which also tries to keep itself on top.) But what I suspect is really the request here, is that the highlight works on the Start menu, and that's not straightforward for me to resolve. For my app to achieve that, it would need UIAccess privileges, and as I understand things, apps downloaded from the Store today can't get that. So the only way I could get that to work would be for me to revert to shipping my app through my own site and installer, and signing it, which I'm really not set up for at the moment. Hmm. I'm not sure what I'll do about this.
3. The app needs to work more reliably in some apps, and in some cases, work at all. One app which I'm particularly interested in is Edge, when caret browsing's turned on. For this to happen, I need to ditch the code I have relating to use of the legacy winevent, and move to only use UIA.
4. This last point is not something I've had feedback on, but is something I'm curious about nonetheless. Sometimes my underline appears further below the line of text than might seem appropriate, and I expect that's due to the paragraph formatting on the line. So I wonder if I can get what Word calls the "Spacing after" for the line of text, and account for that when positioning my underline. Perhaps this would keep the underline at the same distance from the text shown visually on the line, regardless of the spacing after. I really don't know if that's possible, but it'll be interesting to find out.
Summary
It's been a pleasure for me to discuss with the organization which asked for the new underline feature, the human aspects of building software like Herbi HocusFocus. For me, software has always been a means to an end. That end being the impact it has on people's lives. Other feedback I've received from the organization is:
"I also agree with the fact the developers should look at the way they can create tools for people in their community. They can see it as a form of charity work, giving back to their community and they'll see how small tools or adjustments can mean a world of difference to people. Software developers should always wonder how their users will actually use their software and build it with their users in mind. People and usability come before the technology if you ask me. That's why accessibility and a user friendly design should be getting more attention. All users benefit from a good design and from accessibility options and some users even depend on them to be able to use a piece of software. I'm also a strong advocate for user testing before launching a product, most software developers have certain expectations of how their users will interact with their software. Sometimes they can be quite wrong about the way people use and view their software and which steps seem logical to the user."
I can't argue with any of that. In fact, after more than sixteen years of building exploratory assistive technology apps like Herbi HocusFocus, only a handful of my apps actually had any impact. And those were the apps where someone had contacted me, specifically asking whether I could build a tool for them, because there didn't seem to be a solution available to them already. I'm very grateful to have been able to learn from all those people, and to get a better understanding of where I can have most impact.
Overall, this exercise has been a reminder for me of how UIA can help devs add some seriously useful functionality to an app, with relatively little work. That doesn't mean to say UIA does everything you want. I once told a dev how UIA doesn't provide a simple way to get a collection of elements that lies within a rectangle. He replied saying "I find that difficult to believe". Well, we live in the world we live in, and there's lots of things that we might like to exist in this world, and they don't. As far as I'm concerned, we try to improve things for the future, and make the most of what's available to us today. And I believe that UIA has a lot to offer us and our customers today.
So please do consider how UIA might help you provide a powerful new feature to help your customers. Even with all my new feature's constraints, the first piece of feedback I got, was "Wow, I'm amazed by your actions!". Anyone who knows me, knows my actions are far from amazing. But I have the support of some very cool technology that can make me look pretty useful at times.
Oops, the private WordPress blog I administer during my off hours on an Azure WebApp went dark with 500 Internal Server Error on all web requests.
<Temporary failure to upload media to the blog, pending screenshot>
The App Service built-in Diagnose and solve problems / Diagnostics and Troubleshooting quickly pointed to an error spike in PHP logs for the past 24 hours:
PHP Fatal error: Class 'Twig_Loader_Filesystem' not found
Or variants of the same in PHP Log Analyzer / PHP Error Log Processing Report:
Class 'Twig_Loader_Filesystem' not found in D:\home\site\wwwroot\wp-content\plugins\gallery-by-supsystic\vendor\Rsc\Environment.php on line 171
After PHP update:
Uncaught Error: Class 'Twig_Loader_Filesystem' not found in D:\home\site\wwwroot\wp-content\plugins\gallery-by-supsystic\vendor\Rsc\Environment.php:171
(Note that you need to re-run "Collect PHP Error Logs" each time you want to validate if a change resolved your issue or got you to the next issue to investigate.)
Apparently I needed a newer PHP in the first place to run Twig. My PHP version in the Application Settings was 5.6, so I jumped to 7.2. That wasn't enough of course, just a prerequisite.
Then I needed Composer as an installation mechanism. I found it in the App Service Extensions inside the portal. A restart of the App Service and some wait time were required before Composer started working in the Kudu console (I used the PowerShell version of the console).
<pending screenshot of extensions with composer>
The default start folder for the Kudu console doesn't work for the Twig installation because of a permission-denied error. I navigated to just under site instead (I'm unsure if that's the right place to install it; expert recommendations welcome).
By deleting gallery-by-supsystic (using the Kudu console again) I regained access to both the blog and the admin page. If I uploaded it again, it was back to 500, even though the help page says the plugin should be disabled.
Time to post a support question to this plugin developer.
Azure App Service is a great platform for managed hosting of web applications. One of the features of the platform is App Service Authentication using a variety of authentication providers such as Azure Active Directory, Google, and Twitter. It is possible to enable authentication without changing a single line of application code. This is also known as Easy Auth.
If you use Easy Auth with your application, you have access to user details and tokens through the token store (if enabled). After authentication you can hit the /.auth/me endpoint and obtain user name, id token, access token, etc. As I have discussed in a previous blog post, you can use the tokens to access backend web APIs such as Microsoft Graph or your own custom APIs.
If you choose to leverage the provided tokens and login details in your application, you may find that local debugging of your application can be a bit of a hassle. If you run the application locally on your development system, Easy Auth will not be available and you will not have the access tokens, etc. that you may need in your application. In order to debug those features of your application, you will need to deploy to an Azure Web App. An alternative approach is to do the login and authentication workflow in the application code, but then you are no longer leveraging Easy Auth.
In this blog, I will discuss an approach that I use. It is a bit of a hack, but it does allow me to do a lot of debugging cycles locally before deploying, and it speeds up my development. You may find that you can leverage my approach or just use it as inspiration to come up with your own.
I will use an example from my previous post on accessing custom backend APIs from an application using Easy Auth. In that example, I explained how to write and deploy an API and then use Easy Auth to obtain a token to access it from a Web App. In the application code, I used the x-ms-token-aad-access-token request header as authorization to access the backend API. That header is only available if Easy Auth is enabled. The application code was pretty simple, so it did not take a lot of debugging cycles to get it working, but more complex scenarios may take a lot of debugging, where it would be nice to have those header fields available locally.
The approach that I have found useful is to access those header fields through a proxy that will use the Easy Auth header fields when available and otherwise populate them from a local file. Specifically, I create a file wwwroot/.auth/me in my source code repository. To populate this file, I leverage a previously deployed version of the application where Easy Auth is enabled. If you log into a deployed application, you can get all tokens, etc. from the /.auth/me endpoint. I take the content returned from that endpoint and copy it into my local file. If you now run your application locally, you should have access to the same endpoint for tokens, etc., and if you are developing a single page app in something like Angular, this alone may be enough for you to do your debugging. But if you have server code that needs to leverage the request header fields mentioned above, we have to take it a few steps further and implement a simple proxy. I have published a revised version of the application that uses a proxy. Here is how it works:
Let's consider the following function, which calls a backend API to delete a list entry:
public async Task<IActionResult> Delete(int id)
{
string accessToken = Request.Headers["x-ms-token-aad-access-token"];
var client = new HttpClient();
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
var response = await client.DeleteAsync("https://listapi.azurewebsites.net/api/list/" + id.ToString());
return RedirectToAction("Index");
}
What I do is that I access the header field through a proxy:
public async Task<IActionResult> Delete(int id)
{
string accessToken = _easyAuthProxy.Headers["x-ms-token-aad-access-token"];
var client = new HttpClient();
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);
var response = await client.DeleteAsync("https://listapi.azurewebsites.net/api/list/" + id.ToString());
return RedirectToAction("Index");
}
This proxy can now be used to implement logic that will provide the header fields from the local file when that file is present. This can be a very simple class such as:
using System;
using System.Collections;
using System.Collections.Generic;
using System.IO;
using System.Net.Http;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
namespace ListClientMVC.Services
{
public class AuthClaims {
public string typ { get; set; }
public string val { get; set; }
}
public class AuthMe {
public string access_token { get; set; }
public string id_token { get; set; }
public string expires_on { get; set; }
public string refresh_token { get; set; }
public string user_id { get; set; }
public string provider_name { get; set; }
public List<AuthClaims> user_claims { get; set; }
}
public interface IEasyAuthProxy
{
Microsoft.AspNetCore.Http.IHeaderDictionary Headers {get; }
}
public class EasyAuthProxy: IEasyAuthProxy
{
private readonly IHttpContextAccessor _contextAccessor;
private readonly IHostingEnvironment _appEnvironment;
private IHeaderDictionary _privateHeaders = null;
public EasyAuthProxy(IHttpContextAccessor contextAccessor,
IHostingEnvironment appEnvironment)
{
_contextAccessor = contextAccessor;
_appEnvironment = appEnvironment;
string authMeFile = _appEnvironment.ContentRootPath + "/wwwroot/.auth/me";
if (File.Exists(authMeFile)) {
try {
_privateHeaders = new HeaderDictionary();
List<AuthMe> authme = JsonConvert.DeserializeObject<List<AuthMe>>(File.ReadAllText(authMeFile));
_privateHeaders["X-MS-TOKEN-" + authme[0].provider_name.ToUpper() + "-ID-TOKEN"] = authme[0].id_token;
_privateHeaders["X-MS-TOKEN-" + authme[0].provider_name.ToUpper() + "-ACCESS-TOKEN"] = authme[0].access_token;
_privateHeaders["X-MS-TOKEN-" + authme[0].provider_name.ToUpper() + "EXPIRES-ON"] = authme[0].expires_on;
_privateHeaders["X-MS-CLIENT-PRINCIPAL-ID"] = authme[0].user_id;
} catch {
_privateHeaders = null;
}
}
}
public IHeaderDictionary Headers {
get {
return _privateHeaders == null ? _contextAccessor.HttpContext.Request.Headers : _privateHeaders;
}
}
}
}
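For reference, here is a trimmed, illustrative sketch of the JSON that ends up in the local wwwroot/.auth/me file. The field names mirror the AuthMe class above; the values are invented placeholders, and a real file will contain the full tokens returned by your deployed app's /.auth/me endpoint:

[
  {
    "provider_name": "aad",
    "access_token": "<copied access token>",
    "id_token": "<copied id token>",
    "expires_on": "2018-04-01T12:00:00.0000000Z",
    "refresh_token": "<copied refresh token>",
    "user_id": "user@example.com",
    "user_claims": [ { "typ": "name", "val": "Example User" } ]
  }
]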
To make this proxy available in a controller, I use Dependency Injection. In the Startup.cs file, the proxy is added as a service:
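A minimal sketch of that registration, assuming the standard ASP.NET Core ConfigureServices pattern; the service lifetime and the AddHttpContextAccessor call are my assumptions rather than something taken from the original sample:

public void ConfigureServices(IServiceCollection services)
{
    // IHttpContextAccessor is required by EasyAuthProxy; on older ASP.NET Core versions,
    // use services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>() instead.
    services.AddHttpContextAccessor();
    // Resolve the proxy per request so it can read the current HttpContext
    services.AddScoped<IEasyAuthProxy, EasyAuthProxy>();
    services.AddMvc();
}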
And if you would then like to use it in a controller, you can do something like this:
public class ListController : Controller
{
    private IEasyAuthProxy _easyAuthProxy;

    public ListController(IEasyAuthProxy easyproxy) {
        _easyAuthProxy = easyproxy;
    }

    //Rest of the class
}
I keep the proxy class described above handy and add it to many of my .NET Web Apps to make debugging easy. In some cases I make application-specific modifications to it, but in most cases I use it without any changes. When the app is deployed to an Azure Web App, it leverages the header fields provided by Easy Auth; when I debug it locally, I can still test many scenarios.
A quick note on token expiration: the information available from the /.auth/me endpoint contains tokens with expiration times. Consequently, the information you copy to the local file will quickly become stale. Depending on the timeout parameters, you will need to update your local information from time to time.
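If you want an early warning instead of a confusing 401 from the backend, one option is a small helper that checks the saved expires_on value when the local file is loaded. This is not part of the original sample; it is a minimal sketch that assumes expires_on parses as a date/time string:

using System;

// Hypothetical helper (not in the original sample): warns when the locally copied
// tokens have expired, so you know to refresh wwwroot/.auth/me from /.auth/me.
public static class LocalAuthFileChecker
{
    public static void WarnIfExpired(string expiresOn)
    {
        if (DateTimeOffset.TryParse(expiresOn, out var expiry) && expiry < DateTimeOffset.UtcNow)
        {
            Console.WriteLine($"Local .auth/me tokens expired at {expiry:u}; copy fresh content from /.auth/me.");
        }
    }
}

You could, for example, call WarnIfExpired(authme[0].expires_on) from the EasyAuthProxy constructor right after the local file is deserialized.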
As I mentioned in the beginning, this approach is a bit of a hack, but it does allow me to develop faster, so I thought I would share it. If you have a similar or better approach, I would love to hear about it. In general, let me know if you have questions/comments/suggestions.
In this series of articles, Alexander Gonzalez, Microsoft Student Partner, explains how to use TensorFlow on Azure to analyse and detect objects, using works of art as an example. The series is part of his final project for his degree in Computer Engineering.
Nowadays, all major technology companies are committed to innovative projects using technologies such as machine learning and deep learning. Both will be among the most important technologies of 2018, making a difference in the way we currently live and work. Machine learning and deep learning are constantly creating new business models and jobs, as well as more innovative and efficient industries.
Machine learning is around us more than we think. It can be found in mobile phones, cars, homes and in our own jobs, helping us make better decisions and access higher-quality information faster. According to several surveys and analyst firms, these technologies will be present in all new software products by 2020.
From the business point of view, artificial intelligence (AI) will be used as a stepping stone to improve the effectiveness and efficiency of any company's services, as well as the quality delivered to users. The trends indicate that most sectors will see radical changes in the way they use their products; the only real risk of such change is to ignore it completely. In the future, the products we know today will change thanks to AI. Furthermore, it is estimated that in the next three years around 1.2 trillion dollars will change hands thanks to this new field of computing. Consequently, AI is gaining strength and support every year, and it will set companies apart in the coming years.
The objective of this series of posts is to show the possibilities we currently have for carrying out machine learning and computer vision projects. It will be published in four parts:
1. The first part is dedicated to installing and explaining the software we will need for a project of this kind: TensorFlow, CUDA and cuDNN.
2. The second part will cover, step by step, the processes needed to build our dataset (in this case, images of artworks), followed by training. Finally, it will explain the evaluation and how to obtain the relevant graphs for the documentation. [Link to be added]
Image: Fast-RCNN running on an MS Surface webcam with a Python program
Image: SSD-Mobilenet running on an LG mobile
3. The third post will explain another way of recognizing and classifying images (20 artworks) using scikit-learn and Python, without having to use models from TensorFlow, CNTK or other technologies that offer convolutional neural network models. Moreover, we will explain how to set up your own web app with Python. For this part we will need a fairly simple API that collects information about the image captured by our Xamarin mobile application, runs inference with our Python model, and returns the corresponding prediction. With that methodology we can get simple classification without heavy frameworks like TensorFlow or CNTK. [Link to be added]
4. Finally, the last post is dedicated to a comparison between TensorFlow and CNTK: the results obtained, manageability and usability, points of interest, and final conclusions about the research carried out for the development of all the posts. [Link to be added]
1. How to install TensorFlow 1.5 on Windows 10 with CUDA and cuDNN
Prerequisites
• Nvidia GPU (GTX 650 or newer; the GTX 1050 is a good entry-level choice)
• Anaconda with Python 3.6 (or 3.5)
• CUDA Toolkit (version 9.0)
• cuDNN (7.1.1)
If we want results quickly, the first thing to think about is the hardware we will use for our computer vision project, since the demands on GPU and processing power are high. My advice is to use the Data Science Virtual Machine that Azure offers: complete virtual machines preconfigured for data science modelling, development and deployment. Below, I highlight several documents provided by the Azure team that will help us understand the provisioning and pricing of these machines:
First of all, we will have to install the CUDA Toolkit:
• Download version 9.0 here: https://developer.nvidia.com/cuda-downloads
• Version 9.0 is currently the version supported by TensorFlow 1.5
The exe (network) installer type is the lighter option; the exe (local) installer is more complete.
Set your environment variables:
• Go to Start and search for “environment variables”
• Click the Environment Variables button
• Add the following paths:
o C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\extras\CUPTI\libx64
o C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\libnvvp
o C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0\bin
On the Data Science Virtual Machine, CUDA is already installed and you will find this variable already set:
• Variable name: CUDA_PATH_V9_0
• Variable value: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.0
• Extract the cuDNN zip file and place it somewhere (the C:\ directory, for example)
• Add an entry for its bin folder to your PATH environment variable, for example: C:\cuda\bin
Test your TensorFlow install:
1. Open an Anaconda prompt
2. Type “python --version”
3. Type “python”
4. Once the interpreter opens, type the following:
import tensorflow as tf
hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
In the next post, we will cover, step by step, the processes needed to build our dataset (in this case, artwork images), followed by training and, finally, the evaluation and how to obtain the relevant graphs for the documentation. Index of the next post:
1. Labelling pictures
2. Generating training data
3. Creating a label map and configuring training
4. Training
5. Exporting the inference graph
6. Testing and using your newly trained object detection classifier
Kind regards,
Alexander González (@GlezGlez96)
Microsoft Student Partner