
Infrastructure as Code (IaC)


DevOps is transforming organizations. It streamlines many processes that used to be performed manually, and it enables developers (including testers) to work more closely with operations staff (the IT Pros, as many know them) and with users. Organizations that have adopted it, or are in the middle of implementing it, not only move faster and with more control; they also deliver continuous value to end users in short cycles instead of long waits, while increasing quality and security. However, what is delivered continuously is not just software but the solution as a whole: server configuration, networking, database scripts, application source code, and so on. That brings us to the next question:

What is Infrastructure as Code?

We can describe it as managing infrastructure through code, regardless of whether it runs in the cloud, on-premises, or in a hybrid environment. Through scripts we can manage the configuration of switches, routers, firewalls, and other network components, as well as virtual machines, load balancers, connection topologies, and the roles and services configured inside operating systems or platforms, among all the other components that make up the infrastructure.

Why is IaC important?

Just as we keep the source code of an application under version control, the goal of Infrastructure as Code is to version the scripts we use to configure infrastructure. And, just as with an application, we want the release to be simpler and more controlled, avoiding human error and keeping fully transparent end-to-end traceability: from the moment a script is written, through every edit, to its deployment in each of the different environments.

What steps should I follow to get started with IaC?

The first step is to keep the scripts we use to configure infrastructure under version control. Visual Studio Team Services (VSTS) hosts code repositories in both Git and TFVC. If your scripts are written in PowerShell, it is worth noting that there are extensions for both Visual Studio Code and Visual Studio that let you work on those scripts directly from the IDE; if your scripts target another platform, you can use Team Explorer Everywhere and/or any Git client to connect to the VSTS repository.
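As a hedged illustration (the configuration and node names below are assumptions, not part of this post), this is the kind of PowerShell Desired State Configuration script that typically lives in such a repository:

```powershell
# A minimal DSC sketch: declares that the IIS role must be present on the node.
# "WebServerConfig" and the "localhost" node are illustrative assumptions.
Configuration WebServerConfig {
    Node "localhost" {
        WindowsFeature IIS {
            Ensure = "Present"
            Name   = "Web-Server"   # the IIS web server role
        }
    }
}

# Compiling the configuration emits a .mof file that can then be applied
# with Start-DscConfiguration.
WebServerConfig -OutputPath .\WebServerConfig
```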


Note: it is highly recommended to link work items to the code you push to the platform. The image below shows an example:

[Image: commit linked to a work item]

The second step is to automate the deployment, that is, the execution of these scripts. For this we can use VSTS Release Management: we can create one or more release definitions, each containing one or more environments, and in each environment we define the tasks required to run the scripts, for example copying files, decrypting a file (OpenSSL), running a Bash or shell script, NPM packages, NuGet, tasks over Windows Remote Management, Docker, Azure App Fabric, and so on. Additionally, each environment can have its own configuration as well as pre-deployment approvals; that is, before a script can run in production, one or more people must approve the step, which makes it possible to verify that the script ran successfully in the previous environments. An example is shown in the image below:

[Image: release definition with multiple environments and approvals]

I hope this is useful in your DevOps processes!

Thank you for your attention; I am happy to help with any questions.


Investigating issues with Visual Studio Team Services – 02/23 – Investigating


Update: Thursday, 23 February 2017 18:39 UTC

We continue to investigate this issue and can see that 16 VS Team Services accounts are affected. So far, the common extension that may be causing the impact is 'On-Premise Release Toolkit', and the impacted accounts may not have an updated version of this extension.

  • Workaround: If you are using this extension, please uninstall and re-install it; that should unblock you.
  • Next Update: Before 20:00 UTC


Sincerely,
Sri Harsha


Initial Update: Thursday, 23 February 2017 18:22 UTC

We are actively investigating issues with Hosted Build in Visual Studio Team Services. A subset of customers hosted in the South Central US region may receive the error "An item with the same key has already been added." while performing build and release activities. We are identifying the extension that may be causing these errors and will update this blog post within the next 20 minutes with more information.

  • Next Update: Before 18:45 UTC


Sincerely,
Krishna

Guide to get started with Visual Studio Web Load Testing and Automation


Hello everyone! We are the MS Testing Services team, and we are restarting the TestingSpot blog at msdn.com. It has been a few years of inactivity, but our first post of the year is a guide to web load testing with Visual Studio. We hope you like it and keep coming back for more!

This article consolidates, in one concise place, the information anyone needs to start a web load test project using Visual Studio. You may have noticed that MSDN and visualstudio.com offer plenty of information about how things work: cloud testing, on-premises requirements, agents, controllers, webtests, plugins, etc.

However, that amount of information can easily be overwhelming and is not in a single, concise location. This article can be used as a starting guide for newcomers to understand the basic setup and to find the best route to the desired results.

1. Software Requirements. Versions and licenses

Visual Studio

The first thing you will need is Visual Studio. The Web Load & Performance Testing features are only available in Visual Studio Enterprise 2015 and Visual Studio Ultimate 2013.

Other types of tests, such as Coded UI, manual tests, and unit testing, are also available in Visual Studio Test Professional 2015, but those are all functional tests. We will focus on performance web tests for now, and these are NOT available in VS Test Professional. You can find a feature comparison among VS versions here: Compare Visual Studio 2015 Offerings.

Virtual User Licenses

On premises. No virtual user licenses are needed. You can execute load tests with any number of virtual users (as far as your hardware resources allow) as long as you have any of the following:

– Visual Studio Enterprise 2015 with MSDN

– Visual Studio Enterprise 2015 annual and monthly subscribers (check Visual Studio 2015 Licensing White Paper)

– Visual Studio Ultimate 2013 with MSDN (check Visual Studio 2013 Licensing White Paper)

* In Visual Studio Enterprise (or Ultimate) trial version, the virtual user count is limited to 250.

Cloud-Based testing. No extra software installation needed. You do need:

– A Visual Studio Online (VSO) account (get one here).

– That gets you 20,000 virtual user minutes every month to load test at no extra charge (check how virtual user minutes work and test duration limitations here: VSO Virtual users minutes).

Controller and Agents Software

On premises. If you are planning to load test an application in an on-premises environment, you will need a controller and at least one agent machine (check the next section for hardware requirements). You can find the installable software for these two components, based on your Visual Studio version, here:

– Download Agents for Microsoft Visual Studio 2013 Update 5. This is the correct version of the agents and controller software for load testing. Don't use the Test Agents for VS 2015 for load testing: they are meant for continuous tests in build scenarios, and they have neither a test controller nor a configuration tool.

 Cloud-Based Load testing. No additional software installation needed.

 

2. Infrastructure/Hardware requirements

Visual Studio (IDE) System Requirements

Visual Studio is used to record webtests, kick off load tests, and visualize results. It is not recommended to install it on the controller machine or the test agents. The machine hosting Visual Studio Enterprise 2015 or Visual Studio Ultimate 2013 must meet the standard system requirements published for each edition.

Controller and Agents Hardware Requirements

On premises. If you are planning a load test project in an on-premises environment, you will need a controller and at least one test agent. To set this up you will need the following:

  • OS/Framework requirements
    • Controller: Windows 7 SP1 / 8 / 8.1 or Windows Server 2008 R2 SP1 / 2012 / 2012 R2
    • Test Agents: Windows XP SP3 / 7 SP1 / 8 / 8.1 or Windows Server 2008 R2 SP1 / 2012 / 2012 R2.
    • Both: .NET Framework 4.5
    • More information here: System requirements for Controller and Agents

You can also execute local test runs (without a controller or test agents) directly from your Visual Studio host; this is normally done for debugging purposes, since you are limited by resources.

  • Hardware sizing

The table below shows the recommended hardware requirements. You can use this table to plan how many test agents and what size of controller suit your load testing needs:

| Configuration | Component | CPU | HD | Memory (RAM) |
| --- | --- | --- | --- | --- |
| < 500 virtual users | Test agent | 2.6 GHz | 10 GB | 2 GB |
| < 1000 virtual users | Test agent | Dual processor, 2.6 GHz | 10 GB | 2 GB |
| N x 1000 virtual users | Test agent | Scale out to N agents, each with a dual 2.6 GHz processor | 10 GB | 2 GB |
| < 30 computers in the test environment (includes agents and servers under test) | Test controller | 2.6 GHz | Read the Load Test Repository section | Read the Load Test Repository section |
| N x 30 computers in the test environment (includes agents and servers under test) | Test controller | N 2.6 GHz processors | Read the Load Test Repository section | Read the Load Test Repository section |

Tips:

  • A rule of thumb I use is that a test agent in a VM can hold from 500 to 1000 webtest virtual users in a typical load test. If you have many extraction plugins or external dependencies (CSV files, database connections), that number will decrease, since those take more processing.
  • Test agents save the load test results in a temp folder before sending everything to the controller. You may want to monitor the free space on the HD and periodically clean the temporary folders. They are usually under this path: \\<TESTAGENTVM>\c$\Users\<USERPROFILE>\AppData\Local\VSEQT\QTAgent\<TESTRESULTSESSION>\AGENT01
  • Keep in mind other server resources aside from CPU and RAM. It's always good practice to add the test agents to your scenario for monitoring.
    • In one of my test engagements I assumed that 3 BIG test agent VMs would be equivalent to 9 SMALL test agent VMs because the CPU and RAM specs matched up. However, during the load test I found that this caused a bottleneck on the test agents: my virtual users were queued up waiting for the agents to free threads.
  • Load Test Repository

Load test results may be stored in the Load Test Results Repository, which is a SQL database. The repository database is created by setup for controllers (or automatically on the first local run of a load test); the database is created automatically whenever the load test schema is not present.

SQL Server 2012 Express LocalDB, which is installed with Visual Studio, is the default database server for load tests. If SQL Server Express is detected with an existing load test database, Visual Studio Enterprise will try to connect to it and use it.

However, for heavier database needs you should consider upgrading to the full SQL Server product for further scaling potential; SQL Express is limited to a maximum of 4 GB of disk space. If you will run many load tests over a long period of time, consider configuring the load test results store to use an instance of the full SQL Server product, if available. More on the Load Test results repository.

Tip: In my experience, a typical one-hour load test can take from 100 MB to 500 MB of storage, depending on the number of webtests, performance counters, VMs you are monitoring, etc. Plan storage needs accordingly, and take into consideration that this information is temporarily stored on the test agents. When the test ends, the results are collected in the controller's RAM before finally being placed in the SQL repository, so plan RAM needs accordingly as well.

  • Alternative: Testing using VSO or TFS instead of Controller

For test scenarios (still on premises) using Visual Studio Online (VSO) or Team Foundation Server (TFS) 2015, you won't need a test controller, because Agents for Microsoft Visual Studio 2015 handle orchestration by communicating with VSO or TFS 2015 (for example, when you run automated tests in your build and release workflows).

Cloud-Based Load testing

You can use cloud-based load testing to avoid having to use your own resources and machines. In this case you don't need a controller or test agents: the cloud service provides virtual machines that generate the load needed to test your target application or website.

All you need is a Visual Studio Online (VSO) account. Check ‘Getting started with Load Tests’ further down in this document.

 

3. Getting started with Webtests

Basic components

At the most basic level you will need to do the following steps to get started:

  • Create a web performance and load test project
  • Record a web performance test.
  • Create a load test.
  • Run and analyze your load test

 Follow this article to learn how to create the most basic components.

Validation

Validation rules help verify that a Web application is working correctly by validating the existence of text, tags, or attributes on the page returned by a Web request. Validation rules can also verify the time that it takes a request to finish, and the existence of form fields and their values.

Check this article to find out how to add predefined Validation rules to your webtest. You can also create your own Validation rules, check Customization section further down on this page.

Parameterization

• Constant parameters. You can create a context parameter for any value in a request of a webtest. The constant parameters are displayed at the bottom of your webtest. A simple case where you would want to do this is for host or website URLs. Read this article for more information.

• Dynamic parameters. Web sites and applications use dynamic values: values that are generated anew every time a user runs the application. A dynamic parameter can cause your web performance test playback to fail because it has a different value every time the test is run, so you cannot simply play back recorded values (e.g. a session ID). To solve this you have a couple of options (follow the links to the articles):

– Auto-detect dynamic values and promote them to webtest parameters. Use the VS auto-detect feature to identify and parameterize dynamic values (it may not catch all values).

 

– Add an Extraction Rule. Find dynamic values by using ‘quick find’ on recorded results then add an extraction rule:

  • Extraction rules extract data from the responses to web requests and store the results in the test context as name/value pairs. Extraction rules are predefined in Visual Studio (you can also create your own; check the Customization section). You can use them to extract form fields, text, attributes, headers, regular expressions, and hidden fields. Check the full article here.

Data Binding

You can bind parameters (e.g. form post parameters) in the requests of your webtest to data sources to provide different values to the same test. This makes your load test more robust and realistic. You have several options for the data source type:

– SQL database (several versions permitted, check FAQ)

– CSV text files

– XML files (or SOAP XMLs)

 Check this article for a step by step guide.

 

4. Getting started with Load Tests

Load Test setup

Load tests are created in a load test project. If you followed the article at the start of Section 3 of this page, you should already have a simple load test. Some of the load test settings and scenarios can be set up in the initial wizard or directly on the load test afterwards. Below are descriptions of the main sections:

  • Scenarios

Load tests contain scenarios, which contain Web performance tests. Scenarios are important because they give you flexibility in configuring test characteristics that allow for simulation of complex, realistic workloads. You can have more than one scenario per load test.

Use the load pattern, test mix model, and test mix to specify which webtests to use, the manner in which load is applied to your application, and the pacing.

You can also use the browser mix and network mix to make your workload more realistic. Check the MSDN article on editing load test scenarios.

  • Counter Sets, Counters and Mappings

Counter sets are useful when you analyze performance counter data. They are organized by technology, such as Application, .NET Application, IIS, or SQL. You can create counter sets in the initial New Load Test Wizard or later in the load test file itself.

There is a set of predefined counter sets or you can create your own. You can also add counters to the default Counter Sets or to your own. The counters are added from Perfmon.

In the Run Settings section of a load test you can specify counter set mappings, where you add computers (by DNS name or IP) and map them to counter sets. Visual Studio will collect data for every counter in a set on each computer it is mapped to.

 To learn more about Counter Sets, thresholds, sampling and other tasks, check this article on MSDN.

  • Run Settings

Run settings are properties that organize the manner in which a load test runs. You can have more than one run setting in a load test, but only one can be active at a time; the others provide a quick way to switch to an alternative configuration.

Run settings specify things like test duration, logging, SQL Server data collection, warm-up duration, sampling rate, and validation level, among others. You can also add counter set mappings to map computers to counter sets.

Check this article for a full description of the available tasks.

Running Load Tests On Premises

  • Test Settings file

Make sure your load test is configured to run on premises by checking the General section of the active .testsettings file in your solution. Test settings are not the same as run settings.

Other items to configure in this window are roles and timeouts. In the Roles section, the test execution method should be set to 'Remote execution' if you want to use your controller and test agents. Check these articles to learn more about test settings: Managing Test Controllers and Test Agents with Visual Studio, Edit Test Settings.

  • Controller-Agents connectivity

Also make sure the controller can communicate with the test agents by checking Load Test > Manage Test Controllers. That dialog shows whether the controller has any test agents connected to it. More information here: Manage Test Controllers.

  • Running the Load Test

From that point forward you should be ready to run a load test. During or after the run you have access to graphs and tables with throughput and response time information, performance counters for the computers you configured, and details on virtual user activity. Check this tutorial to learn more: Run and analyze your load test.

Running Load Tests using the Cloud Service

There are a couple of options for running a Load test with the Cloud Service:

  • You can run a Load test from Visual Studio (Enterprise 2015 or Ultimate 2013) and Visual Studio Team Services.
    • As mentioned above you will need a VS Team Services account.
    • Your solution needs to be connected to TFS.
    • In your solution; if you open your active .testsettings file, on the General section select ‘Run tests using Visual Studio Online’:

[Image: .testsettings General section]

  • From there you can specify the Location you want your load to come from on the Load Test file and run the Load test. Check the full tutorial here, specifically the ‘Run and analyze your load test’ section.
  • Your second alternative is to run a basic (and much more limited) load test directly in Visual Studio Team Services. The upside is that you don't need a load test project (or Visual Studio). You do need a Visual Studio Enterprise monthly or annual subscription or an MSDN subscription.
    • To do this you need to go to the Load Test Hub on your Visual Studio Team Services account page.
    • Setup is simple: just specify the URL, load location, and some simple test settings. Check the article here.

 

5. Customization

Visual Studio offers extensibility options to augment the built-in functionality. Follow the articles below to learn more about the different options:

Custom validation rules. You can create your own validation rules by deriving a rule class from a validation rule class. Validation rules derive from the ValidationRule base class.
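For example, here is a minimal sketch of such a rule; the class name and its RequiredText property are illustrative assumptions, not from this guide:

```csharp
using Microsoft.VisualStudio.TestTools.WebTesting;

// A hedged sketch of a custom validation rule that checks the response body
// for a required string. "RequiredTextRule" is an illustrative name.
public class RequiredTextRule : ValidationRule
{
    // Shown in the rule's property grid in the webtest editor.
    public string RequiredText { get; set; }

    public override void Validate(object sender, ValidationEventArgs e)
    {
        bool found = e.Response.BodyString != null
                     && e.Response.BodyString.Contains(RequiredText);

        e.IsValid = found;
        e.Message = found
            ? "Required text was found."
            : string.Format("Required text '{0}' was not found in the response.", RequiredText);
    }
}
```

Once the assembly is built and referenced from the test project, the rule appears alongside the predefined rules in the Add Validation Rule dialog.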

Custom extraction rules. You can create your own extraction rules by deriving your rules from an extraction rule class. Extraction rules derive from the ExtractionRule base class.
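Similarly, a minimal sketch of an extraction rule that pulls a response header into the test context; the class name and its HeaderName property are illustrative assumptions:

```csharp
using Microsoft.VisualStudio.TestTools.WebTesting;

// A hedged sketch of a custom extraction rule; "HeaderExtractionRule" and
// its HeaderName property are illustrative names.
public class HeaderExtractionRule : ExtractionRule
{
    // The HTTP response header to read.
    public string HeaderName { get; set; }

    public override void Extract(object sender, ExtractionEventArgs e)
    {
        string value = e.Response.Headers[HeaderName];

        if (value != null)
        {
            // Store the value under the configured context parameter name.
            e.WebTest.Context.Add(this.ContextParameterName, value);
            e.Success = true;
            return;
        }

        e.Success = false;
        e.Message = string.Format("Header '{0}' was not found in the response.", HeaderName);
    }
}
```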

Custom plug-ins. You can write code in a custom plug-in and attach it to a load test or a web performance test. Use the load test API and the web performance test API to create custom plug-ins that extend the built-in functionality.
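As a minimal sketch of a load test plug-in (the class name and the one-hour threshold are illustrative assumptions), using the Heartbeat event the load test API raises roughly once per second:

```csharp
using System;
using Microsoft.VisualStudio.TestTools.LoadTesting;

// A hedged sketch of a plug-in that aborts a run after a fixed wall-clock time.
public class AbortAfterOneHourPlugin : ILoadTestPlugin
{
    private LoadTest _loadTest;

    public void Initialize(LoadTest loadTest)
    {
        _loadTest = loadTest;

        // Heartbeat fires periodically while the load test runs.
        _loadTest.Heartbeat += OnHeartbeat;
    }

    private void OnHeartbeat(object sender, HeartbeatEventArgs e)
    {
        // Abort the run once it has been going for more than an hour.
        if (e.ElapsedSeconds >= 3600)
        {
            _loadTest.Abort();
        }
    }
}
```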

FAQ

 

Please feel free to ask questions in the Comments section of this blog post. I can try to answer them and add them to this FAQ.

 

How many vusers can I run per agent?

As mentioned above, the recommendation is up to 500 for a 2.6 GHz processor and up to 1000 for a dual 2.6 GHz processor. This can change depending on the length of the webtests, the payloads of the application responses, and the use of custom plug-ins.

What version of SQL do I need?

SQL Server Express or SQL Server 2012 Express LocalDB for local test runs and debugging. Full SQL Server for heavier needs, such as an enterprise-level application performance test project; for example, the Standard edition of SQL Server 2014, 2012, or 2008 R2.

Can load tests use other test types in their test mix besides web performance tests?

Yes, you can include unit tests and coded UI tests.

Can virtual users simulate pausing between test steps?

Yes, you can specify think times to simulate the time spent by a user on a web page.

Can I mix versions of TFS, Microsoft Test Manager, the test controller, and test agent?

Yes, here are the compatible and supported combinations:

 

| TFS | Microsoft Test Manager, with Lab Center | Controller | Agent |
| --- | --- | --- | --- |
| 2015: upgrade from 2013 | 2013 | 2013 | 2013 |
| 2015: new install | 2013 | 2013 | 2013 |
| 2015: upgrade from 2015 or new install | 2015 | 2013 | 2013 |
| 2013 | 2015 | 2013 | 2013 |

 https://msdn.microsoft.com/en-us/library/dd648127.aspx#Anchor_2

What firewall permissions do I need?

The default port used by the test controller is 6901, and the test agent's default port is 6910. The client (the Visual Studio machine) uses a random port by default, which is used to receive the test results from the test controller. For all incoming connections, the test controller authenticates the calling party and verifies that it belongs to a specific security group. This is true regardless of the test rig configuration you are using, on premises or cloud-based.


 

You should create firewall exceptions for the ports mentioned; you can also change the ports. Check this MSDN article for more information: Configuring Ports for Test Controllers and Test Agents
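As a sketch (run in an elevated command prompt; the rule names are illustrative, and the ports assume the defaults above), the exceptions can be created with netsh:

```
netsh advfirewall firewall add rule name="VS Test Controller (TCP 6901)" dir=in action=allow protocol=TCP localport=6901
netsh advfirewall firewall add rule name="VS Test Agent (TCP 6910)" dir=in action=allow protocol=TCP localport=6910
```

Run the first command on the controller machine and the second on each test agent.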

What databases can I use as a data source?

You can use the following:

  • Microsoft SQL Azure.
  • Any version of Microsoft SQL Server 2005 or later.
  • Microsoft SQL Server database file (including SQL Express).
  • Microsoft ODBC.
  • Microsoft Access file using the .NET Framework provider for OLE DB.
  • Oracle 7.3, 8i, 9i, or 10g.

What is a good place to learn more?

Aside from MSDN you can go to Channel 9 videos for a more visual learning experience:

https://blogs.msdn.microsoft.com/visualstudioalm/2015/12/10/visual-studio-2015-test-tools-getting-started-content/

Visual Studio (VS) 2015.3 does not detect any files added under the folder named ‘Release’ on Team Projects with Team Foundation Version Control (TFVC) on VSTS


We came across an interesting case where a user was having trouble adding new Visual Studio projects to TF version control.
After adding a new file, it was not being identified as a pending add (the plus sign we expected to see next to new adds in Solution Explorer before check-in was missing).
This was happening in a specific folder named 'Release' in source control.

We were able to reproduce this locally and found that Visual Studio does not like a folder named "Release" in source control.

There was a design change implemented in VS 2015 Update 3 to start honoring .tfignore and default exclusion rules for file adds inside VS. This was requested by numerous customers who are used to other source control systems (e.g. Git's .gitignore files).
The default exclusion rules include common generated build folders (bin, obj, Debug, Release, etc.). This can cause problems for customers who have used an excluded name like Release in their folder hierarchy.

We had to do the following to get this to work:

Add a .tfignore file to source control as a peer of the Release folder. If the folder structure is $/Project12192016/Release/, then add the .tfignore file to $/Project12192016.

• To automatically generate a .tfignore file,

1. In the Pending Changes page, in the Excluded Changes section, select the “Detected” link.
The Promote Candidate Changes dialog box appears.
2. Select a file, open its context menu, and choose Ignore this local item, Ignore by extension, Ignore by file name, or Ignore by folder.
Choose OK or Cancel to close the Promote Candidate Changes dialog box.
3. A .tfignore file appears in the Included Changes section of the Pending Changes page. You can open this file and modify it to meet your needs.

The .tfignore file is automatically added as an included pending change, so the rules you have created will apply to each team member who gets the file.
• The content of the file should be:

# Add an inclusion rule to negate the Release exclusion rule. The design of
# .tfignore requires that inclusion rules apply all the way to the last path
# part, so a wildcard is necessary.
!Release*
# Since the wildcard will cause other exclusion rules to stop working below
# this path, add back any exclusion rules that are needed to ignore things
# like build output.
bin
obj

Hope this helps!

Content: Shruti Sharanappa
Review: Romit Gulati

Monthly Sentiment Survey – February

PIX 1702.23.002 – visualizers, better warnings UI, memory capture type tracking, and MSAA sample inspection


Today we released PIX 1702.23.002 beta.  New in this release:

  • Rendertarget visualizers
  • Improved Warnings user interface
  • Memory captures now show allocated types
  • On some hardware, the Pipeline view can now inspect individual sample values from MSAA rendertargets and depth buffers
  • Shader debugging bugfixes
  • Improved performance on captures that use tiled resources

 

Rendertarget visualizers are available in the Pipeline view when inspecting the contents of a rendertarget.  The default Image visualizer just shows the image as normal:

[Image: Image visualizer]

The Wireframe visualizer highlights in wireframe whatever geometry was rendered by the currently selected draw call. Visible pixels are shaded green, while culled pixels (e.g. backfacing triangles or pixels that failed the depth test) are red:

[Image: Wireframe visualizer]

The Overdraw visualizer colors the scene according to how many times each pixel was shaded and written to the framebuffer after passing the depth test. This is useful for understanding how well things like front-to-back sorting or a z-prepass are working to reduce overdraw:

[Image: Overdraw visualizer]

The Depth Complexity visualizer is similar to Overdraw, but disables the depth test so the results indicate only how many triangles overlapped each pixel, regardless of sorting or z prepass:

[Image: Depth Complexity visualizer]

 

The Warnings user interface is now a separate view that can be resized and rearranged to suit your needs.  It now provides hyperlinks to the events which caused each warning:

[Image: new Warnings view]

 

Memory captures have a new view which identifies the types associated with each allocation.  Details of the type allocated as well as a full layout of the structure are provided.  This data makes use of the __declspec(allocator) specifier that was added in VS2015.  More information is available here:

https://blogs.msdn.microsoft.com/vcblog/2016/05/25/tracking-custom-memory-allocations-with-visual-studio-15-preview-2/

This display does not yet support identifying types allocated by custom heaps.

[Image: memory capture with allocated types]

Using Oozie SLA on HDInsight clusters


Introduction

Often we have jobs running on our HDInsight clusters that have tight timeline requirements: how long a job takes to start, how long it runs, the maximum time by which it should complete, and so on. Oozie lets us define these SLA requirements in our workflows and coordinators to ease monitoring of such metrics. Oozie SLA monitoring actively checks the state of these jobs and notifies you when an SLA is met or missed.

The three metrics tracked by Oozie are:

  • Start Time
  • End Time
  • Job Duration

For more details on Oozie SLA, please refer to the Oozie documentation.

Configuring Oozie SLA

Oozie SLA monitoring requires configuring Oozie JMS using ActiveMQ to consume the notifications published by Oozie and trigger follow-up actions such as sending emails. It also requires changes in oozie-site.xml so that Oozie publishes these notifications. All of these steps can be performed manually, as detailed in the sections below, or via an automation script that provides a simple one-command configuration of Oozie SLA. Both options are explained below.

Automated Installation

If you are configuring Oozie SLA on an already-running cluster, you can do so either from the Azure portal, as described in the next section, or by logging into the headnode and running the automation script.

Script Action based installation from the Azure portal

You can use script actions to install Oozie SLA from the Azure portal, either during cluster creation or after the cluster is created.

During Cluster Creation

  1. Start creating a cluster as described at Create Hadoop clusters in HDInsight.

  2. Under Optional Configuration, on the Script Actions blade, click "add script action" and provide the details about the script action, as shown below:

| Property | Value |
| --- | --- |
| Name | Oozie SLA Installation |
| Script URI | https://ooziesla.blob.core.windows.net/ooziesla/oozie_sla_config.sh |
| Head/Worker | Check only Headnode |
| Parameters | <CLUSTER_ADMIN_USERNAME> <CLUSTER_ADMIN_PASSWORD> <CLUSTER_NAME> (e.g. admin DummyPassword oozieslasample) |

  3. Click Save to save the configuration and continue with cluster creation.

On a Running Cluster

If you are configuring Oozie SLA on an already-running cluster, use the steps described in Applying a script action on a running cluster, with the same property values as defined above.

Installation from within the cluster [Headnode]

The automation script is also hosted on GitHub and can be used to configure Oozie SLA from the cluster's headnode: download the script to the headnode and run it there.

Manual Installation

If you prefer to perform these steps manually for better control, they are detailed below.

Steps to Configure JMS using ActiveMQ

  1. Create a directory /opt/ActiveMQ:

     mkdir /opt/ActiveMQ

  2. Download ActiveMQ from http://activemq.apache.org/activemq-5143-release.html (pick the build for your OS) and extract it into the /opt/ActiveMQ directory:

     sudo tar -xvf apache-activemq-5.14.3-bin.tar.gz

  3. Give the directory appropriate permissions:

     chmod 775 /opt/ActiveMQ
     chown root:root /opt/ActiveMQ

  4. Go to the bin directory and start the daemon as the root user:

     cd /opt/ActiveMQ/apache-activemq-5.14.3/bin/
     sudo ./activemq start

Oozie Config Changes
Log in to Ambari and make the following changes to the Oozie config:

  1. Add the oozie.services.ext property in oozie-site.xml to include the following services:

     org.apache.oozie.service.JMSTopicService,
     org.apache.oozie.service.EventHandlerService,
     org.apache.oozie.sla.service.SLAService

     Your modified ext property should look similar to this:

     org.apache.oozie.service.JMSAccessorService,org.apache.oozie.service.PartitionDependencyManagerService,org.apache.oozie.service.HCatAccessorService,org.apache.oozie.service.ZKLocksService,org.apache.oozie.service.ZKXLogStreamingService,org.apache.oozie.service.ZKJobsConcurrencyService,org.apache.oozie.service.ZKUUIDService,org.apache.oozie.service.JMSTopicService,org.apache.oozie.service.EventHandlerService,org.apache.oozie.sla.service.SLAService

     Add the below properties in Custom oozie-site:

  2. Add the event handlers property:

     <name>oozie.service.EventHandlerService.event.listeners</name>
     <value>org.apache.oozie.jms.JMSJobEventListener,org.apache.oozie.sla.listener.SLAJobEventListener,org.apache.oozie.jms.JMSSLAEventListener,org.apache.oozie.sla.listener.SLAEmailEventListener</value>

  3. Set Oozie scheduler threads to 15 [optional]:

     <name>oozie.service.SchedulerService.threads</name>
     <value>15</value>

  4. Add the JMS properties:

     <name>oozie.jms.producer.connection.properties</name>
     <value>default=java.naming.factory.initial#org.apache.activemq.jndi.ActiveMQInitialContextFactory;java.naming.provider.url#tcp://<ActiveMQ server>:61616</value>

  5. Add the JMS topic prefix:

     <name>oozie.service.JMSTopicService.topic.prefix</name>
     <value></value> (empty value; this can be used to append a prefix to the topic in oozie.service.JMSTopicService.topic.name, e.g. "oozie.")

  6. Save all the settings and restart Oozie.

Sample workflow with SLA monitoring enabled

Below is a sample workflow that shows SLA monitoring in action.
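The following is a minimal sketch of such a workflow; the shell action, the times, and the alert address are illustrative assumptions, not the original sample:

```xml
<workflow-app name="sla-sample-wf" xmlns="uri:oozie:workflow:0.5"
              xmlns:sla="uri:oozie:sla:0.2">
    <start to="sample-action"/>
    <action name="sample-action">
        <shell xmlns="uri:oozie:shell-action:0.3">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <exec>echo</exec>
            <argument>hello</argument>
        </shell>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Action failed</message>
    </kill>
    <end name="end"/>
    <sla:info>
        <!-- Nominal time anchors the should-start/should-end offsets. -->
        <sla:nominal-time>${nominalTime}</sla:nominal-time>
        <sla:should-start>${10 * MINUTES}</sla:should-start>
        <sla:should-end>${30 * MINUTES}</sla:should-end>
        <sla:max-duration>${30 * MINUTES}</sla:max-duration>
        <sla:alert-events>start_miss,end_miss,duration_miss</sla:alert-events>
        <sla:alert-contact>admin@example.com</sla:alert-contact>
    </sla:info>
</workflow-app>
```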

Once this workflow is run, on an SLA miss you will get an email similar to the following (if email notification is configured):

[Image: SLA-miss notification email]

Further if you look at your Oozie UI, you will see a new tab for SLA.

[Image: Oozie web console with the new SLA tab]

Here if you search for your job, a result similar to this will give you its SLA status

[Image: SLA status search result]

PS: Feel free to drop in your questions and provide any feedback in the comments section.

SQL Server Migration Assistant (SSMA) v7.3 is now available


 

Overview

SQL Server Migration Assistant (SSMA) for Oracle, MySQL, SAP ASE (formerly SAP Sybase ASE), DB2, and Access lets users convert database schemas to Microsoft SQL Server schemas, upload the schema, and migrate data to the target SQL Server (see below for supported versions).

What is new?

  • Improved quality and conversion metrics with targeted fixes based on customer feedback.
  • SSMA extensibility framework exposed via:
    • Export functionality to a SQL Server Data Tools (SSDT) project
      • You can now export schema scripts from SSMA as an SSDT project and use that to make additional schema changes and deploy your database.
    • Libraries that can be consumed by SSMA for performing custom conversions
      • You can now construct code that can handle custom syntax conversions and conversions that weren’t previously handled by SSMA
        • Instructions on how to construct a custom converter can be found here.
        • Sample project for conversion can be downloaded here.

For more information see this post.

 

Ajay Jagannathan (@ajaymsft)

Principal Program Manager


Application Insights – Advisory 02/24

Starting from 2/24/2017 4:00 AM UTC, customers who send data at more than 8K events per second will experience availability data throttling with the Application Insights service. Our DevOps team is actively looking into this issue and will update this thread once we have made optimizations at our end.

We apologize for any inconvenience.

-Praveen

[Sample Of Feb. 24] How to extract text from PDF in Universal Windows Platform apps


Sample : https://code.msdn.microsoft.com/How-to-extract-text-from-eef098e5

This sample demonstrates how to extract text from PDF in Universal Windows Platform apps.


You can find more code samples that demonstrate the most typical programming scenarios by using the Microsoft All-In-One Code Framework Sample Browser or the Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If this is the first time you have heard about the Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.

Application Insights is changing its pricing plans. How can you save money without losing diagnostic data?


Application Insights, the service for diagnostics and monitoring of web applications, has been intensively improved over recent months. At the end of the year, a new pricing model was announced that takes effect in mid-February. In this article I describe what this means for developers.

The Application Insights pricing model

The previous pricing model was based on the "data point" unit, which most often represented a telemetry item transferred to Microsoft Azure. Because the data size of telemetry items varied significantly by type, the new pricing plan works exclusively with the volume of transferred data. Charges for the service are calculated on an hourly basis, assuming 744 hours per month.


The Basic plan

The good news is that Application Insights can still be used for free. Under the Basic plan, however, you can transfer at most 32 MB per day (roughly 1 GB per month). Beyond this limit you are charged EUR 1.94 / GB. Note, therefore, that the Basic plan is NOT necessarily always free.

Until March 1, 2017, continuous export of data and the connector to the OMS service can be used completely free of charge under the Basic plan.

The Enterprise plan

The Enterprise plan carries a flat fee of EUR 12.65 per month per node. This fee includes an allowance of 200 MB of transferred data per day. Beyond the limit you are charged EUR 1.94 / GB, the same as in the Basic plan.

In addition, the Enterprise plan offers multi-step web tests, which are not available at all in the Basic plan. The price for one multi-step web test is fixed at EUR 8.43. At this price, multi-step tests really only make sense for defining complex scenarios or for replacing a larger bundle of standard ping tests with a single multi-step web test.

Avoiding the paid mode

If you do not want Application Insights to start charging automatically for additional data once the limits are exceeded, enable the "Collect only session data" option (the default setting). Otherwise, you need to enable "Pay for more automatically".


How to avoid unnecessary charges

Although the restrictions look harsh at first glance, the service can actually be configured so that you are not billed for unnecessary data. The telemetry filtering and preprocessing features, available in Application Insights for several months now, serve this purpose.

An effective telemetry filtering method is sampling, which looks for data of the same character and, based on your settings, discards duplicate data entirely. For developers this causes no informational distortion, because the Azure portal still provides an undistorted view of the aggregated data. Sampling can be applied on the Azure side, in application code on the server side, and even in JavaScript.

Ingestion sampling (on the Azure side)

The ingestion sampling feature is available in the Quota + pricing section of the selected Application Insights service in Azure. Activating the feature effectively says what percentage of the data received from the SDK should be kept. Data is still sent from the application to the Application Insights API, but based on the ingestion sampling setting, part of it is not processed. A big advantage is the ability to make an immediate change without redeploying the application.


Adaptive / fixed-rate sampling (on the application side)

Other sampling types let you specify more precisely which telemetry should be excluded from collection. These settings are made directly in the web application, either via ApplicationInsights.config or at the code level. I can refer you to the rich documentation.
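As a hedged sketch (assuming the .NET server SDK with the server telemetry channel package; the 25% rate and class name are illustrative), fixed-rate sampling can be switched on in code like this:

```csharp
using Microsoft.ApplicationInsights.Extensibility;

public static class SamplingSetup
{
    public static void Configure()
    {
        // Builds the telemetry processor chain for the active configuration.
        var builder = TelemetryConfiguration.Active.TelemetryProcessorChainBuilder;

        // Keep roughly 25% of telemetry; the portal compensates for the
        // sampling rate when aggregating, so charts stay undistorted.
        builder.UseSampling(25.0);
        builder.Build();
    }
}
```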

Preprocessing

While with sampling we give up a certain amount of duplicate (or mostly duplicate) data, with preprocessing we consciously choose telemetry that we do not want to send to Azure. Practical examples include:

  • dependencies that completed within 1 ms and succeeded
  • 404 errors for pages we know do not exist
  • 401 errors from attempts by an unauthenticated user to enter a restricted section
  • exceptions we know about and do not want to investigate further

Preprocessing lets you inspect the properties of each telemetry item and decide whether it is worth sending to the Azure portal.
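A minimal sketch of such a telemetry processor; the class name and the filter (dropping successful sub-millisecond dependencies, matching the first example above) are illustrative:

```csharp
using System;
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// A hedged sketch; "FastDependencyFilter" is an illustrative name.
public class FastDependencyFilter : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    // Processors form a chain; each one passes surviving items to the next.
    public FastDependencyFilter(ITelemetryProcessor next)
    {
        _next = next;
    }

    public void Process(ITelemetry item)
    {
        var dependency = item as DependencyTelemetry;

        if (dependency != null
            && dependency.Success == true
            && dependency.Duration < TimeSpan.FromMilliseconds(1))
        {
            return; // drop the item: it is never sent to Azure
        }

        _next.Process(item);
    }
}
```

The processor is then registered in ApplicationInsights.config or added to the telemetry processor chain builder shown above.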

Conclusion

With the arrival of the new pricing plans, Microsoft is, in my view, rightly forcing developers to think harder about which data they intend to analyze in Application Insights. As a result, the service can be expected to work with a smaller volume of more relevant data, which pays developers back in faster work with the Azure portal. The speedup will be most noticeable when working with data over longer time ranges.

Miroslav Holec
February 1, 2017

Hello, Android!


Welcome back! This is episode 2 of the Xamarin University series.

If you missed episode 1, you can read it HERE. Last time we got an overview of Xamarin and installed and set up all the development tools. As a reminder: HERE you can learn to program in C# for free, and there is also a course or two for advanced developers. It's worth stopping by!

In this post we want to take a closer look at Xamarin.Android. For everyone who has already signed up for Xamarin University, HERE is the link to the next course. The Xamarin.Android course is the second course at Xamarin University. As mentioned last time, the course can be completed either through self-study or in live classes. In this post I want to guide you through the course and point you to additional, helpful content along the way.
I hope the installation of Visual Studio and the Xamarin components worked for you and that you have already played around with the interface. If you have questions or comments, as always, contact me: t-nidobi@microsoft.com.

So that the advanced readers among you don't feel underchallenged or even bored, here is a short list of helpful links so you can already dive deeper into some topics.

Application Fundamentals;
User Interface;
Platform Features;
Deployment, Testing and Metrics;
Advanced Topics;
Under The Hood;
Troubleshooting;
Wear

Android 101

This course is all about the tools you can use to create and work with Xamarin.Android projects and templates. You will build your first Xamarin.Android application (with Xamarin Studio or Visual Studio) and develop an understanding of the fundamentals of developing an Android application with Xamarin. Along the way you will be introduced to the tools, concepts, and steps required to build and deploy a Xamarin.Android application.

Generally and simply put: the graphical user interface of an Android app is built from widgets such as text fields, buttons, and checkboxes. Widgets can be thought of as building blocks with which you create a user interface. View widgets are used to display text and graphics and to interact with the user. ViewGroup widgets are invisible containers that arrange other widgets on the screen.

At this point I can warmly recommend WORKBOOKS, a collection of interactive tutorials that help you with your app development. Among other things, you will find an app basics tutorial for Android that shows you how to build a basic user interface for an Android app by creating widgets, laying them out on the screen, and wiring them up for user interaction.

Introduction to Xamarin.Android

By the way, you can also find all of the YouTube videos referenced here in your Xamarin University course. I have attached them for everyone who wants to watch the videos right away and have everything in one place. The course also contains four exercises that introduce you to the newly presented concepts in a hands-on way; I have included a short description of each. You can find further material online in your Xamarin University course. The first video gives a brief overview of Xamarin.Android and serves as an introduction to the course.

Quite useful for getting an overview is this interactive graphic from the course. Here you can get to know the various existing Xamarin.Android templates by selecting a project.


Exercise 1: This exercise walks you through creating a new Xamarin.Android project. No programming is required here; however, the instructions will point out some important parts of the project.

Section I: Create an Android Activity

Activities are the basic building blocks of an Android application. In this video we take a look at what an activity is and how to create one.


In Android, every screen is driven by an activity. The activity is responsible for managing user interaction within a screen. Activities are an unusual programming concept specific to Android. In traditional application development there is usually a static main method that is executed to launch the application. With Android, things are different: an Android application can be launched via any registered activity in the application. In practice, most applications have only one specific activity designated as the application entry point. However, if an application crashes or is terminated by the operating system, the OS can try to restart the application at the last open activity or anywhere else within the previous activity stack. In addition, the OS can pause activities when they are not active and reclaim them when memory is low. Careful consideration is required so the application can correctly restore its state if an activity is restarted, especially if that activity depends on data from earlier activities. In the next video we will look at how to build a user interface for an activity in Android.
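As a hedged sketch (not from the course material; "MainActivity" and the layout and control names are illustrative assumptions), a minimal Xamarin.Android activity looks like this:

```csharp
using Android.App;
using Android.OS;
using Android.Widget;

// MainLauncher marks this activity as the application entry point.
[Activity(Label = "Hello, Android!", MainLauncher = true)]
public class MainActivity : Activity
{
    protected override void OnCreate(Bundle savedInstanceState)
    {
        base.OnCreate(savedInstanceState);

        // Inflate the UI declared in Resources/layout/Main.axml.
        SetContentView(Resource.Layout.Main);

        // Look up a widget by ID and react to user interaction.
        var button = FindViewById<Button>(Resource.Id.myButton);
        button.Click += (sender, e) => button.Text = "Clicked!";
    }
}
```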


Exercise 2:
This lab has two goals: you will build a user interface manually, and you will use the Xamarin Android Designer. Both are useful skills: understanding the raw XML can help you shape your user interface exactly the way you want it, while knowing how to use the designer tool lets you build a user interface faster than coding it line by line in XML.
In this exercise you will work on an app that calculates the tip for a restaurant bill. The user enters the bill amount and then clicks the Calculate button. The app reads the bill amount from the UI, calculates the tip and the total, and displays both values in the UI. This exercise only asks you to build the user interface; the code is written in a subsequent exercise. For everyone who got lost along the way, here is a screenshot of the desired result:

[Image: finished tip calculator app]

The next video shows you how to program the behavior of an activity.

Exercise 3: The goal of this exercise is to write C# code to access and manipulate the user interface. Your code must read/write properties on several text elements and subscribe to an event on a button. To do this, you need to assign IDs to some of the views in the XML and look the views up by ID in code. You will continue working on the tip app and write the code behind the user interface.

Section II: Compile Your App

In this video we will see how Android apps are compiled.

In this video we will look at how to update and maintain the Android SDK.

Exercise 4: The goal of this exercise is to use the Android SDK Manager to install the latest version of the Android SDK on your development machine.
Many of the tools and APIs in use are either updated automatically, or you are notified when an update is available. That is not the case for the Android SDK. This exercise shows you the manual steps you need to follow to keep your installation up to date. You don't need to write any code here, nor is a Xamarin.Android solution required.

Sag's Wien built with Xamarin (a best-practice example)

At this point we can proudly point to the "Sag's Wien" application from the City of Vienna. It was built with Xamarin and has now been released simultaneously for Windows Mobile, Android, and iOS. Its intuitive, modern user interface deserves special mention. Check out the app on your platform and let the strengths of Xamarin convince you.


 


Congratulations to everyone who completed the course successfully. Once you have finished all units of the Xamarin.Android course, your progress will again be recorded in your profile. Next Friday (04.03) we will look at the iOS Designer. Do you have questions about Xamarin University? Then please leave a comment on our blog or email me personally:
t-nidobi@microsoft.com.

Contest: We have come up with a small challenge for this week.

  1. Become a fan of our Xamarin University Facebook page.
  2. Successfully complete the "Orientation and Welcome" course.
  3. Successfully complete the "Introduction to Xamarin.Android" course.
  4. Send us a screenshot of your certificate progress at wettbewerb@microsoft.com

The first 15 people will receive a hand-drawn sticker from us. To qualify, you must live in Austria.
You can find the various design versions below. We had them designed exclusively for you by Wolfgang Hoffelner. For our first and admittedly still quite simple challenge, the first blue sticker (bottom left) is up for grabs.

[Image: sticker designs]

For the latest updates, also check out our new Xamarin University Facebook page: https://www.facebook.com/Xamarin-University-247178372403551/


Overview of the posts on CodeFest:

Episode 1: Getting Started
Episode 2: Xamarin.Android

Azure and Linux deploying via Azure Resource Manager



 

The Azure platform SLA applies to virtual machines running the Linux OS only when one of the endorsed distributions is used. All Linux distributions provided in the Azure image gallery are endorsed distributions with the required configuration.

 


See: https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-linux-endorsed-distros?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json

Azure Endorsed Linux Distribution Images on the Azure Marketplace – https://azuremarketplace.microsoft.com

The Azure Marketplace has a collection of published, maintained, and supported Linux images produced by partners.

All images provided on the Azure Marketplace are curated and tested by Microsoft.

Most endorsed distros maintain repos in each Azure region for fast updating.

There is a selection of images available:

Standard Images

Premium Images


Using Scripting for Deployments

“Life is already really complicated, so we insist on making it even more complicated.”

The infrastructure for your application is typically made up of many components – maybe a virtual machine, storage account, and virtual network, or a web app, database, database server, and 3rd party services.

You do not see these components as separate entities; instead, you see them as related and interdependent parts of a single entity. You want to deploy, manage, and monitor them as a group.

DevOps deployment automation requires a faster, more flexible deployment approach than the old ASM gateway and PowerShell could provide; ASM PowerShell can be very complex to define and troubleshoot. https://msdn.microsoft.com/en-us/powershell/dsc/LnxGettingStarted

Resources for PowerShell & Linux Hands-on Labs

https://github.com/Microsoft/TechnicalCommunityContent/tree/master/Open%20Dev%20Framework/PowerShell%20for%20Linux

Resources for DevOps Hands-on Labs

https://github.com/Microsoft/TechnicalCommunityContent/tree/master/DevOps/DevOps

Azure Resource Manager

An Azure Resource Manager template is a JSON formatted document that can be deployed to create a ring-fenced group of resources

Azure Resource Manager provides a new way for you to deploy and manage the services that make up your applications. Most, but not all, services support Resource Manager, and some services support it only partially. Microsoft will enable Resource Manager for every service that is important for future solutions, but until the support is consistent, you need to know the current status for each service.

https://azure.microsoft.com/en-us/documentation/articles/resource-manager-supported-services/

 https://azure.microsoft.com/en-gb/features/resource-manager/

• Azure Resource Manager enables you to work with the resources in your solution as a group. You can deploy, update, or delete all of the resources for your solution in a single, coordinated operation.

• You can repeatedly deploy your application throughout the app lifecycle and have confidence your resources are deployed in a consistent state.

• You can use declarative templates to define your deployment, and you can define the dependencies between resources so they are deployed in the correct order.

• You can apply access control to all resources in your resource group because Role-Based Access Control (RBAC) is natively integrated into the management platform.

• You can apply tags to resources to logically organize all of the resources in your subscription.

• When using resource groups, limits that once were global become managed at a regional level with Azure Resource Manager.

• It is important to emphasize that quotas for resources in Azure resource groups are per region accessible by your subscription, not per subscription as the service management quotas are.

https://azure.microsoft.com/en-gb/documentation/articles/azure-subscription-service-limits/

ARM Template

An ARM template is written in JSON and describes the entities to deploy.

The template execution engine can interpret dependencies and relationships and orchestrate the deployment of the resources for you; resources at the bottom of the stack are deployed first.

Parameterized inputs and outputs let you reuse a template in different environments.

Sample templates are available in the gallery: https://azure.microsoft.com/en-us/resources/templates/


Role Based Access Control

Azure Role-Based Access Control (RBAC) enables fine-grained access management for Azure. For example, you might give students only Contributor access to Azure resources, whilst academics and research assistants (RAs) have Owner rights.

By utilising RBAC you can segregate duties within your students’ projects and teach DevOps practices; RBAC simply allows you to grant users only the amount of access they need to perform their roles and tasks.

https://azure.microsoft.com/en-us/documentation/articles/role-based-access-control-configure/
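
As a rough sketch of how that scenario looks with the AzureRM PowerShell module (the account names and resource group are illustrative):

# Students get Contributor rights scoped to their project resource group...
New-AzureRmRoleAssignment -SignInName "student@contoso.ac.uk" `
                          -RoleDefinitionName "Contributor" `
                          -ResourceGroupName "StudentProjectRG"

# ...whilst academics get Owner rights on the same scope.
New-AzureRmRoleAssignment -SignInName "academic@contoso.ac.uk" `
                          -RoleDefinitionName "Owner" `
                          -ResourceGroupName "StudentProjectRG"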

Resources

Azure QuickStarts

https://azure.microsoft.com/en-us/documentation/templates/

https://azure.microsoft.com/en-us/documentation/articles/resource-group-authoring-templates/

http://download.microsoft.com/download/8/E/1/8E1DBEFA-CECE-4DC9-A813-93520A5D7CFE/World%20Class%20ARM%20Templates%20-%20Considerations%20and%20Proven%20Practices.pdf

https://azure.microsoft.com/en-us/documentation/articles/resource-group-template-functions/

https://github.com/Azure/azure-quickstart-templates

Guest post: Cyprien Marie – Developing Windows 10 Language Apps


Last month at BETT 2017 we were lucky enough to meet literally thousands of educators, school leaders, students, and those who are involved in education in countless ways. It’s such a great opportunity for us to hear about the different ways in which teachers are making the most of the available technology to connect with learners and drive their practice forwards. One such person we met is Cyprien Marie – a language teacher and developer from Bristol, who has contributed the following blog.


I have just come back from the Bett Show in London and absolutely loved the atmosphere and the buzz there. What an exciting time for edtech! And even more so if you are a Windows user!

Windows 10 for Education is gaining momentum and schools are upgrading their computer systems in numbers. Gone are the days of Windows 7 in schools’ computer rooms; we now have a brand new ecosystem, live tiles and universal apps!

As a teacher of French and German in a large Bristol secondary school, I’ve had a burning desire to give students the tools they need for their language revision. This is what led me to learn programming for Windows recently. What a journey! C# and XAML are no longer foreign languages to me.

Students, I find, rely too much on their teachers and can easily get stuck when working on their own. The language apps I’ve created are based on my first-hand experience as a teacher. They are designed specifically to meet the needs of secondary school language students, who I believe need particular help with pronunciation, memory techniques and vocab learning.

So, last year, I decided to start theLanguageApp project and created the following free Windows 10 apps:


Hear it first!

Hear it first! is a text-to-speech app to help students with pronunciation (in 11 different languages). This app comes in handy when revising for a speaking test: copy and paste the text you want to hear, specify the language and press play. You can also save your work and export it as an audio file for extra convenience.



Memorize it!

Memorize it! is an app to help you learn your text off by heart. The idea came from seeing pupils use mini-whiteboards for their revision in class. They’d create a jumbled-up version of a learnt paragraph and test each other orally. ‘Memorize it!’ serves this very purpose: enter the text you want to memorise, select a memorising technique from a range of options and press ‘OK’. Your text now appears scrambled up on screen and it’s over to you to practise out loud. If you need a ‘hint’, pressing and holding will fill in the blanks.



My French Exam

My French Exam is your complete and interactive vocab list, all in one app! Students can learn vocab as they go along, take a quiz and get an estimated grade. Vocab learning is an essential part of exam revision and this app is here to help students get ready for their listening and reading comprehension exam.


The apps are in constant evolution. Users can leave reviews and send feedback and suggestions, meaning students themselves are playing a big part in shaping the project.

I really like the speech capabilities of Windows 10 and I am currently exploring ways to include a “pronunciation coaching” feature. Other languages will also be added in future updates and the concept will be tailored to suit students in other European countries.

If, like me, you are excited about the direction Windows 10 is taking, please visit www.thelanguageapp.uk for more information. I would love your feedback!

Happy language learning on Windows 10!

 

Cyprien Marie 

Language teacher / Windows developer

Bristol, UK

Web: www.thelanguageapp.uk  

Email: thelanguageapp@outlook.com

Twitter: @thelanguageapp

Effective feature isolation with TFVC – what do you think of the work-in-progress sandbox?


We’re looking for ways to be more transparent with the development of our guidance, give the developer community an opportunity to give early feedback, and deliver value quicker.

The guidance sandbox is where we’re planning to collaborate on new DevOps and Branching guidance, before committing to aka.ms/techarticles.

Example

For example, we just created this pull request, which includes a DRAFT for the new Effective feature isolation on TFVC article.


Here are three questions for you to ponder:

  1. What are your thoughts on the sandbox and the opportunity to collaborate with us while we’re working on the guidance?
  2. Do you prefer the classic or the new article based guidance in aka.ms/techarticles?
  3. How could we do this better?

… thank you for your thoughts!


Create an End-to-end IOT Scenario with Azure Services


IoT, or the Internet of Things, is taking over the world and we are encountering more and more of it in our daily lives. But when we come to think of it, what does the word IoT actually mean? What are these IoT solutions and how do we start building them?

To answer these questions, let’s first define IoT. There is no standard definition for IoT, but broadly it is any solution that has the following four components.

  1. Things – Things can be anything that can send some kind of useful data. It can be a simple phone or a complex array of sensors sending the data.
  2. Connectivity – These things need a way to communicate with each other and with the internet, and for that they need connectivity and protocols.
  3. Data – The data can be simple telemetry, an image or even a video feed.
  4. Analytics – The data that is gathered needs to be analyzed to derive insights. These can be real-time analytics, such as sending an alert or taking an action, or long-term analytics, where you might use machine learning on historical data to detect a pattern or predict an outcome.

So how would event processing typically happen in an IoT scenario? Let’s take a look.


Starting with the producers: these are your devices or things which send data. The data can be sent directly to the cloud if the devices are internet enabled; if they are legacy devices that are not, gateways or aggregators can collect the data from all such devices and then send it to the cloud. Then you need an ingestion mechanism – a service capable of ingesting data from any number of devices at a time and scaling as needed. Next, you need a service to process the incoming data, either to generate a real-time action or to aggregate it and store it for further analysis. You also need data storage capabilities for the huge amount of data that will be coming in, and lastly you may want a way to visualize, search or query this data, or to use it in your machine learning experiments.

How does Azure fit into the picture?

Microsoft Azure gives you a way to create your entire IoT solution from many different components. It is not a one-size-fits-all solution but rather a platform consisting of different services, from which you can choose those best fitted to your own scenario. Here is just a glimpse of all the available services that can be used in your IoT scenarios.

Microsoft Services

Let’s build a scenario, shall we?

We will create a simple IoT scenario, where we gather data from different devices and ingest it in the cloud. Once it is ingested, we will analyze the data to detect an anomaly and then raise an alert. We will use the following Azure services in this scenario.

Scenario

Step 1 – Devices

Azure IoT Hub can interface with many different types of devices. The azure-iot-sdks repository at https://github.com/Azure/azure-iot-sdks contains both IoT device SDKs and IoT service SDKs: device SDKs enable you to connect client devices to Azure IoT Hub, while service SDKs enable you to manage your IoT Hub service instance.

The SDKs are available for different programming languages such as C, Python, Node.js, Java and .NET, and can be used with a broad range of OS platforms and devices, including Linux, Windows and real-time operating systems. The other way to connect to IoT Hub is via the IoT Hub REST APIs, if for some reason you cannot use an SDK.

In this scenario, we are going to simulate a device using the SDK for .NET.

This simulator will generate random values in a given range and send those values to Azure IoT Hub at a specific interval.

The link to the code: https://github.com/gsamant/IOTEndtoEndDemo
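
As a rough sketch of what such a simulator looks like with the .NET device SDK (the real code is in the repository above; the placeholder values mirror the Program.cs fields shown in Step 2):

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;   // Azure IoT device SDK for .NET
using Newtonsoft.Json;

class SimulatedDevice
{
    static string iotHubUri = "<IOT Hub URI>";
    static string deviceId = "<Device ID>";
    static string deviceKey = "<Device Shared access key>";

    static void Main() => SendTelemetryAsync().Wait();

    static async Task SendTelemetryAsync()
    {
        var deviceClient = DeviceClient.Create(iotHubUri,
            new DeviceAuthenticationWithRegistrySymmetricKey(deviceId, deviceKey));
        var rand = new Random();

        while (true)
        {
            // Random temperature in a given range, sent at a fixed interval.
            var telemetry = new { deviceId, temperature = rand.Next(20, 70) };
            var json = JsonConvert.SerializeObject(telemetry);
            await deviceClient.SendEventAsync(new Message(Encoding.UTF8.GetBytes(json)));
            await Task.Delay(TimeSpan.FromSeconds(1));
        }
    }
}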

Step 2 – Create and set up Azure IoT Hub

Azure IoT Hub helps you to establish bi-directional communication with billions of IoT devices and to set up individual identities and credentials for each of your connected devices.

(You need an active Azure subscription to create an Azure IoT Hub.)

Create an IoT hub using the steps described here. (For this scenario, follow the steps only up to the creation of the IoT Hub.)

Once your IoT Hub is created, the next step is to register a device. This can be done via the Azure SDKs, REST APIs or the Azure CLI. You can also use tools such as Device Explorer (Windows) or the IoT Hub Explorer.

Once the device is registered you will have a device connection string containing the IoT Hub name, device ID and a shared access key unique to your device. Use the values from the connection string to update the following lines in the Program.cs file:

static string iotHubUri = "<IOT Hub URI>";
static string deviceId = "<Device ID>";
static string deviceKey = "<Device Shared access key>";

Step 3 – Create and set up Azure Event Hub

Azure Event Hub is a managed service that can ingest and process massive data streams from websites, apps and devices. It can also be used to ingest data directly from devices; however, unlike IoT Hub, Event Hub handles only device-to-cloud communication.

In this scenario, however, we are going to use the Azure Event Hub as an output for the Azure Stream Analytics job.

Create an Event Hub using the steps described here, and note the connection string.

Step 4 – Create a Stream Analytics job

Azure Stream Analytics is a fully managed real-time event-processing engine. Events can come from sensors, applications, devices, operational systems, websites, and a variety of other sources.

Create a Stream Analytics job using the steps described here. (For this scenario, follow the steps only up to the creation of the Stream Analytics job.) You will then have an empty job, as shown below.

Stream Analytics

Once the Stream Analytics job is created, add the input and the output, and update the query, as shown below.

Input

Output

Query
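
The query itself is only shown as a screenshot above; a plausible sketch of it in the Stream Analytics query language (the input/output aliases are illustrative, and the threshold matches the test later in this post) is:

SELECT
    deviceId,
    temperature
INTO
    [EventHubOutput]
FROM
    [IoTHubInput]
WHERE
    temperature > 60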

Start the Stream Analytics Job.

Step 5 – Create an Azure function to process the Event Hub events

Azure Functions is a serverless, event-based compute service. Processing can be triggered by an event, a timer or a SaaS activity. In this scenario, we will create an Azure function which is triggered when the Event Hub created in Step 3 (the output of the Stream Analytics job) receives an event.

  1. Go to the Azure portal and sign-in with your Azure account.
  2. Search for Function App and click Create.

    Azure Function Creation

  3. In your function app, click + New Function > EventHubTrigger – C# > Create. This creates a function with a default name that runs on a trigger from an Event Hub. It will also ask you for the name of your Event Hub and the Event Hub connection string. (For the connection string, click new, give a name to your connection and paste the connection string from Step 3.)
  4. Copy the code from the ProcessEvent.txt file in the GitHub repository linked above. (A sketch of its essentials appears after this list.)

    Azure Function Code

  5. Leave the Client.BaseAddress URL blank for now. We will update this once we create the Logic App.
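
For orientation, here is a minimal sketch of what such a function can boil down to (the real code is in ProcessEvent.txt; the payload handling and URL placeholder here are illustrative):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// Shared across invocations; BaseAddress will be set to the Logic App URL from Step 6.
private static readonly HttpClient Client = new HttpClient()
{
    BaseAddress = new Uri("<Logic App HTTP POST URL>")
};

public static async Task Run(string myEventHubMessage, TraceWriter log)
{
    log.Info($"Anomaly event received: {myEventHubMessage}");

    // Forward the alert payload to the Logic App's HTTP request trigger.
    var content = new StringContent(myEventHubMessage, Encoding.UTF8, "application/json");
    await Client.PostAsync("", content);
}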

Step 6 – Create an Azure Logic app to send an Alert

Azure Logic Apps enable you to develop integration solutions with ease and let you automate and simplify business workflows across on-premises and the cloud.

In this scenario, we will create a Logic App which is triggered by an HTTP request and sends an SMS using the Twilio connector.

  1. On the Azure portal dashboard, select New.
  2. In the search bar, search for ‘logic app’, and then select Logic App. You can also select New, Web + Mobile, and select Logic App.
  3. Enter a name for your logic app, select a location and resource group, and then select Create. If you select Pin to Dashboard, the logic app will automatically open once deployed.
  4. After opening your logic app for the first time you can select from a template to start. For now click Blank Logic App to build this from scratch.
  5. The first item you need to create is the trigger. This is the event that will start your logic app. Search for Request in the trigger search box, and select it.
  6. In the Request Body JSON Schema field, paste the following schema (sketched below).

    JSON Schema
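
    The schema itself only appears as a screenshot; a plausible sketch, assuming the payload carries the device ID and temperature from the earlier steps, is:

    {
      "type": "object",
      "properties": {
        "deviceId": { "type": "string" },
        "temperature": { "type": "number" }
      }
    }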

  7. After you click Save, the HTTP POST URL will be generated. Copy this URL and paste it into the Client.BaseAddress URL in the Azure function created previously.

    Logic App – Request

  8. Before moving to the next step, you will need a Twilio account; you can create a free trial account here. You also need a verified Twilio phone number that can send/receive SMS.
  9. Click on New Step, search for Twilio and select Twilio – Send Text Message (SMS).
  10. Enter a connection name for Twilio, your Twilio Account ID and your Twilio Access Token (you can get the latter two from your Twilio dashboard).

    Twilio Account Setup

  11. Once the connection is saved, enter the To and From phone numbers and the text for the message.

    Twilio SMS Setup

  12. Save the Logic App changes.

That completes the setup of our scenario. Let’s test it.

Testing the scenario

  1. Run the SendEvent application. You will see events going to the IoT Hub.

    App Run

    Stop the application and uncomment the following lines in the Program.cs file. This allows you to manually enter a temperature value of more than 60.

    Code snippet
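
    (The snippet appears only as an image; presumably the gist is to read the value from the console instead of generating it – illustrative only:)

    Console.WriteLine("Enter a temperature value:");
    double currentTemperature = double.Parse(Console.ReadLine());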

    App Run

  2. Check the SMS Alert on your phone.

    SMS Alert

  3. Also, check the Azure Function log.

    Azure Function Log

Recap

We created a simple scenario in which we simulated a device sending telemetry. We used IoT Hub to ingest the data from the device and a Stream Analytics job to process the data in real time. The Stream Analytics query checked whether the temperature went above a threshold and, when it did, sent that event to an Event Hub.

We then created an Azure function that is triggered as soon as there is a new event on the Event Hub and sends the event data to a Logic App via its HTTP request endpoint. On the Logic App trigger, we added a Twilio connector that reads the temperature from the payload and sends the alert.

Issue with Update tile in Microsoft Dynamics 365 for Operations platform update 4 release on LCS


We recently announced the availability of Platform update 4 through Lifecycle Services. Since the release, we have discovered issues with the servicing workflow for Platform update 4 environments. The Updates tile, in the Environment details view, should only show updates that are applicable to your environment. However, for customers who have deployed a Platform update 4 environment, the Updates tiles show updates from older releases. Specifically, it has been reported that the following tiles display:

  • Platform X++ updates: On a Platform update 4 environment, the platform models can’t be overlayered in Dynamics 365 for Operations version 1611, and therefore this tile shouldn’t be available.
  • All binary updates: This tile shows updates for older releases that are not applicable.

To work around this issue, customers on Platform update 4 should not download or apply the updates shown on these tiles.

This blog post will be updated when the issue is resolved.

How to delegate the Power BI Administrator Role via the Office Administrator Portal


In October we added the Power BI Service Administrator Role, making it easier to administer Power BI.

https://powerbi.microsoft.com/en-us/blog/making-it-easier-to-administer-power-bi/ 
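
For reference, the PowerShell route looks roughly like this (a sketch assuming the MSOnline module; the UPN is illustrative):

# Connect with an account that can manage directory roles.
Connect-MsolService

# Grant the Power BI admin role to a user.
Add-MsolRoleMember -RoleName "Power BI Service Administrator" `
                   -RoleMemberEmailAddress "user@contoso.com"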

The documentation showed how to add this via PowerShell; today I was asked for a pictorial walkthrough of adding a user to the Power BI role via the Office Administrator Portal, which can be done in five easy steps:

Starting from within Power BI:


Step 1. Click on the Administration Gear, choose Manage Users and go to the O365 Admin Center.


Step 2.  Select Edit a User from the landing page


Step 3.  Select the User you want to grant the Power BI Role to.


Step 4.  Edit the User Roles


Step 5. Select the Power BI Administrator Role and save the changes


A shift from SRx back to blogging all things SP Search…


Over the last year, I’ve been putting far more focus into the Search Health Reports (SRx), incorporating my lessons learned into this PowerShell tool-kit to empower anyone using it. As such, my time for blogging has suffered and I’ve let this site get a bit stale.

That is soon going to change now that the SRx is fairly robust and moving into more of a maintenance mode. I’m actively working on several new blog posts that use the SRx for more effective troubleshooting and help you get more out of the knowledge baked into it. I also plan to highlight and demo some of the alerting/reporting functionality provided by the SRx Dashboard that we sell as a Premier [Axis] engagement (have your TAM reach out to us at SearchEngineers@microsoft.com for any additional details in the meantime). To branch out a bit, I’m also working on a new Crawl Freshness test for SPO/on-premises using Azure Functions and the Graph API. And finally, I’m working to build out several new tests specifically for Cloud Hybrid Search… so much more to come.

I appreciate anyone reading this blog and finding value in it.

Best,
–Brian Pendergrass (@bspndr)
Premier Field Engineer – SharePoint Search

 

Developing and Deploying a Service Fabric Voting App to Azure


While preparing for a partner meeting on Service Fabric, I went through two labs, Part I by the Service Fabric team and Part II by my colleague Max Knor, and created a working Voting app on Windows. If you are interested in learning how to develop a Service Fabric app, I highly recommend that you go through the labs. If you just want to see how the app works and review the code, you can find the completed project on GitHub.

This simple app shows how a stateless service and a stateful service, two separate microservices, work together to accomplish the goal of tallying voting counts by candidate name. The stateless service provides a user interface (single page app) for data entry, and communicates with the stateful service through a remote client service proxy. The stateful service keeps track of voting counts by storing data in the Service Fabric built-in data collection, ReliableDictionary. Each stateful service comes with one Primary replica for read and write requests, and multiple active Secondary replicas for read-only requests, to achieve high availability. No database or cache is used in the example.
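
As a rough sketch of the two sides of that conversation (the interface, application and service names are illustrative; the lab code differs in detail):

using System;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data;
using Microsoft.ServiceFabric.Data.Collections;
using Microsoft.ServiceFabric.Services.Client;
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Remoting.Client;

// Assumed remoting contract shared by the two services.
public interface IVotingService : IService
{
    Task AddVoteAsync(string candidate);
}

public static class VotingCalls
{
    // Stateless side: resolve the stateful service and forward the vote.
    public static Task AddVoteFromWebAsync(string candidate)
    {
        IVotingService proxy = ServiceProxy.Create<IVotingService>(
            new Uri("fabric:/Voting/VotingDataService"),  // assumed names
            new ServicePartitionKey(0));
        return proxy.AddVoteAsync(candidate);
    }

    // Stateful side: tally the vote in a ReliableDictionary inside a transaction.
    public static async Task AddVoteAsync(IReliableStateManager stateManager, string candidate)
    {
        var votes = await stateManager.GetOrAddAsync<IReliableDictionary<string, int>>("votes");
        using (ITransaction tx = stateManager.CreateTransaction())
        {
            await votes.AddOrUpdateAsync(tx, candidate, 1, (key, current) => current + 1);
            await tx.CommitAsync();
        }
    }
}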

The diagram below illustrates the high-level architecture of the Service Fabric Voting app. For more detail on Service Fabric, check out the online documentation or download the live pdf file (about 800 pages).


The lab documents provide detail on how to debug, test and upgrade the app and how to publish it to the local Service Fabric cluster in Visual Studio. In this post I will focus on deploying the app to Azure.

Step 1 – Create a Service Fabric cluster on Azure

You can use the Azure portal or PowerShell scripts. If you want to secure the Service Fabric cluster, you’ll need to create an Azure Key Vault. If you don’t have commercially available certificates, you can create and use self-signed certificates. See the detail here.

Step 2 – Publish the app to the Service Fabric cluster

You can use Visual Studio to publish the app directly to Azure. Alternatively, you can create a package for the app, and then use PowerShell scripts to publish it to Azure.

In Visual Studio, open Cloud.xml in PublishProfiles in the app project. Replace <ClusterConnectionParameters ConnectionEndpoint="" /> with something similar to the following, replacing the ServerCertThumbprint and FindValue values with the thumbprint you obtained at Step 1. Save the changes and go back to the Publish screen; you should be ready to publish the app to the cloud. If you have configured Azure AD for the cluster on Azure, use the ClusterConnectionParameters for Azure AD and make similar changes.

<ClusterConnectionParameters ConnectionEndpoint="mycluster.westus.cloudapp.azure.com:19000"
                             X509Credential="true"
                             ServerCertThumbprint="0123456789012345678901234567890123456789"
                             FindType="FindByThumbprint"
                             FindValue="9876543210987654321098765432109876543210"
                             StoreLocation="CurrentUser"
                             StoreName="My" />

 

In case you run into an error as shown here, ensure the value in Connection Endpoint is valid.


Step 3 – Launch Service Fabric Explorer

Open the browser and navigate to https://clustername.region.cloudapp.azure.com:19080/Explorer. If you use a self-signed certificate, you can ignore the website security certificate warning. On the next screen, choose the same certificate that you uploaded to Azure Key Vault at Step 1.


If you don’t see it on the list – as happened to me because the certificate disappeared after initial use – open Internet Options in IE, import the pfx file you used at Step 1, and restart the process.


You should now be able to see the Service Fabric Explorer, just as you saw the local version.


Step 4 – Configure the cluster load balancer

With the local cluster, you can access the user interface shown above at http://localhost:34001/api/index.html, a URL you can find from the node of the stateless service. Now that the app is deployed to the Service Fabric cluster on Azure, the services, including the single page app, are running behind the load balancer. If you use the public IP address of the cluster (which fronts the VMs in the VM Scale Set) with port 20002, e.g. http://<publicipaddress>:20002/api/index.html, or the cluster address, http://clustername.region.cloudapp.azure.com:20002/api/index.html, you will quickly find that the web app is not accessible. The reason is that neither port 80 nor port 20002 (which you can find in Service Fabric Explorer) is set up on the load balancer.

To address the port issue, go back to your Azure account and create two rules, one under Health Probes and one under Load balancing rules. The health probe uses the TCP protocol to check the health status of the node every 5 seconds. The load balancing rule forwards requests from port 80 to backend port 20002, which the stateless service is listening on. Choose LoadBalancerBEAddressPool for Backend pool, the newly created health probe for Health probe, and keep the default values for the other fields. If you choose to enable Floating IP, the frontend port and backend port must match.
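
If you prefer scripting this, a rough AzureRM PowerShell sketch (the load balancer and resource group names are illustrative) is:

$lb = Get-AzureRmLoadBalancer -Name "LB-mycluster" -ResourceGroupName "myResourceGroup"

# TCP health probe against the backend port, every 5 seconds.
$lb | Add-AzureRmLoadBalancerProbeConfig -Name "VotingAppProbe" `
        -Protocol Tcp -Port 20002 -IntervalInSeconds 5 -ProbeCount 2

# Forward public port 80 to backend port 20002 on the scale set nodes.
$lb | Add-AzureRmLoadBalancerRuleConfig -Name "VotingAppRule" `
        -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
        -BackendAddressPool $lb.BackendAddressPools[0] `
        -Probe (Get-AzureRmLoadBalancerProbeConfig -LoadBalancer $lb -Name "VotingAppProbe") `
        -Protocol Tcp -FrontendPort 80 -BackendPort 20002

# Push the updated configuration back to Azure.
$lb | Set-AzureRmLoadBalancer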


With that, you should be able to see the single page app. It is available via HTTP, not HTTPS. You can secure the app using Azure AD.

