
Support HTTPS in Azure Marketplace image for Jenkins


Create a Jenkins server on an Azure Linux VM from the Azure portal describes how to configure a standalone Jenkins VM in Azure based on the Azure Marketplace image for Jenkins. The image is published by Microsoft; because Nginx and Jenkins are already configured and set to start automatically, it is very fast and easy to set up Jenkins from it. In the current implementation, the Jenkins console is not accessible over unsecured HTTP; an instruction page (shown below) is displayed when you access the HTTP endpoint. That means that, to secure the conversation, an SSH tunnel is needed.

Recently I got some questions about how to access the Jenkins portal via HTTPS, and this article addresses that requirement.

Generate Let's Encrypt Certificate

In order to set up HTTPS, we need to either create a self-signed certificate or request a certificate from a certificate authority. We will request a certificate from Let's Encrypt. Before moving forward, make sure you already have your own domain name and that a CNAME entry has been created for the target VM, which should already be provisioned in Azure. Then SSH into the VM and run the commands below. The first git command clones the letsencrypt GitHub repository onto the VM; the image already has git installed, so there is no need to install it. When running the letsencrypt-auto command, there are three options to choose from: nginx, standalone, or Apache. I used option 2 (standalone) to generate the certificate successfully. Just make sure to run "service nginx stop" first: nginx is configured to bind to port 80 by default, and the Let's Encrypt standalone server would otherwise fail to bind to port 80.

# Stop nginx first so the Let's Encrypt standalone server can bind to port 80
service nginx stop
git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt
./letsencrypt-auto certonly

After you fill in the target domain name and an email address, the certificate is created under /etc/letsencrypt/live/customdomain/. We will use fullchain.pem and privkey.pem in the next step.

Setup SSL Offloading in Nginx

Although quite a few web servers on Linux support SSL offloading, nginx is already installed in the VM, so we will meet our requirement with just a small configuration change. The following is the default configuration in /etc/nginx/sites-available/default. Notice the URL rewrites to jenkins-on-azure; that is why the SSH instruction page is shown when accessing the portal over the HTTP endpoint.

server {
    listen 80;
    server_name jacjenkins.centralus.cloudapp.azure.com;
    error_page 403 /jenkins-on-azure;
    location / {
        proxy_set_header        Host $host:$server_port;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        X-Forwarded-Proto $scheme;


        # Fix the “It appears that your reverse proxy set up is broken" error.
        proxy_pass          http://localhost:8080;
        proxy_redirect      http://localhost:8080 http://customname.centralus.cloudapp.azure.com;
        proxy_read_timeout  90;
    }
    location /cli {
        rewrite ^ /jenkins-on-azure permanent;
    }

    location ~ /login* {
        rewrite ^ /jenkins-on-azure permanent;
    }
    location /jenkins-on-azure {
      alias /usr/share/nginx/azure;
    }
}

Let's modify the above file to the following, which makes nginx listen on port 443 and use the Let's Encrypt certificate for the TLS conversation. Save the modified file, run "service nginx restart", and make sure port 443 is open in the NSG inbound rules; you should now be able to access the Jenkins portal over the HTTPS endpoint. For more detail about nginx SSL setup, please check NGINX SSL Termination.

server {
    listen 443 ssl;
    server_name customdomain;
    ssl_certificate /etc/letsencrypt/live/customdomain/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/customdomain/privkey.pem;
    location / {
        proxy_set_header        Host $host:$server_port;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        X-Forwarded-Proto $scheme;


        # Fix the “It appears that your reverse proxy set up is broken" error.
        proxy_pass          http://localhost:8080;
        proxy_read_timeout  90;
    }
}
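Optionally, you can keep a second server block that redirects plain HTTP traffic to the new HTTPS endpoint. This is a minimal sketch rather than part of the Marketplace image's default configuration; replace customdomain with your own domain name.

server {
    listen 80;
    server_name customdomain;
    # Redirect all unsecured traffic to the HTTPS endpoint
    return 301 https://$host$request_uri;
}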

[App Development Edition] A Message from the Technical Track Owner


Hello everyone.

This is Akira Inoue, in charge of the App Development track.

 

App Development Track Overview

This track gives you broad and deep knowledge of what is needed for application development. In addition to the latest development tools, you will learn the essential end-to-end development flow, including automatic deployment of your code to PaaS services and debugging. We also cover the latest trends such as quantum computing and serverless, so please come and absorb new approaches to app development.

 

A message from the technical track owner!

The App Development track, recommended for most developers, will offer 34 technical sessions, the largest number of any de:code 2018 session track, focusing on the vision and technologies you should adopt for Cloud Native application development, as well as the latest application architectures and development practices. Covering technologies such as Mobile, Containers, Serverless, and DevOps, we have also invited many well-known speakers from Japan and abroad, with sessions that let you dive deep into what application development should look like in the coming cloud era. We look forward to your participation!

 

--------------------------------------------------------------------------

  • Official website: here
  • Early-bird registration: here
    • Early-bird discount deadline: Tuesday, April 24, 2018
  • Session information: here
  • Social media (SNS)

--------------------------------------------------------------------------

Episode 5: IT IQ Series – Are teachers in danger of app-overload?


Summary: Educators need to consolidate their software platforms if they want to develop even more innovative digital lessons for their students.

The days of the computer-free classroom are well and truly over. In fact, the government has been at the forefront of supporting the use of technology in Australian schools, with plans like the National Assessment Program ensuring students are equipped with digital skill sets that will enhance their employability in the future. According to Austrade, Australia has more than 350 edutech companies servicing the entire education ecosystem. But are teachers at risk of being overwhelmed by the sheer multitude of platforms?

“With Australia’s edutech market expected to grow to $1.7 billion by 2022, we’re seeing more choices for digital learning on-market than ever before,” says Jesse Cardy, a Microsoft Solutions Specialist for the education industry. “By consolidating their platforms, rather than simply adopting whatever looks most attractive in the short-term, teachers can keep IT management simple and students’ data safe–even as they endeavour to develop even more innovative curricula for the next generation of digital natives.”

Smarter students for the future

Ideally, a school’s apps and programs should all run on a single platform that can be accessed anytime, anywhere via secured sign-ins. That way, students can collaborate ‘live’ in the cloud using real-time data, with input and changes instantly visible to all. Doing so not only makes project teamwork more efficient, but permits greater autonomy amongst students without potentially putting core systems or data at risk.

At Glenwood High School in New South Wales, for example, a group of Year 10 Geography students built their own sustainable suburb in a 3D environment, complete with roads, underground plumbing, and internet cabling. Doing so became possible thanks to Minecraft, a popular video game that lets large groups of players create their own world using blocks, cubes, and other virtual building materials. Minecraft runs on Windows 10 for Education, the Microsoft 365 for Education solution that combines productivity, mobility, and core OS functions on a single integrated platform.

“With a single hub at the centre of all your apps, you can create far more intuitive and innovative student experiences,” Cardy says. “Students can connect and co-author projects in real time across a range of platforms, regardless of whether they’re inside or outside the classroom. You’re essentially emulating the sort of tech environment they can expect once they finish school, but in a way that minimises cross-platform frictions that might otherwise interrupt learning.”

Simplifying the teaching experience

A single-platform approach to digital learning also minimises a different sort of friction: time-consuming manual tasks. When apps and platforms aren’t well integrated, teachers often spend hours on manual tasks like transferring information from one network or app to another. In fact, a 2016 survey of 13,000 teachers in Victoria found that teachers are struggling to give their best in class amidst an ‘out of control’ workload–something which smoother back-end processes could help to fix.

“Educators must balance more teaching and administrative duties than ever before, with online learning creating much higher demands for feedback and updates that go well beyond school hours,” Cardy explains. “Cloud technology can make life simpler for teachers, but only if it recognises that processes must flow between apps and remove manual steps wherever possible. The design of Microsoft 365 for Education, for example, makes communication much simpler and more natural between not only teachers, but with students as well.”

In one Brazilian high school, teachers use OneNote to share their lessons with students, who in turn can add to those plans with their own notes during class. At the same time, the school uses shared calendars in OneDrive to schedule activities and due dates for assignments, which every student can then see and use to plan their own individual or group activities. “Getting everyone on a common platform can really cut down the time spent on mundane tasks like circulating information,” Cardy stresses.

Safety for data and networks

However, digital learning can’t afford to grow more efficient at the expense of security. In 2017, the Australian Cyber Security Centre (ACSC) released a report revealing that universities are becoming an attractive target for hackers due to the intellectual property of their wide-ranging research. The report also highlighted an increase in cybercrime incidents targeting education bodies, from 4% to 5% between 2016 and 2017.

With distance learning now the norm in universities and Bring Your Own Device (BYOD) policies entering many Australian high schools, teachers and IT staff need platforms that embed security in every app and device. At Somerset College in Queensland, for example, parents can log on to a secure online network to retrieve semester dates, reports and school news via a single sign-on made possible with Azure Active Directory Premium, the mobile security module of Microsoft 365 for Education.

“We’ve been automatically upgrading schools on our platforms to Microsoft 365 for Education simply because the security component is so important," Cardy highlights. “The new solution incorporates mobile device management and security controls at every level of operation, so that educators don’t need to keep worrying about the implications when they introduce a new app or adapt how they’re using current tools. Vigilance remains important, but this approach takes away much of that burden and lets teachers get on with developing lessons and getting their students excited about digital.”

 

Find important updates on the new Microsoft 365 for Education and news around the Suite upgrading for schools on http://docs.microsoft.com/education

Watch Jesse Cardy talk about why Microsoft 365 Education is becoming the leading MDM package for Australian schools on our YouTube channel.

Our mission at Microsoft is to equip and empower educators to shape and assure the success of every student. Any teacher can join our effort with free Office 365 Education, find affordable Windows devices and connect with others on the Educator Community for free training and classroom resources. Follow us on Facebook and Twitter for our latest updates.

Guest Post Nathan Belling, Insync Technology: Microsoft Inspire – It’s all about early access!



Nathan Belling
General Manager
Insync Technology

 

At Insync Technology we build and deliver digital business experiences with the Microsoft Cloud. We help organisations become more efficient, people become more productive, and everyone stop doing the things they hate! We are a 100% Microsoft partner business focussed on the modern workplace, so it’s no surprise that the idea of attending a worldwide Microsoft conference appealed to us. Microsoft Inspire (or Worldwide Partner Conference as it was then known) seemed to offer everything we needed.

But there was just one problem.

The wake up call

My co-founder Stuart Moore and I started Insync Technology in 2013. When the idea of going to the Microsoft Worldwide Partner Conference first came up, we were still very much in start-up mode, with the business running us rather than the other way round. We both wanted to attend the conference but the reality was that for the business to continue to run, one of us had to be on the ground making it happen.

The realisation that I wouldn’t be able to attend was an important wake-up call for me. It helped me recognise that it was never my intention to start a business to be tied to it 365 days a year. This awareness prompted us to truly work on the business, putting the right people and processes in place so that both of us could attend Microsoft Inspire in 2017.

The results of that hard work meant we came back from Microsoft Inspire with a business that was still successfully running. We had been able to take advantage of everything the conference offered us, immersing ourselves for the entire week on the content and the people at Inspire, knowing that our business was ticking over like clockwork on the home front.

First to know

I now appreciate that I could not have chosen a better time to attend Microsoft Inspire. It was at the 2017 conference that Microsoft started to announce a huge amount of organisational change. Being there meant that we were privy to that insight from day one. We were finding out about the Microsoft transformation at the same time as the rest of the Microsoft team and partners — we were all part of this journey discovering the changes at the same time.

We also began to look at our own business and make decisions in a new, informed way that aligned with Microsoft’s changes. We rebranded and grouped a number of our teams to align with the Microsoft terms, specifically Modern Workplace and Apps and Infrastructure. From there we re-worked our marketing, our branding, our website, and our go-to-market; we invested a lot of time and money changing our business to fit with Microsoft and it was Microsoft Inspire which gave us the confidence to do that.

The defining moment

While it is hard to calculate the ROI on these changes, the true measure of success became clear to us a couple of months after Microsoft Inspire when we attended the One Commercial Partner roadshow event hosted by Mark Leigh. The event was well attended and a whole group of partners were present in the session. Only about a quarter of the people in the room had been at Microsoft Inspire, and it was clear that they “just got it”: it was something they had already worked on, they had digested the information, it was already part of their businesses. The other three quarters of the room, you could tell, just didn’t get it.

That was the defining moment for me; we actually had a three-month head-start on a lot of our competitors.

Why you should go

Microsoft Inspire is much more than content sessions and keynote speeches; it’s the early access to information, to what the direction is and what the priorities are, that makes it really meaningful. You can gain the knowledge by watching the streams anywhere in the world, but it is the conversations you have afterwards with both the Microsoft team and the partners that give you deep insight into how things are happening.

Microsoft Inspire gave me a week to spend with like-minded people; it wasn’t simply about learning and growing our business, it was also a week of great personal development. But it is the early insight and consistent messaging about what is happening and how it’s happening that mean you will find me there again this year.


Don't miss out: register for Microsoft Inspire today!

docker: Error response from daemon: driver failed programming external connectivity on endpoint


This is going to be a quick post about the following error I ran into with Docker while trying to start an MSSQL container:

E:>docker run -p 1433:1433 --name mssqlverification -v mssqldata:/var/opt/mssql/data -d company/mssql-external
ba97afdcaf554a8af81c946896940ad071c3bb9b3924889dae05bcb7821e56b1
docker: Error response from daemon: driver failed programming external connectivity on endpoint mssqlverification (07e38668d0f1a46dddd97585377310f80b330ddb281771d33f5556900b5da9fd): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:1433:tcp:172.17.0.2:1433: input/output error.

There are several GitHub issues (like this one, this, and this) opened for the same problem. I followed the steps below and it worked fine for me. Hopefully some of these steps help you too.

  • Check if the port is already in use by another process with this command:
netstat -ano|find ":1433"

1433 is the port you are looking for. If this command returns any output, you have to stop the process using that port.

For example, for port 3001:

E:>netstat -ano|find ":3001"
 TCP 0.0.0.0:3001 0.0.0.0:0 LISTENING 8548
 TCP [::1]:3001 [::]:0 LISTENING 8548

E:>tasklist|find "8548"
vpnkit.exe 8548 Console 3 24,116 K

In the example above, I was running another Docker container on port 3001, and vpnkit is used by Docker itself.
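If the process holding the port is one of your own (and not something Docker needs, such as vpnkit), you can stop it before retrying. This is a hedged sketch; the PID comes from the netstat output above:

REM Forcefully stop the process listening on the port (replace 8548 with the PID from netstat)
taskkill /PID 8548 /F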

  • Check that the hello-world container works for you

 

E:>docker run -it hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 (amd64)
 3. The Docker daemon created a new container from that image which runs the
 executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
 to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

Also, if you are on Windows, check whether you are using Linux or Windows containers. You can switch to Windows containers from the Docker tray icon, as shown below.

switch to windows containers

Now rerun hello-world and verify.

E:>docker run -it hello-world
  • If the issue is with Linux containers, stop Docker and restart your machine. This should solve the issue.

Visual Studio Code C/C++ Extension March 2018 Update


[Original post] Visual Studio Code C/C++ extension March 2018 update

[Originally published] 2018/3/29

Today we are excited to announce the March 2018 update to the Visual Studio Code C/C++ extension! This update includes improved auto-complete for local and global scopes and a simplified configuration process for system includes and defines, all in the service of a better IntelliSense experience. You can find the full list of changes in the release notes.

We would like to thank everyone who used this month's Insider build and sent us feedback! We fixed the issues you reported and acted on several of your feature suggestions, which helped shape this release. If you're not an Insider yet but are interested, we'd love for you to join the VS Code C/C++ Insiders program.

 

Auto-complete for local and global scopes

While this is not an entirely new feature, IntelliSense now provides a semantically aware list of auto-complete suggestions as you type local and global variables or functions. Compared with the previous approach, the new auto-complete experience gives you a shorter and more relevant list of suggestions, making it easier to write C/C++ code.

Automatic retrieval of system includes and defines from the compiler

IntelliSense now automatically retrieves system includes and defines from GCC/Clang-based compilers, removing the need to configure them manually in the "includePath" and "defines" settings. On Mac and Linux, the IntelliSense engine searches the system for installed compilers and automatically selects one as the default. You can check which compiler is being used via the new "compilerPath" setting in the c_cpp_properties.json file, and change its value as needed. The "compilerPath" setting also accepts compiler arguments that affect the system defines being returned.

In addition, two new settings, "cStandard" and "cppStandard", let you set an explicit language standard for IntelliSense.

Force IntelliSense to process arbitrary header files

If you want IntelliSense to process header files that are not explicitly listed in #include statements, you can now specify them with the new "forcedInclude" setting. The IntelliSense engine will process these headers before looking at the #includes.
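As an illustration (a minimal sketch, not taken from the original post; the paths and values are hypothetical), a c_cpp_properties.json configuration using the new settings might look like this:

{
    "configurations": [
        {
            "name": "Linux",
            // Compiler queried for system includes and defines; arguments are allowed
            "compilerPath": "/usr/bin/gcc -m64",
            // Explicit language standards for IntelliSense
            "cStandard": "c11",
            "cppStandard": "c++17",
            // Headers processed before any #include is evaluated
            "forcedInclude": ["${workspaceFolder}/src/prefix.h"]
        }
    ]
}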

Tell us what you think

Download the C/C++ extension for Visual Studio Code, try it out, and let us know what you think. File issues and suggestions on GitHub. If you haven't given us feedback yet, please take this quick survey to help us shape the extension to your needs. You can also find us on Twitter (@VisualC).

AX Performance Monitor 101 – Tips and tricks to deal with performance counter files


Windows Performance Monitor for Dynamics AX

In my previous blog post, I explained how to set up Performance Monitor (PerfMon) to proactively capture performance data while cleaning up old files to keep disk space under control. That is, let's say, our ideal scenario, but sometimes the setup is not that specific and we need to deal with suboptimal files that contain the performance data we need to analyze:

  • We have too many files
  • We have too few files
  • We have some huge file that makes analysis or processing it too slow
  • We have files captured in different languages

 

Let's have a brief look at how we can deal with some of these situations by introducing a couple of small but useful tools:

 

PAL - Performance Analysis of Logs

PAL is a small but really useful tool created by Clint Huffman that takes one PerfMon counter file and creates a nice HTML report with graphs and descriptions that can be used as a starting point for performance analysis. The tool doesn't replace a manual in-depth analysis of any potential problem, but it gives some hints that can be used to start looking further.

This tool has really interesting options, like restricting the analysis to a specific date and time range, or re-sampling the counters to reduce the output length. For instance, if counters are taken every 10 seconds the report may be too large, so PAL can "slice" it into 30-minute or 1-hour intervals (the sampling interval is configurable) to make it easier to understand and to make the report easily comparable with reports originally taken with different sampling values. The tool can also queue multiple files and keep running until all of them have been processed (which can take some hours, depending on the files and setup).

More information and download:

 

PLT - Perfmon Log Translator

PLT is a small tool that allows you to... well, it seems obvious from the name: it translates performance counters between known languages. Not all languages are available, but the tool allows you to create new language files to extend it.

More information and download:

  • http://pal.codeplex.com/releases/view/21261 (original source in CodePlex)
  • This folder was previously on CodePlex but is not yet available on GitHub, so I uploaded the last version here.

 

RELOG

Relog is a standard command-line tool available in Windows operating systems to extract and manipulate performance counters from performance counter logs into other formats or files. Relog is what PAL and PLT use under the hood to manipulate the counter files, but it has some interesting options not available in these visual tools. Let's see some (but not all) situations where relog may be useful:

  • We got a huge file taken during several days of data capture for a single server: we can create a new file with a slice of the original file with the date and time range we want to analyze, making it smaller and easier to manage by other tools:
relog "X:MIHUGEFILE.blg" -b 05/31/2017 00:00:00 -e 06/01/2017 00:00:00 -o "X:ONEDAYFILE.blg"

  • We got a lot of files taken with multiple PerfMon templates during different time ranges: we can merge all these files and then cut the output by a specific time range, all in one operation:
relog "X:FILE1.blg" "X:FILE2.blg" "X:FILE2-2.blg" -b 05/31/2017 00:00:00 -e 06/01/2017 00:00:00 -o "X:ONEDAYFILE.blg"


We don't usually get the counter files in the format or size we want, but we can quickly prepare them with relog and bypass the limitations of the previous tools. For example, PAL only accepts one file as input for a report and can only analyze English counters; PLT only accepts one file at a time, too. Therefore, creating a single file limited to the date-time range we want to analyze is the first step before PLT (if the counters are not in English), and then PAL.

Check the documentation to see all the options; relog can even save PerfMon counters to a SQL database!
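For example, as a hedged sketch (the DSN name PerfDB and the log set name CounterLog are placeholders you would create yourself), pushing a counter file into SQL Server looks like this:

relog "X:\ONEDAYFILE.blg" -f SQL -o "SQL:PerfDB!CounterLog"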

 

While out of the scope of this blog series, here are some extra links to learn more about this topic:

 

José Antonio Estevan

Premier Field Engineer

 

Changes in solution management following the BPF model redesign


In the Dynamics 365 Fall 2016 release, we significantly redesigned the business process flow (BPF) model. From the solution management point of view, each BPF is now represented in two ways:

  • as a process definition of type BPF
  • and as an entity (as a consequence, an instance of a process is represented as a record)

The benefits of this model change are described in the following blog: Concurrent business process flows in Dynamics 365. To mention some of them: we can run simultaneous business process flows on the same record, and we get enhanced reporting possibilities around business process efficiency and execution quality.

To align with the new design and ensure metadata consistency between the BPF process and entity definitions, we are adding an additional check at the solution level, which does not allow importing a solution that contains only the BPF process definition without the BPF entity. The BPF entity is created the first time the BPF is activated; for this reason, before exporting the solution from an unmanaged environment, make sure the BPF has been activated at least once.

If you try to import a solution with BPF processes but without the BPF entity definitions, you will see the following error: “Microsoft.Crm.CrmException: Failed to export Business Process “{0}” because solution does not include corresponding Business Process entity “{1}”. If this is a newly created Business Process in Draft state, activate it once to generate the Business Process entity and include it in the solution”.

Some solution management strategies impose the separation of specific customization object types into distinct solutions, meaning separate solutions for entity definitions, processes, reports, dashboards, etc. In this case, the recommendation is to add the BPF entities to the BPF solution and not keep duplicate references to them in other solutions, in order to avoid dependency conflicts between solution layers.

 

Separate solution for entity and process definitions

 

If for any reason it is not possible to follow this recommendation, keep the old BPF entities in both the solution with entities and the solution with BPF definitions, and add newly created BPF entities only to the solution with BPF definitions.

 

Other resources explaining latest modifications in BPF design and possible implementation scenarios:

Business process flow automation in Dynamics 365

Best practices for automating business process stage progression in Dynamics 365

Model business process flows

Sample: Work with business process flows

 


IoT: Connecting an Arduino to Azure IoT Hub and Visualizing Data with Time Series Insights


I'm starting to play around with IoT a little, so don't expect anything terribly sophisticated. If you're just starting too, let's grab Azure and dive in together. Today we'll take an IoT sensor, connect it to Azure IoT Hub, and push the collected data into the Time Series Insights visualization platform.

Where to get a sensor

I'm starting with the IoT DevKit AZ3166 from MXCHIP. It's an "Azure Ready" board built on Arduino, and it comes with temperature, humidity, and pressure sensors, a magnetometer, a gyroscope, and an accelerometer already on board. On top of that, it has interfaces such as an audio chip with a microphone and headphone output, a WiFi module, and an infrared communication module, plus two user buttons, LEDs, and a nice OLED display. Take a look at the device here: https://microsoft.github.io/azure-iot-developer-kit/

The project site has plenty of nice examples, but I wanted to get my hands dirty too, and none of them fit exactly. I wanted to send data from all the sensors and use the buttons to choose the sending interval. For the accelerometer it also made sense to take readings relatively often and send both the maximum (the largest acceleration measured within the interval) and the average acceleration per interval. So there was nothing left but to hack something together in C.

Start with the guide on the project site and download the complete toolset, including Visual Studio Code and its extensions: https://microsoft.github.io/azure-iot-developer-kit/docs/get-started/

For my examples I'll be using this experiment of mine – if you like, flash it to your device: https://github.com/tkubica12/iot-demo-arduino

Connecting to IoT Hub

IoT Hub is a solution for ingesting data from sensors, two-way communication, and device management including authentication, and it can do even more. It accepts data over an HTTP API, the AMQP protocol, or MQTT. I used the IoT Hub SDK for Arduino directly, and for provisioning credentials I followed the guide on the DevKit's GitHub.

This is what my IoT Hub looks like after connecting the sensor.

Data is now happily flowing in and we can process it. But I want to start simple, so I'll use a ready-made platform service for visualization. It will take care of pulling the data out of the Event Hub, parsing it, storing it long-term, visualizing it, and aggregating it. That ready-made solution is called Time Series Insights.

Continue reading

Extending the Outlook Add-in Experience in Microsoft Dynamics 365 Business Central and Microsoft Dynamics NAV


One of the showcase features of Dynamics 365 Business Central is the ability to use the product within Microsoft Outlook clients using Outlook add-ins. There are two add-ins that come out of the box with Dynamics 365 Business Central: The Contact Insights add-in and the Document View add-in.

From an email within Outlook, Contact Insights enables the user to go straight to the Contact, Customer, or Vendor Card that is associated with the sender or recipient of the email message. From there, information about the contact may be viewed or edited and documents may be created and sent directly in Outlook. Given an email that contains a document number within the body of the message, Document View enables the user to directly open that document within the context of the email message; from there, the document may be edited (if it is still a draft), posted, or emailed to the customer. Figure 1 shows the Contact Insights add-in opened inside of the Outlook web client.

That explains the default add-ins in a nutshell. But what happens if there is a scenario that is not supported by the default add-ins? That is exactly what this article is about – enabling new scenarios through new or modified Outlook add-ins for Business Central. To understand how to extend the existing Outlook add-ins or create new add-ins, we first need to understand how the Outlook add-ins work with Business Central. If you don’t care and just want to try creating an add-in yourself, feel free to skip to part two.

Also, the same steps apply to the Outlook add-in for Dynamics NAV, provided that your Dynamics NAV deployment uses either Azure Active Directory or NAVUserPassword as the authentication mechanism. For more information, see Credential Types for Dynamics NAV Users.

Part 1 - The Outlook add-in Architecture

In the simplest terms, an Outlook add-in is a frame that loads some web page. In our case, that web page happens to be the Dynamics 365 Business Central web site. There are three different pieces to the Business Central Outlook add-ins: the add-in manifest, the code that generates the manifest, and the code that handles the add-in session. The Outlook add-in manifest contains information about how to load the add-in. It tells Outlook what buttons should appear in the Outlook client, what text those buttons should contain, the images that should appear on those buttons, and, most importantly, what to do when those buttons are clicked. The code that generates the manifest is on the Business Central side. This code takes a manifest template (defined in a particular codeunit) and puts the correct strings, resource image links, and URLs in the manifest based on the system language and server configuration. This is an important point to understand – the manifest will look different depending on the tenant from which the manifest was generated and the language. The last component in this system is the actual code on the Business Central side that handles the incoming add-in session. This consists of a web page made specifically for the Outlook add-ins (OfficeAddin.aspx) as well as C/AL code that loads the correct page and record based on the incoming add-in context. The context contains information about the email from which the add-in loaded – such as: sender or recipient(s) of the message, the email subject, etc. This enables a custom experience depending on the contents of the email message.

Manifest File

Let’s dig into the manifest file first. You can take a look for yourself by jumping over to the Office Add-in Management page (Page 1610), selecting the “Contact Insights” row in the table, then clicking the “Download Add-in Manifest” action. This will prompt your browser to download the XML manifest file, which is what gets deployed to Exchange during the add-in setup step. There are two main pieces to the manifest file. The top portion is what describes the add-in itself. It contains information such as the name, description, and icon for the add-in. When you manage your add-ins in the Outlook portal, this is the information that will appear for the Contact Insights add-in.

The top portion of the manifest also contains the web address to load when a user launches the add-in from the horizontal pane. This is the single Business Central button right above the body of the email message that is shown in the first screenshot above, as opposed to the branded icons, which are add-in commands.

The add-in commands are defined in the VersionOverrides element. You can use this portion of the manifest to define different actions. In the Contact Insights add-in, we have a button that performs the default action as well as a menu button that contains several different actions for creating new documents for the contact in the email message. Each of the buttons in the Contact Insights add-in is a link to the OfficeAddin.aspx page that specifies a particular command as a query string that will get processed later by C/AL code. All of the strings related to the buttons, as well as the URLs, are defined at the bottom of the manifest file – within the Resources element. Figure 3 shows how the OfficeContext and Command query strings are specified in the button URL.

Manifest Generation

The Office Add-in Management page can be used to add new add-ins to the system that can later be deployed to a user’s mailbox or to the whole organization. To create a new add-in in the system, you must first write the manifest file for your new add-in. You can use either of the two default add-ins or the manifest in the example below as a reference. Just make sure to change the id of your new add-in. You’ll need to decide whether the manifest that you’ve built is deployable by itself or if the manifest needs to be customized at add-in deployment time based on the Business Central system settings. As an example of the latter, the Document View add-in manifest is generated at deployment time because the system puts information about the system’s number series into the manifest so that the add-in can recognize document numbers in emails. In most cases, however, the manifest could stand by itself.
Once the manifest is created, the add-in can be created in Business Central by clicking the “Upload Default Add-in Manifest” action in the ribbon of the Office Add-in Management page. This will create a new record in the table. Now, the system will pull the name, description, and version from the manifest and use those in the table. The “Manifest Codeunit” field is used to specify a codeunit that will make any deploy-time customizations to the manifest that you just uploaded. If that’s not necessary, it can be left as 0. At this time, the add-in could be deployed using the “Set up your Business Inbox in Outlook” wizard. For more information, see Using Business Central as your Business Inbox in Outlook.

Handling the add-in

Up to this point, we’ve only discussed the generation of the add-in, but nothing about what causes the correct Business Central pages to open once the add-in is launched. The whole flow can be described in these seven steps:

  1. Outlook loads the manifest for the add-in and loads the frame.
  2. The frame launches the URL specified in the manifest, which is the OfficeAddin.aspx page on the Business Central web server.
  3. OfficeAddin.aspx makes use of some of the Office JavaScript libraries to pull contextual information from the email item.
  4. This page then launches the Business Central client on page 1600 and passes the contextual information to the page as filters.
  5. Page 1600 wraps all the information it got into an “Office Add-in Context” temporary record.
  6. Page 1600 then passes this record to Codeunit 1630 (Office Management), which determines what to do based on the incoming context.
  7. The correct page is rendered in the client and shown to the end user.

I encourage you to look at the attached PowerPoint file to get a step-by-step diagram of this flow. If you would like to understand more about how this flow works and you have access to the code, I also encourage you to check out the code for the objects that are referenced.

Part 2 - Creating a new, custom add-in

Let’s walk through the end-to-end process of creating a new add-in. That means: writing the manifest, uploading the manifest into the system, writing the code to handle the add-in session, and deploying the add-in through the Business Central system.
In this example, we will be writing a new add-in that simply shows the company’s product list in Outlook. It will do this by launching the Item List page.

The Manifest

There are a few things we need for our new manifest: the add-in information (id, version, name, etc.), the icon URL, and the URL the add-in points to when the user launches it. All but the last one are straightforward. The URL the add-in points to is what matters most here; if it isn't formatted properly, the add-in will not work. Let's look at how this URL is formatted. See figure 4; a hedged sample is also sketched after the list below.

  1. This is the URL to your Business Central instance.
  2. The OfficeContext tells Business Central which add-in it needs to be concerned with. It is how Business Central differentiates between the Contact Insights add-in, the Document View add-in, and any new add-ins you might create.
  3. This is the version of the add-in. It should be the same as the version at the top of the manifest file. Note: If these versions are not the same, your add-in will not load correctly.
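For illustration only, a SourceLocation entry in the manifest pointing at a Business Central instance could look roughly like the line below. The host name, the OfficeContext value, and the Version query parameter name are assumptions based on the description above, not copied from a real manifest:

<!-- Hypothetical values - replace the host name and context with your own -->
<SourceLocation DefaultValue="https://mybctenant.example.com/OfficeAddin.aspx?OfficeContext=Outlook-ProductList&amp;Version=1.0.0.0" />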

Create the add-in in Business Central

Once your manifest looks correct, it’s time to upload the manifest into your Business Central instance. To do this, open the Office Add-in Management page and choose the Upload Default Add-in Manifest action. Browse to your manifest file on your machine and choose Open. Observe that a new row is inserted into the table that contains the same name, description, and version that was specified in the manifest file. For this demonstration, we will not be inserting any deploy-time settings into the manifest file, so we will leave the manifest codeunit as 0.

Handling the add-in request

We previously mentioned the Office Management codeunit (1630) and how it is responsible for deciding what to do inside of the add-in. There is a function inside of this codeunit called “GetHandlerCodeunit”, which looks at the HostType of the add-in (specified through the OfficeContext query string in the url – see Figure 4) and checks if that HostType is for one of the default add-ins.

It also has a publisher function that gets the codeunit number to run in case the host type doesn’t fit one of the default add-ins. We need to write a new codeunit that subscribes to this function and tells Office Management to run the new code we write. The codeunit that we write needs two things in this regard: first, the subscriber function that I just mentioned, and second, logic in OnRun that does what we want, which in this case is to show the Item List page. See figure 7 for how to set the properties on the event subscriber in the new add-in handling codeunit.

We also need to add two more subscribers that will allow the add-in engine to get the Office Add-in record for our new add-in. Both of these published functions are in codeunit 1652 – Office Add-in Management: GetManifestCodeunit and GetAddin. The resulting code should look something like figure 8, containing the OnRun logic and the three event subscribers. Note that figure 8 relies on two text constants:

  1. ProductListHostTypeTxt – This is the same value as the OfficeContext in the manifest, which in this case is “Outlook-ProductList”.
  2. AddinIdTxt – This is the Id of the add-in, which is the GUID that we generated and put in the add-in manifest. It is also the primary key of the Office add-in table.

All the pieces are now in place for our new Outlook add-in for Business Central. The only thing left to do is deploy it through the assisted setup wizard. Do that now and then launch your Outlook client. If you are using OWA (Outlook Web Access), you will need to do a hard refresh of the page so that the client can load the new add-in manifest you just deployed. Now click on an email, and see that your new add-in is available in the email. You should be able to just click the Product List link to launch the add-in and then see the Item List page.

Summary

This example was very simple, but you can use the same steps to do virtually anything you’d like with your custom add-in. In addition, you might have already figured out how you could change the default add-in functionality by changing the OfficeContext in some of the URLs in the manifest and then creating your own handler codeunit for the new functionality. There are essentially five steps that we took to create our own custom add-in:

  1. Create the manifest XML file for the new add-in that specifies an OfficeContext in the URLs.
  2. Upload the manifest in the Office add-in management page.
  3. Implement a new codeunit that will handle the add-in session for your specific OfficeContext.
    1. This codeunit must include the three event subscribers we talked about.
  4. Deploy the add-in to your mailbox.
  5. Launch it from your Outlook client.

Interesting C/AL Objects

PAG1600 – The entry point into Business Central from the add-in.

TAB1600 – The container for all the context-specific information that comes from Outlook.

COD1630 – The engine of the add-ins. All add-ins go through this codeunit when initialized, and all AL objects that need to access the add-in go through this codeunit.

COD1636 – This finds a contact/customer/vendor based on the email context and redirects to the appropriate card page.

COD1637 – This finds a referenced document number (Document View add-in) and opens the related page.

COD1642/1643 – These handle custom add-in manifest (XML) generation when deploying the add-in.

When does the IoT Edge runtime need to be restarted?


IoT Edge is currently in preview. Here is a question and answer about when a restart of the IoT Edge runtime (iotedgectl restart) is required.

 

Q. Which operations require a restart of the IoT Edge runtime? For example, if a new device is added under IoT Edge during operation, does the IoT Edge runtime need to be restarted?

 

A. A restart of the IoT Edge runtime is required when you change the configuration of the runtime itself, for example with "iotedgectl setup". Typical cases include changing the Edge runtime's connection string or adding container registry credentials for modules.
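For example, a hedged sketch using the preview-era CLI (exact flags may differ by version; the connection string values are placeholders):

iotedgectl setup --connection-string "HostName=<your-hub>.azure-devices.net;DeviceId=<edge-device>;SharedAccessKey=<key>"
iotedgectl restart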

 

When adding a new device under IoT Edge, that is, when connecting a new leaf device to the IoT Edge gateway over the MQTT protocol, no restart of the IoT Edge runtime is required.

 

Changes made from the Azure portal (including route changes made in [Specify Routes]) do not require a restart of the IoT Edge runtime.

 

 

We hope the above is helpful.

 

Azure IoT Development Support Team, Tsuda

 

Experiencing Data Access Issue in Azure Portal for Many Data Types – 04/09 – Resolved

Final Update: Monday, 09 April 2018 08:33 UTC

We've confirmed that all systems are back to normal with no customer impact as of 04/09, 8:27 UTC. Our logs show the incident started on 04/09, 8:16 UTC, and that during the 11 minutes it took to resolve the issue approximately 5% of customers experienced data access issues in the Azure Portal and in the App Analytics Portal.
  • Root Cause: The failure was due to performance degradation in one of our dependent platform services.
  • Incident Timeline: 11 minutes - 04/09, 08:16 UTC through 04/09, 08:27 UTC

We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Abhijeet


Initial Update: Monday, 09 April 2018 08:28 UTC

We are aware of issues within Application Insights and are actively investigating. Some customers may experience data access issues in the Azure Portal. The following data types are affected: Availability, Custom Event, Dependency, Exception, Page Load, Page View, Performance Counter, Request, Trace.
  • Work Around: None
  • Next Update: Before 04/09 10:30 UTC

We are working hard to resolve this issue and apologize for any inconvenience.
-Abhijeet

Dynamic RLS support for Analysis Service Tabular Model based on multiple roles for each user


Tested on: SQL Server Analysis Service 2016 Tabular, Azure Analysis Service.

 

Hello everyone. I am sure that whenever you wanted to implement Row Level Security (RLS) for an Analysis Services Tabular model, you may have wondered how to implement RLS when some of your users have multiple roles assigned. Well, here is a solution to that problem.

Before going into the details of this blog, if Row Level Security in Tabular mode is still alien to you, I recommend visiting the Microsoft document below, which gives a clear picture of how to implement Row Level Security.

https://docs.microsoft.com/en-us/power-bi/desktop-tutorial-row-level-security-onprem-ssas-tabular

Coming back to my question, let's assume that you have two fact tables, FactInternetSales and FactResellerSales, and a dimension table named DimSalesTerritory.

 

Requirement: A user will have access to FactInternetSales for one territory but FactResellerSales for a different territory.

With the conventional RLS setup this requirement isn't possible, so here is a different way to set up RLS.

 

Setup: In SQL Server, we created a dimension table called DimUser as below:

Here I have used two columns, one for the SalesTerritory region and one for the ReSalesTerritory region, to assign each user a territory for each of the fact tables.

 

Project creation: In Visual Studio we created a Tabular project and imported the tables as shown below:

 

Please note the relationships that I have built.

DimUserSecurity has two relationships with DimSalesTerritory:

DimUserSecurity.SalesTerritoryID --> DimSalesTerritory.SalesTerritoryKey

DimUserSecurity.ReSalesTerritoryID --> DimSalesTerritory.SalesTerritoryKey

 

Here one of the relationships is active and the other is inactive.

Now I created a role named SalesTerritoryUsers, gave it read permission on the model, and added all the members that are part of the DimUsers table.

For the row filters, I added a DAX filter to each of the fact tables:

 

=FactInternetSales[SalesTerritoryKey]=LOOKUPVALUE(DimUserSecurity[SalesTerritoryID], DimUserSecurity[UserName], USERNAME(), DimUserSecurity[SalesTerritoryID], FactInternetSales[SalesTerritoryKey])

=FactResellerSales[SalesTerritoryKey]=LOOKUPVALUE(DimUserSecurity[ResalesTerritoryID], DimUserSecurity[UserName], USERNAME(), DimUserSecurity[ReSalesTerritoryID], FactResellerSales[SalesTerritoryKey])

 

These DAX filters take the logged-in user, match the user against the DimUser table, pick the SalesTerritoryID or ReSalesTerritoryID from that table, match it against FactInternetSales or FactResellerSales, and return only the data for the territory assigned to that user.

Once everything was set, I saved the model and deployed it to my Analysis Services instance.

 

Result:

Now to test it, I browsed the model from Management Studio as the user Harpreet (xyz\harpsi), who has access to FactInternetSales for the Australia region and FactResellerSales for the Germany region.

When browsing the fact tables by SalesTerritory region, it worked completely fine for me. Please refer to the screenshot below.
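If you want to test the role without logging in as that user, a common trick (a hedged sketch; it requires administrator rights on the Analysis Services instance) is to impersonate the account via the Additional Connection Parameters tab of the SSMS connection dialog:

EffectiveUserName=xyz\harpsi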

 

Additional requirement: Let's say you also have a requirement like mine, where you want to give one user permission to more than one territory in a fact table. This can also be done with the above approach. All you have to do is add another row with the territory ID and the user's details in the DimUserSecurity table. Please refer to the table below.

 

EmployeeID  SalesTerritoryID  ReSalesTerritoryID  FirstName  LastName  UserName
1           1                 6                   Mani       Jacob     xyz\majac
2           2                 7                   Kane       Conway    xyz\kaneco
3           9                 8                   Harpreet   Singh     xyz\harpsi
3           3                 NULL                Harpreet   Singh     xyz\harpsi
2           NULL              6                   Kane       Conway    xyz\kaneco

 

Here my user Harpreet has access to FactInternetSales for two territories, 9 and 3 (Australia and Central), whereas he has access to only one territory for FactResellerSales.

This approach is very helpful for requirements beyond the out-of-the-box scenario, where users are assigned to different roles for different departments.

 

Hope this helps you as well.

 

Author:      Jaideep Saha Roy – Support Engineer, SQL Server BI Developer team, Microsoft

Reviewer:  Kane Conway – Support Escalation Engineer, SQL Server BI Developer team, Microsoft

Proactive caching with in-Memory tables as notification


 

Recently we encountered an issue using a Multidimensional model where ROLAP and proactive caching were enabled on one of the partitions. The notification was set to a SQL Server table to track the changes and refresh the cache.

 

The behavior we noticed was that if the SQL Server table was a memory-optimized table, we did not see a notification within Analysis Services for any changes to the table. But if the table was not memory-optimized, the notifications were sent back to Analysis Services and the cache was refreshed.
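As a side note (a minimal sketch using the standard catalog view; the table name is the one from this repro), you can check whether a table is memory-optimized with:

-- is_memory_optimized = 1 means the table is a memory-optimized table
SELECT name, is_memory_optimized
FROM sys.tables
WHERE name = 'FactSalesQuota';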

 

  • We used AdventureWorks to understand this behavior, with the partition Customers_2005 (measure group: Internet Customer) set to ROLAP with proactive caching enabled.

 

 

  • After deploying the model to a SSAS 2016 instance and making changes within SQL Server ([FactSalesQuota]), we could see this notification in the SSAS Profiler trace: "A notification was received from an instance of SQL Server for the '[dbo].[FactSalesQuota]' table".
  • The data was changed on the SSAS side as well:

 

Before the changes SSAS DB:

Sales Amount Quota : 154088000

After the changes SSAS DB (Sales Quota partition converted to ROLAP):

Sales Amount Quota : 154088004.7

 

  • We converted the FactSalesQuota table to an in-memory table and tested the behavior. This time, we didn’t see any notification in the Analysis Services Profiler trace.
  • We took Profiler traces on SQL Server and SSAS simultaneously, and on the SQL side we see the query below getting triggered whenever there is a change in the table:

 

DECLARE @OlapEvent BIGINT;SELECT @OlapEvent = ObjIdUpdate(2);SELECT (@OlapEvent & convert(bigint, 0xffff000000000000)) / 0x0001000000000000 AS Status, (@OlapEvent & convert(bigint, 0x0000ffff00000000)) / 0x0000000100000000 AS DbId, @OlapEvent & convert(bigint, 0xffffffff) AS ObjId;

 

  • But when we convert the table to an in-memory table, this query is not triggered and we don’t see any notification on the SSAS side either.
  • After more research, it seems this query keeps running in a suspended state the whole time.
  • For the normal [FactSalesQuota] table (not in-memory), we see the following:

 

SQL Profiler: I could see the notification query being triggered:

 

SSAS Profiler: SSAS is receiving a notification:

 

 

  • Once we convert the FactSalesQuota table to an in-memory table, I still see the notification query running in a suspended state.

 

 

 

 

  • But after we make the change to the table, the notification query is not triggered.

 

 

No notification seen in SSAS:

 

 

  • We verified this behavior with our PG team and understood that SSAS relies on the SQL Server notification to know if and when any changes have been made to the SQL table, and only then does it initiate a cache refresh.

 

 

 

 

Conclusion:

 

Proactive caching with a SQL Server notification set will not work for in-memory tables in SQL Server. This is a limitation on the SQL side itself.

 

 

 

Author:      Chandan Kumar – Support Engineer, SQL Server BI Developer team, Microsoft

Reviewer:  Kane Conway – Support Escalation Engineer, SQL Server BI Developer team, Microsoft

Performance degradation in West Europe – 04/09 – Investigating


Initial Update: Monday, April 9th 2018 10:18 UTC

We're investigating Performance degradation in West Europe.

  • Next Update: Before Monday, April 9th 2018 11:15 UTC

Sincerely,
Anmol


How to pick the stamp or scale unit where your App Service will run

$
0
0

You can’t. The best you can do is pick the region/location. Read to the bottom for a way to try to influence it.

A scale unit, stamp, or tenant is something I have referred to here: “How to disable TLS 1.0 on an Azure App Service Web App”.

The scale unit is identified by the number highlighted in red in Figure 1 and is found by doing an NSLOOKUP on your Azure App Service name.


Figure 1, what is my Azure App Service stamp
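As a hedged illustration (the site name, stamp name, and address are made up, and the exact output format varies), the lookup looks roughly like this; the waws-prod-<region>-<number> name is the part that identifies the stamp:

C:\>nslookup mysite.azurewebsites.net
Name:    waws-prod-bay-063.cloudapp.net
Address: 40.112.x.x
Aliases: mysite.azurewebsites.net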

The address is the VIP of the stamp that you would use for an A record; I discuss that in more detail here: “How to get a static IP address for your Windows App Service Web App”. Do not confuse it with the outgoing IPs, which I discuss here: “How to find your outgoing Azure App Service IP address”.

When you create a new App Service Plan (ASP) you choose which Resource Group to place it into. All ASPs in a given Resource Group will be deployed to the same scale unit. So my logic is: if I want to put an ASP onto a different stamp, even though I/YOU CANNOT select it, creating the ASP in a new Resource Group gives a probability that it is placed onto a different scale unit. You can at least give it a shot if you need that for some reason. But you cannot choose it specifically unless you create an App Service Environment (ASE).

Container to bindata, and back again


I was recently engaged in a bug where the size of a container caused problems. The solution we arrived at was to compress the container. The solution is trivial, but the APIs to get there took me a while to discover, and as I couldn't find any other post on this, I'm sharing it.

Here is how to implement compression/decompression in pack/unpack methods using BinData and ContainerClass.

 

public container pack()
{
    // Business as usual...
    container result = [#currentVersion, #currentList];

    // Compress
    ContainerClass containerClass = new ContainerClass(result);
    BinData binData = new BinData();
    binData.setData(containerClass.toBlob());
    binData.compressLZ77(12);
    return binData.getData();
}

public boolean unpack(container _compressed)
{
    // Decompress
    BinData binData = new BinData();
    binData.setData(_compressed);
    binData.decompressLZ77();
    container packed = ContainerClass::blob2Container(binData.getData());

    // Business as usual...
    Version version = RunBase::getVersion(packed);
    switch (version)
    {
        case #CurrentVersion:
            [version, #currentList] = packed;
            break;
        default:
            return false;
    }
    return true;
}

    THIS POST IS PROVIDED AS-IS; AND CONFERS NO RIGHTS.

Experiencing errors while creation of Application Insights app using Visual Studio – 04/02 – Mitigating

$
0
0
Update: Monday, 09 April 2018 18:56 UTC

We still see remnants of this issue in a few regions where Hotfix is yet to be rolled out. ETA for completion of Hotfix deployment to these regions is 04/10 19:00 UTC
  • Work Around: Apps can be created using Azure portal without any issues
  • Next Update: Before 04/10 19:00 UTC

-Dheeraj


Final Update: Saturday, 07 April 2018 00:07 UTC

Hotfix has been successfully deployed to EUS, SCUS, WEU, WUS2, Southeast Asia and NEU regions. At this moment we don’t expect any issue in creating Application Insights resources via Visual Studio, but in case any help or info is required with respect to this issue, please reach out to Microsoft support.

  • Root Cause: The failure was due to backend configuration changes on one of our dependent services.
  • Incident Timeline: 4 Days, 7 Hours & 12 Minutes - 04/02 16:55 UTC through 04/07 00:07 UTC

We understand that customers rely on Application Insights as a critical service and apologize for any impact this incident caused.

-Dheeraj


Update: Thursday, 05 April 2018 22:28 UTC

Hotfix has been successfully deployed and validated in EUS region. ETA for hotfix rollout completion to all regions is EOD today.
  • Work Around: Apps can be created using Azure portal without any issues
  • Next Update: Before 04/06 23:00 UTC

-Dheeraj


Update: Tuesday, 03 April 2018 22:24 UTC

Hotfix has been successfully deployed in Canary and BrazilUS regions. Currently, we are trying to prioritize this Hotfix rollout for other regions in the order of EastUS, SouthCentralUS, WEU, WUS2, Southeast Asia and NEU. Current ETA for Hotfix rollout across all regions is EOD Friday.
  • Work Around: Apps can be created using Azure portal without any issues
  • Next Update: EOD Friday

-Dheeraj


Update: Monday, 02 April 2018 21:53 UTC

We identified the root cause of this issue. To fix it, we are moving forward with hotfix deployment in this order: EastUS, SouthCentralUS, WEU, WUS2, Southeast Asia, NEU. Currently we have no ETA for resolution and are trying to expedite the rollout of this hotfix.
  • Work Around: Apps can be created using Azure portal without any issues
  • Next Update: Before 04/03 22:00 UTC

-Dheeraj


Initial Update: Monday, 02 April 2018 16:55 UTC

We are aware of the issues within Application Insights and are actively investigating. Customers creating a new project with Application Insights on by default in Visual Studio 2015 will see a failure message like the one below:

'Could not add Application Insights to project. Could not create Application Insights Resource : The downloaded template from 'https://go.microsoft.com/fwlink/?LinkID=511872' is null or empty. Provide a valid template at the template link. Please see https://aka.ms/arm-template for usage details. This can happen if communication with the Application Insights portal failed, or if there is some problem with your account.'


  • Work Around:  Apps can be created using Azure portal without any issues
  • Next Update: Before 04/02 21:00 UTC

We are working hard to resolve this issue and apologize for any inconvenience.


-Sapna

Azure Short Videos


I recently blogged about Azure Short Videos. A few topics are still works in progress and I'll update them soon; however, there are already links to hundreds of videos.

Azure is a fast-moving platform, with changes coming to it more often and more quickly than ever before. It’s difficult to keep up with the features and how you can take advantage of them. Even I find it difficult to find the right resources. If I need quick information about, say, “What is StorSimple?”, what I find sometimes isn’t what I’m looking for. So I have compiled a list of short videos that may help you understand a feature or enable you to do some configuration quickly. Do send me a note if you like something, if you don’t, or if you come across any good resource!

So have fun learning about and understanding Azure with these short videos!

Simple Trick to Stay on top of your Azure Data Lake: Create Alerts using Log Analytics


If you manage one or more Azure Data Lake accounts, do you ever find it hard to stay on top of everything that is happening? Ever feel the need to know more about them? Are you regularly asking yourself any or all of these questions:

  • What are our most expensive jobs?
  • When was a new data folder created in [path]?
  • When did a file get deleted from our [data/compliance/telemetry/other] folder?

Creating Azure Log Analytics alerts for your Azure Data Lake accounts can help you know when specific events happen, or when a metric reaches a defined threshold. In this post I'll show you how to reduce the level of unknowns when working with Azure Data Lake using Azure Log Analytics alerts - it's easy to get started:

Connect your Azure Data Lake account to Log Analytics

Follow the steps in our previous blog post on Log Analytics to connect your accounts and start collecting usage and diagnostics logs.
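If you prefer scripting the connection, a rough sketch with the Azure CLI is shown below; the subscription, resource group, account, and workspace names are placeholders, and the log categories are assumptions you should verify against your own diagnostic settings blade:

# Route Data Lake Store diagnostic logs to a Log Analytics workspace (all IDs below are placeholders)
az monitor diagnostic-settings create \
    --name adls-to-loganalytics \
    --resource /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DataLakeStore/accounts/<adls-account> \
    --workspace /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace> \
    --logs '[{"category":"Requests","enabled":true},{"category":"Audit","enabled":true}]'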

Create the Log Analytics Alert

Open Log Analytics, and navigate to the Alerts section towards the bottom of the table of contents.

Figure 1: Log Analytics - Alerts

 

In the Log Analytics Alert blade, click on New Alert Rule to create a new alert.

Figure 2: Log Analytics - New alert

 

The first part of the rule, the target, should already be selected – using the current Log Analytics account. For the second part – the criteria, click the button to add the conditions for the alert.

Figure 3: Log Analytics alert criteria settings

 

To configure the alert signal, select Custom Log Search.

Figure 4: Custom query for an alert

  • In the Search query field, paste the specific query that will trigger the alert. For this example, we will track when a new folder is created in a Data Lake Store account (a sample query is sketched after this list):

    Figure 5: Log search query

 

  • The alert logic can be based on the number of results, such as the total number of events tracked (creating a folder in Data Lake Store, submitting a job in Data Lake Analytics, etc.), or on a specific metric value, such as a sum of the events or an aggregation of values from the query (total data read, total number of AUs assigned, total duration of the jobs run in a window of time, etc.).

    Figure 6: The two main types of alerts - based on number of results or a single metric value

 

  • The period and frequency indicate the rolling window of time that needs to be evaluated, and how frequently to check it, respectively.
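As a concrete illustration of the kind of query you might paste for the folder-creation scenario above, here is a rough sketch; the table, provider, category, and operation names are assumptions about how Data Lake Store diagnostics typically land in Log Analytics, so check a few recent rows in your own workspace and adjust them before wiring this into an alert:

// All names below are assumptions - verify them against your own data first.
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.DATALAKESTORE"
| where Category == "Requests"
| where OperationName has "mkdirs"    // folder-creation operations

With a query like this, an alert condition of “number of results greater than 0” over the chosen period would fire whenever a new folder is created.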

In the Define alert detail section, we can enter some descriptive details, including the severity for the alert.

Figure 7: Default alert details

 

Next, let's create a new action group where we can add people or groups to notify and the specific action to take, such as email, SMS, etc.

Figure 8: Action group details

 

It is possible to create complex combinations of emails, SMS, or other notifications for specific users and groups. In this example, the team will be emailed:

Figure 9: Action group settings

 

Once the action group is created, it will be added to the definition. Save the alert settings, and you're done.

The rule will be displayed in the list of alerts:

Figure 10: Updated alert criteria

 

Conclusion

In this blog post, I've shown you how to configure alerts for your Azure Data Lake accounts. These alerts can notify you of specific events or metric values that are relevant to you and your organization and will help you to proactively act on events, optimize costs, and understand usage trends. Try these simple steps to enable alerts, and let us know how they are helping you stay on top of your Azure Data Lake usage or costs - leave us a comment and share your experiences for others to build on. Have a specific need or scenario? Send your feature requests to our Azure Data Lake UserVoice forum.

Based on your comments and suggestions, we will cover useful and interesting events and metrics that you can plug into alerts. Stay tuned!
