
STL Fixes in VS 2017 RTM


 

[Original article] STL Fixes In VS 2017 RTM

[Originally published] 2/6/2017 9:20 AM

VS 2017 RTM will be released soon. VS 2017 RC is available now and contains all of the changes described here. Please try it out and submit feedback through the IDE via Help > Send Feedback > Report A Problem (or Provide A Suggestion).

This is the third and final post about the STL changes between VS 2015 Update 3 and VS 2017 RTM. In the first post (for VS 2017 Preview 4), we explained how the 2015 and 2017 releases are binary-compatible. In the second post (for VS 2017 Preview 5), we listed the features that were added to the compiler and the STL. (Since then, we have also implemented P0504R0, the revisited in_place_t/in_place_type_t<T>/in_place_index_t<I>, and P0510R0, rejecting variants of arrays, references, and incomplete types.)

vector overhaul:

We have overhauled vector<T>'s member functions, fixing many runtime correctness and performance bugs.

* Fixed aliasing bugs. For example, the Standard permits v.emplace_back(v[0]), but our implementation mishandled it at runtime, and v.push_back(v[0]) was guarded with problematic code (asking "does this object live inside our memory block?", which is not valid in general). The fixes were made in a performance-conscious way, so everything we claim to support is now handled correctly.
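For illustration only (a minimal sketch of my own, not taken from the original post), the two aliasing cases mentioned above look like this:

#include <cassert>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> v(3, std::string(100, 'x'));
    // v[0] lives inside v's current memory block. When push_back triggers a
    // reallocation, the implementation must copy the argument before the old
    // block is invalidated.
    v.push_back(v[0]);
    assert(v.back() == std::string(100, 'x'));

    // The Standard also permits the emplace_back form of the same aliasing:
    v.emplace_back(v[1]);
    assert(v.back() == v[1]);
}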

* To defend against aliasing, we construct an element on the stack only when there is no other choice (for example, emplace() needs this power, but not always). (There is one extremely obscure bug left unfixed here: strictly speaking, we don't yet use the allocator's construct() function when handling such objects.) Note that our implementation follows the Standard, which forbids aliasing only on a per-member-function basis; for example, range insertion of multiple elements is not required to handle aliasing, so we don't attempt to support that case.

* Fixed the exception-handling guarantees. Ever since move semantics were introduced in VS 2010, we unconditionally used move construction when reallocating the container's elements. That was delightfully fast, but unfortunately incorrect. Now we follow the Standard-mandated move_if_noexcept() pattern (move only when moving cannot throw).

For example, when push_back() and emplace_back() need to reallocate, they ask the element: "Are you nothrow move constructible? If so, I can move you (it can't fail, and it's fast). Otherwise, are you copyable? If so, I'll fall back to copying you (possibly slow, but it won't damage the strong exception guarantee). Otherwise, if you claim to be movable only with a potentially-throwing move constructor, I'll move you, but you won't get the strong EH guarantee if a move throws." Now, with a couple of obscure exceptions, all of vector's member functions provide the basic or strong EH guarantees as mandated by the Standard. (The first exception arguably points to a Standard defect: range insertion from input-only iterators is required to provide the strong guarantee when element construction throws, which is essentially impossible to implement without heroics, and no implementation is known to have ever provided it. Ours provides the basic guarantee: we repeatedly call emplace_back() and then rotate() into place, and if one of the emplace_back() calls throws, we may have discarded our original memory block long before, which is an observable change. The second exception involves "reloading" proxy objects (and sentinel nodes in the other containers) for POCCA/POCMA allocators, where we can't defend against running out of memory; fortunately, std::allocator doesn't trigger reloading.)
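As a hedged sketch (my own example, not from the original post), this is the distinction that reallocation makes via move_if_noexcept() between element types:

#include <vector>

struct SafeMove {
    SafeMove() = default;
    SafeMove(SafeMove&&) noexcept = default;   // existing elements are moved on reallocation
    SafeMove(const SafeMove&) = default;
};

struct RiskyMove {
    RiskyMove() = default;
    RiskyMove(RiskyMove&&) noexcept(false) {}  // potentially-throwing move constructor
    RiskyMove(const RiskyMove&) = default;     // so existing elements are copied instead,
                                               // preserving the strong guarantee
};

int main() {
    std::vector<SafeMove> a(4);
    a.push_back(SafeMove{});   // reallocation relocates old elements by moving
    std::vector<RiskyMove> b(4);
    b.push_back(RiskyMove{});  // reallocation relocates old elements by copying
}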

* Eliminated unnecessary EH logic. For example, vector's copy assignment operator had a needless try-catch block; it needs to provide only the basic guarantee, which we can achieve through proper ordering of operations.

* Slightly improved debug performance. Although this isn't a high priority for us (everything we do is expensive in the absence of the optimizer), we try to avoid gratuitously damaging debug performance. In this case, our internal implementation was sometimes using iterators unnecessarily where pointers would do.

* Improved iterator invalidation checking. For example, resize() no longer needlessly treats end iterators as invalidated.

* Improved performance by reducing calls to rotate(). For example, emplace(where, val) was calling emplace_back() followed by rotate(). Now, vector calls rotate() in only one scenario (range insertion from input-only iterators, as described above).

* Locked down access control. Helper member functions are now private. (In general, we reserve _Ugly names for our implementation machinery, so public helpers weren't actually a bug.)

* Improved performance with stateful allocators. For example, move construction with non-equal allocators now activates our memmove() optimization. (Previously we used make_move_iterator(), which had the side effect of inhibiting the memmove() optimization.) Note that a further improvement is coming in VS 2017 Update 1, where move assignment with non-equal, non-POCMA allocators will be able to reuse the buffer.

Note that this overhaul inherently involves source-breaking changes. Most commonly, the Standard-mandated move_if_noexcept() pattern will instantiate copy constructors in certain scenarios. If they can't be instantiated, your program will fail to compile. We also now exercise other operations required by the Standard. For example, N4618 23.2.3 [sequence.reqmts] says that a.assign(i,j) "Requires: T shall be EmplaceConstructible into X from *i and assignable from *i." We now take advantage of assignability from *i for better performance.

Warning overhaul:

The compiler has an elaborate system for warnings, involving warning levels and push/disable/suppress pragmas. Compiler warnings apply both to user code and to STL headers. Other STL implementations disable all compiler warnings in "system headers", but we follow a different philosophy. Compiler warnings exist to warn about questionable actions, such as value-modifying conversions or returning references to temporaries. These actions are equally concerning whether they are performed directly by user code or during the execution of STL function templates that user code calls.

Obviously, the STL shouldn't emit warnings for its own code, but we believe that disabling all warnings throughout STL headers is undesirable.

For many years, the STL has been cleaned for /W4 /analyze and verified with extensive test suites (not /Wall, which is different). Historically, the STL pushed its warning level to 3 and additionally suppressed certain warnings. While this let it compile cleanly, it was overly aggressive and concealed some valuable warnings.

Now the STL has been overhauled to follow a new approach. First, we detect whether you're compiling with /W3 (or weaker, although you should never do that) versus /W4 (or /Wall, but that's technically unsupported with the STL, so you're on your own). When we sense /W3 (or weaker), the STL pushes its warning level to 3 (i.e. no change from previous behavior). When we sense /W4 (or stronger), the STL now pushes its warning level to 4, meaning that level 4 warnings now apply to our code. Additionally, we audited all of our individual warning suppressions (in both product and test code), removing unnecessary suppressions and making the remaining ones more targeted (sometimes down to individual functions or classes). We also suppress warning C4702 (unreachable code) throughout the entire STL; while this warning can be valuable to users, it is optimization-level-dependent, and we believe that letting it trigger in STL headers would do more harm than good. We use two internal test suites, plus libc++'s open-source test suite, to verify that we aren't emitting warnings for our own code.

What does this mean for you? If you're compiling with /W3 (which we don't recommend), you should observe no major changes. Because we've reworked and tightened the suppressions, you might see a few new warnings, but this should be fairly rare. (When they do appear, they should indicate potential problems in how you're using the STL; if they're spurious, please report a bug.)

If you're compiling with /W4 (which we encourage), you may see warnings emitted from STL headers. This is a source-breaking change with /WX, but a good one: after all, you asked for level 4 warnings, and the STL now respects that. For example, various truncation and sign-conversion warnings will be emitted from STL algorithms depending on your input types. In addition, non-Standard extensions activated by your input types will now trigger warnings in STL headers. When that happens, you should fix your code to avoid the warnings (for example, by changing the types you pass to the STL, correcting your function objects' signatures, and so forth). However, there are escape hatches.

First, the macro _STL_WARNING_LEVEL controls whether the STL pushes its warning level to 3 or 4. It is determined automatically by inspecting /W3 versus /W4 as described above, but you can override this by defining the macro project-wide. (Only the values 3 and 4 are allowed; anything else emits a hard error.) So, if you're compiling with /W3 but want the STL to push its warning level to 4 (or vice versa), you can request that.

Second, the macro _STL_EXTRA_DISABLED_WARNINGS (which always defaults to being empty) can be defined project-wide to suppress chosen warnings throughout STL headers. For example, defining it to be 4127 6326 would suppress "conditional expression is constant" and "potential comparison of a constant with another constant" (we should already be clean for those; this is just an example).
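For illustration (a hedged sketch of my own; these macros are normally defined project-wide, for example with /D on the command line, rather than in a source file):

// Assumed project-wide settings, shown inline here only for readability.
#define _STL_WARNING_LEVEL 4                      // push STL headers to warning level 4
#define _STL_EXTRA_DISABLED_WARNINGS 4127 6326   // but suppress these two inside STL headers

#include <algorithm>
#include <vector>

int main() {
    std::vector<int> v{3, 1, 2};
    std::sort(v.begin(), v.end());
}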

Correctness fixes and other improvements:

* Some STL algorithms now declare their iterators as const. Source-breaking change: as required by the Standard, your iterators' operator* may now need to be const-callable.

* Improved the diagnostics emitted by basic_string's iterator debugging checks.

* basic_string's iterator-range-accepting functions had additional overloads for (char *, char *). These extra overloads have been removed because they prevented string.assign("abc", 0) from compiling. (This is not a source-breaking change; code that called the old overloads now calls the new (Iterator, Iterator) overloads instead.)

* basic_string's range overloads of append, assign, insert, and replace no longer require the string's allocator to be default constructible.

* basic_string::c_str(), basic_string::data(), filesystem::path::c_str(), and locale::c_str() are now SAL annotated to indicate that they are null-terminated.

* array::operator[]() is now SAL annotated to improve code analysis warnings. (Note: we are not attempting to SAL-annotate the entire STL; we consider such annotations on a case-by-case basis.)

* condition_variable_any::wait_until now accepts lower-precision time_point types.
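A minimal sketch of my own (not from the original post) of passing a coarser time_point:

#include <chrono>
#include <condition_variable>
#include <mutex>

int main() {
    std::condition_variable_any cv;
    std::mutex m;
    std::unique_lock<std::mutex> lock(m);
    // A time_point with seconds precision, coarser than the clock's native duration.
    auto deadline = std::chrono::time_point_cast<std::chrono::seconds>(
        std::chrono::steady_clock::now() + std::chrono::seconds(1));
    cv.wait_until(lock, deadline);   // now accepted; waits until roughly the deadline
}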

* stdext::make_checked_array_iterator's debug checks now allow the iterator comparisons permitted by C++14's null forward iterator requirements.
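A small MSVC-specific sketch of my own showing the iterators involved:

#include <algorithm>
#include <iterator>

int main() {
    const int src[] = {1, 2, 3};
    int dst[3] = {};
    // The checked iterator wraps a raw pointer plus a size so the destination
    // range is verified in debug builds.
    std::copy(std::begin(src), std::end(src),
              stdext::make_checked_array_iterator(dst, 3));

    // C++14 null forward iterator rules: two value-initialized iterators
    // compare equal, which the debug checks now allow.
    stdext::checked_array_iterator<int*> a{}, b{};
    return a == b ? 0 : 1;
}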

* Improved the static_assert messages in <random> by citing the C++ Working Paper's requirements.

* replace_copy() and replace_copy_if() were incorrectly implemented with a conditional operator, wrongly requiring the input element type and the new value type to be convertible to some common type. They are now correctly implemented with an if-else branch, avoiding that conversion requirement. (The input element type and the new value type are each written to the output iterator separately.)

* The STL now respects null fancy pointers and avoids dereferencing them, even momentarily. (Part of the vector overhaul.)

* Various STL member functions (e.g. allocator::allocate(), vector::resize()) have been marked with _CRT_GUARDOVERFLOW. When the /sdl compiler option is used, this expands to __declspec(guard(overflow)), which detects integer overflows before function calls.

* In <random>, independent_bits_engine is mandated to wrap a base engine for construction and seeding (N4618 26.6.1.5 [rand.req.adapt]/5, /8), even though they can have different result_types. For example, independent_bits_engine can be asked to produce uint64_t by running 32-bit mt19937. This triggers a truncation warning, and the compiler is technically correct, because it is a physical, data-losing truncation. However, it is mandated by the Standard, so we added a static_cast to keep the compiler quiet.
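As a hedged illustration (my own sketch), the engine adaptor scenario described above is:

#include <cstdint>
#include <random>

int main() {
    // 64-bit results produced by running 32-bit mt19937 underneath; the
    // differing result_types are what used to trigger the truncation warning
    // inside the header before the static_cast was added.
    std::independent_bits_engine<std::mt19937, 64, std::uint64_t> eng(12345);
    std::uint64_t value = eng();
    return value == 0 ? 1 : 0;
}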

* Fixed a bug in std::variant that caused the compiler to fill all available heap space and exit with an error when compiling std::get<T>(v) where T is not a unique alternative type. For example, std::get<int>(v) or std::get<char>(v) when v is std::variant<int, int>.
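A small sketch of my own showing the affected calls:

#include <variant>

int main() {
    std::variant<int, double> ok = 42;
    int a = std::get<int>(ok);       // fine: int is a unique alternative

    std::variant<int, int> dup{std::in_place_index<0>, 7};
    int b = std::get<0>(dup);        // index-based access still works
    // std::get<int>(dup);           // ill-formed: int is not a unique alternative;
                                     // previously this exhausted the compiler's heap
                                     // instead of producing a clean diagnostic.
    return a + b;
}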

Runtime performance improvements:

* basic_string move construction, move assignment, and swap are now three times faster. Because Traits is always std::char_traits and the allocator's pointer type is never a fancy pointer in the common case, we can simply move/swap the string's whole representation instead of operating on individual basic_string data members.

* basic_string::find(character) now searches for a single character instead of a string of length 1.

* basic_string::reserve no longer performs redundant range checks.

* All basic_string functions that allocate no longer contain branches for string shrinkage, since allocation only ever grows the storage.

* stable_partition no longer performs self-move-assignment. It also now skips over elements that are already partitioned at both ends of the input range.

* shuffle and random_shuffle no longer perform self-move-assignment.

* Algorithms that allocate temporary space (stable_partition, inplace_merge, stable_sort) no longer pass around identical copies of the temporary space's base address and size.

* filesystem::last_write_time(path, time) now issues one disk operation instead of two.

* Small performance improvement for std::variant's visit(): it no longer re-verifies that none of the variants are valueless_by_exception() after dispatching to the appropriate visit function, because std::visit() already guarantees that property before dispatching. The improvement to std::visit()'s own performance is nearly negligible, but it greatly reduces the size of the code generated for visitation.

Compiler throughput improvements:

* Source-breaking change: <memory> machinery that isn't used internally by the STL (uninitialized_copy, uninitialized_copy_n, uninitialized_fill, raw_storage_iterator, and auto_ptr) now appears only in <memory>.

* Centralized the debug checks for iterators passed to STL algorithms.

 


Unified Service Desk 2.2.1 is released!!!


The latest release of the Unified Service Desk client (version 2.2.1.806) is available today. This release is a step in our journey to make Unified Service Desk more accessible, reliable and performant.

This release primarily encompasses accessibility for keyboard users and enhanced keyboard productivity for power users.

We are also introducing a new API called SafeDispatcher with this release. This API will help you build robust hosted controls that are more reliable and easier to diagnose. Most existing hosted controls can be updated to take advantage of this new API with a one-line code change.

Documentation for the developers and customizers can be found here.
Documentation for administrators can be found here.

Please share your feedback in the comment section. If you have product ideas, kindly submit them through crmideas.

 

Best Regards,
Sid Gundavarapu
(OBO USD Team)

Free Power BI ebook by Reza Rad: Power BI From Rookie to Rockstar


I truly have the best job at Microsoft: Make our customers happy, specifically through community channels.

What does that entail on a daily basis? Looking at the telemetry from our support cases, assisting our user group leaders with their group efforts, and, my favorite part, working with our MVPs. These people are basically superheroes who use their powers to help people with technology. With a peer group like Paul, Marco, Ginger, Greg, Seth, Chris, Ken, Jen, Matt, etc., there is no way to pick a favorite… but between organizing the Definity Conference, presenting a bunch of sessions at SQL Saturday Melbourne, tomorrow's webinar, and his latest effort, a 900+ page book he made free (all in the last 3 weeks!), it is hard not to put Reza Rad at the top of the Power BI MVP pack!

 

So what about this book?

Back to the subject of this post! Reza's day job is to take the most difficult data challenges and turn them from a storage and IT overhead and cost into business insight.

This book is a distillation of that learning, starting with getting started with the product and going all the way to advanced data manipulation with the M language and DAX queries.

While the table of contents is subject to change, I have included it here because the hyperlinks are to blog posts that are amazingly valuable!

 

2017-02-22_11h33_41

 

Table of Contents

  1. Introduction to Power BI
    1. Introduction to Power BI: What is Power BI?
    2. Power BI Desktop; The First Experience
    3. Power BI Website; You’ll Need Just a Web Browser
  2. Getting Data
    1. What is Power Query: Introduction to Data Mash-Up Engine of Power BI
    2. Get Started with Power Query: Movies Data Mash-Up
    3. Power BI Get Data From Excel: Everything You Need to Know
    4. File Sources
    5. Folder as a Source
    6. Database Sources
    7. Analysis Services Connection
    8. Get Data From Azure SQL Database
    9. Azure SQL Data Warehouse Source
    10. Software as A Source
    11. Web Source
    12. Using Web Service / API As a Data Source for Power BI
    13. R Script as a Source
    14. Power BI and Spark on Azure HDInsight; Step by Step Guide
  3. Data Transformation
    1. Query Editor
    2. Transformation GUI
    3. Row Transformations
    4. Warning! Misleading Filtering in Power Query
    5. Column Transformations
    6. Data Type
    7. Flawless Date Conversion in Power Query
    8. Adding Column
    9. Text Transformations
    10. Number Column Calculations
      1. Make Your Numeric Division Faultless
    11. Date and Time Calculations
    12. Pivot and UnPivot
    13. Grouping in Power Query
    14. Append vs Merge in Power BI
  4. Power Query Formula Language
    1. Code Behind of Power Query: M
    2. Data Types in M
    3. M Lexical Structure
    4. Working with Functions
    5. Error Handling
  5. Power Query Built-in Functions
    1. Date Functions
    2. Time Functions
    3. Text Functions
    4. Table Functions
    5. List Functions
    6. Folder.Files vs Folder.Contents: Fetch Files and Folders with Masking/Filtering
    7. Record Functions
    8. Number Functions
    9. Cube Functions
    10. Data Access Functions
    11. Type Functions
      1. Convert Time Stamp to Date Time
    12. Splitter and Combiner Functions
    13. Power Query Function’s Library; #shared Keyword
  6. M Advanced
    1. Custom Function Made Easy in Power BI Desktop
    2. Using Generators
    3. Error Handling
    4. Example of Power Query Function Using Generators, Each Singleton Function, and Error Handling
    5. Writing Complex Transformations with M
    6. Return Multiple Values from Power Query Function
    7. Dynamic M
  7. Power Query Use Cases
    1. Date Dimension With Power Query
    2. Fitbit Data Integration Part 1
    3. Fitbit Data Integration Part 2
    4. Power Query Not For BI – Part 1
    5. Power Query Not For BI – Part 2
    6. Power Query Not For BI – Part 3
  8. Data Model
    1. Loading Data into Model
    2. Introduction to Power Pivot
    3. Sort By Column
    4. Relationships
    5. Relationship with Multiple Columns
    6. Measures
    7. Formatting
    8. Calculated Columns
  9. DAX
    1. Data Analysis eXpression Language
    2. Function Categories
    3. Secret of Time Intelligence Functions in Power BI
    4. Date and Time Functions
    5. Time Intelligence Functions
    6. Math and Trig Functions
    7. Statistical Functions
    8. Text Functions
    9. Customer Retention with DAX
  10. Advanced DAX
    1. Filter Functions
    2. Calculated Tables; Scenarios of using
    3. Best Practices for Writing DAX
    4. Role Playing Dimension
    5. Relationship tips and tricks
    6. Solving DAX Time Zone Issue in Power BI
  11. Data Visualization
    1. Building Charts
    2. Customizing Charts
    3. Page Level Filters
    4. Object Level Filter
    5. Control the Interaction in Power BI Report
    6. Adding Text and Image
    7. Table View
    8. Matrix View
    9. Card View
    10. Slicer
    11. Grouping and Binning; Step Towards Better Visualization
  12. Custom Visuals
    1. Custom Visuals; Built Whatever You Want
    2. Developing Custom Visuals
    3. Azure Machine Learning and SandDance Visualization
  13. Charts
    1. Bar, Column Chart
    2. Power Behind the Line Chart
    3. Stacked Chart or Clustered, Which One is the Best?
    4. Column Line Chart
    5. Area Chart
    6. Waterfall Chart
    7. Storytelling with Scatter Chart
    8. Pie, Donut Chart
    9. Treemapping
    10. Interactive R Charts in Power BI
    11. Map
    12. Power BI Says Hi to 3D Map
    13. Map Visualization with Latitude and Longitude Only
    14. Filled Map; the Good, the Bad, and the Ugly
    15. Funnel
    16. KPI and Power BI
  14. Special Tips and Tricks
    1. Color Saturation
    2. Sparkline
    3. Colorful Slicers
    4. Using Maps in Different Levels
    5. Showing Multiple Measures
    6. Data Visualization Best Practices
    7. Filtering Slicer Resolved in Power BI
    8. Step Beyond 10GB Limitation of Power BI
  15. Power BI Services
    1. Publish to Power BI site
    2. Creating Dashboards
    3. Dashboards vs Report; Differences at a Glance
    4. Power BI Publish to Web; Questions Answered
    5. Scheduled Data Refresh
    6. Schedule Data Refresh Local Excel File from Power BI Website
    7. Datasets in Power BI
    8. Power BI Pro
    9. Groups in Power BI
  16. Security in Power BI
    1. Row Level Security
    2. Row Level Security Configuration in Power BI Desktop
    3. Row Level Security with SSAS Tabular Live Connection in Power BI
    4. Dynamic Row Level Security with Power BI Made Simple
  17. Gateways
    1. On-Premises SQL Server Live Connection with Enterprise Gateway
    2. Definitive Guide to Power BI Personal Gateway
    3. Loop Through On-Premises Files with Power BI and Schedule it to Refresh Automatically
  18. Power Q&A
    1. Introduction to Power Q&A
    2. Develop a Model that Responds Best to Power Q&A
    3. Tips and Tricks
  19. Mobile
    1. Tips for Mobile Friendly Report Development in Power BI
    2. Dashboard Design for Mobile Power BI
  20. Integration
    1. Power BI Story in Power Point Slides with Commentary
    2. Power BI Embedded; Bring the Power into your Application
  21. Real-time Dashboards
    1. Azure Stream Analytics and Power BI join forces to Real-time Dashboard
    2. Monitor Real-time Data with REST API
  22. Performance Tuning
    1. Performance Tip for Power BI; Enable Load Sucks Memory Up
    2. Not Folding; the Black Hole of Power Query Performance

Blob Auditing in Azure SQL Database is Generally Available


We are excited to announce that SQL Blob Auditing is now Generally Available in Azure SQL Database.

Blob Auditing tracks database events and writes audited events to an audit log in your Azure Storage account. Auditing can help maintain regulatory compliance, understand database activity, and gain insight into discrepancies and anomalies that could indicate business concerns or suspected security violations.

Blob Auditing will be replacing Table Auditing, which has been Generally Available since November 2014, processing billions of daily queries. Blob Auditing will continue to provide the high quality service that Table Auditing has been providing to thousands of SQL customers over the past two years, while generating additional value for existing, as well as new customers:

  • Better performance
  • Advanced filtering options with higher object-level granularity
  • Reduced storage costs
  • SQL Server box compatibility

Blob Auditing also supports Threat Detection, providing an additional layer of security that detects anomalous activities that could indicate a threat to the database:

  • Threat Detection alerts on suspicious activities and enables customers to investigate and respond to potential threats as they occur.
  • Customers can investigate events in the audit log correlated with the suspicious activity, without the need to be a security expert or manage advanced security monitoring systems.

Existing Table Auditing customers are strongly encouraged to switch their database auditing to Blob Auditing.

To get started using Blob Auditing for your Azure SQL database, please review our Get started with SQL database Auditing guide, which shows how to configure SQL DB Blob Auditing as well as provides information on different methods to process & analyze the audit logs.

SQL Security team


News on the Kontennachweis (account proof)


The Kontennachweis (Report 11009 Sales VAT Adv. Not. Acc. Proof) has a long history and many changes behind it. Recently there have been a few more, and I would like to summarize them here.
Until now, the Kontennachweis used table 253 G/L Entry – VAT Entry Link to establish the connection between the VAT entries and the G/L entries. This has been changed. Table 254 VAT Entry now contains a new field.

kontennachwei01


This field directly stores the account number associated with the VAT entry. The field now also has to be filled; for this, Codeunit 12 Gen. Jnl.-Post Line was adjusted. You also need historical values, so a function was added to Page 315 VAT Entries that takes care of this.

kontennachwei02


After a confirmation prompt, this function fills the field for all existing records. On databases with many records this can take a while. We do not ship a way to filter the records, but if needed you can of course add filters yourself directly in the action:

kontennachwei03


Right after this function was created, a developer license was still required to run it. We have since changed this and added the permission to the objects. This will be included in Cumulative Update 4 for Microsoft Dynamics NAV 2017; the other changes are in Cumulative Update 1. The function only needs to be run once. All new entries are filled automatically thanks to the change in Codeunit 12.

We are still working on one more thing: optimizing the key used in the report. Here I can only share the current state of development, and I will post an update once the work is finished.

A key with the following fields is created in table 254:
"Posting Date","Type","Closed","VAT Bus_ Posting Group","VAT Prod_ Posting Group","Reversed","G_L Account No_"
It has the same SumIndexFields and options as the key previously used in the report. The key then also has to be selected at this point.

kontennachwei04


This speeds up the run considerably when there are many entries, but as mentioned it is still work in progress and is listed here only for completeness.



Best regards
Andreas Günther

Microsoft Dynamics Germany

 

 

Please be aware
Disclaimer
Because some jurisdictions do not allow the exclusion or limitation of liability for consequential or incidental damages, the limitation and disclaimer set out below may not apply.
ACCEPTANCE AND DISCLAIMER OF WARRANTY
The software contained in this communication is provided to the licensee “as is” without warranty of any kind. The entire risk as to the results, usefulness and performance of the software is assumed by the licensee. Microsoft disclaims all warranties, either express or implied, including but not limited to, implied warranties or merchantability, fitness for a particular purpose, correspondence to description, title and non-infringement. Further, Microsoft specifically disclaims any express or implied warranties regarding lack of viruses, accuracy or completeness of responses, results, lack of negligence, and lack of workmanlike effort, for the software.
LIMITATION OF LIABILITY
In no event shall Microsoft be liable for any direct, consequential, indirect, incidental, or special damages whatsoever, including without limitation, damages for loss of business profits, business interruption, loss of business information, and the like, arising out of the performance, use if, or inability to use, all or part of either the software, even if Microsoft has been advised of the possibility of such damages.

What you should know before developing integrations with Power BI


This post summarizes the information that ISVs should understand before building applications that integrate with Power BI, which many of you are already using today.

First, understand that the concept behind Power BI is the "democratization of BI": it is not primarily something you develop against. For many years, Microsoft (centered on the SQL Server team) has worked to put advanced data modeling and BI, which used to require specialized OLAP development firms or professional engineers, into the hands of end users: "BI for everyone" (BI that anyone can build).
Power BI emerged from that effort.

Imagine analyzing and combining the per-prefecture sales of your products with government statistics such as population by prefecture or by age group to gain insight. With Power BI, you don't have to write a single line of code.
For example, consider mashing up (joining) sales data from a database published inside your company with prefecture-level population statistics published on the web, doing simple data shaping such as removing unneeded data and adding computed columns, building multiple reports from different angles on top of that data, sharing them with all employees, and then sharing them with each regional branch so that each branch can make its own customizations (add reports, and so on) and share them internally. All of this can be achieved with simple configuration in the Power BI UI, and no development code is required.

Even with this Power BI concept in mind, there is still strong demand for integration from product and service developers: "we don't want to rebuild functionality equivalent to Power BI ourselves", or "rather than implementing every customer-specific reporting requirement, we'd rather integrate with Power BI and let it handle that."
The important point is to clarify the purpose of the integration, because Power BI provides multiple integration mechanisms depending on the goal. Sorting out your actual development goal and choosing the right method for the right place (or a combination of them) is the key.

First, let's classify the development by type. See below.

Integrate: Development that links your ISV application or service and Power BI so they work together. 20171223_pbi_method01
Embed: Development that embeds part of Power BI's functionality inside your ISV application or service. The most obvious case is placing a report built in Power BI (including highlighting, filtering, and other capabilities) directly into the UI of an enterprise application. 20171223_pbi_method02
Extend: Development that adds functionality to Power BI itself, either functionality dedicated to your ISV application (service) or functionality it frequently uses. 20171223_pbi_method03

Now let's map the actual development techniques onto these types.

Integrate – REST API: You integrate by issuing HTTP-based API requests and handling the responses. A wide range of operations is possible, such as creating new DataSets and Tables in Power BI, pushing data into them, importing pbix files, retrieving Reports and Tiles, and changing the connected database.
Embed – Power BI Embedded: Hosts reports built with Power BI inside your ISV application (service) using an iframe. (There are several points to consider; see below.)
Extend – Custom Visuals: Lets you add your own visuals that Power BI does not provide (for example, an indoor heat map or a stadium map for the sports industry). You can combine them with Power BI Embedded as described above, or publish them to the Gallery so that general Power BI users can download and use them.
Extend – Template Content Pack (Preview): When adding data in Power BI, you can select 3rd-party services such as Salesforce, Marketo, or a GitHub repository and connect to their data; this kind of development adds your own service to that list of choices. There is of course a review process: you nominate your offering, submit the product after building it, and only offerings that pass review are published.

 

Finally, here are some finer points and caveats about the development methods introduced above.

First, the REST API: at present, a login via OAuth (showing the Power BI sign-in screen and obtaining a token) is always required before calling the REST API. If your ISV application is built with single sign-on (SSO) against Office 365 (that is, Azure AD), this is seamless (with SSO you can call Power BI with the signed-in user's token as-is), but most ISV services are not built that way, and you need to think about when this login should happen.

Note that Power BI can update reports in real time as the data changes, but this real-time refresh requires pushing data, so if you want to provide such reports you should generally assume that the REST API is required. (Strictly speaking, push integration is also possible through existing services and features without calling the REST API directly, but those use the REST API internally.)

Above, I listed Power BI Embedded as the embedding approach, but Power BI actually provides three ways to embed reports (or tiles). The biggest difference between them is how authentication and authorization are handled.
The first method is to obtain an iframe embed URI that allows anonymous access. No development is required; you can create the embed URI from the Power BI UI. (The recently popular Pokémon Go reports were provided this way; anyone can view them without authentication.)
The second method is an embed URI that uses Power BI authentication (Azure AD authentication); this URI can be obtained through the REST API described above. To display the report in an iframe using this URI, you must first present the Power BI sign-in screen via OAuth and pass the token you obtain to the iframe.
The third method is Power BI Embedded. Its advantage is that you can combine it freely with the ISV's own authentication (your own OAuth, Basic authentication, or anything else) and host reports securely. It also provides various capabilities designed for real ISV integrations, such as switching the connected data source (instance) per user or per customer.

Power BI reports embedded in an ISV application can also integrate with features such as page changes and filtering through the Power BI JavaScript API, so they can behave as if they were simply part of the ISV application.

However, the biggest drawback of this kind of embedding is that users cannot add (create) their own reports on that screen. If "self-service BI" is an important aspect of your integration, you should consider other approaches such as Template Content Packs.
Also note that Power BI Embedded supports only a limited set of data sources (only databases with password-based authentication, and currently only some of those, such as Azure SQL Database), so keep this in mind as well.

Finally, note that some combinations of these technologies are not supported. (For example, pushing data via the REST API into a DataSet inside Power BI Embedded, or pushing data via the REST API into a DataSet that was imported via the REST API, is not possible.) For this reason, always verify the behavior and feasibility with a simple case before starting actual development.

 

References

Data integration using the Power BI REST API
https://blogs.msdn.microsoft.com/tsmatsuz/2016/01/06/power-bi-custom-development-using-rest-api/

Build Your Custom Visuals in Power BI (Step-by-Step)
https://blogs.msdn.microsoft.com/tsmatsuz/2016/09/27/power-bi-custom-visuals-programming/

How to use Power BI Embedded via REST
https://blogs.msdn.microsoft.com/tsmatsuz/2016/07/20/power-bi-embedded-rest/

 

There have been important changes on our Facebook page!


As you probably noticed yesterday, our Facebook page Codefest was unavailable all day. As of today, we are listed on Facebook under the new name Microsoft Developer. The new URL is: https://www.facebook.com/MicrosoftDeveloper.AT/. Facebook is working on a redirect from our old URL to the new address.

Why did we make these changes? For one thing, we are now verified by Facebook; for another, we are reachable through the global Microsoft Developer domain.

Microsoft Developer Facebook page

You are on Facebook and don't see the Austrian content? Then please change your region settings. How to do that is explained in this post.

Some other information you may find useful: the Sign Up button subscribes you to our newsletter, which is sent once a month and keeps you up to date on upcoming events in Austria. The Upcoming Events tab shows the events that are coming up. For every major event you will find a description, the location and time, and a link to our registration page for that event.

Microsoft Developer Facebook page

Do you have suggestions for improving our Facebook page? Would you like to see more of a certain kind of content? Then, as usual, send an email to me: t-nidobi@microsoft.com.

Loading files from Azure Blob Storage into Azure SQL Database


Azure SQL Database enables you to directly load files stored on Azure Blob Storage using the BULK INSERT T-SQL command and OPENROWSET function.

Loading the content of files from an Azure Blob Storage account into a table in SQL Database is now a single command:

BULK INSERT Product
FROM 'data/product.dat'
WITH ( DATA_SOURCE = 'MyAzureBlobStorageAccount');

 

BULK INSERT is an existing T-SQL command that enables you to load files from the file system into a table. The new DATA_SOURCE option enables you to reference an Azure Blob Storage account.

You can also use OPENROWSET function to parse content of the file and execute any T-SQL query on returned rows:

SELECT Color, count(*)
FROM OPENROWSET(BULK 'data/product.bcp', DATA_SOURCE = 'MyAzureBlobStorage',
 FORMATFILE='data/product.fmt', FORMATFILE_DATA_SOURCE = 'MyAzureBlobStorage') as data
GROUP BY Color;

The OPENROWSET function enables you to specify the data source where the input file is placed, and the data source where the format file (the file that defines the structure of the input file) is placed.

If your file is placed on a public Azure Blob Storage account, you just need to define an EXTERNAL DATA SOURCE that points to that account:

 

CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
 WITH ( TYPE = BLOB_STORAGE, LOCATION = 'https://myazureblobstorage.blob.core.windows.net');

Once you define the external data source, you can use its name in BULK INSERT and OPENROWSET. If your Azure Blob Storage account is not public, you also need to create a database scoped credential containing a Shared Access Signature and reference that credential from the data source:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'some strong password';
CREATE DATABASE SCOPED CREDENTIAL MyAzureBlobStorageCredential
 WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
 SECRET = 'sv=2015-12-11&ss=b&srt=sco&sp=rwac&se=2017-02-01T00:55:34Z&st=2016-12-29T16:55:34Z&spr=https&sig=copyFromAzurePortal';
CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
 WITH ( TYPE = BLOB_STORAGE,
        LOCATION = 'https://myazureblobstorage.blob.core.windows.net',
        CREDENTIAL= MyAzureBlobStorageCredential);

 

 

You can find a full example with some sample files in the SQL Server GitHub account.


Interested in Data Science? Welcome to the Microsoft LearnAnalytics@MS Site



Dive into Webinars, On-Demand Videos, and Classroom Training to quickly master big data and advanced analytics techniques with Microsoft Advanced Analytics at http://learnanalytics.microsoft.com/

Deep Neural Networks in Azure: Transfer Learning and Fine-tuning

Deep learning is an emerging field of research, which has applications across multiple fields. We will show how the transfer learning and fine tuning strategy leads to re-usability of the same Deep Convolution Neural Network (DCNN) model in different domains. Attendees will learn about basic deep neural networks, and how to use DNNs in Azure.

Learn more

Dive Deep into Small Data with Big Data Techniques

When you have limited resources and big data to crunch in real time, it makes sense to use the cloud. A less obvious scenario is to have data that isn’t very big and the need isn’t for instantaneous feedback—but the only way to get the crunching done in a timely and financially responsible way is to tap the cloud.

Learn more

New to machine learning?

If you’re just getting started, read David Chappell’s Introduction for Technical Professionals

Another great resource is Roger Barga’s book, Predictive Analytics with Microsoft Azure Machine Learning. Read an interview with the author.

Blog: Data Science 101

Explore resources for learning data science with Ryan Swanstrom

Learn more

Cortana Intelligence Corner

Helping you navigate the world of the Cortana Intelligence Suite

Learn more

Blog: Backyard Data Science

Buck Woody’s non-traditional route to learn data science

Learn more

Our New Nutanix running Windows Server 2016


We just added to our demo capability in the UK MTC; this is our third Nutanix (they very kindly loan us a new one each year). This time we have even more storage, and Windows Server 2016 attached to System Center Virtual Machine Manager 2016. If you see a demo from our team, it's very likely to be running on this system (and often a few others).

nut2016

One great thing about the Nutanix is the simplicity of setup; for more on that, take a look at their website. It only took 2-3 hours to unbox, rack and get working (thanks to Matt and Rob at Nutanix for all the great help with that!).

We then added our old Failover Cluster into SCVMM and live migrated the virtual machines across, a breeze on 10Gb networking. All we need to do now is get it added into Operations Manager and showing up in OMS (now that there is integration from Comtrade); more on that in a later post…

Thanks Clive

How to create a folder that inherits its parent’s ACL, and then overrides part of it



A customer wants to create a folder that inherits its parent's ACL but then overrides part of it. Specifically, the customer wanted to disallow the creation of subfolders. The customer reported that when they used the SHCreateDirectory function to create the folder, the folder did not inherit any ACLs at all from its parent. The only thing it got was the "deny creation of subfolders" part.



The customer provided this sample code to demonstrate what they were doing.



int main()
{
    PSECURITY_DESCRIPTOR pSD;
    ULONG ulSDDL;
    LPTSTR pszPath = L"C:\\my\\test\\directory";
    LPTSTR pszDacl = L"D:(D;;0x4;;;WD)";

    if (ConvertStringSecurityDescriptorToSecurityDescriptor(
        pszDacl, SDDL_REVISION_1, &pSD, &ulSDDL))
    {
        wprintf(L"Created security descriptor\n");
        SECURITY_ATTRIBUTES sa;
        sa.lpSecurityDescriptor = pSD;
        sa.nLength = sizeof(sa);
        sa.bInheritHandle = TRUE;
        if (SUCCEEDED(SHCreateDirectoryEx(nullptr, pszPath, &sa)))
        {
            wprintf(L"Created folder %s\n", pszPath);
        }
    }
    return 0;
}



Notice the importance of reduction, simplifying the problem to the smallest program that still demonstrates the issue. This boils the problem down to its essence, thereby allowing the development team to focus on the issue and not have to wade through (and possibly debug) unrelated code. Reduction is also a useful exercise on the part of the person reporting the problem, in order to verify that the problem really is what you think it is, rather than being a side effect of some other part of the program.



The customer added, "The ACL we are using, D:(D;;0x4;;;WD), denies folder creation to everyone. We tried adding flags like P, AI, OICI, etc., but none of them seem to work."



The shell takes the security descriptor passed to the SHCreateDirectoryEx function and passes it through to the CreateDirectory function, so any issues you have with the security descriptor are really issues with the CreateDirectory function. The shell is just the middle man.

But even though this wasn't really the expertise of the shell team, we were able to figure out the problem.



First off, we have a red herring: the bInheritHandle member controls handle inheritance, not ACL inheritance. Setting it to TRUE causes the handle to be inherited by child processes. But that has no effect on the ACL. And since the CreateDirectory function doesn't return a handle at all, fiddling with bInheritHandle means nothing, since there is no handle in the first place. It's a double red herring.



When you specify an explicit security descriptor to the CreateDirectory function, that establishes the security descriptor on the newly-created object. There is no inheritance from the parent. Inheritance rules are applied at creation only when you create the object with the default security attributes:¹

"If lpSecurityAttributes is NULL, the directory gets a default security descriptor. The ACLs in the default security descriptor for a directory are inherited from its parent directory."

Passing an explicit security descriptor overrides the default behavior.



If you want a blend of default behavior and custom behavior, then you have a few options available.

One option is to read the security descriptor of the parent object and propagate the inheritable ACEs to the child in the appropriate manner. This is a complicated endeavor and probably is best left to the experts. It's not a simple matter of copying them from the parent to the child. You also have to adapt the ACEs based on flags like "inherit only" and "container inherit".



The second option is to create the directory without an explicit security descriptor and let the experts create it with the default security descriptor, which takes into account all the inheritance rules, and then modify the security descriptor post-creation to include the new ACE you want. Fortunately, MSDN has sample code for how to add an ACE to an existing security descriptor.

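As a hedged sketch of that second option (my own illustration, not the MSDN sample; the helper name CreateFolderDenySubfolders is hypothetical, and it needs Shell32.lib and Advapi32.lib):

#include <windows.h>
#include <aclapi.h>
#include <shlobj.h>

bool CreateFolderDenySubfolders(PCWSTR path)
{
    // Let the system create the folder with the default security descriptor,
    // so the normal inheritance rules apply.
    if (SHCreateDirectoryExW(nullptr, path, nullptr) != ERROR_SUCCESS) return false;

    PACL oldDacl = nullptr;
    PSECURITY_DESCRIPTOR sd = nullptr;
    if (GetNamedSecurityInfoW(path, SE_FILE_OBJECT, DACL_SECURITY_INFORMATION,
                              nullptr, nullptr, &oldDacl, nullptr, &sd) != ERROR_SUCCESS)
        return false;

    // FILE_ADD_SUBDIRECTORY (0x4) corresponds to the D:(D;;0x4;;;WD) SDDL string.
    EXPLICIT_ACCESS_W ea = {};
    ea.grfAccessPermissions = FILE_ADD_SUBDIRECTORY;
    ea.grfAccessMode        = DENY_ACCESS;
    ea.grfInheritance       = NO_INHERITANCE;
    ea.Trustee.TrusteeForm  = TRUSTEE_IS_NAME;
    ea.Trustee.TrusteeType  = TRUSTEE_IS_WELL_KNOWN_GROUP;
    ea.Trustee.ptstrName    = const_cast<PWSTR>(L"EVERYONE");

    PACL newDacl = nullptr;
    DWORD err = SetEntriesInAclW(1, &ea, oldDacl, &newDacl);
    if (err == ERROR_SUCCESS) {
        // Merge the deny ACE into the DACL that inheritance already produced.
        err = SetNamedSecurityInfoW(const_cast<PWSTR>(path), SE_FILE_OBJECT,
                                    DACL_SECURITY_INFORMATION,
                                    nullptr, nullptr, newDacl, nullptr);
        LocalFree(newDacl);
    }
    LocalFree(sd);
    return err == ERROR_SUCCESS;
}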


The customer reported that they adapted the code from MSDN and it worked perfectly.



¹ Inheritance rules are also applied when you use functions like SetNamedSecurityInfo and SetSecurityInfo.

How am I supposed to print my print-at-home tickets if I can’t reproduce them?



I ordered some tickets online and was sent to a Web page which displayed the tickets (with bar codes and stuff), with instructions to print them out.



The Web page also had this big warning on it:



IT IS UNLAWFUL TO REPRODUCE THIS TICKET IN ANY FORM.


Um, this is a print-at-home ticket. The entire reason for its existence is to be reproduced in printed form.



I printed it anyway.



Nobody arrested me.

Agile and the Theory of Constraints – Part 3: The Development Team (1)


(Note: The first version of this was a very random draft rather than the first part that I wrote. I blame computer elves. This should be a bit more coherent.)

This episode of the series will focus on the development team – how a feature idea becomes a shippable feature.

A few notes before we start:

  • I’m using “feature” because it’s more generic than terms like “story”, “MVP”, “MBI”, or “EBCDIC”
  • I picked an organizational structure that is fairly common, but it won’t be exactly like the one that you are using. I encourage you to draw your own value stream maps to understand how your world is different than the one that I show.

In this episode, we will be looking at the overall development team. I’m going to start by looking at a typical development flow:

3a

Green lines are forward progress, red lines show that we have to loop back for rework.

That’s the way it works for a single developer, and across a team it looks something like this:

3b

I’ve chosen to draw the case where features are assigned out by managers, but there are obviously other common choices. Hmm… there are already a ton of boxes in the diagram, and this is just the starting point, so I’m going to switch back to the single-developer view for now.

What are we missing?

Adding the Queues

3c

There are several queues in the process:

  1. The input queue, which I’m going to ignore for now.
  2. Design Review: After I have finished a design, I send it out to the rest of the team for review.
  3. Code Review: After I have finished the implementation, I send out the code to the team for review.
  4. Code Submission: I submit my code to an automated system that will run all the tests and check in if they all pass.
  5. Test: The feature moves to the test phase. This might be done by the development team, or there might be a separate test team.
  6. Acceptance: Somebody – typically a product owner – looks at the feature and determines if it is acceptable

Now, let's put some numbers on the time spent in each queue. The numbers I'm listing are from my experience with a decent traditional team, and they are typical numbers.

  1. Design Review: 3 hours to 1 day.
  2. Code Review: 3 hours to 1 day.
  3. Code Submission: 30 minutes to 1 day.
  4. Test: 1 day to 10 days
  5. Acceptance: 1 day to 10 days.

Here’s an updated diagram with the numbers on it:

3d

At this point, we would typically try to put numbers on all of the blue boxes, but because our feature sizes vary so much, the numbers were all over the place and weren't very useful.

We can, however, try to put some numbers on the red rework lines. I’d like you to think about what the numbers are in your organization, and we’ll pick it up from there.

 

What is the role of a data scientist?


By Michele Usuelli, Lead Data Scientist

Data science has been around for decades, but it recently increased in popularity among companies. Although the tools and techniques already existed, a few things have changed. Digital technologies generate more data that can drive new advanced analytics use-cases. Also, there are more success stories showcasing the value in data, making companies keener to invest resources in new solutions. Because of the hype, phrases like "Data Science" and "Big Data" have become buzzwords. However, their meaning is loosely defined and not entirely clear to many businesses. From a practical perspective, what is data science? Is the role of the data scientist the same as the statistician's, or are there new challenges?

The scope of this article is to understand what the role of a data scientist consists of. Depending on the company and on the specific role, there are lots of differences, so it's challenging to come up with a universal profile. However, it's still possible to understand the role from a high-level perspective. By breaking down the core responsibilities of a data scientist, we can get a better understanding of the related skillset.

The starting point of a data science project is "why". Why does the company need an advanced analytics solution? Why is it willing to allocate a budget? Why will the project have an impact? To address these questions, the data scientist must be able to understand the business context, brainstorm solutions, and identify what's valuable. The related skills are business acumen, experience in applying a solution in a specific industry, and soft skills like stakeholder management and listening. Being knowledgeable about the field is definitely useful although not mandatory, since data scientists can interact with subject matter experts to get the information they need. Therefore, the core skill is being capable of collecting business information and defining advanced analytics use-cases accordingly. Also, after having designed and built a solution, the data scientist should be able to present it.

After having defined the target, the next question is "what". What do we need to do to solve the challenge? What techniques can we use? What are the main steps from the current situation to the final solution? To address these questions, the data scientist should be able to define the logical steps to build an end-to-end solution. The required knowledge covers statistics, data processing, machine learning techniques, and model validation. However, being knowledgeable about the separate steps is not enough, as the data scientist needs to be capable of designing an end-to-end solution that is different every time, depending on the data and on the target. The main challenges are to

  • prepare the data: blend the original data sources, structure the data in the required format
  • apply a machine learning model: define the machine learning model addressing the business challenge
  • validate the model: according to the context, define a meaningful way to measure the success of the model

Each step requires thinking outside the box and using common sense, in addition to some knowledge about the techniques.

Knowing the logical steps of an advanced analytics solution doesn't imply being able to build it. The final question is "how". How can we implement the solution? How can we put the data together? How can we prototype and deploy the solution? This part is more technical and the skillset is diverse. The main areas are

  • dealing with data challenges: the data can be incomplete, have a large volume, or have a challenging structure (e.g., text, images). The data scientist should be able to identify and use the tools required by the data. For example, some tools in the Hadoop ecosystem have become popular recently.
  • coding: although there are pre-built high-level tools, every data science solution is unique and will involve some coding. Being able to use specific languages like R and Python is useful, but not always necessary. The core skill is knowing how to code, no matter the language. In the presence of large amounts of data, it's also useful to be able to design parallel algorithms that scale across large data volumes.
  • organising the data into databases: knowing how to store and organise the data, and extract the relevant information. This part is performed using SQL/NoSQL databases, although managing them can be out of the scope of a data scientist.
  • prototyping: quickly build a good-enough solution that works
  • deploying: put the solution into production

 

The technical skills depend a lot on the context, so there is more diversity in the “how” area.

The data science process requires a broad expertise and the data scientist can't go very deep into each component of the solution. That's especially true for data scientist consultants, given that they join new projects where the customer already has deep knowledge of the context and the tools. To design and build the solution, the data scientist needs to interact with

  • Customer stakeholders: to define the scope of the solution and to measure its success, the data scientist should have a conversation with the stakeholders.
  • Customer subject-matter experts (SMEs): data science often consists of improving an existing solution using advanced analytics methodologies, so the first step is to understand the current solution and use it as a starting point. Also, the data scientist needs to know how to interpret the data. To get some help on that, the data scientist needs to work closely with the SMEs.
  • Academic researchers: although data scientists are experts in machine learning techniques, their knowledge is not as deep as academic researchers'. Also, data scientists are focused on bringing value to projects, so they often don't have enough time to develop complex algorithms. The most common way to work together with academic researchers is to use tools developed by the open-source community. A good example is CRAN, which provides more than 10,000 R packages offering the most cutting-edge statistical tools and machine learning techniques. Also, in larger projects there might be researchers working directly on the machine learning part of the solution.
  • Solution architects: the data scientist defines and builds advanced analytics models that are utilised by the solution. Usually an architect designs the overall solution, taking into consideration the business context and the technological infrastructure.
  • Data engineers: the "how" part of the engagement is usually the most time consuming and it requires a broad range of skills. Data engineers help build and deploy the solution, designing the pipeline that translates the data into actions and insights. In some engagements, the data scientists build the prototype and the data engineers put it into production.
  • Software SMEs: if the solution integrates other technologies, there might be other people involved, especially solution architects.

This article shows what's common across most data scientists and aims to provide more clarity about the role. Depending on the specific case, the skillset can be more detailed and it varies a lot depending on the industry, seniority, and team. Also, in larger teams there will be people specialised in different aspects of the solution, so it's less important for one person to have the full skillset.

 

 

Visual Studio Toolbox: New Delivery Plan Extension


Visual Studio Toolbox: Using Espresso Tests

Cannot read property ‘listRegistrationsByTag’ of undefined’ (Azure App Services)


Situation:

You are creating a Mobile App or Azure App Services and setting up the push notifications for the first time

Problem:

Calling Register or RegisterAsync in the client code can result in a 400 status code coming back with an inner exception of: ‘Cannot read property ‘listRegistrationsByTag’ of undefined’

Cause:

The Portal blade does not set the Application Setting MS_NotificationHubName

Workaround:

Add MS_NotificationHubName in the Application settings. The value is the name of your Notification Hub. It is the last segment of the MS_NotificationHubId setting, or you can get it from the Notification Hub blade:

capture20170223080119644

 

capture20170223080212770

More information:

Once this is fixed, the Portal blade will set MS_NotificationHubName for you when you walk through the Portal setup of push notifications. At that time I will update this blog! If you need further help, you can certainly open a support case through the Azure Portal.

Elster error message – "Das Zertifikat ist nicht mehr gültig" (the certificate is no longer valid)


If you receive the error message "Das Zertifikat ist nicht mehr gültig" (the certificate is no longer valid) in Microsoft Dynamics NAV 2016/2017 when submitting the VAT advance return (UVA) via Elster, even though you received a new *.pfx certificate file from the tax office, it may be because the old, invalid certificates were included in the *.pfx file.

There is currently no way for a user to manually delete the old, invalid certificates in Microsoft Dynamics NAV 2016/2017.

As a workaround, I recommend separating the invalid certificates in the *.pfx file from the valid ones before importing the file into the Microsoft Dynamics NAV database.

Proceed as follows:

  • Import the new *.pfx file by double-clicking it
  • In the Certificate Import Wizard, select "Current User"
  • Check the box "Mark this key as exportable…"

 

elster1

 

  • Store all certificates in the "Personal" store
  • Now open the Management Console (MMC) and add the Certificates snap-in (Ctrl+M) for "My user account"
  • Select only the two valid Elster certificates (signaturekey + encryptionkey)
  • Export them into a new *.pfx file by clicking "Action -> All Tasks -> Export…"

 

elster2

 

  • Use the same password that you received from the tax office
  • Choose a file name (e.g. Valid_Elster_Certificates.pfx) and save the file
  • Now use this new *.pfx file, containing only the valid certificates, in Microsoft Dynamics NAV

After that, the error message no longer appears when submitting via Elster.

 

Best regards

Khoi Tran

Microsoft Dynamics Germany

 

 

Getting Started with Azure – Backing Up VMs


In my last post, I provided you with some simple steps on how to “Get Started” with the Azure Backup service by focusing on the Backup Vault and Policies that are associated with Vaults and VMs. Starting from there, I would like to go to the next and final step, which is the actual Backing Up of VMs or Servers.

In the below video, I will walk you through how to enable an Azure VM for Backup Protection using the Vault and Policies that we created in the previous post. I will also talk about how you can enable your On-Premises VMs as well as how you can configure them to only backup certain files and folders rather than doing a full VM snapshot.

Now that we have walked through how to protect a specific VM, let me provide you with scripts for how to do the same thing using PowerShell and Azure Resource Manager. I have also provided you with additional documentation and articles that are related to Azure Backup.

Scripts

Resources

We welcome your comments and suggestions to help us continually improve your Azure Government experience. To stay up to date on all things Azure Government, be sure to subscribe to our RSS feed and to receive emails, click “Subscribe by Email!” on the Azure Government Blog. To experience the power of Azure Government for your organization, sign up for an Azure Government Trial.

Ingest data into Azure Data Lake Store with StreamSets Data Collector


Today, I want to give a shout out to one of our partners who has a great offering for Azure Data Lake Store customers. 

When ingesting large-scale data into a data lake, the data often requires transformations such as cleaning and filtering. StreamSets Data Collector™ is open source software for building and deploying modern data-in-motion flows. You can connect a variety of sources to Azure Data Lake Store with minimal custom coding, even in the face of the inevitable change in data schemas.

To find out more about how StreamSets Data Collector can help you ingest data into Azure Data Lake Store, watch this webinar on Next Gen Analytics at a Major Bank Using Azure Data Lake and StreamSets.  And check out this new StreamSets tutorial that gives a great step-by-step guide and accompanying video for quickly building pipelines for ingesting, filtering, and transforming data as it is ingested into Azure Data Lake Store. 

Azure Data Lake Store is fully integrated with Azure HDInsight.  You can also deploy StreamSets Data Collector on top of Azure HDInsight, in order to enable real-time monitoring and data flow operations of your HDInsight cluster based analytics.

Azure Data Lake Store (ADLS) is the engine that powers storage for cloud big data analytics in Azure and offers a secure cloud-scale hierarchical file system compatible with Apache Hadoop Distributed File System (HDFS).  We’ll keep you in the loop here as more innovative solutions like StreamSets enable customers to easily build big data platforms with Azure Data Lake. 

 
