Channel Details:
  • Title: MSDN Blogs
  • Channel Number: 3196389
  • Language: English
  • Registered On: May 31, 2012, 2:56 am
  • Number of Articles: 35736
  • Latest Snapshot: October 4, 2019, 9:36 am
  • RSS URL: http://blogs.msdn.com/b/mainfeed.aspx?type=allblogs
  • Publisher: https://blogs.msdn.microsoft.com
  • Description: Get the latest information, insights, announcements, and news from Microsoft experts and...
  • Catalog: //cercomonadidae3.rssing.com/catalog.php?indx=3196389

The ‘What’ is known? But ‘How’ is the gap?

August 16, 2018, 6:34 am

What about data, and have you thought about 'Artificial Intelligence' as the next extension to your product portfolio? Every interaction we have had in tech circles has touched on this very question, and the answer has always been a big 'Yes'.

I must admit that no conversation is complete these days without talking about AI. Reflecting on the many meetings I have had in the last 6-8 months, it is clear that the next frontier of innovation revolves around 'Data', and Independent Software Vendors (ISVs) are back at the drawing board to chart the way forward.

Read on here


35 public Azure training solutions – Microsoft Cloud Workshops

August 16, 2018, 5:34 am

If you want to learn or just try out one of the modern cloud technologies, don't miss the new set of public HOLs (Hands-On Labs) offered as part of the Microsoft Cloud Workshop (MCW) program.

Microsoft Cloud Workshops

Each HOL typically includes a detailed manual and guidance for both the presenter and the attendees, a reference architecture, instructions for setting everything up in Azure, source code, ARM templates, and more.
Everything is stored in separate public repositories on GitHub (an example of cloning one of these repositories follows the list below):

  1. App modernization
  2. Azure Blockchain
  3. Azure security and management
  4. Azure security, privacy, and compliance
  5. Azure Stack
  6. Business continuity and disaster recovery
  7. Big Compute
  8. Big data and visualization
  9. Building a resilient IaaS architecture
  10. Containers and DevOps
  11. Cognitive services and deep learning
  12. Continuous delivery in VSTS and Azure
  13. Data Platform upgrade and migration
  14. Enterprise-class networking
  15. Enterprise-ready cloud
  16. Intelligent analytics
  17. Intelligent vending machines
  18. Internet of Things
  19. IoT for business
  20. Lift and shift/Azure Resource Manager
  21. Linux lift and shift
  22. Media AI
  23. Microservices architecture
  24. Migrate EDW to Azure SQL Data Warehouse
  25. Mobile app innovation
  26. Modern cloud apps
  27. Optimized architecture
  28. OSS DevOps
  29. OSS PaaS and DevOps
  30. SAP HANA on Azure
  31. SAP NetWeaver on Azure
  32. Securing PaaS
  33. Serverless architecture
  34. SQL Server hybrid cloud
  35. Windows Server and SQL Server 2008 R2 end of support planning
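
As an illustration, cloning one of these repositories from PowerShell might look like the following (the repository name is an assumption based on the MCW naming convention; check the Microsoft organization on GitHub for the exact names):

# Hypothetical example: clone one Microsoft Cloud Workshop (MCW) repository and inspect its contents.
# The repository name "MCW-App-modernization" is an assumption; verify it on GitHub first.
git clone https://github.com/Microsoft/MCW-App-modernization.git
Set-Location .\MCW-App-modernization
Get-ChildItem    # lab guides, reference architecture, ARM templates, and source code live here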

 


Using PerfView (ETW) in Windows (Docker) Containers

August 16, 2018, 1:25 pm

Virtual machines have been around on Windows for many years now, and for the most part you treat them exactly as if they were physical machines. In particular, with respect to a tool like PerfView, we expect it to 'just work' without my having to say anything special. (Actually, there is one difference: the 'CPU Ctrs' text box on the collection dialog doesn't do anything useful; see Cpu Ctrs TextBox for details.)

The trend with virtual machines (VMs) is to make them so lightweight, and therefore so cheap, that you use them more frequently. The main way of doing this is to share more of the VM with its underlying host, so that only what really needs to be distinct for each VM is actually unshared. These lightweight VMs are called containers, and Windows started supporting them (Microsoft calls them Windows Server Containers) a few years back. See Windows Container Quick Start for more. A tool/system called Docker is used to create, configure, and run applications in these lightweight operating system images.

I would have liked to say PerfView 'just works' like you would expect in these Docker containers, and I would be almost right. But there are enough caveats that I wrote some documentation on using PerfView in Windows Docker containers. Without repeating that documentation too much, the main points are:

  1. ETW (and PerfView) work in Windows Docker Containers
  2. However, you need a new enough version of the Windows OS (Windows 10 1709 or later).
  3. You only get the basic kernel events (CPU + context switch), but that covers most scenarios.
  4. Because the OSes used in containers can't display a GUI, PerfView has to be used with the /logfile option as described in its automation documentation.
  5. But otherwise PerfView 'just works'.
  6. You can even collect on the Windows Nano Server OS; however, you can't use PerfView itself but rather a stripped-down version called PerfViewCollect.

So if you are using Windows Docker containers for your app, and want to do a performance investigation, know that it is possible to do so, and you should read the PerfView Docs on Windows Containers to learn more (Edge users, you may have to search for 'Containers' to find the right spot).   This documentation is also part of the PerfView help, so you can find it there if that is more convenient.
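
As a rough illustration only (the container name, tool path, output locations, and collection length below are assumptions, and the exact PerfViewCollect command line should be taken from the PerfView documentation), a collection run driven from the host might look like this:

# Sketch: run PerfViewCollect inside a running Windows container, then copy the trace to the host.
# Container name, paths, and duration are assumptions for illustration.
docker exec my-windows-container C:\Tools\PerfViewCollect.exe /AcceptEula /LogFile:C:\Traces\collect.log /MaxCollectSec:30 collect C:\Traces\app.etl
# Copy the collected files to the host, where the full PerfView GUI can open the trace.
docker cp my-windows-container:C:\Traces .\Traces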

Once you have collected the data from the container, you copy it to a Windows machine with a GUI and analyze the trace there.

There is one final caveat: as of right now (8/2018) there seems to be an OS problem in what is called 'merging', where the information needed to look up symbolic information from Microsoft is not always correctly added to the data file. The exact conditions under which it fails are not really clear (it works for some DLLs but not others in the same trace), but the symptom is that sometimes PerfView's 'Lookup Symbols' on a native Microsoft-supplied DLL (typically some OS DLL) will fail, saying that the data file may not be 'merged' properly. For .NET applications this is rarely a problem, since you are much more interested in the app itself and the .NET Framework rather than the OS DLLs, but for other scenarios it may be an issue. There is a bug filed on this, and we will see what we learn.


RDP not working for my cloud service

August 16, 2018, 10:55 am

Today I am going to discuss one of the most common scenarios faced by Azure customers who are using the classic cloud service resource - RDP not working and throwing the error below.

[Screenshot: Remote Desktop error]

 

 

Sometimes you will not be able to download the RDP file itself and will get this error - "Failed to download the file. Error details: error 400 Bad Request".

You may get the above errors for multiple reasons:

  • The RDP user account or the encryption certificate has expired.
  • TCP port 3389 for RDP is blocked.
  • The RDP extension has been disabled.

Sometimes while troubleshooting RDP issues with my customers I have found that resetting the RDP user account expiration date or renewing the encryption certificate doesn't work. In those cases I always follow the steps below, which resolve these RDP issues with 100% certainty. 😉

  1. Create a self-signed certificate in .pfx format.
  2. Upload the self-signed certificate to the cloud service certificate store from the Azure Portal.
  3. Delete all the disabled RDP extensions, if present.
  4. Re-enable Remote Desktop for the roles by using the self-signed certificate which you created in step 1.

I have automated all four steps above with a PowerShell script to make life much easier. 🙂 Run the PowerShell script below in an admin (elevated) session.

 

$CloudServiceName = "mycloudservice" # Cloud Service name
$CertDnsName = "mycloudservice.cloudapp.net" # Cloud Service Azure DNS name
$CertPassword = "CertPassword" # Password for the self-signed certificate
$CertExportFilePath = "C:\my-cert-file.pfx" # Local file path where self-signed certificate will be exported
$RdpUserName = "RemoteUserName" # RDP user name
$RdpUserPassw0rd = "RdpPassword" # RDP user password
$Slot = "Production" # Cloud Service slot
$RdpAccountExpiry = $(Get-Date).AddDays(365) # RDP user account expiration DateTime
# Creating self-signed certificate
Write-Host (Get-Date).ToString() " : Creating self-signed certificate." -ForegroundColor Magenta
$cert = New-SelfSignedCertificate -DnsName $CertDnsName -CertStoreLocation "cert:\LocalMachine\My" -KeyLength 2048 -KeySpec "KeyExchange"
$SecureCertPassword = ConvertTo-SecureString -String $CertPassword -Force -AsPlainText
Export-PfxCertificate -Cert $cert -FilePath $CertExportFilePath -Password $SecureCertPassword
Write-Host (Get-Date).ToString() " : Self-signed certificate created successfully at" $CertExportFilePath -ForegroundColor Magenta
# Login to your Azure account
Write-Host (Get-Date).ToString() " : Logging into your Azure account." -ForegroundColor Magenta
Login-AzureRmAccount
Write-Host (Get-Date).ToString() " : Logged in successfully." -ForegroundColor Magenta
# Uploading self-signed certificate to the cloud service certificate store
Write-Host (Get-Date).ToString() " : Uploading self-signed certificate to the cloud service certificate store." -ForegroundColor Magenta
Add-AzureCertificate -ServiceName $CloudServiceName -CertToDeploy $CertExportFilePath -Password $CertPassword
Write-Host (Get-Date).ToString() " : Self-signed certificate uploaded successfully." -ForegroundColor Magenta
# Delete all the existing RDP extensions for a given cloud service slot
Write-Host (Get-Date).ToString() " : Deleting all the existing RDP extensions for" $Slot "slot." -ForegroundColor Magenta
Remove-AzureServiceRemoteDesktopExtension -ServiceName $CloudServiceName -UninstallConfiguration -Slot $Slot
Write-Host (Get-Date).ToString() " : Successfully deleted all the disabled RDP extensions for" $Slot "slot." -ForegroundColor Magenta
# Enabling remote desktop extension on specified role(s) or all roles on a cloud service slot
Write-Host (Get-Date).ToString() " : Enabling remote desktop extension on all the roles." -ForegroundColor Magenta
$SecureRdpPassword = ConvertTo-SecureString -String $RdpUserPassw0rd -Force -AsPlainText
$Credential = New-Object System.Management.Automation.PSCredential $RdpUserName,$SecureRdpPassword
Set-AzureServiceRemoteDesktopExtension -ServiceName $CloudServiceName -Credential $Credential -CertificateThumbprint $cert.Thumbprint -Expiration $RdpAccountExpiry -Slot $Slot
Write-Host (Get-Date).ToString() " : Remote desktop extension applied successfully." -ForegroundColor Magenta

 

The output of the script will look something like the following:

[Screenshot: output of the RDP enable script]

 

If you are still not able to RDP even after running the above script, then it is most likely a networking issue. A few possible reasons:

  • Maybe the network from which you are trying to RDP is blocking the traffic.
  • There could be some ACL rules configured in your cloud service.
  • Firewall rules configured using startup tasks.
  • If your cloud service is sitting behind an NSG, you may need to create rules that allow traffic on ports 3389 and 20000. The RemoteForwarder and RemoteAccess agents require that port 20000* is open, which may be blocked if you have an NSG.

Most of the time I have seen that the customer's corporate network blocks RDP traffic for security reasons. So the first thing you should check is whether you are able to reach RDP ports 3389 and 20000 (if applicable, as mentioned above) using PsPing, PortQry, or Telnet; a quick check using Test-NetConnection is shown below. You can refer to my blog where I have discussed how to troubleshoot RDP issues using various tools like PsPing and Network Monitor. If you are not getting any response, try to RDP from a different network, maybe a home network or mobile hotspot.
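
For example, a quick reachability check from PowerShell (replace the host name with your own cloud service DNS name):

# Check whether the RDP port and the RemoteForwarder port are reachable from the current network.
# "mycloudservice.cloudapp.net" is a placeholder for your cloud service DNS name.
Test-NetConnection -ComputerName "mycloudservice.cloudapp.net" -Port 3389
Test-NetConnection -ComputerName "mycloudservice.cloudapp.net" -Port 20000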

 

I hope this blog helps you resolve RDP issues related to Azure Cloud Services.

Happy Learning !


Protected: How to operationalize TensorFlow models in Microsoft Machine Learning Server

August 16, 2018, 5:05 pm

This content is password protected. To view it please enter your password below:


VS 2017 (15.8 update)

August 16, 2018, 7:12 pm

The Visual Studio 2017 (15.8 update) is now available for download, and you should see the 'new update available' notification in the coming weeks--you can also get the update now by downloading the 'free trial' version of the installer which will let you update your system.

The latest VS 2017 Redistribution packages are available (x86, x64), as well as the Remote Debugging Tools (x86, x64). For more on the Visual Studio 2017 (15.8) update, see the release notes.

Compiler and CRT

VS 2017 (15.8) includes a new version of the C/C++ compiler (19.15.26726). This includes some improvements to the SSA Optimizer, as well as some additional performance improvements for the linker. This also fixes some codegen bugs, including this one.

VS 2017 (15.8) adds a new C++ debugging feature known as "Just My Code" stepping (JMC). For more details, see this blog post.

There's a newly rewritten C++11/C99-compatible preprocessor in progress that you can try out with /experimental:preprocessor. For more details, see this blog post.

Note: Per this blog post, the _MSC_VER value is now 1915 instead of 1910, 1911, 1912, 1913, or 1914.

The C/C++ Runtime (14.15.26706) is included in this update. Remember that VS 2015 and VS 2017 share the same runtime redistributable binaries and that VS 2015 Update 3 is binary compatible with VS 2017--code or a static library built with one can be linked to the other--so this is the latest version for both.

Xbox One: By default /JMC is enabled with VS 2017 (15.8) in Debug configurations which can lead to a link error when using the Xbox One XDK (unresolved symbol __CheckForDebuggerJustMyCode). You can easily resolve this by going to your project settings under C/C++ -> General and setting "Support Just My Code Debugging" to "No", and then rebuild. This will be fixed in a future Xbox One XDK QFE at which time you can re-enable this feature if desired.

Static analysis: As I mentioned with the VS 2017 (15.7 update), the /analyze switch now includes some C++ Core Guidelines checker rules per this blog post. With VS 2017 (15.8 update), a fair amount of the 'noise' introduced with this change has been addressed.

Related: VS 2017 (15.7 update), VS 2017 (15.6 update), VS 2017 (15.5 update), Windows 10 Fall Creators Update SDK (15.4), VS 2017 (15.3 update), Windows 10 Creators Update SDK (15.1/15.2), Visual Studio 2017 (15.0)


Tenant Restrictions support in the Lync/Skype for Business clients

August 16, 2018, 10:20 pm

Hello, this is the Japan Skype/Lync support team.

 

This post explains how the Lync/Skype for Business clients handle "Tenant Restrictions".

 

 

What are Tenant Restrictions?

In a nutshell, it is a feature that prevents users in your own environment from connecting to other companies' Office 365 tenants.
Many customers will likely want to adopt this feature to strengthen security.

For details, please refer to the technical documentation below, as well as the blog of our technical sales team, which also covers this topic in depth.

Title: Use tenant restrictions to manage access to SaaS cloud applications
URL: https://docs.microsoft.com/ja-jp/azure/active-directory/manage-apps/tenant-restrictions

Title: Controlling access to tenants other than your own – the "Tenant Restrictions" feature
URL: https://blogs.technet.microsoft.com/office365-tech-japan/2017/02/06/tenant-restrictions/
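
For background on how the feature is enforced: the corporate proxy adds the Restrict-Access-To-Tenants and Restrict-Access-Context headers to traffic sent to the Azure AD sign-in endpoints. The PowerShell snippet below is purely illustrative (the tenant name and directory ID are placeholders, and in practice these headers are inserted by the proxy, not by a client script):

# Illustration only: the headers a corporate proxy inserts to enforce Tenant Restrictions.
# The tenant name and directory ID below are placeholders.
$tenantRestrictionHeaders = @{
    "Restrict-Access-To-Tenants" = "contoso.onmicrosoft.com"              # tenants users are allowed to access
    "Restrict-Access-Context"    = "00000000-0000-0000-0000-000000000000" # directory ID of the tenant that sets the policy
}
# A sign-in request passing through such a proxy carries these headers:
Invoke-WebRequest -Uri "https://login.microsoftonline.com/common/oauth2/authorize" -Headers $tenantRestrictionHeaders -UseBasicParsing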

 

Tenant Restrictions in Skype for Business

First of all, Tenant Restrictions currently work with any of the Lync 2010, Skype for Business 2015 (Lync 2013), and Skype for Business 2016 clients.

This means that Tenant Restrictions work even in cases that do not necessarily meet the following conditions from the Tenant Restrictions documentation referenced above.

Office 365 applications must meet two criteria to fully support Tenant Restrictions:

  1. The client used supports modern authentication
  2. Modern authentication is enabled as the default authentication protocol for the cloud service.

https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/tenant-restrictions#office-365-support

 

In this post we therefore want to explain this point in a bit more detail.

 

Tenant Restrictions and legacy authentication

The Tenant Restrictions documentation also contains the following statement.

Outlook and Skype for Business clients that support modern authentication may still able to use legacy protocols against tenants where modern authentication is not enabled, effectively bypassing Tenant Restrictions. Applications that use legacy protocols may be blocked by Tenant Restrictions if they contact login.microsoftonline.com, login.microsoft.com, or login.windows.net during authentication.

https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/tenant-restrictions#office-365-support

 

Tenant Restrictions are implemented to take effect when modern authentication is enabled on both the client and the tenant and modern authentication is actually used during sign-in. As a result, when legacy authentication is used (that is, when the modern-authentication condition for Tenant Restrictions is not met), Tenant Restrictions are bypassed.
On the other hand, the documentation above also states that even when legacy authentication is used, Tenant Restrictions do take effect if the client contacts login.microsoftonline.com, login.microsoft.com, or login.windows.net during authentication.

With legacy authentication, the Lync 2010, Skype for Business 2015 (Lync 2013), and Skype for Business 2016 clients currently contact login.microsoftonline.com during sign-in. They therefore match the condition above, so Tenant Restrictions take effect even when modern authentication is not used.

In other words, for the Lync/Skype for Business clients, Tenant Restrictions work regardless of client version or authentication type (modern or legacy).

 

In the past it was difficult to enforce Tenant Restrictions for the Lync 2010 client, which does not support modern authentication, and for the Skype for Business 2015/2016 clients, which support modern authentication but cannot completely prevent legacy authentication from being used. Now, as described above, Tenant Restrictions work even with legacy authentication, so please consider using Tenant Restrictions as part of your security measures for Skype for Business Online.

 

Disclaimer:
The content of this information (including attachments, linked content, and so on) is current as of the date it was written and is subject to change without notice.


[Important] Notice of a major update to Skype for Business Online dial plans

August 16, 2018, 10:25 pm

Hello, this is the Japan Skype/Lync support team.
Today we would like to inform you about a major update concerning dial plans.

Notice of a major update to Skype for Business Online (MC146430)

 

Summary:
Starting September 15, 2018, the service-level dial plan named "JP Default" will be removed from Skype for Business Online in the Japan region.
Dial plan rule being removed:
Rule description: JP Default Rule
Rule name: JP Default

Impact:
If you use PSTN calling through Cloud Connector Edition (CCE), on-premises PSTN connectivity (OPCH), or Direct Routing in Microsoft Teams, outbound calls may encounter problems and calls may fail.

What you need to do:
If your Business Voice dial plan depends on the JP Default Rule, you need to add and/or modify a tenant dial plan by using Skype for Business Online PowerShell (an illustrative sketch follows the KB links below).

If you do not add a tenant dial plan by September 15, 2018, the Skype for Business PSTN calling service may be affected.

Detailed instructions:
Please see the following KB articles.
English KB: https://support.microsoft.com/en-us/help/4346668/how-to-add-normalization-rules-to-change-japanese-dial-plans
Japanese KB (machine translation): https://support.microsoft.com/ja-jp/help/4346668/how-to-add-normalization-rules-to-change-japanese-dial-plans
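
The KB articles above contain the exact normalization rules to add. As an illustrative sketch only (the dial plan name, pattern, translation, and user below are placeholders, not the values from the KB), adding a tenant dial plan with Skype for Business Online PowerShell generally looks like this:

# Sketch: create a tenant dial plan with one normalization rule and assign it to a user.
# The plan name, pattern, translation, and user are placeholders; take the real rules from the KB above.
$rule = New-CsVoiceNormalizationRule -Parent Global -Name "JPNational" -Pattern '^0(\d{9,10})$' -Translation '+81$1' -InMemory
New-CsTenantDialPlan -Identity "JPTenantDialPlan" -NormalizationRules @{Add=$rule}
Grant-CsTenantDialPlan -PolicyName "JPTenantDialPlan" -Identity "user@contoso.com"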

Publication in the Office 365 Message Center:
This change has been published in the Office 365 Message Center as MC146430, under major updates.
To help you respond to this update as smoothly as possible, we will continue to update MC146430 with information in real time.
We will also share information in this article as it becomes available, so please check both.


SharePoint 2010 and August 2018 CU

August 17, 2018, 4:44 am

Our next cumulative update packages are available. For SharePoint we still suggest installing a full-server package, which looks as follows.

New since November 2014: The prerequisites have changed; you need to have Service Pack 2 installed before upgrading to the next CU. That also means that Service Pack 1 is out of support: https://support.microsoft.com/en-us/lifecycle/search?alpha=sharepoint%202010

The way we provide the cumulative update packages changed with calendar year 2015, which means you can find all packages on our Download Center.

On the download website it doesn't matter which language you choose; the packages are the same.

MSF2010:
No packages this month.

SPS2010:
4032221 is the full server package for SharePoint Server 2010 and also contains the MSF 2010 fixes, so you need only this one package.
https://support.microsoft.com/en-us/KB/4032221

Download link: https://www.microsoft.com/en-us/download/details.aspx?id=57254

Project Server 2010:
4092438 is the full server package for Project Server 2010 and also contains the SharePoint Server 2010 fixes, so you need only this one package.
https://support.microsoft.com/en-us/KB/4092438

Download link: https://www.microsoft.com/en-us/download/details.aspx?id=57246

Important for all Server Applications listed above:
After applying the preceding updates, run the SharePoint Products and Technologies Configuration Wizard or check the post from my colleague Stefan: Why I prefer PSCONFIGUI.EXE over PSCONFIG.EXE

Links:
Update Center for Microsoft Office, Office Servers, and Related Products
SharePoint patching demystified


SharePoint 2013 August 2018 CU

August 17, 2018, 4:54 am

Our SharePoint product group released the next monthly cumulative updates. How does patching work for SharePoint 2013? Read more in the post from my colleague Stefan.

Since April 2015:

  • You need to have SP1 installed!
  • In case you have installed an SP1 slipstream version, please read the article from my colleague Stefan!
  • For a search-enabled farm, please also check this article.

The way we provide the cumulative update packages changed in 2015, which means you can find all packages on our Download Center.

On the download website it doesn't matter which language you choose; the packages are the same.

Plan the upgrade carefully; you may need more time if the current patch level of your farm is June 2015 or earlier! The reason relates to Search and psconfig. Please check the post from my colleague Stefan!

SharePoint Foundation 2013
4032244 The full server package for SharePoint Foundation 2013
https://support.microsoft.com/en-us/help/4032244

Download link: https://www.microsoft.com/en-us/download/details.aspx?id=57251

SharePoint Server 2013
4032247 is the full server package for SharePoint Server 2013 and also contains the SharePoint Foundation 2013 fixes, so you need only this package.
https://support.microsoft.com/en-us/help/4032247

Download link: https://www.microsoft.com/en-us/download/details.aspx?id=57260

Office Web Apps Server 2013
4022238 The full server package for OWAS 2013
https://support.microsoft.com/en-us/help/4022238

Download link: https://www.microsoft.com/en-us/download/details.aspx?id=57258

Project Server 2013
4032245 is the full server package for Project Server 2013 and also contains the SharePoint Server and Foundation 2013 fixes, so you need only this package.
https://support.microsoft.com/en-us/help/4032245

Download link: https://www.microsoft.com/en-us/download/details.aspx?id=57232

Important for all Server Applications listed above:
Important for all server applications listed above:
After any binary update you need to run psconfig or psconfigui. Please refer to the following article to find out what is best for you: why-i-prefer-psconfigui-exe-over-psconfig-exe.aspx

You might have your own strategy for running psconfig, because it depends on the farm structure and what makes sense to reduce downtime.

Regarding psconfig: With the August 2016 CU for SharePoint Server 2016 we updated psconfig with a couple of improvements, discussed in my colleague's blog posts:

  • PSConfig improved error reporting
  • SharePoint Patching and Get-SPProduct -local

The news is that these improvements are now available with the December update also for SharePoint Server 2013.

After you have installed the binaries, run psconfig as soon as possible (especially if the patch level of your farm is June 2015 or earlier)! The reasons are described here.

Related Info:

Update Center for Microsoft Office, Office Servers, and Related Products

Common Question: What is the difference between a PU, a CU and a COD?

How to: install update packages on a SharePoint 2013 farm where search component and high availability search topologies are enabled

CHANGE: SharePoint 2013 Rollup Update for the December 2013 Cumulative Update Packaging

SQL Server 2014 and SharePoint Server 2013


SharePoint 2016 August 2018 CU

August 17, 2018, 5:01 am

The next cumulative update for SharePoint Server 2016 is available. We may call it a public update (PU) in the future.

This CU also includes Feature Pack 1, which was released with the November 2016 CU, and Feature Pack 2, which was released with the September 2017 CU.

KB 4032256 Language independent version
https://support.microsoft.com/en-us/help/4032256

Download it from: https://www.microsoft.com/en-us/download/details.aspx?id=57222

KB 4022231 Language dependent fixes
https://support.microsoft.com/en-us/help/4022231

Download it from: https://www.microsoft.com/en-us/download/details.aspx?id=57236

No package this month for Office Online Server 2016

After installing the fixes (both packages!) you need to run the SharePoint 2016 Products Configuration Wizard on each machine in the farm. If you prefer to run the command-line version, psconfig.exe, make sure to have a look here for the correct options (a commonly cited invocation is sketched below).
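
For reference, the psconfig invocation that is commonly cited for finishing SharePoint patching looks like the sketch below; treat it as an illustration and verify the options against the guidance linked above before running it in your farm.

# Commonly cited psconfig command line for completing patching; confirm the options against the linked post.
PSConfig.exe -cmd upgrade -inplace b2b -wait -cmd applicationcontent -install -cmd installfeatures -cmd secureresources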

SharePoint 2016 CU build numbers: 16.0.4732.1000 for the language-independent package and 16.0.4732.1001 for the language-dependent package.

You can use the SharePoint Server 2016 Patch Build Numbers PowerShell Module to identify the patch level of your SharePoint components; a quick farm-level check is sketched below.
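
A minimal sketch of that farm-level check from an elevated SharePoint Management Shell (it only reports the configuration database build number, not per-component patch levels):

# Show the farm's configuration database build number (minimal check, not a full patch report).
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
(Get-SPFarm).BuildVersion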

Related Links:

  • Blog: Common Question: What is the difference between a PU, a CU and a COD?
  • Blog: SharePoint Patching demystified
  • Blog: Why I prefer PSCONFIGUI.EXE over PSCONFIG.EXE
  • TechNet: Update Center for Microsoft Office, Office Servers, and Related Products
  • Blog: SharePoint Server 2016 Patch Build Numbers PowerShell Module

Blog: Please read when you install February 2017 CU or later and you use MIM for User Profile Import


Azure IoT Edge support for Raspbian 8.0/Debian 8.0

August 17, 2018, 4:04 am

Officially, Azure IoT Edge works only on version 9 of Debian. If you're using a Raspberry Pi, you'll most likely use a Raspbian version, which is based on Debian, so likewise you'll have to be on version 9 to be able to deploy Azure IoT Edge on your device.

But what if you're running version 8 and for some reason you can't upgrade to version 9? Well, the good news is that there is a way to make it work. Let's see how and what is needed.

Specifics of Azure IoT Edge

Azure IoT Edge uses some components that are not present in Debian 8, such as libssl 1.0.2. The reason is that IoT Edge uses DTLS, which is not available in the previous versions.

You can try to update the package list and force the version, but in version 8 (jessie) this package does not exist. So the way to get it is from the next version, 9 (stretch).

The libssl package can be found here: https://packages.debian.org/stretch/libssl1.0.2 and, as you'll see, there is support for armhf, which is the architecture of the Raspberry Pi processor.

Pre-installation before Azure IoT Edge

Basically, we’ll have to download and install the package.

wget http://ftp.us.debian.org/debian/pool/main/o/openssl1.0/libssl1.0.2_1.0.2l-2+deb9u3_armhf.deb

sudo dpkg -i libssl1.0.2_1.0.2l-2+deb9u3_armhf.deb

sudo apt-get install -f

If you get an error message, it may be because you have an incompatible version of libssl installed, such as libssl-dev. In this case, just purge it with

sudo apt-get purge libssl-dev

Now, you can follow the instructions here: https://docs.microsoft.com/en-us/azure/iot-edge/how-to-install-iot-edge-linux-arm

Bang, another issue: LISTEN_FDNAMES

Once you’ve done that, you’ll still have an issue. When looking at the journal sudo journalctl -u iotedge -f, the IoTEdge will tell you the environment variable LISTEN_FDNAMES is not setup. The reason is that in the Debian 9, this is not used anymore. The way the service is setup is using another mechanism.

So, in short, we’ll have to add it. Reading a bit about it and searching about it, it needs to be added in the /etc/system/system/multi-user.target.wants/iotedge.service

Just add the line below in the In the [Service] section

Environment=LISTEN_FDNAMES=iotedge.mgmt.socket:iotedge.socket

And now, if you look at the journal again, you will see the two core containers downloading, and you'll see in the portal that your IoT Edge device gets connected. Just allow some time for the downloads to happen and you'll be good to go!

Keep in mind that if you're building your own edge containers, they must be built for Debian 8 and not for version 9. Otherwise, they won't start or run correctly.

Please note as well that this scenario is not supported. Officially, Azure IoT Edge is not built to run on Debian 8, only 9, so other issues can occur.

 


Need Local Features for Your Country? Now You can Build Your Own!

August 17, 2018, 4:23 am

In 20 countries, Microsoft Dynamics 365 Business Central currently offers the same out-of-the-box support for local features as Microsoft Dynamics NAV does. Local features cover country-specific needs, such as requirements for reporting tax, formats for exchanging information with banks, and so on. For Dynamics NAV, those 20 countries are just the start - the flexibility of the solution has enabled Microsoft partners to customize it to deliver local functionality to their markets around the world. Until now, that wasn’t an option for Business Central.

As announced in our October ‘18 Release Notes, we’ve worked to close this gap and enable partners to add local features as upgradable, easy-to-maintain apps in the countries we don’t already support. There’s a lot of buzz about this new opportunity, and many partners are already on-board. For example, our partners in South Korea, United Arab Emirates, and South Africa have already wrapped their local functionality into apps for those countries, and partners in many more countries are following suit.

How does it work?

We provide the platform and the nuts-and-bolts needed to build an app for Business Central, and partners use their knowledge of local requirements to build apps that address them.

Without going into detail (that guidance is available here), some examples of what we do on our end include:

  • Setting up the data center, or data plane, and preparing the base application.
    This is largely invisible to partners, but it’s a fundamental part of the process.
  • Providing Docker images with the required collations.
    For example, collation enables the use of special characters, such as the “ø” and “å” vowels in Danish.
  • Offering a W1 build, which is the global base application that partners can build on.

On the partner side, the effort boils down to:

  • Translating the texts in their apps.
  • Translating the global base application to the local language.
  • Developing the local functionality as apps.
  • Submitting their apps for approval and deployment on AppSource.

One advantage of going the app route is that you can deliver smaller chunks of functionality that are easier to maintain and upgrade. For example, functionality for Denmark includes four apps that each address a specific need. One app offers standard formats that banks require when exchanging electronic files. If a bank changes its formats, only that app needs to be updated.

Making local features available globally

We also found features that more than one country uses but that were duplicated across the feature sets. To deliver a stronger, richer base application that all countries can benefit from, we moved these features from local functionality to the core application.

What does this mean for you? 

In short, we’re providing a complete cloud solution, reliable service, a stronger application base, and an easy-to-extend product to help you deliver your own localization. If you’re interested, more information about how to build localization apps is available in this article: https://aka.ms/BusinessCentralLocApps.


CosmosDB change feed monitoring

August 17, 2018, 6:15 am

Let's continue the series of posts focused on the Cosmos DB change feed. In this post we will focus on diagnostics.

Goals

After running the change feed processor we need to be sure that:

  • the documents are processed from all partitions,
  • the age of the documents is within the limit,
  • the cost consumed by the change feed processor is under control, and
  • the communication between the change feed processor and Cosmos DB meets QoS requirements.

Let's dive into each aspect from the bottom up.

 

The costs and QoS measurement

The change feed processor uses the DocumentDB SDK to connect and communicate with Cosmos DB. It connects to two sources: the document feed collection and the lease collection. Just to recap:

  • the feed collection is the collection providing the documents for processing
  • the lease collection is a collection used by the change feed processor to manage and distribute partition processing evenly across all change feed processor instances.

The change feed processor lets the developer configure how to connect to the collections. By default it creates a DocumentClient under the hood. It also makes it possible to pass DocumentClient or IChangeFeedDocumentClient instances directly.

We will use the last option here (document client level). Using this approach it is possible to intercept all calls made from the change feed processor to the document client and meter the calls, their frequency, costs, and reliability.

 

Let's define a metering reporter interface and its console implementation.

 

Let's create a metering decorator for IChangeFeedDocumentClient:

 

As you can see the change feed query has to be decorated too. Here is the decorator:

 

And let's put it all together:

That's all. It's possible to run it.

In our experience, we used this to see whether we have throttling issues when accessing the lease collection: listing the leases, updating the leases, and so on. It all adds to the feed processing time, so it's worth measuring.

In addition to monitoring, using an explicit document client gives us the ability to fine-tune the connection policy (e.g., switching to direct TCP connection mode, open-connection timeouts, etc.).

 

Monitoring the age of the documents

This is the metric which defines how much the change feed processing lags behind. Each document has a field _ts (document level).

 

This is a system property representing epoch time (note: it's the number of seconds that have elapsed since 00:00:00 UTC, 1 January 1970) when the document was last updated (e.g., created or replaced). That's enough for this measurement. Let's see it in action:

 

In our experience this is one of the metrics we use to define the SLA for our service, where we measure end-to-end latency/processing time. It has one more requirement: time synchronization. So you need to ensure that the servers where the change feed processing is running are synchronized with the closest NTP servers.

 

Processing of the documents from all partitions is moving forward

This is the most important metric, I think. It's possible to track it on two levels:

  • observer level
  • document client level

 

As we saw in the first post of this series, the observer is the component that receives the feed. Here is the interface:

 

We will leverage the OpenAsync/CloseAsync callback methods. The observer is called when the change feed processor instance opens or closes processing of a partition. This can happen for reasons like:

  • redistributing the partition processing load when scaling the cluster up/down, or
  • redistributing the partition processing load during rollout/shutdown of the cluster, or
  • a system error

Here is the complete list of the closing reasons:

It's necessary to monitor for a high and constant frequency of partition closings, because it signals an issue, especially the reasons Unknown and ObserverError.

Unknown is due to internal change feed processor issues. In such a case, inspect the logs (see the previous post) and report the issue at https://github.com/Azure/azure-documentdb-changefeedprocessor-dotnet.

In the case of an observer error, the issue will be in your code 😉

All other reasons signal transient issues, and the system should recover from them.

At the end of the day, you need to monitor that the number of open partitions is equal to the number of all partitions in the collection and there are no closings. This ensures that the processing is working smoothly.

But that's not all, folks. It's also necessary to track that the partitions are actually being processed. It can happen that partition processing is open but reading the feed for a partition is stuck. Yes, we had such issues in the past! So how do we solve it?

There is a cheap but "undocumented" way which we will try to get into the library.

Undocumented way

Reading the partition feed is done by the change feed document client, and its response contains a session token and the feed documents. The session token is in the format <partition ID>:<LSN>, where the LSN is the last committed transaction number for the partition.

Each document also has an "_lsn" property, which is the LSN of the transaction that committed the document change (create or replace).

The whole point is to verify that documents are being read from each particular partition and that reading is progressing to the end. One option is to report the remaining work (session LSN - document LSN).

 

Remaining work estimator

The previous approach was, as noted, "undocumented". There is also another, documented approach built into the change feed processor SDK and exposed via IRemainingWorkEstimator. Let's see it:

It has a drawback: the estimator calculates the estimated work for the whole consumer and collection, not per partition. So the best option is to go with the undocumented way until per-partition estimation is exposed in the SDK. We will fix it!

This helped us to avoid incidents, especially in the early stages of integrating the change feed processor.

 

Let's put it all together. As usual, the whole code is at my github repo. After running it, the output would be:

 

 

 

 

Previous posts:

  1. New Azure CosmosDB change feed processor released!
  2. Logging in CosmosDB change feed processor

 


Database ownership chaining in Azure SQL Managed Instance

August 17, 2018, 9:02 am

Azure SQL Managed Instance enables you to run cross-database queries the same way you do it in SQL Server. It also supports cross-database ownership chaining that will be explained in this post.

Cross-database ownership chaining enables logins to access objects in other databases on the SQL instance even if explicit access permissions are not granted on those objects, provided the logins access the objects via some view or procedure, the view/procedure and the objects in the other database have the same owner, and the DB_CHAINING option is turned on for the databases.

In this case, if you have the same owner on several objects in several databases, and you have some stored procedure that accesses these objects, you don't need to GRANT access permissions on every object that the procedure needs to access. If the procedure and the objects have the same owner, you can just GRANT permission on the procedure, and the Database Engine will allow the procedure to access all other objects that share the same owner.

In this example, I will create two databases that have the same owner and a login that will be used to access the data. One database will have a table, and the other database will have a stored procedure that reads data from the table in the first database. The login will be granted permission to execute the stored procedure, but not to read data from the table:

-- Create two databases and a login that will call procedure in one database
CREATE DATABASE PrimaryDatabase;
GO 
CREATE DATABASE SecondaryDatabase;
GO
CREATE LOGIN TheLogin WITH PASSWORD = 'Very strong password!'
GO

-- Create one database with some data table, and another database with a procedure that access the data table.
USE PrimaryDatabase;
GO
CREATE PROC dbo.AccessDataTable
AS
BEGIN
SELECT COUNT(*) FROM SecondaryDatabase.dbo.DataTable;
END;
GO
CREATE USER TheUser FOR LOGIN TheLogin;
GO 
GRANT EXECUTE ON dbo.AccessDataTable TO TheUser;
GO

USE SecondaryDatabase;
GO
SELECT * INTO dbo.DataTable FROM sys.objects;
GO
CREATE USER TheUser FOR LOGIN TheLogin;
GO

If you try to read the table directly, you will get an error because the login doesn't have SELECT permission on the table:

EXECUTE('SELECT * FROM SecondaryDatabase.dbo.DataTable') AS LOGIN = 'TheLogin' ;
GO
-- Msg 229, Level 14, State 5, Line 34
-- The SELECT permission was denied on the object 'DataTable', database 'SecondaryDatabase', schema 'dbo'.

The same thing will happen if you try to read the data from the table using the stored procedure:

EXECUTE('EXEC PrimaryDatabase.dbo.AccessDataTable') AS LOGIN = 'TheLogin' ;
GO
--Msg 229, Level 14, State 5, Procedure dbo.AccessDataTable, Line 5 [Batch Start Line 65]
--The SELECT permission was denied on the object 'DataTable', database 'SecondaryDatabase', schema 'dbo'.

Although the user has the right to execute the procedure, the Database Engine will block the query since the login doesn't have access rights to read from the underlying table in SecondaryDatabase.

Now, we can enable ownership chaining on the databases:

ALTER DATABASE PrimaryDatabase SET DB_CHAINING ON;
GO
ALTER DATABASE SecondaryDatabase SET DB_CHAINING ON;
GO

If we try to access the table again via the procedure, we now get results:

EXECUTE('EXEC PrimaryDatabase.dbo.AccessDataTable') AS LOGIN = 'TheLogin' ;

The Managed Instance/Database Engine will see that the procedure and the table have the same owner, and since DB_CHAINING is turned on, it will allow access to the table.

However, note that the login still doesn't have the right to access the table directly, because nobody granted it access:

EXECUTE('SELECT * FROM SecondaryDatabase.dbo.DataTable') AS LOGIN = 'TheLogin' ;
GO
--Msg 229, Level 14, State 5, Line 54
--The SELECT permission was denied on the object 'DataTable', database 'SecondaryDatabase', schema 'dbo'.

Conclusion

Database ownership chaining might be useful, but it can also lead to unexpected behavior from a security perspective. You need to analyze carefully whether and when you want to configure it. A quick way to check which databases have chaining enabled is sketched below.
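
As a minimal sketch (using the SqlServer PowerShell module's Invoke-Sqlcmd cmdlet; the server name and credentials are placeholders), you can list which databases currently have DB_CHAINING turned on:

# List each database and its cross-database ownership chaining setting.
# The server name and credentials below are placeholders; requires the SqlServer module (Invoke-Sqlcmd).
Invoke-Sqlcmd -ServerInstance "myinstance.abcdef012345.database.windows.net" -Username "mi-admin" -Password "Very strong password!" -Query "SELECT name, is_db_chaining_on FROM sys.databases;"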


What’s new in Microsoft Social Engagement 2018 Update 1.8

August 17, 2018, 9:41 am

Microsoft Social Engagement 2018 Update 1.8 is ready and will be released in August 2018. This article describes the features, fixes, and other changes that are included in this update.

New and updated features

Microsoft Social Engagement 2018 Update 1.8 introduces the following features:

Provide your feedback about Social Engagement

After signing in to Social Engagement, you can select the smiley symbol and provide your feedback about the app and the service. We're looking forward to hearing your thoughts and will continue to refine the product based on your feedback.

 

Provide feedback

 

Compliance stream for WordPress comments and Disqus

When a comment on WordPress is removed by the author, it's also removed from Social Engagement databases. If a user decides to delete their Disqus profile, all related comments and threads are deleted in Social Engagement too.

Removal of interaction token for Facebook user profiles

With this release, we're following Facebook's Graph API v3.0 changes from August 1, 2018: Social Engagement stops supporting engagement actions for Facebook user profiles. Interaction tokens for Facebook users were removed from Social Engagement and you won’t be able to add them anymore.

Good to know: This change doesn't affect the way you work with social profiles for Facebook pages in Social Engagement. You still need to create a Facebook acquisition profile to allow data acquisition from Facebook for Facebook pages you administer.

 

Looking for more information? Visit our help center.


Nuget package restore failures due to timeout errors in all regions – 08/17 – Investigating

August 17, 2018, 11:15 am

Initial notification: Friday, August 17th 2018 18:01 UTC

We're investigating NuGet package restore failures for customers using VSTS hosted build agents in all regions. Customers who are using a corporate proxy while working on private build agents will notice NuGet package restore failures with a timeout message.

We are working on reverting the recent version update to this task to mitigate this issue.

  • Next Update: Before Friday, August 17th 2018 20:35 UTC

Sincerely,
Manohar


NEW AzureCAT GUIDANCE: High performance computing on Azure: Migration guide for A8 to A11 virtual machines

August 17, 2018, 6:15 pm

Check out our new Whitepaper!

This guide provides recommendations and tools for migrating the virtual machines (VMs) in the A8 to A11 sizes. KR Kandavel of AzureCAT tells you how to migrate these legacy HPC clusters into new VM series—such as H, D, E, and F—for better performance with reduced cost. To help reduce downtime for your workloads, he shares tools and scripts to help you automate the migration process.

 

  • Download the whitepaper on Azure.com

 

As more powerful high performance computing (HPC) clusters become available in Microsoft Azure datacenters, we recommend assessing your workloads and migrating the virtual machines (VMs) in the A8 to A11 sizes. These legacy HPC clusters can be migrated into new VM series— such as H, D, E, and F—for better performance with reduced cost. Newer datacenters include the next generation of Azure HPC VMs known as the H series, which are intended for high-end computational needs, such as molecular modeling and computational fluid dynamics. The main difference between the A series and H series is improved cluster performance. The H series clusters have more modern cores and greater capacity.

Authored by KR Kandavel. Edited by Nanette Ray. Reviewed by AzureCAT.

 

Azure CAT Guidance

"Hands-on solutions, with our heads in the Cloud!"


Simplifying big data analytics architecture

August 17, 2018, 1:16 pm

Fast interactive BI, data security, and end-user adoption are three critical challenges for successful big data analytics implementations. Without the right architecture and tools, many big data and analytics projects fail to catch on with everyday BI users and enterprise security architects. In this blog we will discuss architectural approaches that will help you architect a big data solution for fast interactive queries, a simplified security model, and improved adoption among BI users.

Traditional approach to fast interactive BI

Deep analytical queries processed on Hadoop systems have traditionally been slow. MapReduce jobs or Hive queries are used for heavy processing of large datasets; however, they are not suitable for the fast response times required by interactive BI usage.

Faced with user dissatisfaction due to lack of query interactivity, data architects used techniques such as building OLAP cubes on top of Hadoop. An OLAP cube is a mechanism to store all the different dimensions, measures and hierarchies up front. Processing the cube usually takes place at the pre-specified interval. Post processing, results are available in advance, so once the BI tool queries the cube it just needs to locate the result, thereby limiting the query response time and making it a fast and interactive one. Since all measures get pre-aggregated by all levels and categories in the dimension, it is highly suitable for interactivity and fast response time. This approach is especially suitable if you need to light up summary views.

 


The above approach works for certain scenarios but not all. It tends to break down with large big data implementations, especially in use cases where many power users and data scientists are writing many ad-hoc queries.

Here are the key challenges:

  • OLAP cubes require precomputation to create aggregates, which introduces latency. Businesses across all industries are demanding more from their reporting and analytics infrastructure within shorter business timeframes, and OLAP cubes can't deliver real-time analysis.
  • In big data analytics, precomputation puts a heavy burden on the underlying Hadoop system, creating unsustainable pressure on the entire big data pipeline, which severely hampers its performance, reliability, and stability.
  • This type of architecture forces large dataset movement between different systems, which works well at small scale but falls apart at large data scale. Keeping data hot and fresh across multiple tiers is challenging.
  • Power users and data scientists require a lot more agility and freedom to experiment with sophisticated ad-hoc queries, which puts an additional burden on the overall system.

 

Azure HDInsight Interactive query overview

One of the most exciting new features of Hive 2 is Low Latency Analytics Processing (LLAP), which produces significantly faster queries on raw data stored in commodity storage systems such as Azure Blob store or Azure Data Lake Store.

This reduces the need to introduce additional layers to enable fast interactive queries.

Key benefits of introducing Interactive Query in your big data BI architecture:

Extremely fast interactive queries: Intelligent caching and optimizations in Interactive Query produce blazing-fast query results on remote cloud storage, such as Azure Blob and Azure Data Lake Store. Interactive Query enables data analysts to query data interactively in the same storage where the data is prepared, eliminating the need to move data from storage to another analytical engine. Refer to Azure HDInsight Performance Benchmarking: Interactive Query, Spark, and Presto to understand HDInsight Interactive Query performance expectations at 100 TB scale.

HDInsight Interactive Query (LLAP) leverages a set of persistent daemons that execute fragments of Hive queries. Query execution on LLAP is very similar to Hive without LLAP, except that worker tasks run inside LLAP daemons, not in containers.


Lifecycle of a query: After the client submits a JDBC query, the query arrives at Hive Server 2 Interactive, which is responsible for query planning and optimization, as well as security trimming. Since each query is submitted via Hive Server 2, it becomes the single place to enforce security policies.


File format versatility and Intelligent caching: Fast analytics on Hadoop have always come with one big catch: they require up-front conversion to a columnar format like ORCFile, Parquet or Avro, which is time-consuming, complex and limits your agility.

With Interactive Query Dynamic Text Cache, which converts CSV or JSON data into optimized in-memory format on-the-fly, caching is dynamic, so the queries determine what data is cached. After text data is cached, analytics run just as fast as if you had converted it to specific file formats.

The Interactive Query SSD cache combines RAM and SSD into a giant pool of memory, with all the other benefits the LLAP cache brings. With the SSD cache, a typical server profile can cache 4x more data, letting you process larger datasets or support more users. The Interactive Query cache is aware of underlying data changes in the remote store (Azure Storage). If the underlying data changes and a user issues a query, the updated data is loaded into memory without requiring any additional user steps.


Concurrency: With the introduction of much improved fine-grained resource management, preemption, and sharing of cached data across queries and users, Interactive Query (Hive on LLAP) is much better for concurrent users.

In addition, HDInsight supports creating multiple clusters on shared Azure Storage, and a shared Hive metastore helps achieve a high degree of concurrency, so you can scale concurrency by simply adding more cluster nodes or adding more clusters pointing to the same underlying data and metadata.

Please read Hive Metastore in HDInsight to learn more about sharing metastore across clusters and cluster types in Azure HDInsight.

Simplified and scalable architecture with HDInsight Interactive Query

By introducing Interactive Query into your architecture, you can route power users, data scientists, and data engineers to hit Interactive Query directly. This architectural improvement reduces the overall burden on the BI system, increases user satisfaction thanks to fast interactive query responses, and increases flexibility to run ad-hoc queries at will.


In the architecture described above, users who want to see summary views can still be served by OLAP cubes, while all other users leverage Interactive Query for submitting their queries.

For OLAP-based applications on Azure HDInsight, please see solutions such as AtScale and Kyligence.

Security model

Like Hadoop and Spark clusters, HDInsight Interactive Query leverages Azure Active Directory and Apache Ranger to provide fine-grained access control and auditing. Please read the An introduction to Hadoop security article to understand the security model for HDInsight clusters.

In HDInsight Interactive Query, access-restriction logic is pushed down into the Hive layer, and Hive applies the access restrictions every time data access is attempted. This helps simplify authoring of Hive queries and provides seamless behind-the-scenes enforcement without having to add this logic to the predicate of the query. Please read Using Ranger to Provide Authorization in Hadoop to understand the different types of security policies that can be created in Apache Ranger.

User adoption with familiar tools

In big data analytics, organizations are increasingly concerned that their end users aren't getting enough value out of the analytics systems, because it is often too challenging and requires unfamiliar, difficult-to-learn tools to run the analytics. HDInsight Interactive Query addresses this issue by requiring minimal to no new user training to get insight from the data. Users can write SQL queries (HQL) in the tools they already use and love. Out of the box, HDInsight Interactive Query supports BI tools such as Visual Studio Code, Power BI, Apache Zeppelin, Visual Studio, Ambari Hive View, Beeline, and Hive ODBC (an illustrative ODBC connection example follows below).

To learn more about these tools, please read Azure HDInsight Interactive Query: Ten tools to analyze big data faster.

Built to complement Spark, Hive, Presto, and other big data engines

HDInsight Interactive Query is designed to work well with popular big data engines such as Apache Spark, Hive, Presto, and more. This is especially useful because your users may choose any of these tools to run their analytics. With HDInsight's shared data and metadata architecture, users can create multiple clusters, with the same or different engines, pointing to the same underlying data and metadata. This is a very powerful concept because you are no longer bound to a single technology for analytics.


Try HDInsight now

We hope you will take full advantage of the fast query capabilities of HDInsight Interactive Query. We are excited to see what you will build with Azure HDInsight. Read this developer guide and follow the quick start guide to learn more about implementing these pipelines and architectures on Azure HDInsight. Stay up to date on the latest Azure HDInsight news and features by following us on Twitter, #HDInsight and @AzureHDInsight. For questions and feedback, please reach out to AskHDInsight@microsoft.com.

About HDInsight

Azure HDInsight is Microsoft's premium managed offering for running open source workloads on Azure. Azure HDInsight powers mission-critical applications across a wide variety of sectors, including manufacturing, retail, education, nonprofit, government, healthcare, media, banking, telecommunications, and insurance, with use cases ranging from ETL and data warehousing to machine learning and IoT.

Additional resources

  • Get started with HDInsight Interactive Query Cluster in Azure
  • Azure HDInsight Performance Benchmarking: Interactive Query, Spark and Presto
  • Learn more about Azure HDInsight
  • Use Hive on HDInsight
  • Open Source component guide on HDInsight
  • HDInsight release notes
  • Ask HDInsight questions on MSDN forums
  • Ask HDInsight questions on Stack Overflow

Apache Phoenix now supports Zeppelin in Azure HDInsight

August 17, 2018, 1:17 pm
≫ Next: Top Stories from the Microsoft DevOps Community – 2018.08.17
≪ Previous: Simplifying big data analytics architecture

The HDInsight team is excited to announce Apache Zeppelin support for Apache Phoenix in Azure HDInsight.

Phoenix in Azure HDInsight

Apache Phoenix is an open source, massively parallel relational database layer built on HBase. Phoenix allows you to run SQL-like queries over HBase. Phoenix uses JDBC drivers underneath to enable users to create, delete, and alter SQL tables, indexes, views, and sequences, and to upsert rows individually and in bulk. Phoenix compiles queries into native HBase (NoSQL) calls rather than using MapReduce, enabling the creation of low-latency applications on top of HBase.
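
A short Phoenix SQL sketch of these operations follows; the table, columns, and index are hypothetical examples rather than anything from the original post.

    -- Create a Phoenix table (backed by an HBase table) with a composite primary key.
    CREATE TABLE IF NOT EXISTS web_stat (
      host       VARCHAR NOT NULL,
      event_date DATE    NOT NULL,
      usage_core BIGINT,
      CONSTRAINT pk PRIMARY KEY (host, event_date)
    );

    -- UPSERT inserts a new row or updates an existing one.
    UPSERT INTO web_stat (host, event_date, usage_core)
    VALUES ('EU-node-1', TO_DATE('2018-08-17', 'yyyy-MM-dd'), 42);

    -- A secondary index speeds up queries on non-key columns.
    CREATE INDEX IF NOT EXISTS web_stat_usage_idx ON web_stat (usage_core);

    SELECT host, SUM(usage_core) AS total_usage
    FROM web_stat
    GROUP BY host;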

Apache Phoenix enables OLTP and operational analytics in Hadoop for low-latency applications by combining the best of both worlds. In Azure HDInsight, Apache Phoenix is delivered as a first-class open source framework.

Why use Apache Phoenix in Azure HDInsight?

HDInsight is the best place for you to run Apache Phoenix and other open source big data applications. HDInsight makes Apache Phoenix even better in the following ways:

Out of the box highly tuned Apache Phoenix cluster in minutes

In Azure, several large customers run their mission-critical HBase/Phoenix workloads, and over time the service has become more and more intelligent about the right configurations for running HBase workloads as efficiently as possible. This intelligence is then brought to you in the form of highly tuned clusters that meet your needs. You can create clusters within minutes, either manually through the Azure portal or by automating the creation workflow with Azure Resource Manager (JSON) templates, PowerShell, the REST API, or the Azure client SDK.

Decoupled storage and compute

HDInsight changes the game with a seemingly simple, yet very powerful cloud construct in which storage and compute are decoupled. This is very powerful because you have inexpensive, abundant cloud storage that can be mounted to even the smallest HBase cluster. When you don't need to read or write, you can delete the cluster completely and still retain the data. This flexibility helps our customers achieve the best price/performance.

Delivered as a service, yet not compromising on control

HDInsight delivers Phoenix as a service, so you don't have to worry about setup, patching, upgrades, or maintenance. Moreover, you get a financially backed SLA of 99.9 percent as well as support, yet none of this takes away control: you still have the option to fine-tune your cluster, install additional components, and make further customizations.

Best suited for mission critical production workloads

As Microsoft's roots are in the enterprise, you will find that HDInsight Phoenix fits very nicely into your enterprise architecture. You can host Phoenix clusters in a private virtual network to protect your valuable data, and you can take advantage of Azure infrastructure to achieve high availability and disaster recovery. You can also find the constantly updated compliance status of Azure and HDInsight at the Azure Trust Center.

What is Apache Zeppelin?

Apache Zeppelin is an open source, web-based notebook that enables data-driven, interactive data analytics and collaborative documents with SQL, Scala, and many other languages. It helps data developers and data scientists develop, organize, execute, and share code and visualize results without dropping to the command line or needing cluster details.

Integration with Apache Phoenix

HDInsight customers can now use Apache Zeppelin to query their Phoenix tables. Apache Zeppelin is integrated with the HDInsight cluster, so there are no additional steps to use it.

Simply create a Zeppelin notebook with the JDBC interpreter and start writing your Phoenix SQL queries.
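
A minimal sketch of such a paragraph is shown below; the jdbc interpreter prefix and the web_stat table are assumptions and may differ on your cluster.

    %jdbc(phoenix)
    -- Any Phoenix SQL statement can go here; the table is hypothetical.
    SELECT host, SUM(usage_core) AS total_usage
    FROM web_stat
    GROUP BY host
    ORDER BY total_usage DESC;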


Try HDInsight now

We hope you will take full advantage of Apache Zeppelin with Apache Phoenix. We are excited to see what you will build with Azure HDInsight. Read this developer guide and follow the quick start guide to learn more about implementing these pipelines and architectures on Azure HDInsight. Stay up to date on the latest Azure HDInsight news and features by following us on Twitter, #HDInsight and @AzureHDInsight. For questions and feedback, please reach out to AskHDInsight@microsoft.com.

About HDInsight

Azure HDInsight is Microsoft's premium managed offering for running open source workloads on Azure. Azure HDInsight powers mission-critical applications across a wide variety of sectors, including manufacturing, retail, education, nonprofit, government, healthcare, media, banking, telecommunications, and insurance, with use cases ranging from ETL and data warehousing to machine learning and IoT.

Additional resources

  • Apache Phoenix in HDInsight.
  • Use SQLLine with HDInsight Phoenix.
  • Learn more about Azure HDInsight.
  • Open Source component guide on HDInsight.
  • HDInsight release notes.
  • Ask HDInsight questions on MSDN forums.
  • Ask HDInsight questions on Stack Overflow.