HTML5 Windows 8 Training: Presentation and Resources Available
Running SQL Server 2012 and earlier versions on Windows Server 2012
I have gotten a lot of questions about how to install and run SQL Server on Windows Server 2012. We have now released a knowledge base article that outlines the required software prerequisites for running SQL Server on Windows Server 2012. The article covers SQL Server 2005, 2008, 2008 R2, and 2012, and also lists known installation issues. You can find the article here: http://support.microsoft.com/kb/2681562
How To: SQL 2012 Filetable Setup and Usage
One of the cool things about my job is that I get to work on the latest technologies earlier than most people. I recently stumbled upon an issue related to Filetables, a new feature in SQL Server 2012.
To start with, a Filetable gives you the ability to view files and documents in SQL Server, and allows you to use SQL Server-specific features such as Full-Text Search and semantic search on them. At the same time, it allows you to access those files and documents directly, through Windows Explorer or Windows file system API calls.
Setting up Filetables
Here are some basic steps for setting up Filetables in SQL Server 2012:
- Enable Filestream for the instance in question from SQL Server Configuration Manager (right-click the SQL Server service -> Properties -> FILESTREAM -> Enable FILESTREAM for Transact-SQL access). Also make sure you provide a Windows share name. Restart the SQL Server service after making this change.
- Create a database in SQL Server exclusively for Filetables (preferable to using an existing database), and specify the WITH FILESTREAM option. Here's an example:

CREATE DATABASE FileTableDB
ON PRIMARY
(
    NAME = N'FileTableDB',
    FILENAME = N'C:\FileTable\FileTableDB.mdf'
),
FILEGROUP FilestreamFG CONTAINS FILESTREAM
(
    NAME = FileStreamGroup1,
    FILENAME = 'C:\FileTable\Data'
)
LOG ON
(
    NAME = N'FileTableDB_Log',
    FILENAME = N'C:\FileTable\FileTableDB_log.ldf'
)
WITH FILESTREAM
(
    NON_TRANSACTED_ACCESS = FULL,
    DIRECTORY_NAME = N'FileTables'
)
GO

- Alternatively, you can add a Filestream filegroup to an existing database, and then create a Filestream directory for the database:

ALTER DATABASE [FileTableDB]
ADD FILEGROUP FileStreamGroup1 CONTAINS FILESTREAM
    (NAME = FileStreamGroup1, FILENAME = 'C:\FileTable\Data')
GO
ALTER DATABASE FileTableDB
SET FILESTREAM ( NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'FileTables' );
GO

- To verify the directory creation for the database, run this query:

SELECT DB_NAME ( database_id ), directory_name
FROM sys.database_filestream_options;
GO

- Next, run this query to check whether enabling non-transacted access on the database was successful (the database should have the value 'FULL' in the non_transacted_access_desc column):

SELECT DB_NAME(database_id), non_transacted_access, non_transacted_access_desc
FROM sys.database_filestream_options;
GO

- The next step is to create a Filetable. Specifying the Filetable directory name is optional; if you don't specify one, the directory will be created with the same name as the Filetable. Example:

CREATE TABLE DocumentStore AS FileTable
WITH (
    FileTable_Directory = 'DocumentTable',
    FileTable_Collate_Filename = database_default
);
GO

- Next, you can verify the previous step using this query (don't be daunted by the number of rows you see for a single object):

SELECT OBJECT_NAME(parent_object_id) AS 'FileTable', OBJECT_NAME(object_id) AS 'System-defined Object'
FROM sys.filetable_system_defined_objects
ORDER BY FileTable, 'System-defined Object';
GO

- Now comes the most exciting part. Open the following path in Windows Explorer:

\\<servername>\<Instance FileStream Windows share name (from config mgr)>\<DB Filetable directory>\<Table Directory Name>

In our case, it will be:

\\Harsh2k8\ENT2012\Filetables\DocumentTable

- Next, copy files over to this share, and see the magic:

select * from DocumentStore
So you get the best of both worlds: accessing files through SQL, searching for specific words or strings inside the files from within SQL, and so on, while retaining the ability to access the files directly through a Windows share. Really cool, right? I think so too.
A few points to remember:
- The Filestream/Filetable features together give you the ability to manage Windows files from SQL Server. Since we're talking about files on the file system, accessing them requires a Windows user; these features will therefore not work with SQL Server authentication. The only exception is a SQL Server login with sysadmin privileges (in which case it will impersonate the SQL Server service account).
- Filetables give you the ability to get the logical/UNC path to files and directories, but any file manipulation operations (such as copy, cut, delete, etc.) must be performed by your application, possibly using file system APIs such as CreateFile or CreateDirectory. In short, the onus is on the application to obtain a handle to the file using file system APIs; Filetables only serve the purpose of providing the path to the application.
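As a sketch of that path retrieval (assuming the DocumentStore Filetable created earlier in this post), SQL Server 2012 exposes the FileTableRootPath() function and the GetFileNamespacePath() method on the file_stream column:

```sql
-- Root UNC path of the Filetable directory for the current database
SELECT FileTableRootPath() AS FileTableRoot;

-- Full UNC path for each file in the Filetable
-- (the first argument, 1, asks for the full path including the server name)
SELECT name,
       file_stream.GetFileNamespacePath(1) AS UNCPath
FROM DocumentStore;
```

Your application can hand these paths straight to the file system APIs mentioned above.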
Some useful references for Filetables:
http://msdn.microsoft.com/en-us/library/gg492089.aspx
http://msdn.microsoft.com/en-us/library/gg492087.aspx
Hope this helps. Any comments/feedback/suggestions are welcome.
Microsoft Pairs Up with LinuxCon Europe
See the summary by Alfonso Castro - Director of Strategic Partnerships, Microsoft Open Solutions Group
Microsoft takes part in a range of open source events and gatherings around the world as a way to facilitate face to face discussions with our open source colleagues and help create greater synergies that will ultimately bring real benefits to customers, no matter their preferred platforms or unique needs. I’ve been lucky to be at many of these events, and this week I’ve been in Barcelona for LinuxCon Europe 2012 to talk about Linux and Windows Azure.
Microsoft is proud to sponsor this year’s show, which represents a great opportunity for us to further share our belief that interoperability is key for today’s IT environments, big and small. This is especially true with the continued growth of the cloud and customers’ desires to use cloud-based systems to produce results at potentially lower costs. Combine this with Linux and its strong community of vendors and users, and it makes even clearer sense as to why we’re striving to increase our openness across platforms, including Windows Azure. View full article...
And soon, Microsoft will also be present at OW2Con 2012
Friday Five–November 9, 2012
1. My Review of the Microsoft Surface for Windows RT
By Microsoft Integration MVP Maxime Labelle
2. Microsoft shares considerations for extending AD into Windows Azure
By System Center: Cloud and Datacenter Management MVP John Joyner
3. How Will Windows 8 Enter The Business?
By Virtual Machine MVP Aidan Finn
4. Excel: One formula returns value of the same cell on multiple worksheets
By Excel MVP Tom Urtis
5. TFS 2012 and local workspaces
By VS ALM MVP Mickey Gousset
Thanks for Sharing Getting Results the Agile Way
Thank you everyone. It was a great day for Getting Results the Agile Way. As folks shared the message around the world, Getting Results the Agile Way became a top download on Amazon in a few categories. At various points in the day, it was #1 in Business Time Management, #1 in Self Help, and I saw it as high as Amazon’s all up Best Sellers Rank: #43 Free in the Kindle Store.
But the best part is this …
Several of you emailed me, telling me your stories of how you’ve used Getting Results the Agile Way to really get ahead in work and life, or to get back on top of your game. Many of you also emailed me telling me that this was your first exposure to the book, and that as you started to read it, you started to realize what’s really inside. It’s more than a time management guide or a personal productivity toolkit. It’s a way to really take everything we’ve learned about operating at a higher level, and actually put that into practice in a simple and systematic way.
It’s more than a book. It’s a way to make the most of work and life.
Feel free to continue to send me your stories of success, what you specifically did, and how Getting Results the Agile Way helped. I won't share your story unless you ask me to, but I use the feedback to continue to refine the approach as I share it and scale it to others to help them get an edge in work and life.
Open Source Frameworks Unveil Support for Windows Phone 8
Important announcements during //build/ 2012, with Kerry's summary; above all, don't miss Olivier's post on the open source frameworks supporting Windows Phone 8 (bravo!!)
By Kerry Godes - Senior Manager, Worldwide Marketing and Operations
Last week’s //build/ 2012 conference highlighted how developers can take advantage of their existing skills and favorite languages and frameworks to extend their market reach and bring great apps to the Windows 8 platform. There was a spotlight on the newly launched Windows Phone 8 Developer Platform, which is now supported by several popular open source and cross-platform frameworks.
In order to achieve the vision of Windows Phone as “the world’s most personal smartphone”, Microsoft relies on a talented developer ecosystem that is fueled by companies, communities, and people who are creating offerings and resources to help these developers quickly and easily build or port apps for Windows Phone. View full article...
//build/ today with open source frameworks on Windows Phone 8
- Apache Cordova (known as PhoneGap) now supports Windows Phone 8
- The next release of Sencha Touch 2 arrives with added support for Windows Phone 8
- A new jQuery Mobile theme for Windows Phone 8 is available
- SQLite can be used to build Windows Phone 8 applications. You can find the bits here.
- Here is a preview version of Cocos2D supporting Windows Phone 8
- Ogre3D on Windows Phone 8
- Trigger.io has been updated to support Windows Phone 8
- SharpDX (an open-source C#/Managed DirectX API for .NET) is now available for Windows Phone 8
- Popular open source MVVM Light Toolkit gets a fresh new version supporting Windows Phone 8. Read the details on Laurent Bugnion’s blog
Azure as a Storage Backend for Windows Apps, and the Power of Mobile Services
By Windows Apps, I mean apps for the Windows Store and for the Windows Phone Store.
Our average knowledge of Azure and cloud computing today suggests that a great option for the storage needs of apps is, indeed, Azure storage. And if you go and look, that is in fact the case. But there are a few small details to keep in mind, where we can go wrong by moving too fast.
If we want to offer storage in our app through an Azure account, the first thing that comes to mind is: easy, I use the Storage API and access it from my app. But this has two implications:
1. API support on the WP and WinRT development platforms: here I am sorry to report that Microsoft.WindowsAzure.StorageClient.dll is not yet available for these platforms, even though it is fully accessible from Web, WinForms, WPF, WCF, and other project types. That leaves us with a single alternative: using raw REST to access Azure storage. That is, creating HTTP clients inside the app's code and configuring them to make REST requests against the storage API (remember that this API is natively REST, and can therefore be accessed this way). This is entirely viable both on WP and on WinRT, since we have the appropriate classes. However, configuring these clients is neither easy nor friendly. Some efforts, such as the Windows Azure Storage Client library for Windows Phone available through NuGet, can help lighten the task by encapsulating all of these calls. Unfortunately, there is no equivalent for WinRT today. Perhaps with some hacks you could use the phone library with Windows 8. But then comes the second point to keep in mind:
2. Regardless of whether we make the calls through raw REST or using a library like the one mentioned for phone, we must always include the account information to get access to Azure. Traditionally this is not a big problem, and by traditionally I mean that if we are used to building server applications that access Azure, we know the account data (name and key) are "safe" on our servers, which sit behind load balancers and proxies that protect them. But in the case of apps... would you feel safe knowing that your Azure account name and key are hard-coded in an app that hundreds or thousands of users may have downloaded? Any malicious user could reverse-engineer the app to extract the account data (and although that is complicated, it is not impossible).
So: there is no official API, and embedding the account data in the clients is not safe either. The solution that then comes to mind is: let's build web services!
That way, since WinRT and WP have full support for web services, files can be uploaded to the web server, and the server, which does have the ability to use the Azure Storage API, can comfortably pass those files on to the cloud. The account data also stays safe inside the server, since clients only need to know the address of the web service.
But is this really what we want? Running a web server has associated costs. We would also be loading those servers with images from absolutely everyone, so we would need several servers and costs would rise enormously. Add to that the fact that Azure storage can be accessed directly, which would save us all of those costs.
So... what would the right solution be?
Imagine a mechanism in which each app could obtain a small permission that lasts only as long as needed to execute the required action on storage, and is invalid afterwards. In that case, even if a hacker obtains the key, by the time he does it will no longer be of much use.
Well, for your information, Azure supports access to storage through exactly this mechanism, called a "Shared Access Signature", or SAS.
Obviously, generating a SAS requires the account data, and we are back where we started: we do not want to expose that data. But here is the good news: if we create a web service that returns that SAS (less than 1 KB in size), it will carry much less overhead than using web services to transfer the images themselves. In this case, we only need a small response from the server containing the SAS, and we then use it in an HTTP call (this time much simpler than one using the native account data, since that one needs to be secured) that references the SAS, in order to execute the required action on storage (for example, uploading or downloading a blob, or querying a table row) without ever handing the client the account data.
So we could say we have found the solution! We put up a web service that hands out SAS tokens to the apps. Perhaps the web service does some validation to decide whether to respond positively to the call. And done. We made it!
We could optimize this even further. Since the load will no longer be so heavy, we could consider using shared web servers, reducing costs enormously. But here we would have to balance cost against availability.
Wouldn't it be better, then, to have a pre-built service in the cloud that already gives us the ability to issue SAS tokens to the apps, and removes the need to build a whole web server for this purpose?
Yes, it exists! It is Windows Azure Mobile Services: servers configured so that clients with a defined key can write to tables made available for this purpose. Additionally, they also offer Push Notification services, and on top of both you can set up strategies so that only authenticated users can access these services.
Under the hood, Mobile Services run on shared or dedicated web servers that cost less than Azure Web Sites. In fact, today you can have up to 10 Mobile Services for free. These underlying servers can be scaled at will. Although this may seem the same as having traditional web servers, the advantage is that we do not have to deploy any code to make them work: the functionality is already in production.
However, none of the capabilities I mentioned includes issuing the SAS. This is where imagination and ingenuity play an important part. What I did mention is that you can access tables to read or modify them. For this, APIs specifically designed for WP and WinRT are available; there is even a version for iOS. On all three platforms they can be included and used without any functional or certification problems when the app is submitted to the store. Now, the CRUD operations we perform on these tables (which can be designed from the Azure management portal without writing any code) can be hooked to triggers that fire when we specify, for example, on insert. Ultimately, this can be treated as an asynchronous web service: we insert a request into the table, then query it again to see the response, which will of course appear in another column.
With this in mind, imagine that we want to upload an image to Azure blob storage. As a request, we insert the name of that image into a table of a previously configured Mobile Service, through the APIs mentioned above.
The service has a trigger on inserts, and that trigger detects that an insert occurred. It then executes server-side code based on Node.js (JavaScript on the server). This code is supplied by us; we write it in the Azure management portal, in the service's configuration section. This way we can write code that inserts, into the same table row, the SAS we need to access the container where we want to put the file. So when we issue the asynchronous insert, we pass the item to insert, and when the insert completes, we can observe that the item now carries the SAS we generated.
Then, using that SAS, we upload the blob with three lines of code, and that's it!
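The insert-then-read pattern just described (a table insert acting as an asynchronous request, with the response coming back in another column) can be sketched as a plain JavaScript simulation. This is an illustration only: the in-memory table, the trigger, and the placeholder SAS value are stand-ins, not the Mobile Services runtime:

```javascript
// Simulates a Mobile Services table whose insert trigger
// enriches each inserted row with a server-computed column.
const table = [];

// Stand-in for the server-side insert script we would supply in the portal.
function insertTrigger(item) {
  item.SAS = 'sig-for-' + item.ImageName; // placeholder for a real SAS
}

function insert(item) {
  insertTrigger(item);  // the server runs our script on every insert
  table.push(item);
  return item;          // the client receives the enriched item back
}

// Client side: insert a request, then read the response column.
const result = insert({ ImageName: 'photo.jpg' });
console.log(result.SAS); // the "response" arrived in another column
```

The real flow differs only in that the insert travels over HTTP through the Mobile Services client API and the trigger runs in the cloud.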
In summary:
To access Azure storage from apps over which we have no control once they have been downloaded from the various app stores, the best alternative is to use Azure SAS tokens. The most convenient and economical way to obtain those SAS tokens is Azure Mobile Services and their data tables, since access APIs are provided for those tables that work very well on WP7, Win8, and iOS. With the SAS, everyone can access storage securely.
With that understood, let's continue with a step-by-step walkthrough of how to deploy this kind of solution, in this post:
Dark Manifesto for Agile Software Development
For those interested in answering a survey or questionnaire by Giancarlo Succi and Andrea Janes: http://darkagilemanifesto.org
Let me also post here my comments for further discussion:
Part 1
Do you think that instead of "We are uncovering better ways of developing software by doing it and helping others do it." people tend to adopt "We are uncovering the only ways of developing software teaching others."?
If “doing it” is removed, then there are fewer chances to reflect, to inspect, and to adapt in the search for the specific business value (which is often a moving target) in a given context. Key parts of software development can be seen as epistemological endeavors; thus, if the empirical or experimental part is removed, the justification of a claim of achieved business value can be dramatically diminished.
Do you think that instead of "Individuals and interactions over processes and tools" people tend to adopt "Individuals and interactions and not processes and tools"?
If the processes and tools do not help for better communication between individuals then those processes and tools should not be followed or used. Instead, an open dialog among practitioners should define proper processes and tools that help them to practice their values and professional principles.
Do you think that instead of "Working software over comprehensive documentation" people tend to adopt "Working software and not comprehensive documentation"?
The lack of a proper way to remember or communicate relevant things to future or distant people is certainly a problem that should be solved in a project or product. That is a different problem from the effort wasted on obscure and ineffective documents that nobody reads. Besides, there is a plethora of recording technologies that can help preserve and transmit what is relevant and valuable about a project.
Do you think that instead of "Customer collaboration over contract negotiation" people tend to adopt "Customer collaboration and not contract negotiation"?
And for that matter: who, on the face of the Earth, is going to listen to the end-user?
Do you think that instead of "Responding to change over following a plan" people tend to adopt "Responding to change and not following a plan"?
If «a plan» no longer follows discovered and corroborated reality, then it is wise to make all sorts of proactive adjustments or an entirely new plan. What matters, of course, is not the outcome (a plan) but the frequent practice of planning. Conversely, blind reactions to whimsical moves, without awareness of the related costs, pave the path to diminished business value or business failure.
Do you think that instead of "That is, while there is value in the items on the right, we value the items on the left more." people tend to adopt "That is, while since there is no value in the items on the right, we value only the items on the left."?
A part of the problem is a psychological pattern of neglected interpretation. It is not, of course, exclusive to agile methods; it is a pervasive lack of education that permeates politics, economics, and many other spheres of society. The root cause is a very poor foundation in important areas of life, whether they are called science, philosophy, history, mathematics, ethics, etc. In addition, the absolute reign of the hunt for short-term gratification worsens the odds of escaping that condition.
I am fully aware that a commercial interest in software profits does not carry an inherent responsibility to advance the state of the practice of the software development profession at the individual level, but it is not difficult to see that doing so serves precisely that interest. Otherwise, an increasing number of consumers in the general public will simply stop paying for software that does not deliver value quickly and robustly.
The need for better software practices has been stated constantly, and yet there is still far too much wishful thinking in our industry, mainly from non-practitioners, who persist in their lack of awareness of the long-term consequences.
What prevents us from learning a new concept is the preconception that usurps its place. See what Derek A. Muller has to say in the case of scientific concepts:
November 14th: Presenting at C-level: learn how to close the deal with CxOs
Part of closing a deal with a CxO is your ability to deliver a presentation that best positions your solution and value proposition. If you are interested in pushing your presentation skills to the next level, don’t miss the next Time to Thrive webcast with Worldwide Partner Conference speaker Dave Underhill. Registration details, as well as on-demand webcasts from last season, can be found here
WOWZAPP 2012 | Windows 8 Socket Networking Session Files
Please find the presentation and finished project below.
Adding Files to Azure Storage from Apps with Mobile Services
But how do these services help with the need to store files in storage?

Obviously, we can also connect existing apps, not just new ones. For this purpose, the portal provides a code snippet containing a key that the app must have in order to connect. In the case of Windows 8, for example, this code goes in App.xaml.cs:
//This MobileServiceClient has been configured to communicate
//with your Mobile Service's url and application key.
//You're all set to start working with your Mobile Service!
public static MobileServiceClient MobileService =
    new MobileServiceClient(
        "https://warserv.azure-mobile.net/",
        "ElSmsvstPUsdXWsHJqFteqhkLxDVcdr15"
    );
Obviously, for this code to work, we must have referenced the dll that contains the MobileServiceClient type. This dll is already included if we download the sample solution from the portal; if we are enabling a pre-existing solution, we add it manually. It is called Windows.Azure.Mobile.Services.Managed.Client and becomes available once we install the Mobile Services SDK.
function insert(item, user, request) {
    var accountName = '<YOUR ACCOUNT NAME>';
    var accountKey = '<YOUR PAK>';
    //Note: this code assumes the container already
    //exists in blob storage.
    //If you wish to dynamically create the container then implement
    //guidance here -
    //http://msdn.microsoft.com/en-us/library/windowsazure/dd179468.aspx
    var container = 'test';
    var imageName = item.ImageName;
    item.SAS = getBlobSharedAccessSignature(accountName, accountKey, container, imageName);
    request.execute();
}
//The structure of the table we have
//in the Mobile Service.
//It is a class we generate
//according to our needs.
public class TodoItem
{
    public int Id { get; set; }

    [DataMember(Name = "text")]
    public string Text { get; set; }

    [DataMember(Name = "ImageName")]
    public string ImageName { get; set; }

    //This is the field where the SAS will be stored
    [DataMember(Name = "SAS")]
    public string SAS { get; set; }

    [DataMember(Name = "complete")]
    public bool Complete { get; set; }
}
StorageFile file = await openPicker.PickSingleFileAsync();
if (file != null)
{
    //Add an item to the Mobile Service table, firing a trigger
    //that returns item.SAS as a value in another column of the table,
    //which is transferred automatically to the item.
    var todoItem = new TodoItem() { Text = "test image", ImageName = file.Name };
    await todoTable.InsertAsync(todoItem);
    items.Add(todoItem);

    //Upload the image with HttpClient to the blob
    //service using the SAS generated in item.SAS
    using (var client = new HttpClient())
    {
        //Get the stream of the storage file picked above
        using (var fileStream = await file.OpenStreamForReadAsync())
        {
            var content = new StreamContent(fileStream);
            content.Headers.Add("Content-Type", file.ContentType);
            content.Headers.Add("x-ms-blob-type", "BlockBlob");

            //With PutAsync we send the file to Azure
            //through the authorizing URL stored in SAS, of the form:
            //"https://--------------.blob.core.windows.net/test/
            //androidsmartglass.jpg?etc
            //where etc is the validation string
            using (var uploadResponse = await client.PutAsync(new Uri(todoItem.SAS), content))
            {
                //Add any additional post-processing here
            }
        }
    }
}
And with that, we have uploaded our file to Azure storage from a mobile client, without putting our storage account at risk, without extra costs for web servers, and in an absolutely scalable way, because this request weighs less than 1 KB in total. That is far less than uploading the image to a server so it could be securely forwarded from there to storage. Now the upload is direct, so it is faster!
Dark Manifesto for Agile Software Development. Take 2
Do you think that instead of "We are uncovering better ways of developing software by doing it and helping others do it." people tend to adopt "We are uncovering the only ways of developing software teaching others."?
Yes, I have often seen a kind of indoctrination into the new «one and only true agile cult». But that is because people do not actually read and reflect on what writing software and agile are about. So the problem is radical dogmatism: people imposing unquestionable ideas on other people; that kind of dogmatism has the consequence you are contrasting. There is another kind of dogmatism that an individual provisionally chooses for herself, as a stage on her apprenticeship path; on this, look for ‘Shu-Ha-Ri’ in the agile literature.
Do you think that instead of "Individuals and interactions over processes and tools" people tend to adopt "Individuals and interactions and not processes and tools"?
Yes, I recently saw a development director who forced his teams to use Post-it notes on the wall even though they have a fully functional suite of wonderful development tools providing a ‘single system of record’ of project reality; one consequence is that those teams now have two systems to keep in sync, wasting effort that could otherwise go into more useful activities.
Do you think that instead of "Working software over comprehensive documentation" people tend to adopt "Working software and not comprehensive documentation"?
I have seen no significant change in the problem of documentation. I still see much of both: too many ineffective documents, and too few concise and useful forms of documentation that help communicate, and preserve, the important justifications of why the software is as it is.
Do you think that instead of "Customer collaboration over contract negotiation" people tend to adopt "Customer collaboration and not contract negotiation"?
I would be interested to see what might come out of such a tendency: "Customer collaboration and not contract negotiation". I still see too much protectionist focus on the terms of the contract, out of pure fear from both the customer and the provider. I would like to see more about designing value-streams that end at the end-user level.
Do you think that instead of "Responding to change over following a plan" people tend to adopt "Responding to change and not following a plan"?
I see that teams must respond to change, but they do it in different ways depending on their context, and with different business consequences. Some, due to contract restrictions, must certainly wait until the current plan ends; others, who provisioned proper contract clauses, can adapt to new business priorities more quickly. Many can respond to change quickly, but can also break things more quickly; I still see few teams able to change quickly without breaking a single feature in the process.
Do you think that instead of "That is, while there is value in the items on the right, we value the items on the left more." people tend to adopt "That is, while since there is no value in the items on the right, we value only the items on the left."?
I think we people fool ourselves into thinking that we understand something new without properly applying the tenets of critical thinking. We often fail to properly challenge our own presuppositions and then misread new concepts, relating them to what we already hold in memory; mere opinion is mistaken for knowledge. So little or nothing new is actually learned. For example, an agile development process is seen as a sequence of steps or discrete iterations instead of an integrated set of value streams.
An Unmoving Experience
It seems, from the regular emails containing instructions about the delivery of packing boxes and the matching status updates confirming the allocation of my new work area, that I'm about to move to a brand new office. Our beloved old Building Five (or, to use the correct terminology, "Bldg5") is about to be redeveloped to provide new facilities. They'll probably even concrete over Lake Bill.
Discovering C++ AMP - Part 2
The first post gave you a feel for the computing power available with C++ AMP. I now propose to explain what C++ AMP is, by walking through the motivations that led Microsoft to take on this project.
When Microsoft adds a new library to Visual Studio, it is never without several carefully considered motivations, and this new library is no exception to the rule. Some readers might be surprised by this choice, since technologies such as CUDA and OpenCL, available for several years, have been very successful with specialized GPU developers. So that there is no doubt about the motivations behind C++ AMP, let us review all the factors that pushed Microsoft to tackle a subject as specialized as GPU programming.
The vast majority of developers write programs for one or more CPUs; ultimately, they produce code targeting CPU technology. CPUs are by nature designed to perform generic processing with no special-purpose considerations. They are perfectly suited to the needs of most of today's programs. For example, rich client applications, web applications and application servers are generally data-oriented, but do not require massively parallel processing.
By nature, graphics cards are designed to compute graphical data in order to display a very large number of pixels on one or more screens. Around 2005, however, some graphics developers found it clever to use graphics cards rather than the CPU to accelerate computations over very large volumes of data. GPU computing is relevant when your problem involves a huge volume of data to process. GPUs have simpler architectures than CPUs, but above all they offer an impressive number of threads that CPUs do not. Even today, graphics card hardware architectures are built to deliver exceptional display performance. The success of GPU computing in recent years has even given rise to professional cards with no video output, fully specialized for massively parallel GPU programming.
The problem is that today, few people have the skills to program GPUs. This kind of programming remains the preserve of a few specialists, generally found in the R&D departments of industries that require very expensive computations, but also in the games industry, where GPU power is routinely exploited. In other words, mainstream developers do not know GPU programming, because they rarely face costly computations, but above all because GPU programming is a niche technology that is hard to put into practice. A general-purpose developer rarely wants to invest in an arcane technology, often expressed in the C language and accompanied by a relatively poor ecosystem, even when facing cases where portions of code would perform far better on the GPU, because no mainstream solution for programming GPUs exists today. This last point is very important, because it is the first motivation that pushed Microsoft to launch the C++ AMP project.
Today, graphics card manufacturers are at a crossroads in terms of hardware architecture. We are witnessing significant convergence between GPUs and CPUs. It is a fast-moving sector where competition is fierce, involving a handful of manufacturers in a huge market. Microsoft's engineers took this into account, and assert that C++ AMP code will not suffer from future hardware evolutions. In fact, your C++ AMP code can already run on a wide range of hardware without any modification. By design, C++ AMP is not tied to specific hardware (no code specific to nVidia, AMD or even Intel graphics cards). In other words, your investment will be preserved. This last point is very important, because it is the second motivation that pushed Microsoft to launch the C++ AMP project.
Your investment in C++ AMP is not limited to a single platform: from Windows Azure to Windows Phone, from Windows Desktop to Windows RT, from Windows Server to Windows Embedded, from Windows HPC Server to Xbox, all these platforms will eventually run your C++ AMP code. And that is not all: in the near future, you will be able to run C++ AMP code on non-Microsoft platforms. In February 2012, Microsoft published an open specification for the C++ AMP standard (http://download.microsoft.com/download/4/0/E/40EA02D8-23A7-4BD2-AD3A-0BFFFB640F28/CppAMPLanguageAndProgrammingModel.pdf), allowing compiler vendors to implement C++ AMP on platforms beyond Microsoft's. AMD has already announced that it will produce a version of C++ AMP on top of OpenCL. Microsoft's initiative with the open C++ AMP specification aims to encourage all compiler vendors to implement the C++ AMP library on a variety of platforms. As you will have gathered, the range of possibilities for C++ AMP will eventually be immense, and you do not have to worry about which platform will run your code. This point is very important, because it is the third and final motivation that pushed Microsoft to launch the C++ AMP project. At this point you know the motivations that pushed Microsoft to produce the C++ AMP library; it is time to describe what C++ AMP is.
C++ AMP is part of the C++ compiler. If you are already using the Visual Studio 2012 C++ compiler, you have C++ AMP. You need nothing else. Deploying a C++ AMP application requires no additional prerequisites; the Visual C++ redistributable contains the C++ AMP library. As an integral part of Visual Studio, the C++ AMP library is fully integrated for debugging, profiling and IntelliSense. With C++ AMP, you can reuse your existing C++ applications and knowledge, because the library was designed precisely to preserve them.
C++ AMP can be defined as a C++ library exposing a small, STL-compatible set of APIs for managing multidimensional data in order to make it easy to parallelize. The learning curve for C++ AMP is therefore shallow, because the API set is small and builds on the C++11 standard. If you know the STL, you already know a good part of C++ AMP.
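To give a feel for how small that API surface is, here is a minimal sketch (not from the original post) of the canonical element-wise vector addition with C++ AMP. It assumes Visual C++ 2012 or later, since `<amp.h>` ships only with the Microsoft compiler:

```cpp
#include <amp.h>   // C++ AMP: array_view, parallel_for_each, index
#include <vector>
using namespace concurrency;

int main() {
    const int size = 5;
    int a[]   = {1, 2, 3, 4, 5};
    int b[]   = {10, 20, 30, 40, 50};
    int sum[size];

    // array_view wraps existing CPU memory; data is copied to the
    // accelerator on demand and synchronized back when read.
    array_view<const int, 1> av_a(size, a);
    array_view<const int, 1> av_b(size, b);
    array_view<int, 1> av_sum(size, sum);
    av_sum.discard_data(); // no need to copy sum's initial contents

    // restrict(amp) marks the lambda as compilable for the accelerator.
    parallel_for_each(av_sum.extent, [=](index<1> idx) restrict(amp) {
        av_sum[idx] = av_a[idx] + av_b[idx];
    });

    av_sum.synchronize(); // copy the results back into sum
    return 0;
}
```

Apart from `restrict(amp)` and the `array_view`/`index` types, this is ordinary C++11 with a lambda, which is what makes the learning curve shallow.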
Microsoft's implementation of C++ AMP is built on the Direct3D API of DirectX 11. This implementation choice is a good thing, because DirectX is a mature, and therefore stable, library. It provides an abstraction that supports a wide range of hardware from manufacturers such as nVidia, AMD, Intel and ARM. However, if your program detects no DirectX 11-compatible hardware, C++ AMP falls back to WARP, which we used in the first demonstration, exploiting the available CPU cores and parallel vectorization via SSE instructions (AVX is not supported for now). Even though Microsoft uses DirectX to implement C++ AMP, the DirectX APIs are not visible from the C++ AMP APIs. That is why Microsoft was able to publish an open specification describing the C++ AMP standard without any dependency on DirectX.
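You can observe this fallback yourself. As a small sketch (again assuming Visual C++ 2012, not code from the original post), the `accelerator` class lets you enumerate the devices C++ AMP sees; the WARP software device reports itself as emulated:

```cpp
#include <amp.h>      // C++ AMP: accelerator
#include <iostream>
using namespace concurrency;

int main() {
    // List every accelerator C++ AMP can target. On a machine with no
    // DirectX 11 GPU, only emulated devices (such as WARP) appear.
    for (const accelerator& acc : accelerator::get_all()) {
        std::wcout << acc.description
                   << (acc.is_emulated ? L" (emulated)" : L" (hardware)")
                   << std::endl;
    }
    return 0;
}
```

`accelerator::default_accelerator` names the device `parallel_for_each` will use unless you choose one explicitly.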
For Microsoft, C++ AMP has to be performant, productive and portable for all developers; it is not a niche product. You do not need to be a GPU programming specialist to use C++ AMP. A general-purpose C++ developer with Visual C++ 2012 can use this technology without any major technical constraints: no new skills to acquire, no extra libraries to add for debugging or performance analysis. Deploying to production requires nothing more than the C++ redistributable. And to top it all off, your code should run on platforms complementary to Microsoft's in the near future.
At this point we have described C++ AMP at length without ever illustrating it with code; it is time to fix that in the next post.
See you soon,
Bruno
SQL Server 2012 SP1 released!!!
How To Be Ready for Any Emergency
“By failing to prepare, you are preparing to fail.”― Benjamin Franklin
I know a lot of people have had their lives turned upside down. Hurricane Sandy and the follow-up Nor'easter created real setbacks and left a wake of devastation.
Disasters happen. While you can’t prevent them, what you can do is prepare for them and improve your ability to respond and recover.
I’m not the expert on disaster preparation, but I know somebody who is. I’ve asked Laurie Ecklund Long to write a guest post to help people prepare for the worst. Here it is:
Disaster Proof Your Life: How To Be Ready for Any Emergency
The goal of the post is to help jumpstart anybody who wants to start their path to planning and preparation for emergencies.
Laurie is an emergency specialist. She is a best-selling author, national speaker, and trainer who helps individuals, businesses, and the military survive natural disasters and family emergencies, based on her book, My Life in a Box…A Life Organizer. On a personal level, Laurie’s inspiration came from losing 12 people close to her, including her Dad, within the span of five years. She learned a lot during 9/11 and Hurricane Katrina, and she’s on a mission to help more people answer the following questions:
Do you have a personal emergency tool box? Can you quickly locate your legal, financial and personal documents within minutes and be able to rebuild your life if something happens to your home?
Check out Laurie’s guest post Disaster Proof Your Life: How To Be Ready for Any Emergency, and start your path of planning and preparation for emergencies, and help others to do the same.
ALM Rangers ALM Dogfooding, Ruck and the new TLAs – Part 5
Continuing from ALM Rangers ALM Dogfooding and the age of the visual board – Part 4 we will focus on the Ruck backlog in this post, sharing experiences, asking for candid feedback and giving the ALM Ranger project leads a checklist to work with.
IMPORTANT: Always refer to the ALM Rangers Practical Ruck Training and Reference Manual document for the latest Ruck guidelines. These blog posts are only intended as gap-fillers when we encounter challenges as part of our dogfooding, or questions that deserve a bit more elaboration.
Last updated on 2012/11/10
Recap … what is Ruck and why are we using it?
The ALM Guidance: Visual Studio ALM Rangers — Reflections on Virtual Teams article is a good reflection of where Ruck came from and why we continuously evolve it. We like to refer to Ruck as the “Practitioners' Scrum Variant”, catering for a team environment in which all team members are part-time volunteers with a real job and a family. The latter obviously receive the most focus and priority, leaving only fragments of capacity for Ranger projects. Combined with geographically dispersed team members and numerous time zones, this creates a challenging backlog planning and sprint management environment.
What are some of the major variations between Ruck and Scrum in a nutshell?
| Scrum | Ruck |
| --- | --- |
| 15-minute Daily Scrum meeting with the team, face to face | 15-minute (bi-)weekly stand-up meeting, with an optional 15-minute free-for-all chat thereafter, held virtually |
| 8-hour Sprint Planning meeting with the team | Project Lead and Program Manager / Ruck Master meet for half an hour to define and propose a product backlog prioritization and a suggested sprint plan. The entire team grooms the backlog offline, collaborating by email. The sprint backlog is discussed at the last or first stand-up in each sprint. |
| 4-hour Sprint Review meeting with the team | Teams produce videos of their deliverables, which are distributed to all stakeholders |
| 3-hour Retrospective meeting with the team | The team submits retrospective feedback during the last week. The Project Lead and Program Manager / Ruck Master aggregate the results, which are discussed at the next stand-up during the optional time slot |
Time where the entire team can be together in a video conference meeting is precious and therefore we have to revert to offline collaboration and primarily email when planning, estimating and stack ranking.
Backlog planning terminology
The terminology is documented in numerous publications, yet many of us (myself included) often wonder what the differences between this tsunami of terms are. Do you know the difference between a scenario, an Epic, an MMF and an MVP? I love acronyms (joke!), because in this context MVP stands for “Minimum Viable Product”, not Microsoft Most Valuable Professional as most of us are used to.
To be honest, I have to keep notes and often fall back to our quick reference posters and cheat sheets for a quick terminology rescue.
Let’s focus on the backlog planning, the various terminology and guidelines we are following in Ruck. We are trying our utmost to align with the rest of the communities and processes, but are trying to keep it simple!
Let’s start with the following illustration…
We are now sharing common sprints, which simply slice the year into 2-week or 1-month sprints / iterations. While not all projects start at the same time, the common sprint model synchronises the beginning and end of sprint ceremonies across all our projects and dramatically simplifies the overall tracking and administration effort.
For projects that span many months, typically strategic initiatives, we may define Pillars. These relate to strategic objectives, for example Improve the Ruck Ecosystem or Introduce New Deliverable Experience, and are essentially bigger buckets for Epics.
For most projects we slice and dice features into Epics -> Product Backlog Items (PBIs) -> Tasks, whereby a PBI can define a collection of other PBIs, a User Story, a Test Case or a Minimum Marketable Feature (MMF).
- Epic – a group of related PBIs.
- User Story – communicates functionality that is of value to the end user of the product or system.
- MMF – a feature with a clearly defined user story in the form “As a <user type> I want to <do some action> so that <desired result>”, which represents a marketable and shippable feature.
- Task – the work needed to implement a PBI and meet its acceptance criteria.
- Test Case – allows test and project managers to track the testing efforts in the context of various requirements, features and other logical conditions.
Backlog planning “Ruck” guidelines | checklist
The following table is for our project leads, program managers and Ruck Masters and summarises the backlog planning guidelines.
| Artefact | Intended usage | Constraints | Recommendations |
| --- | --- | --- | --- |
| Pillar | Collection of Epics that span many sprints, such as strategic FY13 Epics. | Define only Epics as children of a Pillar. | Only use for strategic projects that span many (>10) sprints. |
| Epic | Collection of related PBIs. The Epic typically defines the value proposition for our solution and is used by Bijan when he runs into the VP in the hallway. | Each Epic must have a detailed description and acceptance criteria. | Define only PBIs as children of an Epic. |
| PBI | Collection of tasks which clearly defines the user story, typically for one persona, in the context of a potentially shippable feature. | Each PBI must have a detailed description and acceptance criteria. An exception is a PBI that acts as a container for other PBIs and may span multiple sprints. In this case the PBI is assigned to a major node such as iteration ALM/FY13, and never to more granular sprints. | Do not span PBIs across sprint boundaries. |
| Task | Autonomous unit of work that can involve activities such as construction, testing, review or documentation. | The task and its state are owned by the team member. | Do not define tasks greater than 8h; preferably keep maximum remaining work at 4h or less. |
Review using illustrations
- [Figure] Healthy backlog view for a strategic project that spans more than 10 months. The dotted red lines indicate the sprint barriers for sprints that appear in your backlog view as iterations available for planning.
- [Figure] Healthy backlog view for a project that spans a few weeks or months. The dotted red line indicates the sprint barrier for sprints that appear in your backlog view as iterations available for planning.
- [Figure] Unhealthy backlog view, because PBI 1 spans across two sprints. See below for one of the various possible resolutions.
- [Figure] Healthy backlog resolving the previous unhealthy backlog.
Frequently Asked Questions
**When I perform a review, which task do I associate my check-in with?**
You must associate your check-in with a valid and existing task. It is up to the team to decide on the preferred mechanism for review tasks, but we recommend the following: for small, ad-hoc reviews, check in using the task that defined the completed work being reviewed; for more formal and larger reviews, create a review PBI and review tasks that define all review activities, and use those tasks for review check-ins.

**If a task cannot be completed in a sprint, must I split the parent PBI before moving the task to the next sprint?**
If the tasks are small and few in number, determine whether the effort of splitting the PBI outweighs the value. If the effort far outweighs the value, simply move the tasks, which effectively splits the PBI across two sprints.
TLAs (Two | Three Lettered Acronyms)
| Acronym | Meaning |
| --- | --- |
| MMF | Minimum Marketable Feature. See http://en.wikipedia.org/wiki/Minimum_Marketable_Feature for more information. |
| MVP | Minimum Viable Product. See http://en.wikipedia.org/wiki/Minimum_viable_product for more information. |
| PL | Project Lead. See the Practical Ruck Training and Reference Guide: http://vsarguidance.codeplex.com/downloads/get/461175 |
| PM | Program Manager. See Program Management – Are some of the ALM Rangers Symbiotic PM’s? and the Practical Ruck Training and Reference Guide: http://vsarguidance.codeplex.com/downloads/get/461175 |
| PO | Project Owner. See the Practical Ruck Training and Reference Guide: http://vsarguidance.codeplex.com/downloads/get/461175 |
Recommended reading for week 46
You can find an overview of the Windows Phone talks from the //build conference in the article It’s a Wrap on Windows Phone at BUILD 2012. The list of talks, with links to the recordings, is worth a look. Before you dive into the videos, you can go through the list of the most important news for Windows Phone app developers in the article Windows Phone 8 developer platform highlights.
If you want to get started with Windows Azure Mobile Services, don’t miss Get started with Mobile Services.
While preparing for a Windows 8 Camp, I came across an older but very useful article, Roaming your app data.
If your application uses video streaming, you will certainly appreciate the completed Player Framework for Windows 8.
Štěpán, @stepanb