
Defining Red, Amber, and Green (RAG) Statuses


A few years ago, AppDNA and ChangeBase (listed alphabetically, so please don’t infer anything from the ordering, though they are also sorted by the capacity of their original CTOs to consume gin and tonics – I won’t say if that is ascending or descending) came along and fundamentally redefined the language of app compat with the introduction of one simple concept: the RAG (Red, Amber, Green) report.

The RAG report is a very elegant idea – you get some indication of whether things work or not in a simple, highly visual report that lends itself well to pie charts and executive reports.

And this idea, like all great ideas, took off like wildfire when applied to app compat. I know of no customer who doesn’t phrase their app compat work in the context of a RAG status these days.

But eventually, a bit of a problem emerged: there was no consistent operational definition of the Red, Amber, and Green statuses. (Well, that and the problem that Americans call amber lights “yellow lights” – but beyond forcing Americans to learn just one British colloquialism, RAG also just sounds better than RYG.)

Here are just some of the definitions I have seen:

Red                             | Amber                                | Green
Broken, but can’t be auto-fixed | Broken, but can be auto-fixed        | Works
Broken                          | Works, but with issues               | Works
Broken, needs code remediation  | Broken, can be shimmed or repackaged | Works

With all of these potential definitions, it’s hard to say what it means to be a Red, Amber, or Green application. And, since most customers continue to use this taxonomy throughout the app compat project, I have also noticed that many end up changing the operational definition within the scope of a single project.

That makes it very difficult to make this data actionable!

The problem is that we have conflated whether an app works or not with how hard it’s going to be to fix it. So, lately I have been working with my customers to disambiguate the statuses, agree on a single definition, and then track the fix difficulty separately.

Here is the definition that I use:

Red          | Amber        | Green
Known issues | I don’t know | No known issues

This may seem obvious, but it actually matters quite a lot to have the project agree on the terminology.
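To make that separation concrete, here is a minimal sketch (my own illustration, not something any vendor ships) of recording compatibility status and fix difficulty as two independent fields; the names RagStatus, FixDifficulty, and AppRecord are purely hypothetical.

```python
# Minimal sketch: keep "does it work?" (RAG status) separate from
# "how hard is the fix?" (difficulty). All names here are illustrative.
from dataclasses import dataclass
from enum import Enum


class RagStatus(Enum):
    RED = "known issues"
    AMBER = "unknown"            # not yet tested or analyzed
    GREEN = "no known issues"


class FixDifficulty(Enum):
    UNKNOWN = "not yet assessed"
    NONE = "works as-is"
    SHIM = "shim or repackage"
    CODE = "code remediation"


@dataclass
class AppRecord:
    name: str
    status: RagStatus = RagStatus.AMBER              # everything starts amber
    difficulty: FixDifficulty = FixDifficulty.UNKNOWN


# Example: an app with a known issue that we expect to shim.
payroll = AppRecord("PayrollClient", RagStatus.RED, FixDifficulty.SHIM)
print(payroll.status.value, "/", payroll.difficulty.value)
```

The point is simply that a Red app with an easy fix and a Red app that needs code remediation share the same status; the effort to fix them is a separate attribute.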

Given this, the app compat project can now be visualized much more clearly in terms of these statuses. Here’s an example:

[Chart: RAG status mix at each stage of the project – start, after static analysis, after ILT, and after UAT]

At the beginning – you don’t know anything. Everything starts out as amber!

In this example, the customer chose to start with static analysis. (Lately, I’ve been noticing a trend of folks starting with Install/Launch Testing – ILT – and then leveraging static analysis to diagnose failures.) At the end of static analysis, they still didn’t know the compatibility status of all of their apps, but they knew a lot more.

At the end of ILT, however, note that we don’t have any more amber. We have either seen an app work, or we have seen it fail and sent it over for remediation. An app doesn’t exit ILT until it has been remediated and turned green.

The next phase is UAT, where the user validates that the application works correctly. Remember, though we had some red apps in ILT, they didn’t exit that phase until they were remediated and validated as green. But in UAT we discover still more red apps, issues that only the user can find! (That is, after all, why we have UAT.) Once we’ve fixed all of those, we hit our finish line.

So, app compat is the gradual progression from Everything Amber to Everything Green, with some apps turning Red along the way (perhaps multiple times, if we find and fix bugs at different stages of the process), but always eventually being transformed to Green somehow.
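As a rough illustration of that progression (the phase names and numbers below are invented, not taken from the chart above), you could tally the status mix at the end of each phase like this:

```python
# Sketch: tally the RAG mix at the end of each phase. The snapshot data
# is invented purely to show the Amber -> Green progression.
from collections import Counter

snapshots = {
    "Start":           {"App1": "Amber", "App2": "Amber", "App3": "Amber", "App4": "Amber"},
    "Static analysis": {"App1": "Green", "App2": "Red",   "App3": "Amber", "App4": "Amber"},
    "ILT":             {"App1": "Green", "App2": "Green", "App3": "Green", "App4": "Red"},
    "UAT":             {"App1": "Green", "App2": "Green", "App3": "Green", "App4": "Green"},
}

for phase, statuses in snapshots.items():
    counts = Counter(statuses.values())
    print(f"{phase:>16}: Red={counts.get('Red', 0)}  "
          f"Amber={counts.get('Amber', 0)}  Green={counts.get('Green', 0)}")
```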

With this definition in place, I’ve had a lot more success not only using this terminology with customers who use static analysis tools, but also extending it throughout the project, across other tools and services, and even to products that don’t have tools which produce such reports.

(It also means that, if a tool generates output with a different definition, I always go in and tune the output to match my definition, so the data and statistics become more meaningful and actionable.)
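For example (a sketch only; the tool categories below are made up, not any specific product’s output), the re-mapping can be as simple as a translation table applied before reporting:

```python
# Sketch: translate a tool's own categories into the unified definition
# (Red = known issues, Amber = unknown, Green = no known issues).
# The tool category names below are hypothetical.
TOOL_TO_UNIFIED = {
    "works":                     "Green",  # no known issues
    "works with issues":         "Red",    # an issue is known, however small
    "broken, auto-fixable":      "Red",    # still a known issue; fix effort tracked separately
    "broken, needs remediation": "Red",
    "not analyzed":              "Amber",  # we simply don't know yet
}


def unify(tool_status: str) -> str:
    """Map a tool-specific status onto the single agreed RAG definition."""
    return TOOL_TO_UNIFIED.get(tool_status.lower(), "Amber")


print(unify("Works with issues"))  # -> Red under the unified definition
```

Note that “works, but with issues” becomes Red under this definition, because an issue is known; whether it is easy or hard to fix is tracked separately.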

