
Profiling: that critical 3% (Part I)

Published by marco in Programming

An oft-quoted bit of software-development sagacity is

“Premature optimization is the root of all evil.”
Donald Knuth

As is so often the case with quotes—especially those on the Internet[1]—this one has a slightly different meaning in context. The snippet above invites developers to overlook the word “premature” and interpret the received wisdom as “you don’t ever need to optimize.”

Instead, Knuth’s full quote actually tells you how much of your code is likely to be affected by performance issues that matter.

“Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.”

An Optimization Opportunity in Quino[2]

In other articles, I’d mentioned that we’d upgraded several solutions to Quino 2 in order to test that the API was solid enough for a more general release. One of these products is quite large and has a test suite of almost 1,500 tests. The product involves a lot of data-import and manipulation, and the tests include several scenarios where Quino is used very intensively to load, process and save data.

These tests had a well-established baseline run time, but started taking about 25% longer after the upgrade to Quino 2.

Measuring Execution Speed

Before doing anything else—making educated guesses as to what the problem could be, for example—we measure. At Encodo, we use JetBrains dotTrace to collect performance profiles.

There is no hidden secret: the standard procedure is to take a measurement before and after the change and compare them. However, so much had changed from Quino 1.13 to Quino 2—namespaces and type names, for example—that, while dotTrace was able to find some matches, the comparisons were not as useful as usual.

A comparison between codebases that hadn’t changed so much is much easier, but I didn’t have that luxury.

Tracking the Problem

Even excluding the less-than-optimal comparison, it was an odd profile. Ordinarily, one or two issues stick out right away, but the slowness seemed to suffuse the entire test run. Since the direct profiling comparison was difficult, I downloaded test-speed measurements as CSV from TeamCity for the product where we noticed the issue.

How much slower, you might ask? The test that I looked at most closely took almost 4 minutes (236,187ms) in the stable version, but took 5:41 in the latest build.

This test was definitely one of the largest and longest tests, so it was particularly impacted. Most other tests that imported and manipulated data ranged anywhere from 10% to 30% slower.

When I looked for hot-spots, the profile unsurprisingly showed me that database access took up the most time. The issue was more subtle: while database-access still used the most time, it was using a smaller percentage of the total time. Hot-spot analysis wasn’t going to help this time. Sorting by absolute times and using call counts in the tracing profiles yielded better clues.

The tests were slower when saving and also when loading data. But I knew that the ORM code itself had barely changed at all. And, since the product was using Quino so heavily, the stack traces ran quite deep. After a lot of digging, I noticed that creating the ExpressionContext to hold an object while evaluating expressions locally seemed to be taking longer than before. This was my first real clue.

Once I was on the trail, I found that evaluating calls that used local evaluation—getting objects, for example—was also consistently slower.

Don’t Get Distracted

 Once you start looking for places where performance is not optimal, you’re likely to start seeing them everywhere. However, as noted above, 97% of them are harmless.

To be clear, we’re not optimizing because we feel that the framework is too slow but because we’ve determined that the framework is now slower than it used to be and we don’t know why.

Even after we’ve finished restoring the previous performance (or maybe even making it a little better), we might still be able to easily optimize further, based on other information that we gleaned during our investigation.

But we want to make sure that we don’t get distracted and start trying to FIX ALL THE THINGS instead of just focusing on one task at a time. While it’s somewhat disturbing that we seem to be creating 20 million ExpressionContext objects in a 4-minute test, that is also how we’ve always done it, and no-one has complained about the speed up until now.

Sure, if we could reduce that number to only 2 million, we might be even faster, but the point is that we used to be faster on the exact same number of calls—so fix that first.

A Likely Culprit: Scope

I found a likely candidate in the Scope class, which implements the IScope interface. This type is used throughout Quino, but the two use-cases that affect performance are:

  1. As a base for the ExpressionContext, which holds the named values and objects to be used when evaluating the value of an IExpression. These expressions are used everywhere in the data driver.
  2. As a base for the poor-man’s IOC used in Stage 2 of application execution.[4]

The former usage has existed unchanged for years; its implementation is unlikely to be the cause of the slowdown. The latter usage is new and I recall having made a change to the semantics of which objects are returned by the Scope in order to make it work there as well.
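To make the discussion concrete, here’s a minimal sketch of the kind of type-keyed lookup a Scope provides. The member names (Register, Get) and the list-scan implementation are illustrative assumptions, not Quino’s actual API:

```csharp
using System.Collections.Generic;
using System.Linq;

// Minimal sketch of a type-keyed scope; Register/Get are illustrative
// names and do not reflect Quino's actual API.
public class SimpleScope
{
  private readonly List<object> _objects = new List<object>();

  public void Register(object instance)
  {
    _objects.Add(instance);
  }

  // Returns the most recently registered object assignable to T.
  // Changing *which* matching object is returned (e.g. first- vs.
  // last-registered) is exactly the kind of semantic tweak that made
  // the class work for the IOC use case—at a cost paid by every
  // ExpressionContext as well.
  public T Get<T>()
  {
    return _objects.OfType<T>().LastOrDefault();
  }
}
```

A linear scan like `OfType<T>().LastOrDefault()` is perfectly acceptable for a handful of startup objects, but it shows up quickly in a profile when the lookup runs millions of times per test.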

How could this happen?

You may already be thinking: smooth move, moron. You changed the behavior of a class that is used everywhere for a tacked-on use case. That’s definitely a valid accusation to make.

In my defense, my instinct is to reuse code wherever possible. If I already have a class that holds a list of objects and gives me back the object that matches a requested type, then I will use that. If I discover that the object that I get back isn’t as predictable as I’d like, then I improve the predictability of the API until I’ve got what I want. If the improvement comes at no extra cost, then it’s a win-win situation. However, this time I paid for the extra functionality with degraded performance.

Where I really went wrong was that I’d made two assumptions:

  1. I assumed that all other usages were also interested in improved predictability.
  2. I assumed that all other usages were not performance-critical. When I wrote the code you’ll see below, I distinctly remember thinking: it’s not fast, but it’ll do and I’ll make it faster if it becomes a problem. Little did I know how difficult it would be to find the problem.

Preventing future slippage


I think a few words on process here are important. Can we improve the development process so that this doesn’t happen again? One obvious answer would be to avoid changing a type shared by different systems without considering all stakeholder requirements. That’s a pretty tall order, though. Including this in the process will most likely lead to less refactoring and improvement out of fear of breaking something.

We discussed above how completely reasonable assumptions and design decisions led to the performance degradation. So we can’t be sure it won’t happen again. What we would like, though, is to be notified quickly when there is performance degradation, so that it appears as a test failure.


Our requirements are captured by tests. If all of the tests pass, then the requirements are satisfied. Performance is a non-functional requirement. Where we could improve Quino is to include high-level performance tests that would sound the alarm the next time something like this happens.[5]
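Such a test might look something like the following NUnit-style sketch. ImportAllTestData and the five-minute budget are placeholders for whatever high-level operation and threshold make sense for the product:

```csharp
using System;
using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class PerformanceTests
{
  [Test]
  public void ImportingStandardDataStaysWithinBudget()
  {
    var stopwatch = Stopwatch.StartNew();

    ImportAllTestData(); // placeholder for the guarded high-level operation

    stopwatch.Stop();

    // A generous budget catches a 25% regression without flaking on
    // ordinary machine-to-machine variance.
    Assert.That(stopwatch.Elapsed, Is.LessThan(TimeSpan.FromMinutes(5)));
  }

  private static void ImportAllTestData()
  {
    // ...load, process and save the standard data set...
  }
}
```

The budget has to be loose enough to survive slow build agents, which is why such tests guard against gross regressions rather than measure precise performance.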

Enough theory: in part II, we’ll describe the problem in detail and take a crack at improving the speed. See you there.


[1] In fairness, the quote is at least properly attributed. It really was Donald Knuth who wrote it.
[2] By “opportunity”, of course, I mean that I messed something up that made Quino slower in the new version.
[4] See the article Quino 2: Starting up an application, in detail for more information on this usage.
[5] I’m working on this right now, in issue Add standard performance tests for release 2.1.

Occupy Wildlife Preserve

Published by marco in Public Policy & Politics

I haven’t read much about Ammon Bundy and his gang’s standoff in Oregon. I’ve read so little about it that I had to look up where it was actually happening (other than knowing it was “somewhere in the U.S.”). So it’s some Arizona ranchers occupying a federal building in Oregon. This is definitely a step up from the domestic terrorism of the Unabomber or Timothy McVeigh because at least fewer people are being killed. Still, armed men have occupied federal property and are demanding the release of two of their family members from prison—any non-biased and halfway-objective definition of terrorism would have to include this act.

I admit I haven’t put much effort into reading about it because it sounds so overtly ridiculous, much like the peaceful protest/armed standoff against federal officers by Cliven Bundy[1] and co. a few years ago. It honestly sounds like people feeling oppressed but utterly unaware of the privileges that allow them to even feel oppressed in this way and to protest it without getting killed. Even if the grievance were to be legitimate—which Cliven Bundy’s objectively was not, and a cursory examination of Ammon Bundy’s leads one to the same conclusion—armed resistance is not the way to solve problems in a civilized country.

And, even if we were to consider allowing it in cases where revolution truly seems to be the only answer, these guys are definitely not first in line. Last year, during the protests triggered by police violence, there were those quick to opine that blacks should stop complaining and work harder instead of abandoning their families in droves to go do drugs. Those same people now listen carefully with looks of concern to every word that drips from Bundy’s lips about the nearly unutterable obscenities that the oppressive government has visited on them when it’s not otherwise bestowing its largesse on them from its public coffers in the form of ranching subsidies.

Where the grievances of the Occupy Movement were founded in real problems that affect many, many Americans and were based on deep inequalities and injustices in the American system of governance, these grievances don’t seem like injustices at all.

Where Occupy asked how people were supposed to live in a system that imposes such crushing debt for so many simply in order to take part in society, these ranchers simply don’t want to pay taxes, or to pay to use public land or to be told when and how much they’re allowed to set it on fire.

Where Occupy addressed the underlying issues of an economic system that caused global economic collapse and instability, these ranchers are growing what are very personal issues into national grievances with very careful manipulation of people’s heartstrings and miseducation on issues of governance.

Where addressing Occupy’s grievances would lead to a more equitable and viable society for many more than just the 1%, the solutions stemming from these ranchers are utterly unviable. That is, “not paying taxes” doesn’t scale to everybody, else how would you pay anyone to maintain public grazing land?

Though Occupy didn’t offer concrete solutions, the implication that the richest should be reined in by getting a smaller slice of the pie or at least giving up more of it should they unfairly get it doesn’t seem so outrageous, unless you’ve been heavily indoctrinated in so-called “free market” religion. Extending the mantra of “stop taxing us” and “let us use public land for free” doesn’t scale. At all. All it is is “I’ve got mine, Jack” and, once I’ve got it, no-one else gets it, ‘cause that wouldn’t be fair.

Just because it’s a mentality that might just win out in the short term doesn’t make it any less stupid or any more long-term viable.

There might be more to it, I know, but that’s really what it looks like so far. And, even if there were deeper subtleties, I don’t believe that most of the people who are so quick to throw their support behind these ranchers are doing it for those subtle reasons. Americans are trained from birth to simultaneously hate their own government and to unwittingly live off its largesse. The ignorance is often deliberate. This cognitive dissonance goes a long way to ensuring that they shut discussion down immediately lest uncomfortable reality intrude.

What the ranchers are doing sounds for all the world like a reality show, though, and will likely be picked up as a new Netflix-only series by 2017.

The article The Dumb and the Restless by Matt Taibbi (Rolling Stone) (sub-titled, “Ammon Bundy and his band of weeping, self-pitying, gun-toting, wannabe-terrorist metrosexuals are America’s most ridiculous people”) closes with the following,

“There’s no doubt that these people are dangerous, but their ridiculousness is a huge part of who they are. Incidentally, this is true of groups like the actual al-Qaeda, too, led as they are by men in beards and Rick-Perry-style “smart glasses” who play at being religious scholars and intellectuals when in fact they are the kind of people who are afraid of cartoons and lie awake at night wondering if it’s permissible to play chess with a menstruating woman. Just because a person is dangerous does not mean he’s not also absurd.

“The Bundy militiamen are an extreme example of a type that’s become common in America. Like the Tea Partiers, they seem to not only believe that they’re the only people in history who’ve ever paid taxes, but that they’re the only people who were ever sad about it. What they call tyranny on the part of the federal government just means putting up with the same irritating bills and regulations and other crap that we all put up with, only the rest of us don’t whine about it in the front seats of our cars while posing in front of tripods.

“Again, these people may be dangerous, but their boundless self-pity, their outrageous sense of entitlement and their slapstick incompetence as rebels and terrorists are absolutely ridiculous. Sure, it may not help, but how can we not laugh?”

The opposite postulate holds as well: just because a person is absurd, that does not mean he’s not also dangerous. While I agree with Taibbi that we should laugh at them to dispel their power, we should still keep a careful eye on who’s not laughing. If there are enough people not laughing, they could react in ways that we don’t foresee because we aren’t taking them seriously.

Otherwise, we’re all self-satisfied and oblivious fools, judging “those idiots” who don’t know anything about anything and then utterly shocked to discover that those idiots are now, for all intents and purposes, in charge.[2]

This advice applies to all interest groups that use numbers—even seemingly small numbers—to exert control over the powerful on behalf of the weak. Sometimes this pressure achieves ostensibly “good” results—a union negotiates living wages or better benefits for its workers and the company for which they work is still profitable—and sometimes it’s bad—groups exert P.C. pressure to force companies and individuals to submit to ridiculous demands.

If we dismiss those who automatically lend credence to such special-interest groups out of hand, we run the risk of being extremely surprised when those groups end up exerting no small amount of control over our own lives.[3]


[1] No relation?
[2]

Here is where we often confuse who’s winning and losing. It’s very possible to lose reputation while winning everything else. That is, you think you’ve won against someone because you’ve made them ridiculous, but you’ve only beaten them on an inconsequential battleground while losing everywhere important.

Witness “bankers” (or the financial community) since 2008. Once again, in the wake of the global crash, everyone hates them and their reputation is terrible. On the other hand, almost all economic gains in the last seven years have gone to them. So what do they care if we think they’re losers, when by the only measure that matters in our society—money, in case you haven’t been paying attention—they are very definitely winning?

While we smugly consider ourselves better than “those people”, we continue our lives of quiet desperation while “those people” live lives of luxury unparalleled in history and financed purely by a largesse born of ignorance, self-satisfaction and an utter misunderstanding of power structures.

[3] A good example would be the “whackos” who are against abortion. The really extreme ones are a relatively small minority, but they are constantly winning their war on abortion in the States because only legislators really take them seriously—but that’s all that matters in a republic. The fact about abortion in the States is that, while it is legal everywhere, it is—for exactly those people who would need them—extremely difficult and time-consuming to get one.

Quino v2.1: API-smoothing and performance

Published by marco in Programming

The summary below describes major new features, items of note and breaking changes. The full list of issues is also available for those with access to the Encodo issue tracker.

Highlights

Quino 2 is finally ready and will go out the door with a 2.1 rather than a 2.0 version number because we released 2.0 internally and tested the hell out of it. 2.1 is the result of that testing. It includes a lot of bug fixes as well as API tweaks to make things easier for developers.

On top of that, I’ve gone through the backlog and found many issues that had already been fixed, were obsolete or had been inadequately specified. The Quino backlog dropped from 682 to 542 issues.

Breaking changes

The following changes are marked with Obsolete attributes, so you’ll get a hint as to how to fix the problem. Since these are changes from an unreleased version of Quino, however, they cause a compile error rather than just a warning.

  • UseMetaSchemaWinformDxFeedback() has been renamed to UseMetaschemaWinformDx()
  • UseSchemaMigrationSupport() has been renamed to UseIntegratedSchemaMigration()
  • MetaHttpApplicationBase.MetaApplication has been renamed to BaseApplication
  • The IServer.Run() extension method is no longer supported.
  • GetStandardFilters(), GetStandardFiltersForFormsAuthentication() and GetStandardFiltersForUnrestrictedAuthentication() are no longer supported. Instead, register filters in the IOC and use IWebFilterAttributeFactory.CreateFilters() to get the list of supported filters
  • The ToolRequirementAttribute is no longer supported or used.
  • AssemblyExtensions.GetLoadableTypesWithInterface() is no longer supported
  • AssemblyTools.GetValidAssembly() has been replaced with AssemblyTools.GetApplicationAssembly(); GetExecutableName() and GetExecutablePath() have been removed.
  • All of the constant expressions on the MetaBuilderBase (e.g. EndOfTimeExpression) are obsolete. Use MetaBuilderBase.ExpressionFactory.Constants.EndOfTime instead.
  • All of the global values on MetaObjectDescriptionExtensions are obsolete; instead, use the IMetaObjectFormatterSettings from the IOC to change settings on startup.
  • Similarly, the set of extension methods that included GetShortDescription() has been moved to the IMetaObjectFormatter. Obtain an instance from the IOC, as usual.


Quino v2.0: Logging, Dependencies, New Assemblies & Nuget

Published by marco in Programming

The summary below describes major new features, items of note and breaking changes. The full list of issues is also available for those with access to the Encodo issue tracker.

Highlights

In the beta1 and beta2 release notes, we read about changes to configuration, dependency reduction, the data driver architecture, DDL commands, security and access control in web applications and a new code-generation format.

In 2.0 final—which was actually released internally on November 13th, 2015 (a Friday)—we made a number of additional improvements.

These notes are being published for completeness and documentation. The first publicly available release of Quino 2.x will be 2.1 or higher (release notes coming soon).

Breaking changes

A big project will have a lot of errors (over 12,000!)

As we’ve mentioned before, this release is absolutely merciless in regard to backwards compatibility. Old code is not retained as Obsolete. Instead, a project upgrading to 2.0 will encounter compile errors.

The following notes serve as an incomplete guide that will help you upgrade a Quino-based product.

As I wrote in the release notes for beta1 and beta2, if you arm yourself with a bit of time, ReSharper and the release notes (and possibly keep an Encodo employee on speed-dial), the upgrade is not difficult. It consists mainly of letting ReSharper update namespace references for you.

Global Search/Replace

Instead of going through the errors one by one, you can take care of a lot of them with the following search/replace pairs.

  • Encodo.Quino.Data.Persistence => Encodo.Quino.Data
  • IMetaApplication => IApplication
  • ICoreApplication => IApplication
  • GetServiceLocator() => GetServices()
  • MetaMethodTools.GetInstance => DataMetaMethodExtensions.GetInstance
  • application.ServiceLocator.GetInstance => application.GetInstance
  • Application.ServiceLocator.GetInstance => Application.GetInstance
  • application.ServiceLocator => application.GetServices()
  • Application.ServiceLocator => Application.GetServices()
  • application.Recorder => application.GetLogger()
  • Application.Recorder => Application.GetLogger()
  • session.GetRecorder() => session.GetLogger()
  • Session.GetRecorder() => Session.GetLogger()
  • Session.Application.Recorder => Session.GetLogger()
  • FileTools.Canonicalize() => PathTools.Normalize()
  • application.Messages => application.GetMessageList()
  • Application.Messages => Application.GetMessageList()
  • ServiceLocator.GetInstance => Application.GetInstance
  • MetaLayoutTools => LayoutConstants
  • GlobalContext.Instance.Application.Configuration.Model => GlobalContext.Instance.Application.GetModel()
  • IMessageRecorder => ILogger
  • GetUseReleaseSettings() => IsInReleaseMode()
  • ReportToolsDX => ReportDxExtensions

Although you can’t just search/replace everything, it gets you a long way.
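If you’d rather script the mechanical replacements than run them one by one in the IDE, a small throwaway program can apply them across a source tree. This is a hedged sketch—the pair list is abbreviated, and you should run it on a clean working copy so you can review the resulting diff:

```csharp
using System.Collections.Generic;
using System.IO;

public static class QuinoUpgradeRenamer
{
  // Abbreviated; extend with the full table above. Order matters:
  // longer patterns (e.g. "application.ServiceLocator.GetInstance")
  // must come before their prefixes (e.g. "application.ServiceLocator").
  private static readonly KeyValuePair<string, string>[] Renames =
  {
    new KeyValuePair<string, string>("Encodo.Quino.Data.Persistence", "Encodo.Quino.Data"),
    new KeyValuePair<string, string>("application.ServiceLocator.GetInstance", "application.GetInstance"),
    new KeyValuePair<string, string>("application.ServiceLocator", "application.GetServices()"),
    new KeyValuePair<string, string>("IMessageRecorder", "ILogger"),
  };

  public static void Run(string rootPath)
  {
    foreach (var file in Directory.EnumerateFiles(rootPath, "*.cs", SearchOption.AllDirectories))
    {
      var text = File.ReadAllText(file);

      foreach (var pair in Renames)
      {
        text = text.Replace(pair.Key, pair.Value);
      }

      File.WriteAllText(file, text);
    }
  }
}
```

After the mechanical pass, let the compiler and ReSharper flag whatever the blunt string replacement got wrong.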

Model-Building Fixes

These replacement pairs, while not recommended for global search/replace, are a handy guide for how the API has generally changed.

  • *Generator => *Builder
  • SetUpForModule => CreateModule
  • Builder.SetElementVisibility(prop, true) => prop.Show()
  • Builder.SetElementVisibility(prop, false) => prop.Hide()
  • Builder.SetElementControlIdentifier(prop, ControlIdentifiers…) => prop.SetInputControl(ControlIdentifiers…)
  • Builder.SetPropertyHeightInPixels(prop, 200) => prop.SetHeightInPixels(200)

Constructing a module has also changed. Instead of using the following syntax,

var module = Builder.SetUpForModule<AuditModule>(Name, "ApexClearing.Alps.Core", Name, true);

Replace it with the following direct replacement,

var module = Builder.CreateModule(Name, "ApexClearing.Alps.Core", Name);

Or use this replacement, with the recommended style for the v2 format (no more class prefix for generated classes and a standard namespace):

var module = Builder.CreateModule(Name, typeof(AuditModuleBuilder).GetParentNamespace());

Standard Modules (e.g. Reporting, Security, etc.)

Because of how the module class-names have changed, the standard module ORM classes all have different names. The formula is that the ORM class-name no longer has its module name prepended.

  • ReportsReportDefinition => ReportDefinition
  • SecurityUser => User
  • And so on…

Furthermore, all modules have been converted to use the v2 code-generation format, which has the metadata separate from the ORM object. Therefore, instead of referencing metadata using the ORM class-name as the base, you use the module name as the base.

  • ReportReportDefinition.Fields.Name => ReportModule.ReportDefinition.Name.Identifier
  • ReportReportDefinition.MetaProperties.Name => ReportModule.ReportDefinition.Name
  • ReportReportDefinition.Metadata => ReportModule.ReportDefinition.Metadata
  • And so on…

There’s an upcoming article that will show more examples of the improved flexibility and capabilities that come with the v2-metadata.

Action names

The standard action names have moved as well.

  • ActionNames => ApplicationActionNames
  • MetaActionNames => MetaApplicationActionNames

Any other, more rarely used action names have been moved back to the actions themselves, so for example

SaveApplicationSettingsAction.ActionName

If you created any actions of your own, then the API there has changed as well. As previously documented in API Design: To Generic or not Generic? (Part II), instead of overriding the following method,

protected override int DoExecute(IApplication application, ConfigurationOptions options, int currentResult)
{
  return base.DoExecute(application, options, currentResult);
}

you instead override in the following way,

public override void Execute()
{
  base.Execute();
}

Using NuGet

If you’re already using Visual Studio 2015, then the NuGet UI is a good choice for managing packages. If you’re still on Visual Studio 2013, then the UI there is pretty flaky and we recommend using the console.

The examples below assume that you have configured a source called “Local Quino” (e.g. a local folder that holds the nupkg files for Quino).

install-package Quino.Data.PostgreSql.Testing -ProjectName Punchclock.Core.Tests -Source "Local Quino"
install-package Quino.Server -ProjectName Punchclock.Server -Source "Local Quino"
install-package Quino.Console -ProjectName Punchclock.Server -Source "Local Quino"
install-package Quino.Web -ProjectName Punchclock.Web.API -Source "Local Quino"

Debugging Support

We recommend using Visual Studio 2015 if at all possible. Visual Studio 2013 is also supported, but we have all migrated to 2015 and our know-how about 2013 and its debugging idiosyncrasies will deteriorate with time.

These are just brief points of interest to get you set up. As with the NuGet support, these instructions are subject to change as we gain more experience with debugging with packages as well.

  • Hook up to a working symbol-source server (e.g. TeamCity)
  • Get the local sources for your version
  • If you don’t have a source server or it’s flaky, then get the PDBs for the Quino version you’re using (provided in Quino.zip as part of the package release)
  • Add the path to the PDBs to your list of symbol sources in the VS debugging options
  • Tell Visual Studio where the sources are when it asks during debugging
  • Tell R# how to map from the source folder (c:\BuildAgent\work\9a1bb0adebb73b1f for Quino 2.0.0-1765) to the location of your sources

Quino packages are no different than any other NuGet packages. We provide both standard packages as well as packages with symbols and sources. Any complications you encounter with them are due to the whole NuGet experience still being a bit in-flux in the .NET world.

An upcoming post will provide more detail and examples.

Creating Nuget Packages

We generally use our continuous integration server to create packages, but you can also create packages locally (it’s up to you to make sure the version number makes sense, so be careful). These instructions are approximate and are subject to change. I provide them here to give you an idea of how packages are created. If they don’t work, please contact Encodo for help.

  • Open PowerShell
  • Change to the %QUINO_ROOT%\src directory
  • Run nant build pack to build Quino and packages
  • Set up a local NuGet source named “Local Quino” pointing to %QUINO_ROOT%\nuget (one-time only)
  • Change to the directory where your Quino packages are installed for your solution.
  • Delete all of the Encodo/Quino packages
  • Execute nant nuget from your project directory to get the latest Quino build from your local folder

Limited drive-space chronicles #2: Why is Visual Studio installed on my machine?

Published by marco in Tips & Tricks

If you’re like us at Encodo, you moved to SSDs years ago…and never looked back. However, SSDs are generally smaller because the price (still) ramps up quickly as you increase size. We’ve almost standardized on 512GB, but some of us still have 256GB drives.

Unfortunately, the assumption that we all have giant hard drives started a trend among manufacturers to install everything, just in case you might need it. This practice didn’t really cause problems when we were still using terabyte-sized HDs. But now we are, once again, more sensitive to unnecessary installations.

If you’re a Windows .NET developer, you’ll feel the pinch more quickly as you’ve got a relatively heavyweight Visual Studio installation (or three…) as well as Windows 8.1 itself, which weighs in at about 60GB after all service packs have been installed.

Once you throw some customer data and projects and test databases on your drive, you might find that you need, once again, to free up some space on your drive.

I wrote a similar post last year and those tips & tricks still apply.

System Cleanup is back

One additional tip I have is to use Win + S to search for “Free up disk space by deleting unnecessary files”[1] and run that application in “clean up system files” mode: the latest version will throw out as much Windows Update detritus as it can, which can clean up gigabytes of space.

Remove Old Visual Studios

The other measure you can take is to remove programs that you don’t use anymore: for .NET developers, that means you should finally toss out Visual Studio 2010—and possibly even 2013, if you’ve already made the move to the new and improved 2015.[2] Removing these versions has the added benefit that extensions and add-ons will no longer try to install themselves into the older Visual Studios.

However, even if you do remove VS2010, for example, you might find that it just magically reappears again. Now, I’m not surprised when I see older runtimes and redistributables in my list of installed programs—it makes sense to keep these for applications that rely on them—but when I see the entire VS2010 SP1 has magically reappeared, I’m confused.

 VS2010 in Programs and Features

Imagine my surprise when I installed SQL Server Management Studio 2016—the November 2015 Preview—and saw the following installation item:

 SQL Server Management Studio installation

However, if you do remove this item again, then SQL Server Management Studio will no longer run (no surprise there, now that we know that it installed it). But if you’re just doing cleanup and don’t know about this dependency[3], you might accidentally break tools. So be careful: if you’re too aggressive, you’ll end up having to re-install some stuff.[4]


[1] The reason I write that “it’s back” is that for a couple of versions of Windows, Microsoft made it an optional download/feature instead of installing it by default.
[2] Be careful about removing Visual Studio 2013 if you have web projects that still rely on targets installed with VS2013 but not included in VS2015. I uninstalled 2013 on my laptop and noticed a warning about an MS target that the compiler could no longer find.
[3] The fact that Windows still can’t tell you about dependencies is a story for another day. We should have had a package manager on Windows years ago. And, no, while Choco is a lovely addition, it’s not quite the full-fledged package manager that aptitude is on Ubuntu.
[4] Speaking from experience. Could you tell?