How to replace "warnings as errors" in your process

A build started failing after a commit. However, the errors had nothing to do with the changes in the commit. A little investigation revealed that the cloud agent had started using a newer version of the build tool that included an expanded set of default warnings. These warnings appeared first on CI because developers hadn't had the chance to update their tools yet. The "warnings as errors" setting turned what would have been a build with a few extra warnings into a failing build that prevented a developer from applying completely unrelated changes. The setting allowed new, unrelated, and irrelevant warnings to push their way to the top of the priority queue. 👉 <abbr title="too long; didn't read">tl;dr</abbr>: I don't think we should use the "warnings as errors" setting anymore. You can get the same benefit---and even more---by using newer, finer-grained configuration options. <h>What are we trying to accomplish?</h> This section wasn't included in my original draft of this essay. It only occurred to me in the shower that this is the <i>real</i> reason why I wrote a ten-page essay to answer a teammate's question in a PR review. In hindsight, it's obvious: to answer whether we should re-enable the "warnings as errors" setting, we should first think about what doing so would accomplish. What need does it fulfill? The rest of this essay meanders drunkenly along a path toward what I hope is a reasonable answer. <h>My team doesn't care about warnings</h> I understand the sentiment. You're in a team that never, or rarely, looks at warnings. You've given up on teaching them to look at warnings and keep them fixed. Fine. You just make every warning an error and now they absolutely have to fix everything. Problem solved. <h>My team now only cares about warnings</h> Except it isn't, is it? Not really.
What you've now done is ensure that your team will be constantly fixing errors that aren't really errors, at times when they wouldn't want or need to be doing so. Don't make me waste time pretty-printing code that I'm still writing! How annoying is it when you can't run a test because your comment has an extra line below it? Are you kidding me?<fn> <hr> <ft>There are ways to configure formatting automatically to reduce the incidence of these. Those ways are discussed a bit below.</ft> <h>How much do you trust your team?</h> If your team <i>does</i> care about warnings, then ... why do you need to make them errors? Before handcuffing developers with a setting, think about whether there isn't a trust problem first. Are you addressing a symptom rather than the cause? While it's possible that applying handcuffs is the best possible solution in your case, consider that there are other solutions along a spectrum that goes from "enforcing discipline" to "relying on individual discipline". Any feature that's enforced at all times will end up hampering efficiency and flexibility in some cases, while any feature that's left up to developers is liable not to be applied consistently. The job of the person setting up code-style configuration is to thread that needle, tailoring the configuration for the team and solution at hand. If you have a lot of solutions and teams, then you also get to consider the maintenance overhead of having too many custom configurations. In that case, you might want to make a few standard bundles that group teams and solutions, like "legacy", "modern", "junior team", etc. You don't have to name them like that, but the name should give you an idea of how loose or restrictive the settings would be. <h>Let the CI do it, then</h> I don't have time for all of that. Let's just run the checks on the CI. Warnings as errors in the cloud FTW! Now you're allowing team members to push all the way up to the server before they realize that they have errors.
Granted, they're actually warnings, but you can't merge to master until you fix them, so, yeah, they're errors. This isn't less annoying. But, but, but, what if they're, like, real warnings? Like "possible <c>NullReferenceException</c>" or something like that? That's a good point, sure. But, in most cases, it's something more like "extra line found at end of file", "space missing after parenthesis", "method can be made private", "class should be internal", etc. There are better---more automated---ways of addressing some of those, which we'll discuss below. <h>The CI is not necessarily stable</h> Also, what if some warnings start appearing in your CI because of a tooling change? That can never happen, though, right? Because you've locked down <i>all</i> of your tool versions so that it can never happen? No? You didn't do that? You're using "latest"? Why? <pullquote align="right" width="12em">The people building the tools are pretty clever, so we want to know what new things they have to tell us about our code.</pullquote>Oh, right. Because it makes sense. If you lock down your tool versions, you run the very real risk of not noticing when your build stops working with more-modern tools. You run the risk that it's been years since you last changed anything in your build and that you're stuck with those settings and old tools ... until they're obsolete or no longer available on your build server. It's better to use "latest" and have an occasional spike of warnings than to just never know where you stand with newer toolchains. Locking down tool versions leads to things like DevOps having to set up on-site build agents with Visual Studio 2010 on them for certain projects. OK, so we want to use the latest tools, but that means that we might also get new warnings. These are a good thing! The people building the tools are pretty clever, so we want to know what new things they have to tell us about our code.
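In the .NET world, for example, this trade-off doesn't have to be all-or-nothing. A <c>global.json</c> can pin the SDK exactly or allow controlled drift. The following is a sketch---the version number is purely illustrative: <code>
{
  "sdk": {
    "version": "8.0.100",
    "rollForward": "latestFeature"
  }
}
</code> With <c>rollForward</c> set to <c>latestFeature</c>, the build automatically picks up newer feature bands instead of silently going stale---a middle ground between hard-pinning and "latest".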
<h>The future broke the build</h> What we don't want is for those new things to break builds that used to run just fine. This usually shows up when someone pushes new commits, runs the CI, and sees that they're getting errors that they didn't see locally. WTH? "<i>My code</i> didn't cause those errors?" The drawback here is that this is (A) annoying and (B) very possibly a distraction at this point in time. The person's bug fix may be important, but the new warnings have now bumped themselves to the top of the priority queue! And what if the person whose build has failed isn't well-qualified to address these new warnings? Well, then they get to bump the new warnings to the top of someone else's priority queue! Probably a more senior developer's. Fun for all! What's the solution, then? Well, if you realize that the new warnings appeared because of a tool change, then I suppose you should try to pin the tool version on the CI, with all of the drawbacks outlined above. That's assuming that the person to whom this happens is (A) capable of figuring this out and (B) knows how to pin the tool version. And (C) we don't really like that solution, for the reasons outlined above. <h>What were the requirements again?</h> What if we think again about what we're trying to accomplish with "warnings as errors"? Thinking...🤔🤔🤔... <dl> The system must allow individual configuration of severity. We want <i>certain</i> warnings to be errors. The system must not require that all team members be capable of configuring it. We want clever tool people to configure things for maximum developer comfort and warning visibility, without everyone having to become a clever tool person (which isn't generally possible). The system must be configurable per solution. <div>Each solution should be able to decide what is an error, what is a warning, and what is a suggestion.
You can't make "possible null-reference exception" an error in some legacy solutions without completely killing forward progress. We want warnings to indicate potential problems, but we should be careful about forcing a solution to address all of them immediately. It's more realistic to create tasks to slowly eliminate warnings, only switching a setting to an error later, to prevent future transgressions.</div> The system must have a versioned configuration. We want the configuration to grow with the project, so older versions have their own configuration, with which they can still be built when needed. The system should use default settings wherever possible. No matter how clever your own team's tool person is, the people who designed the whole thing are probably even cleverer. Apply configuration changes knowingly and judiciously. The system should not force a developer to change priorities unnecessarily. <div>If the developer is focused on something, they shouldn't be forced to switch modes and prioritize formatting. Use gentle, visible hints, unless it's really, really relevant to what they're working on. For example, a possible <c>NullReferenceException</c> is something to be avoided, but is it really an error in all code? It's definitely a <i>warning</i>, but if the developer knows that it doesn't matter <i>right now</i>, then they should be able to ignore it, no? I mean, they haven't even <i>committed it</i> yet (as far as you know 😉). Maybe they have a breakpoint to see how the heck that variable could be <c>null</c> in the first place and they were just going to bounce the <abbr title="Extended Instruction Pointer">EIP</abbr> past the crash anyway. <abbr title="You Only Live Once">YOLO</abbr>. Anyway, we want to be really careful about how pushy we are with the IDE configuration. We want to strike a balance between missing actual problems and decreasing efficiency.
We don't want the developer above to have to write a suppression---or, even worse, do some other, ad-hoc short-circuit of inspections---in order to keep working.</div> The system should be future-proof. We don't want running builds to stop running just because we've upgraded tools but made no changes to the code. This won't always be possible but, in this case, the "warnings as errors" setting is a pretty obvious <a href="https://en.wiktionary.org/wiki/footgun">footgun</a>. The system should provide developers everything they need locally to avoid CI failure. <div>Something should fail only on CI as a last resort. That is, a developer must have tools that make it relatively easy to pass CI. This includes being able to see all warnings in the solution and to know whether they would fail the CI, as well as having an easy way to apply formatting to all files, if incorrect formatting would fail the build. We want to avoid a process that leads to half of our commits being called "fix formatting" and "remove warnings". So, we should consider things like having the IDE auto-reformat files on save.</div> The system should discourage committing inspection violations. <div>Inspections should be applied and made visible as quickly as possible, to give the developer the opportunity to produce conforming code from the get-go. The path of least resistance should result in committing code that will also pass CI. We don't want to encourage "noisy" commits that "fix up" formatting or other inspection violations. We would rather have a high signal-to-noise ratio in our commits. We want compact, descriptive commits---so we don't want bug-fix commits to include formatting changes to other parts of the file, if we can avoid it.</div> </dl> Looking at these requirements, we have to conclude that the "warnings as errors" configuration option is an absolute cudgel that we had to use in the old days because we didn't have fine-grained control of the inspection configuration.
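Modern inspection configuration provides exactly the granularity these requirements call for: each inspection gets its own severity. Here's a sketch of what that looks like in an EditorConfig---the rule IDs are real .NET analyzer IDs, but which ones you promote or demote is up to your team: <code>
# CA2000 (dispose objects before losing scope): promote to an error
dotnet_diagnostic.CA2000.severity = error

# CA1822 (member can be marked static): keep as a visible warning
dotnet_diagnostic.CA1822.severity = warning

# Demote pure style preferences to suggestions or below
dotnet_style_qualification_for_field = false:suggestion
csharp_style_var_when_type_is_apparent = true:silent
</code> Instead of one blunt "everything is an error" switch, a handful of high-value inspections become errors, the rest stay warnings or suggestions, and style preferences stay out of the developer's way.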
<h>We are no longer in the dark ages</h> Can we do better today, with modern tools? Absolutely, we can! Most modern IDEs support <c>.editorconfig</c>, which allows fine-tuned configuration of both code style and formatting, especially for languages like C# and TypeScript/JavaScript. The wide variety of JetBrains IntelliJ-based tools use it as well, e.g. <a href="https://www.jetbrains.com/help/pycharm/configuring-code-style.html#import-export-schemes">PyCharm</a>, <a href="https://www.jetbrains.com/help/webstorm/configuring-code-style.html#configuring-code-style-for-a-lan">WebStorm</a>, or <a href="https://www.jetbrains.com/help/phpstorm/configuring-code-style.html#import-export-schemes">PhpStorm</a>. Visual Studio understands it. Visual Studio Code understands it. Of course, the devil is in the details, and the degree to which code-inspection configuration applies from one IDE to another depends very much on the level of standardization for that language and environment. The .NET/C# world has a high degree of standardization, which is very helpful. <h>Using EditorConfig</h> EditorConfig allows you to control almost anything you can think of about your code style or formatting. These preferences are called <i>inspections</i>, each of which you can configure with an inspection-specific value and a severity to assign when the inspection is triggered. For example: <code> dotnet_style_require_accessibility_modifiers = for_non_interface_members:silent dotnet_style_prefer_auto_properties = true:silent </code> The two inspections above should be relatively obvious. In both cases, the preferred setting is configured, but the severity is "silent", so the IDE doesn't complain about it. What's the point of configuring a preference and then not showing it to the developer? Ah, because the developer is not the only one modifying the code. Excuse me? <h>The IDE also writes code</h> Don't forget that the IDE will auto-format the code when requested.
The IDE also writes code when it refactors anything. It needs to know how to format the code that it's inserting or modifying. The IDE uses the configuration in the EditorConfig to determine how to format the code. Your tools person can configure the EditorConfig to conform to the style that the solution/team wants to use. When the code is auto-formatted or refactored, everything should end up looking just the way they wanted it. <h>How to apply "silent" inspections?</h> If an inspection has a "silent" severity, that means it's something that you don't want the team wasting time on during development. However, if no-one ever auto-formats the code, then those inspections will never be applied. You should consider the process by which your solution will be made to conform with the silent inspections in the EditorConfig. <dl> Visible inspections <div>If the inspection severity is <c>suggestion</c> or higher<fn>, then the developer sees an indicator in the code when the file is open. Suggestions, warnings, and errors are shown in the build output, as well. Of course, the developer can disable showing warnings and messages (where suggestions appear) in the error-list pane, but you can't control everything---and you shouldn't try. Give your developers the tools and configuration to be efficient and produce good code, but try not to be too pushy about when they do it.</div> Invisible inspections <div>If the inspection severity is <c>silent</c> or <c>none</c>, then the inspection setting is only used by auto-formatting and refactoring tools. In this case, you'll have to consider <i>when</i> your code will be formatted. Do your developers occasionally auto-format files? Do they auto-format on save? Is there a step in the CI that auto-formats everything before compilation? If so, does it commit those changes? Or does the CI reject the build for formatting warnings? If you have silent inspections, be honest about when they're going to be applied.
If you don't have a plan, then they will be applied seemingly randomly when someone inadvertently triggers the hotkey for auto-formatting a file<fn>, which may lead to unpleasant surprises and/or messy commits.</div> </dl> <hr> <ft>There is a bit of a mismatch between using EditorConfig and the JetBrains-native configuration: <a href="https://www.jetbrains.com/help/resharper/Code_Analysis__Configuring_Warnings.html#editorconfig">JetBrains tools support an additional severity level called "Hint"</a>, which is generally shown as a green squiggly line rather than the blue one for warnings. However, if you set the severity to "hint", Visual Studio interprets it as a warning, showing it as such in both the IDE and in the build output. On top of that, JetBrains seems to think that the <c>silent</c> option is called <c>none</c>, although it seems to understand <c>silent</c> well enough.</ft> <ft>Probably for historical reasons, there is a difference between Visual Studio's <i>Format Document</i> (<kbd>Ctrl</kbd> + <kbd>K</kbd>, <kbd>D</kbd>), ReSharper/Rider's <i>Reformat Code</i> (<kbd>Ctrl</kbd> + <kbd>Alt</kbd> + <kbd>Enter</kbd>), and ReSharper/Rider's <i>Cleanup Code</i> (<kbd>Ctrl</kbd> + <kbd>E</kbd>, <kbd>C</kbd>). I can't tell you exactly which inspections are considered in which mode, but I've listed them in order of "number of changes they seem to make to the code", with Visual Studio's native command seeming to make the fewest changes.</ft> <h>Code Style vs. Formatting</h> Let's clear up the distinction between these two main groups of inspections. <dl> Code style A code-style inspection expresses a preference for something that affects semantic content. That is, applying the fix for the inspection may change the code in a way that makes it compile differently or may lead to it not compiling at all. Even something as subtle as using <c>var</c> instead of an explicit type can, in very rare cases, lead to code that no longer compiles.
By now, IDE tools are generally clever enough to avoid even suggesting such a change, but it can still happen. Formatting A formatting inspection expresses a preference that affects only syntactic content. That is, applying the fix for the inspection will change the appearance, but will not change how the code compiles. Consistent formatting is very important for direct readability of the code, but also for avoiding spurious differences in files when inspecting commits. The fewer differences there are, the less likely it is that conflicts appear when merging or rebasing. </dl> <h>Local vs. CI</h> So we've examined inspections in detail and talked a lot about setting severity to optimize the developer feedback loop, i.e., we don't want to mess with a developer's priority queue unless absolutely necessary. But aren't there some things that we might allow a developer to do locally but not allow to pass CI? That's where the "warnings as errors" setting came in: it ensured that warnings never made it past the CI, even if the developer forgot to check something locally. For example, it's important to have consistent formatting before attempting a merge. There are other ways to encourage and support proper coding practices, though. <dl> Auto-formatting files Most IDEs have an option to run formatting or code cleanup automatically when files are saved. Consider setting this option in your solution. Developers will become accustomed to having the IDE lightly reformat even their WIP code and they will always have correctly formatted files. Local pre-commit hooks <div>Pre-commit hooks can run locally, running global formatting on the code base before a developer can commit. This is kind of touchy, as sometimes developers are just committing a <abbr title="Work in progress">WIP</abbr> to avoid losing their changes. It would be annoying if you had to clean up your formatting just to commit those.
You could include auto-formatting in the commit hook, but it's probably better to set up auto-formatting in the IDE.</div> Server pre-commit hooks <div>Instead of a local pre-commit hook, you can configure a pre-commit hook on the server. This hook could cause a push to be rejected if its head commit doesn't conform to certain conditions. But...isn't that what the CI is for? Well, kind of, but the CI runs only <i>after</i> the commits have landed on the server. It's preferable to have the developer fix commits locally before being able to push---again, to avoid "fix formatting" and "cleanup warnings" commits. You could choose which branch patterns to run these on.</div> </dl> My recommendation is to lean as heavily as possible on IDE configuration before getting lost in the weeds with commit hooks. <h>Avoiding ugly or "noisy" commits</h> As soon as we start talking about "fixes" for warnings or formatting, we're talking about "noisy" commits. If we enforce inspections more strictly on CI than we do locally, then there will be more "fixup" commits. OK, so what do we do about them? Squash 'em! Right? Right? 🫠 Kind of. Look, the PR machinery allows you to merge, rebase, squash-merge, or squash-rebase. That's OK, but it's not great. A lot of the time, you'll have four commits that are descriptive and semantically relevant, describing the changes that were made, as well as a few commits that address problems that came up either in CI or as part of the review. Don't you think you should squash those into the four commits and make a clean history instead of just squashing the whole lump into one big hairball? Or do you think that each PR should have only one commit, equating a branch with a commit (as plugins like e.g. <a href="https://graphite.dev/">Graphite</a> positively encourage)? I recently wrote <a href="{app}/view_article.php?id=4900">PRs suck. Stop trying to fix them.</a>, which also touches on the workflow outlined below.
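Squashing fix-up commits into the commits they belong to, by the way, has direct support in git. A sketch---the commit hash is just a placeholder for the commit the fix belongs to: <code>
# Record a review or CI fix that belongs in an earlier commit
git commit --fixup=a1b2c3d

# Later, fold all "fixup!" commits into their target commits automatically
git rebase -i --autosquash main

# Rewrite the (ephemeral) feature branch on the server without
# clobbering anyone else's work
git push --force-with-lease
</code>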
You see how tool configuration affects everything? You have to think about how your team builds PRs, how they review PRs, how they repair PRs after review---or whether they even use PRs. I would encourage a more real-time review culture, where possible. <ol> Set up the tools so a developer has a good chance of committing conforming code. When a developer has a set of commits they want to push, they ask another team member to review, live, explaining what they did. The live reviewer can point out any issues in the code and they can repair them <i>together</i>, all without wasting time writing and reading review comments. Those repairs can be squashed into the appropriate commits before anything's pushed to the server. Once on the server, the CI runs. If something still fails, the original developer can squash in fixes and force-push<fn> to keep the commits in the PR clean. You can create a PR to note the integration to master, but then it's mostly a formality. </ol> <hr> <ft>What's the problem? Don't you trust your team members to decide what to do with their own highly ephemeral feature branches? Allowing force-push encourages team members to care about what the commit history looks like. It gives them a tool that allows them to revise their commit history until it tells a coherent story. See <a href="{app}/view_article.php?id=3864">Rebase Considered Essential</a> for a longer discussion on rewriting commit history.</ft> <h>Configuring your solution</h> Phew! So, what have we learned? <ul> There is no one-size-fits-all solution. Errors are priority interrupts. The developer cannot execute until an error is fixed. Be judicious about which inspections you promote to errors. "Warnings as errors" is an inelegant configuration option that promotes <i>any</i> warning to an error. That's not very judicious at all. It has been replaced by a much more flexible and granular system in EditorConfig. Encourage a culture in your team that pays attention to warnings.
There will always be team members who do more than others. That's just life. Don't force the team members least equipped to deal with warnings to address them by making them errors. Baby steps. Silent inspections will only be applied when the tools apply them, e.g., with auto-formatting or refactoring. Consider when those inspections will be applied. Keep what's executed locally as close as possible to what's executed in CI. Consider how you want your whole process to work, right up through integration to the trunk/main/master. </ul> If that all sounds like a lot, well---it is. Building clean, maintainable code is a complex undertaking. There are a lot of tools that can help, but you have to put some time into thinking about how you want to use them, and then into configuring them so that they help you instead of getting in your way. It's a delicate balancing act: to give developers the best chance of (A) producing conforming code in the first place and (B) avoiding "noisy" commits, while (C) not hitting them with priority interrupts irrelevant to what they're working on. There will be tradeoffs. <h>Sharing or copying configuration?</h> Once you've set up a couple of solutions, you can just copy/paste the configuration to others as a starting point. Remember, though, that solutions are usually pretty unique. Only consider generalizing or packaging a configuration if you've considered the following: <ul> It will be more difficult for solutions that use the package to override settings that turn out not to be as standard as you thought. It will be more time-consuming to make changes to the configuration because you have to roll out a new version---e.g., when the underlying tools change---with potentially multiple stakeholder solutions. This extra work may discourage solutions from improving the shared configuration. Instead, it will languish, with all users annoyed by the same inconsistencies, but no-one willing to do the work to address them.
</ul> For these reasons, each solution having its own copy of the configuration is probably better. They can just copy/paste---the horror!---improvements where appropriate. If you're worried about configurations drifting out of sync, schedule a work item every few sprints to evaluate and possibly re-sync them. <h>Conclusion</h> There are always trade-offs. Improving code quality is an incremental process. So is configuring the tools that support that process. It gets easier with practice. Good luck!