QualityGate: Making the Build Fail Before the Code Rots
The Problem: Quality Is Invisible Until It's Too Late
Most .NET teams discover code quality problems reactively. A method quietly grows to 200 lines. A class accumulates 40 dependencies. A namespace drifts into the zone of pain. The DDD boundaries you carefully designed erode silently. By the time someone notices, the cost of fixing it has compounded.
The standard response is "add SonarQube" or "enable Roslyn analyzers." These tools are valuable, but they share a limitation: they operate on individual rules (don't use var, don't exceed N lines) without understanding the architectural health of your codebase. They don't tell you whether your namespaces respect the Main Sequence, whether your types are cohesive, or whether your tests actually kill mutants.
What if you could define a quality gate -- a set of thresholds for complexity, coupling, cohesion, coverage, and mutation testing -- and have the build fail if any of them are violated? What if the entire analysis ran from Roslyn syntax trees, produced a JSON report, and served an interactive dashboard?
That's what FrenchExDev.Net.QualityGate does.
Architecture: Four Interfaces, Six Analyzers, One Engine
The design follows a simple principle: analyzers are pure functions, infrastructure is injected.
```text
QualityEngine (orchestrator)
|-- ISolutionLoader → MsBuildSolutionLoader (Roslyn MSBuild)
|-- ICoverageReportParser → DefaultCoverageReportParser (Cobertura XML)
|-- IMutationReportParser → DefaultMutationReportParser (Stryker JSON)
|-- IReportWriter → DefaultReportWriter (filesystem)
|
|-- ProjectAnalyzer (static, per-project)
|   |-- InterfaceAnalyzer (interface discovery, orphans)
|   |-- ApiSurfaceAnalyzer (public API counting)
|   |-- CouplingAnalyzer (Ca/Ce at namespace + type level)
|   +-- TypeMetricsBuilder (aggregates per-type metrics)
|       |-- CohesionAnalyzer (LCOM4 via union-find)
|       +-- ComplexityAnalyzer (cyclomatic, cognitive, LOC, MI)
|
+-- QualityGateEvaluator (pure function: report × thresholds → violations)
```

The QualityEngine orchestrates the entire pipeline:
```csharp
public class QualityEngine
{
    private readonly QualityGateConfig _config;
    private readonly ISolutionLoader _solutionLoader;
    private readonly ICoverageReportParser _coverageParser;
    private readonly IMutationReportParser _mutationParser;
    private readonly IReportWriter _reportWriter;

    public QualityEngine(
        QualityGateConfig config,
        ISolutionLoader? solutionLoader = null,
        ICoverageReportParser? coverageParser = null,
        IMutationReportParser? mutationParser = null,
        IReportWriter? reportWriter = null)
    {
        _config = config;
        _solutionLoader = solutionLoader ?? new MsBuildSolutionLoader();
        _coverageParser = coverageParser ?? new DefaultCoverageReportParser();
        _mutationParser = mutationParser ?? new DefaultMutationReportParser();
        _reportWriter = reportWriter ?? new DefaultReportWriter();
    }

    public async Task<QualityReport> AnalyzeAsync(CancellationToken ct = default)
    {
        var solution = await _solutionLoader.LoadAsync(_config.Solution);
        var projectMetricsList = new List<ProjectMetrics>();
        foreach (var project in solution.Projects)
        {
            ct.ThrowIfCancellationRequested();
            projectMetricsList.Add(await ProjectAnalyzer.AnalyzeAsync(project, solution, ct));
        }

        var solutionDir = Path.GetDirectoryName(Path.GetFullPath(_config.Solution)) ?? ".";
        var report = new QualityReport
        {
            SolutionPath = _config.Solution,
            Timestamp = DateTimeOffset.UtcNow,
            Projects = projectMetricsList,
            Coverage = _coverageParser.TryParseGlobs(solutionDir, _config.CoverageGlobs),
            Mutation = _mutationParser.TryParseGlobs(solutionDir, _config.MutationGlobs)
        };

        return new QualityReport
        {
            /* ... same fields ... */
            GateResults = QualityGateEvaluator.Evaluate(report, _config.Gates)
        };
    }
}
```

Every infrastructure dependency is behind an interface. Every analyzer is a static class with no state. The engine ties them together and nothing else.
The Six Analyzers
1. ComplexityAnalyzer: Cyclomatic, Cognitive, Maintainability
The ComplexityAnalyzer walks Roslyn syntax trees to compute four metrics per method:
Cyclomatic Complexity counts decision points. One for the method, plus one for each if, for, foreach, while, do, catch, case, ternary, conditional access, and logical operator:
```csharp
public static int CyclomaticComplexity(SyntaxNode methodBody)
{
    int complexity = 1;
    foreach (var node in methodBody.DescendantNodes())
    {
        if (IsDecisionPoint(node))
            complexity++;
    }
    return complexity;
}

private static bool IsDecisionPoint(SyntaxNode node)
{
    return node switch
    {
        IfStatementSyntax => true,
        ForStatementSyntax => true,
        ForEachStatementSyntax => true,
        WhileStatementSyntax => true,
        DoStatementSyntax => true,
        CatchClauseSyntax => true,
        ConditionalExpressionSyntax => true,
        CaseSwitchLabelSyntax => true,
        ConditionalAccessExpressionSyntax => true,
        BinaryExpressionSyntax binary => IsLogicalOperator(binary),
        _ => false
    };
}
```

Cognitive Complexity is nesting-aware. A nested if inside a for inside a try is harder to understand than three sequential if statements, even though they have the same cyclomatic complexity. Each decision point adds 1 + the current nesting depth:
```csharp
private static void ComputeCognitive(SyntaxNode node, int nesting, ref int total)
{
    foreach (var child in node.ChildNodes())
    {
        var (isIncrement, isNesting) = ClassifyCognitiveNode(child);
        if (child is ElseClauseSyntax)
            total += 1;
        else if (isIncrement)
            total += 1 + nesting;
        ComputeCognitive(child, isNesting ? nesting + 1 : nesting, ref total);
    }
}
```

Logical Lines of Code counts statement nodes (excluding blocks) to avoid counting braces.
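The difference between the two metrics is easiest to see on two small methods. The `NestingDemo` class below is illustrative (not part of the library); the scores in the comments are worked out by hand using the rules above:

```csharp
using System;

public static class NestingDemo
{
    // Three sequential guards:
    // cyclomatic = 1 + 3 = 4, cognitive = 1 + 1 + 1 = 3 (each if at nesting 0).
    public static int Sequential(bool a, bool b, bool c)
    {
        int n = 0;
        if (a) n++;
        if (b) n++;
        if (c) n++;
        return n;
    }

    // Same number of decision points, but nested:
    // cyclomatic is still 4, yet cognitive = 1 + 2 + 3 = 6,
    // because each if adds 1 + its nesting depth.
    public static int Nested(bool a, bool b, bool c)
    {
        int n = 0;
        if (a)
        {
            n++;
            if (b)
            {
                n++;
                if (c) n++;
            }
        }
        return n;
    }
}
```

Cyclomatic complexity sees the two as identical; cognitive complexity doubles for the nested version, which matches intuition about readability.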
Maintainability Index uses a Halstead-based formula clamped to [0, 100]:
```csharp
public static double MaintainabilityIndex(int cyclomaticComplexity, int linesOfCode)
{
    double vocabulary = Math.Max(2.0 * linesOfCode, 2.0);
    double halsteadVolume = Math.Max(linesOfCode * Math.Log2(vocabulary), 1.0);
    double mi = 171.0
        - 5.2 * Math.Log(halsteadVolume)
        - 0.23 * cyclomaticComplexity
        - 16.2 * Math.Log(linesOfCode);
    return Math.Clamp(mi * 100.0 / 171.0, 0.0, 100.0);
}
```

2. CohesionAnalyzer: LCOM4 via Union-Find
LCOM4 (Lack of Cohesion of Methods) answers the question: "does this class have a reason to exist as one class, or is it really two classes glued together?"
The algorithm builds a graph where nodes are methods and fields. An edge connects a method to a field it reads/writes, or to another method it calls. The number of connected components in this graph is the LCOM4 score:
- LCOM4 = 1: perfectly cohesive -- every method and field is reachable from every other
- LCOM4 > 1: the class has disconnected groups of methods/fields and should probably be split
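The connected-component idea can be sketched without Roslyn at all. The toy `Lcom4Demo` below is hypothetical (the library's real union-find operates on Roslyn symbols, not strings), but it shows the core mechanics:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Toy union-find over member names with path compression.
public static class Lcom4Demo
{
    public static int CountComponents(string[] members, (string A, string B)[] edges)
    {
        var parent = members.ToDictionary(m => m, m => m);

        string Find(string x) => parent[x] == x ? x : parent[x] = Find(parent[x]);
        void Union(string a, string b) => parent[Find(a)] = Find(b);

        foreach (var (a, b) in edges) Union(a, b);

        // Distinct roots = connected components = LCOM4.
        return members.Select(Find).Distinct().Count();
    }
}
```

A class whose `Add`/`GetTotal` methods share a `_total` field, plus an unrelated `Log` method touching only `_logger`, yields two components: LCOM4 = 2, a hint that it is really two classes glued together.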
The implementation uses a union-find (disjoint-set) data structure for O(α(n)) component counting:
```csharp
public static int Lcom4(INamedTypeSymbol typeSymbol, SemanticModel model, SyntaxNode typeDeclaration)
{
    var methods = CollectMethods(typeSymbol);
    var fields = CollectFields(typeSymbol);
    if (methods.Count + fields.Count == 0)
        return 0;

    var uf = new UnionFind();
    uf.Initialize(methods, fields);
    var methodDeclarations = MapMethodDeclarations(typeDeclaration, model, methods);
    BuildEdges(methods, methodDeclarations, model, fields, uf);
    return uf.CountComponents(methods, fields);
}
```

The BuildEdges method walks each method's syntax tree looking for identifiers that resolve to fields or other methods of the same class:
```csharp
private static void ScanBodyForEdges(
    IMethodSymbol method, SyntaxNode body, SemanticModel model,
    HashSet<IFieldSymbol> fieldSet, HashSet<IMethodSymbol> methodSet, UnionFind uf)
{
    foreach (var identifier in body.DescendantNodes().OfType<IdentifierNameSyntax>())
    {
        var symbol = model.GetSymbolInfo(identifier).Symbol;
        if (symbol is IFieldSymbol f && fieldSet.Contains(f))
            uf.Union(method, f);
        else if (symbol is IMethodSymbol m && methodSet.Contains(m))
            uf.Union(method, m);
    }
}
```

3. CouplingAnalyzer: Afferent, Efferent, and the Main Sequence
Coupling analysis operates at two levels:
Namespace-level computes Robert C. Martin's package metrics:
- Ca (Afferent Coupling): how many types outside this namespace reference types inside it
- Ce (Efferent Coupling): how many types inside this namespace reference types outside it
From these, the NamespaceMetrics model computes derived metrics:
```csharp
public class NamespaceMetrics
{
    // Raw counts populated by the coupling analysis.
    public int TypeCount { get; set; }
    public int AbstractTypeCount { get; set; }
    public int AfferentCoupling { get; set; }   // Ca
    public int EfferentCoupling { get; set; }   // Ce

    public double Abstractness => TypeCount == 0 ? 0 : (double)AbstractTypeCount / TypeCount;

    public double Instability =>
        AfferentCoupling + EfferentCoupling == 0
            ? 0
            : (double)EfferentCoupling / (AfferentCoupling + EfferentCoupling);

    public double DistanceFromMainSequence => Math.Abs(Abstractness + Instability - 1);
}
```

The Main Sequence is the line where Abstractness + Instability = 1. Namespaces far from this line are either in the Zone of Pain (concrete and stable -- hard to change) or the Zone of Uselessness (abstract and unstable -- unused abstractions).
Type-level computes efferent coupling per class: the count of distinct external types that a given type references.
4. InterfaceAnalyzer: Discovery and Orphan Detection
Scans each project to find all interfaces, their implementations, and orphan interfaces -- interfaces with no implementations. Orphans are a code smell: either dead code, or a missing implementation that should exist.
5. ApiSurfaceAnalyzer: Public API Tracking
Counts public types, methods, and properties. Useful for tracking API surface growth over time -- a signal that a library is becoming too broad.
6. TypeMetricsBuilder: Aggregation Engine
The TypeMetricsBuilder ties everything together at the type level. For each type in the compilation, it:
- Classifies the type kind (Class, Interface, Struct, Record, Enum)
- Computes LCOM4 via CohesionAnalyzer
- Computes efferent coupling via CouplingAnalyzer
- For each method: computes cyclomatic complexity, cognitive complexity, LOC, and maintainability index via ComplexityAnalyzer
- Computes inheritance depth by walking the base type chain
- Groups everything by namespace
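The inheritance-depth step is the simplest of these: walk the base-type chain and count hops. A sketch of the same walk, shown over `System.Type` for brevity (the real builder walks Roslyn's `INamedTypeSymbol.BaseType` instead):

```csharp
using System;

public static class DepthSketch
{
    // Counts base-type hops: object → 0, a direct subclass of object → 1, etc.
    public static int Depth(Type type)
    {
        int depth = 0;
        for (var current = type.BaseType; current != null; current = current.BaseType)
            depth++;
        return depth;
    }
}
```

With this counting, a `max-inheritance-depth: 5` gate fires once a class sits more than five hops below the root of its hierarchy.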
Quality Gate Evaluation
The QualityGateEvaluator is a pure function: it takes a QualityReport and GateThresholds, and returns a list of violations. The evaluation is hierarchical:
```csharp
public static List<QualityGateResult> Evaluate(QualityReport report, GateThresholds thresholds)
{
    var results = new List<QualityGateResult>();

    foreach (var project in report.Projects)
    foreach (var ns in project.Namespaces)
    {
        EvaluateNamespace(ns, thresholds, results);          // Distance from Main Sequence
        foreach (var type in ns.Types)
        {
            EvaluateType(type, thresholds, results);         // Coupling, LCOM, inheritance depth
            foreach (var method in type.Methods)
                EvaluateMethod(method, thresholds, results); // CC, cognitive, MI
        }
    }

    EvaluateDuplication(report, thresholds, results); // Code duplication %
    EvaluateTestQuality(report, thresholds, results); // Coverage + mutation score
    return results;
}
```

The Test Quality Score combines code coverage and mutation testing into a single number:
```csharp
private static double? ComputeTestQualityScore(QualityReport report)
{
    double? coverage = report.Coverage?.BranchRate;
    double? mutation = report.Mutation?.MutationScore;
    return (coverage, mutation) switch
    {
        (not null, not null) => (coverage.Value + mutation.Value) / 2.0,
        (not null, null) => coverage.Value,
        (null, not null) => mutation.Value,
        _ => null
    };
}
```

Why average coverage and mutation score? Because 100% line coverage with 50% mutation score means half your tests are asserting nothing useful. Mutation testing (via Stryker) modifies your code and checks whether tests detect the change. If a mutant survives, your test didn't actually verify that behavior.
Configuration: YAML-Driven Thresholds
Everything is controlled from a single quality-gate.yml:
```yaml
solution: MyApp.slnx
coverage:
  - "**/coverage.cobertura.xml"
mutations:
  - "**/mutation-report.json"
output: .quality-gate/
gates:
  max-cyclomatic-complexity: 15
  max-cognitive-complexity: 20
  max-class-coupling: 20
  max-inheritance-depth: 5
  min-maintainability-index: 60
  max-lcom: 3
  max-distance-from-main-sequence: 0.3
  max-duplication-percent: 5
  min-test-quality-score: 0.80
```

The GateThresholds class maps directly to this YAML via YamlDotNet:
```csharp
public class GateThresholds
{
    [YamlMember(Alias = "max-cyclomatic-complexity")]
    public int MaxCyclomaticComplexity { get; set; } = 15;

    [YamlMember(Alias = "max-cognitive-complexity")]
    public int MaxCognitiveComplexity { get; set; } = 20;

    [YamlMember(Alias = "max-class-coupling")]
    public int MaxClassCoupling { get; set; } = 20;

    [YamlMember(Alias = "min-maintainability-index")]
    public double MinMaintainabilityIndex { get; set; } = 60;

    [YamlMember(Alias = "max-lcom")]
    public int MaxLcom { get; set; } = 3;

    [YamlMember(Alias = "max-distance-from-main-sequence")]
    public double MaxDistanceFromMainSequence { get; set; } = 0.3;

    [YamlMember(Alias = "min-test-quality-score")]
    public double MinTestQualityScore { get; set; } = 0.80;
}
```

Every threshold has a sensible default. Start there, then ratchet tighter as your codebase improves.
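Loading the mapping is a one-liner with YamlDotNet. The sketch below deserializes just the gates section directly into `GateThresholds` for brevity; the tool's actual loader presumably reads the full quality-gate.yml into a parent config type:

```csharp
using System;
using YamlDotNet.Serialization;

var yaml = """
    max-cyclomatic-complexity: 10
    min-test-quality-score: 0.85
    """;

var gates = new DeserializerBuilder().Build().Deserialize<GateThresholds>(yaml);

// Keys absent from the YAML keep their C# defaults,
// e.g. MaxCognitiveComplexity stays at 20.
Console.WriteLine(gates.MaxCyclomaticComplexity);
```

Because defaults live in the C# property initializers, a partial quality-gate.yml only overrides what it names: the ratchet can start small.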
The CLI: From Init to CI/CD
The tool ships as a .NET local tool with seven commands:
| Command | What It Does |
|---|---|
| `init` | Scaffolds `quality-gate.yml` + `coverage.runsettings` for your solution |
| `validate` | Checks config validity before running |
| `test` | Runs tests with XPlat Code Coverage, then full analysis |
| `analyze` | Analysis only (uses existing coverage data) |
| `check` | CI/CD mode -- exits with code 1 on any gate failure |
| `interfaces` | Prints interface-to-implementation map |
| `serve` | Launches interactive dashboard at localhost:3000 |
Development Workflow
```shell
# First-time setup
dotnet quality-gate init
dotnet quality-gate validate

# During development: interactive mode
# (runs tests, analyzes, serves dashboard, waits for Enter to re-run)
dotnet quality-gate test --interactive

# Watch mode: auto-reruns on *.cs / *.csproj save
dotnet quality-gate test --loop --watch --serve
```

The `--interactive` flag is an alias for `--loop --manual --serve`. After each run, the SPA dashboard auto-reloads via WebSocket, and the CLI waits for Enter to re-run. This creates a tight feedback loop: write code, press Enter, see quality metrics update in real time.
CI/CD Integration
```shell
# Exits 1 if any gate fails -- designed for pipeline steps
dotnet quality-gate check
```

No serve, no loop, no interaction. Just analysis and a return code. Wire this into your PR checks and quality degradation becomes a build failure.
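In a CI pipeline this is a single step. A hypothetical GitHub Actions fragment, assuming quality-gate is registered in the repository's local tool manifest:

```yaml
- name: Quality gate
  run: |
    dotnet tool restore
    dotnet quality-gate check   # non-zero exit code fails the job
```

The same two commands work unchanged in Azure Pipelines or GitLab CI, since the gate contract is just the process exit code.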
Report Pipeline and External Integrations
Cobertura (Code Coverage)
The CoberturaParser ingests the standard XPlat Code Coverage XML format produced by dotnet test --collect:"XPlat Code Coverage". It extracts:
- Overall line rate and branch rate
- Per-class coverage breakdown
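The overall rates live as attributes on the Cobertura root element, so the heart of such a parser is small. A sketch, assuming the standard `<coverage line-rate="…" branch-rate="…">` root (the library's parser also walks per-class elements):

```csharp
using System;
using System.Xml.Linq;

public static class CoberturaSketch
{
    // Extracts the solution-wide line and branch rates from a Cobertura report.
    public static (double LineRate, double BranchRate) ParseRates(string xml)
    {
        var root = XDocument.Parse(xml).Root
                   ?? throw new InvalidOperationException("empty coverage report");
        return (
            (double?)root.Attribute("line-rate") ?? 0.0,
            (double?)root.Attribute("branch-rate") ?? 0.0);
    }
}
```

The branch rate (not the line rate) is what feeds the test quality score, because branch coverage is the stricter of the two signals.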
Stryker (Mutation Testing)
The StrykerReportParser ingests Stryker's JSON mutation report. For each mutant, it tracks:
- Killed: test detected the change (good)
- Survived: test didn't detect the change (bad -- assertion gap)
- NoCoverage: no test executed the mutated code
- Timeout: mutation caused infinite loop (usually killed)
The mutation score (killed / total) feeds into the test quality score.
Output
Each run produces:
- `report.json`: complete analysis in camelCase JSON -- every metric for every method, type, namespace, and project
- `summary.txt`: human-readable overview with gate pass/fail status
- `runs.json`: manifest tracking historical runs (for trend analysis)
- SPA dashboard: interactive HTML report served via `npx serve`
Testing Without MSBuild
The four abstraction interfaces exist primarily for testing. Unit tests never touch MSBuild, the filesystem, or real coverage files:
```csharp
// In-memory solution from C# source code
var engine = new QualityEngine(
    new QualityGateConfig { Solution = "fake.slnx" },
    solutionLoader: FakeSolutionLoader.WithSource("class C {}"),
    coverageParser: new FakeCoverageParser(someCoverage));

var report = await engine.AnalyzeAsync();
```

For analyzer tests, RoslynTestHelper creates in-memory compilations:
```csharp
var project = RoslynTestHelper.CreateProject("""
    namespace Ns
    {
        public class Calculator
        {
            private int _total;
            public void Add(int x) { _total += x; }
            public int GetTotal() => _total;
        }
    }
    """);

// Both methods reference _total → 1 connected component → perfectly cohesive
var lcom = CohesionAnalyzer.Lcom4(typeSymbol, model, typeDeclaration);
lcom.ShouldBe(1);
```

No mocking framework. Just hand-written fakes and direct construction. The test suite targets 100% line and branch coverage, with defensive null-checks on Roslyn internals marked [ExcludeFromCodeCoverage] with justification comments.
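A hand-written fake like FakeSolutionLoader needs only a few lines on top of Roslyn's `AdhocWorkspace`. A sketch -- the `ISolutionLoader` shape is assumed from the constructor shown earlier and included here for self-containment:

```csharp
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Text;

// Assumed shape of the seam (the real interface lives in the library).
public interface ISolutionLoader
{
    Task<Solution> LoadAsync(string solutionPath);
}

public sealed class FakeSolutionLoader : ISolutionLoader
{
    private readonly string _source;
    private FakeSolutionLoader(string source) => _source = source;

    public static FakeSolutionLoader WithSource(string source) => new(source);

    // Builds an in-memory solution instead of invoking MSBuild.
    public Task<Solution> LoadAsync(string solutionPath)
    {
        var workspace = new AdhocWorkspace();
        var project = workspace.AddProject("Fake", LanguageNames.CSharp);
        workspace.AddDocument(project.Id, "Fake.cs", SourceText.From(_source));
        return Task.FromResult(workspace.CurrentSolution);
    }
}
```

Because `AdhocWorkspace` is a real Roslyn workspace, everything downstream -- compilations, semantic models, the analyzers -- behaves exactly as it would against an MSBuild-loaded solution.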
The Metrics at a Glance
| Level | Metric | What It Catches |
|---|---|---|
| Method | Cyclomatic Complexity | Too many decision paths → hard to test |
| Method | Cognitive Complexity | Deep nesting → hard to understand |
| Method | Maintainability Index | Combined volume + complexity → likely to rot |
| Type | LCOM4 | Disconnected method groups → class should be split |
| Type | Efferent Coupling | Too many dependencies → fragile to change |
| Type | Inheritance Depth | Deep hierarchies → rigid, hard to modify |
| Namespace | Distance from Main Sequence | Zone of Pain or Uselessness → architectural imbalance |
| Solution | Code Duplication % | Copy-paste → maintenance multiplier |
| Solution | Test Quality Score | Coverage + mutation combined → are tests actually testing? |
Ratcheting: How to Onboard an Existing Codebase
You don't need a clean codebase to start. The recommended approach:
1. `dotnet quality-gate init` -- generate config with defaults
2. `dotnet quality-gate test` -- see what fails
3. Adjust thresholds in `quality-gate.yml` to match the current state
4. Commit the config -- this is your baseline
5. After each improvement, tighten the threshold by one notch
6. Eventually reach strict targets: 100% coverage, low complexity, LCOM4 ≤ 3
The key insight: quality gates are not aspirational goals. They are ratchets. They encode the worst the codebase is allowed to be, and they only move in one direction.
Design Principles
Pure Functions Over Abstractions
All six analyzers are static classes. They take Roslyn types in and return model types out. No interfaces, no state, no DI registration. This makes them trivially testable and impossible to misconfigure.
Interfaces exist only at the four infrastructure seams where I/O happens: loading solutions, parsing reports, writing output. This is the minimum surface needed to substitute fakes in tests.
Hierarchical Evaluation
Quality is not a single number. A method can be complex without its class lacking cohesion; a class can be cohesive yet sit in a namespace that has drifted from the Main Sequence. The evaluation walks the full hierarchy: method → type → namespace → solution. Each violation names the specific element that failed.
Configuration-Driven, Not Convention-Driven
Every threshold is explicit in quality-gate.yml. There are no hidden rules, no "opinionated defaults" that silently fail your build. If a gate triggers, you can look at the config and see exactly why.
Temporal Tracking
Each run writes to a timestamped directory and updates a runs.json manifest. This enables trend analysis: is the codebase getting better or worse over time? The interactive dashboard can show this history.
What This Replaces
| Traditional Approach | QualityGate Approach |
|---|---|
| SonarQube server (Java, PostgreSQL, setup overhead) | Single dotnet tool, YAML config, no server |
| Individual Roslyn analyzers (per-rule, no architecture view) | Holistic: complexity + coupling + cohesion + coverage + mutations |
| Code review "please simplify this" | Build fails before review: dotnet quality-gate check |
| Periodic "tech debt" sprints | Continuous ratcheting: thresholds tighten as code improves |
| Dashboard tools that show metrics but don't enforce | Gate evaluation with exit code 1 in CI |
The goal is not to replace human judgment. It's to ensure that quality cannot silently degrade. The dashboard shows the state. The gates enforce the floor. The ratchet ensures the floor only moves up.
QualityGate closes the feedback loop on architectural decisions. DDD gives you aggregate boundaries and layered architecture. Modeling gives you compile-time DSL validation. QualityGate gives you the runtime proof that those boundaries hold -- and the build failure when they don't.