
How We Increased Code Coverage by 28% Without Writing a Single Test

Tom Noah
Apr 26 - 5 min read

In our Engineering Energizers Q&A series, we highlight the engineering minds driving innovation across Salesforce. Today, we spotlight Tom Noah, Senior Software Engineer on Salesforce’s Security Mesh platform team, who integrates signals from CrowdStrike, Okta, and internal event monitoring systems to analyze massive security data streams. As part of this work, Tom increased code coverage by 28% without adding a single test, simply by restructuring how the team’s data models were implemented.

Learn how Tom navigated CI-enforced, module-level code coverage thresholds that blocked feature development in low-coverage modules, how he addressed the distortion of code coverage metrics, and much more.

What is your team’s mission as it relates to building high-quality, reliable systems within the Security Mesh platform?

We build systems that reliably detect and surface security signals across integrated providers. Our team aggregates events from external and internal sources to identify anomalies that indicate malicious activity.

Because these insights directly impact trust, correctness and reliability remain critical. We design our services so that engineers can confidently evolve detection logic without introducing regressions.

This goal requires strong validation and an adaptable codebase for new integrations. In this environment, code coverage acts as a gating factor for shipping detection capabilities while maintaining system integrity.

What constraints did CI-enforced module-level code coverage introduce when it began blocking feature development?

CI-enforced code coverage at the module level created a constraint where validation depended on specific modified files. While the overall project met the threshold, individual modules often fell below the required bar.

Developing a new feature in these low-coverage modules failed validation and blocked merges. This mismatch forced engineers to address unrelated coverage gaps instead of delivering functionality.

Writing tests for code unrelated to the feature extended timelines and increased overhead. This granular constraint exposed structural inefficiencies that required a systemic solution.

What challenges did auto-generated code introduce in accurately measuring code coverage?

Auto-generated code in CI pipelines distorted coverage metrics by including logic outside the core business functionality. Measurements included generated methods like getters, setters, and utility functions that did not represent meaningful system behavior.

This inclusion made coverage percentages appear lower for the logic that mattered. Consequently, engineers felt pushed to write tests for code already validated by underlying libraries.

These tests increased volume without improving system confidence. The metric no longer reflected real quality because non-essential code was measured alongside critical logic.

The original code contained many redundant methods before the refactor.

The update ensures we measure only relevant methods for code coverage.
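
As a generic illustration of where that redundancy comes from (a minimal sketch with hypothetical names, not the team’s actual code, assuming Lombok, which supplies the @Data and @Value annotations discussed later): a single annotated class expands at compile time into many generated methods, and a line-coverage tool counts every one of them.

```java
import lombok.Data;

// A hypothetical event model. The @Data annotation generates getters,
// setters, equals(), hashCode(), canEqual(), and toString() at compile
// time. Unless generated code is explicitly excluded, a coverage tool
// counts all of those lines, so a trivial data class can add dozens of
// uncovered lines to a module's denominator.
@Data
public class SecurityEvent {
    private String provider;
    private String eventType;
    private long timestamp;
    private String payload;
}
```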

What engineering tradeoffs made adding more tests the wrong solution for meeting code coverage requirements in this case?

Adding more tests to increase coverage offered an easy path but introduced several downsides. This approach increased CI runtime, which slowed feedback loops and consumed more resources during pipeline executions.

Tests also define the system contract. Writing tests for non-essential code creates a false contract that future developers must preserve, even when the behavior lacks significance. This makes refactoring harder and increases the risk of breaking tests that do not reflect true requirements.
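
For example (a hypothetical sketch, assuming JUnit 5 and a generated data class like the one above), a test written only to lift the coverage number ends up pinning generated behavior that carries no real requirement:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// This test raises line coverage but asserts nothing about detection
// logic: it only re-verifies a generated getter/setter pair that the
// underlying library already guarantees. Once it exists, future changes
// to the model must keep it green, even though it encodes no genuine
// requirement.
class SecurityEventTest {

    @Test
    void getterReturnsWhatTheSetterStored() {
        SecurityEvent event = new SecurityEvent();
        event.setProvider("example-provider");
        assertEquals("example-provider", event.getProvider());
    }
}
```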

These tests also mislead AI-assisted development tools, which interpret them as valid constraints and avoid better implementations. Furthermore, more tests increase the codebase size, impacting maintainability and review complexity.

Adding tests would have improved the metric while degrading overall system quality. Instead of increasing test volume, we shifted the focus from testing to system design.

What step-by-step changes did you make to increase code coverage without adding new tests?

This approach increased code coverage by reducing unnecessary code instead of increasing test volume. We treated coverage as a structural issue within our data model implementation rather than a testing problem. The process followed these steps:

  1. Identify coverage distortion: We isolated data classes where annotations generated boilerplate code. These classes inflated the denominator in coverage calculations without adding business logic.
  2. Identify immutability candidates: Many classes were annotated with @Data, which made them mutable and generated a large amount of redundant code behind the scenes. We examined which classes genuinely needed to be mutable and which could become immutable; immutability not only eliminated the redundant code but also improved thread safety, in line with modern Java best practices.
  3. Classify data models: We evaluated each immutable class to determine whether it served as a pure data carrier or a model with behavior. This dictated the appropriate structure for each.
  4. Use leaner constructs: For simple carriers, Java records replaced annotation-driven implementations. For complex models, the @Value annotation retained behavior while reducing redundancy (see the sketch below).
  5. Introduce the Builder pattern: We used the Builder pattern to construct new instances during object enrichment. This preserved immutability while supporting data transformations.

Restructuring data models removed redundant code and reduced the total lines measured. This reduction improved the coverage ratio naturally, resulting in a 28% increase without adding any tests.
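
A rough sketch of the kind of restructuring involved (illustrative names only, assuming Lombok for @Value and @Builder; not the team’s actual models): simple carriers become Java records, while models that carry behavior keep an immutable Lombok @Value class.

```java
import lombok.Builder;
import lombok.Value;

// Simple data carrier: a record declares the same data in a single line
// and leaves far fewer generated lines for the coverage tool to count.
record EventSource(String provider, String region) { }

// Complex model: @Value keeps every field private and final, so the class
// stays immutable, while real behavior lives in ordinary methods.
// @Builder(toBuilder = true) supports constructing new instances during
// enrichment instead of mutating fields in place.
@Value
@Builder(toBuilder = true)
class EnrichedEvent {
    String provider;
    String eventType;
    long timestamp;
    String severity;

    boolean isHighSeverity() {
        return "HIGH".equalsIgnoreCase(severity);
    }
}
```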

What design constraints shaped your transition from mutable data structures to immutable patterns such as records and value annotations?

The main constraint involved balancing immutability with the need to evolve and enrich objects. While immutable data structures in Java reduce complexity and improve predictability, they also limit direct modification.

The process followed these steps:

  1. Distinguish between data models: We separated simple data carriers from complex models to work out how each should handle modification.
  2. Implement records: We used Java records for pure data carriers to keep them lean and efficient.
  3. Apply the @Value annotation: We used Lombok’s @Value annotation for cases requiring complex behavior or inheritance.
  4. Integrate the Builder pattern: We used the Builder pattern to handle transformations by constructing new objects from existing ones (see the sketch below).

This approach ensured flexibility without reintroducing unnecessary code. It reduced code volume, improved thread safety in a distributed system, and maintained a clean design.
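
A minimal sketch of that enrichment flow (hypothetical, building on the EnrichedEvent model sketched above and Lombok’s toBuilder support): each transformation copies the existing immutable instance into a builder, overrides only what changes, and produces a new object, leaving the original untouched.

```java
// Enrichment without mutation: the original immutable event is never
// modified, which keeps it safe to share across threads in a distributed
// pipeline; the transformation simply yields a new instance.
public class EventEnricher {

    public EnrichedEvent withSeverity(EnrichedEvent original, String severity) {
        return original.toBuilder()
                .severity(severity)
                .build();
    }
}
```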

What challenges did large amounts of generated code create when working with AI-assisted development tools?

Excessive auto-generated code created noise that disrupted the performance of AI development tools and LLM agents. Since these tools function within restricted context windows, heavy boilerplate takes up valuable room intended for essential logic. This clutter caused models to focus on minor details or maintain poor patterns because the core logic remained hidden behind generated distractions.

Redundant tests also created issues by validating flawed assumptions about how the system works. Models viewed these tests as fixed requirements, which frequently blocked superior technical solutions. Minimizing repetitive code and removing these false constraints restored clarity, enabling AI tools to function with better precision and speed.
