Why We Stopped Treating Our Monorepo as One SonarQube Project
Mar 2026 · 2 min read
Image source: Pexels
As our monolithic application, with a .NET backend and a TypeScript frontend, grew, so did the complexity of managing code quality. With 7 to 8 developers sharing a single codebase, maintaining consistent standards became increasingly difficult: quality issues, overlooked edge cases, and security vulnerabilities in third-party dependencies were surfacing only after code had already been pushed. Peer reviews helped, but weren’t enough to consistently catch problems across both layers.
We needed an automated way to enforce code quality and detect security issues early, before changes reached production. That’s when we decided to implement SonarQube and integrate it into our CI/CD pipeline.
In this first installment of our series on scaling code analysis, we are going to focus on the technical foundation: how we configured SonarQube for our dual-stack environment and the steps we took to integrate it with Bitbucket.
Setting Up SonarQube Cloud on Bitbucket
Before running any scans, we needed to set up SonarQube Cloud and connect it to our Bitbucket repository.
Choosing SonarQube Cloud (Free Plan)
We chose SonarQube Cloud over a self-hosted instance. This allowed us to bypass the overhead of managing infrastructure, databases, and server maintenance, and to focus instead on immediate integration. We used the Free Plan, which meant relying on out-of-the-box quality profiles and gates (Sonar Way), with no advanced rule customization or enterprise governance features. The focus was on getting immediate visibility into bugs, vulnerabilities, and code smells rather than deep configuration.
Bitbucket Requirement: Paid Workspace
One prerequisite became clear early on. To enable full integration between SonarQube Cloud and Bitbucket, particularly for pull request analysis and decoration, the Bitbucket workspace must be on a paid plan. Without it, pull request decoration won’t work properly, and some automated analysis features are restricted. Upgrading the workspace was a necessary step before completing the integration.
SonarQube Strategy for Our Monolith
When setting up SonarQube for a monolith, you generally face a fork in the road: scan the entire repository as a single, massive project or decouple it into multiple SonarQube projects. For our transition, we chose the latter.
Our Repository Structure
Our codebase is a monorepo containing four distinct components: two .NET API projects and two TypeScript UI projects. Although they live in one repository, they operate with independent build processes, distinct languages, and separate release cadences. Treating these as a single unit was impractical from the start.
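For illustration, the layout looks roughly like this (the directory names below are hypothetical; only the four-component split reflects our actual setup):

```
monorepo/
├── api-1/   # .NET API project
├── api-2/   # .NET API project
├── ui-1/    # TypeScript UI project
└── ui-2/    # TypeScript UI project
```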
Why We Avoided a Single Project Scan
Treating the entire monorepo as a single SonarQube project would have fundamentally compromised our visibility by mixing backend and frontend results, often masking critical issues in one layer behind the high performance of another. With our team of 7 to 8 developers split across specialized stacks, a unified dashboard would have created ambiguous ownership; instead, we wanted the backend and frontend teams to have clear, independent accountability for their respective metrics. This separation also ensures that troubleshooting a failed scan or a broken quality gate is significantly faster, as we can immediately pinpoint which specific component caused the friction.
Furthermore, since our components utilize different CI/CD triggers in Bitbucket, adopting separate SonarQube projects allows us to run targeted scans only when a specific project’s code actually changes, rather than wasting resources on a full repository analysis. Ultimately, for our team, the clarity provided by a separation of concerns far outweighed the surface-level convenience of a single dashboard, a decision that became the strategic foundation for our entire quality journey.
Therefore, we created four separate SonarQube Cloud projects, each mapped to its corresponding subdirectory in the monorepo: API 1 became SonarQube Project 1, API 2 became Project 2, and likewise for the two UIs. This gave each project its own independent quality gate, clear ownership, and a simpler CI/CD configuration. Scan times also became more manageable because each scanner processes only its relevant slice of the codebase.
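For the TypeScript components, this mapping can be expressed with a sonar-project.properties file at each component's root. A minimal sketch, with placeholder project key and organization (yours come from SonarQube Cloud; the .NET projects are configured instead via the dotnet-sonarscanner command line):

```properties
# ui-1/sonar-project.properties — one file per SonarQube Cloud project
sonar.projectKey=my-org_ui-1        # placeholder key
sonar.organization=my-org           # placeholder organization
sonar.sources=src                   # scan only this component's code
sonar.exclusions=**/node_modules/**
```

Scoping sonar.sources to the component's own directory is what keeps each project's dashboard limited to its slice of the monorepo.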
CI/CD Integration
Each SonarQube project is wired directly into our Bitbucket Pipeline to ensure automated, continuous oversight. During execution, the relevant project is built, the SonarQube scanner runs against it, and the results are published to its corresponding SonarCloud project. This architecture allows every component in the monolith to be analyzed independently, despite remaining in the same repository. By decoupling the scans, we ensured that a failure in a TypeScript UI component wouldn’t stall a .NET API deployment, and vice versa.
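As a rough sketch of that wiring (step names, paths, and the pipe version are illustrative, not our exact file), a Bitbucket Pipelines configuration can use changesets conditions so each scan runs only when its component's code changes:

```yaml
# bitbucket-pipelines.yml (sketch) — one independent scan step per component
pipelines:
  pull-requests:
    '**':
      - step:
          name: Scan UI 1
          condition:
            changesets:
              includePaths:
                - "ui-1/**"          # run only when this component changes
          script:
            - pipe: sonarsource/sonarcloud-scan:2.0.0
              variables:
                SONAR_TOKEN: $SONAR_TOKEN
                EXTRA_ARGS: "-Dsonar.projectBaseDir=ui-1"
      - step:
          name: Scan API 1
          condition:
            changesets:
              includePaths:
                - "api-1/**"
          script:
            - dotnet tool install --global dotnet-sonarscanner
            - dotnet sonarscanner begin /k:"my-org_api-1" /o:"my-org" /d:sonar.login="$SONAR_TOKEN"
            - dotnet build api-1
            - dotnet sonarscanner end /d:sonar.login="$SONAR_TOKEN"
```

Because each step has its own condition, a failing TypeScript scan never blocks a .NET step, and a pull request touching only one subdirectory triggers only that component's analysis.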
Conclusion: From Setup to Strategy
With our four independent projects configured and the Bitbucket integration in place, we finally had the technical foundation required to see our codebase clearly. However, setting up the “plumbing” was only the beginning. Moving away from a unified monorepo scan was a calculated risk, one that forced us to rethink how we define “quality” across different stacks.
In the next part of this series, we’ll move past the configuration files and look at what happened once the data started flowing. We will dive into the key discoveries we made about our legacy code, the unexpected benefits of team accountability, and how this granular visibility fundamentally changed our development culture.