Technical Due Diligence Checklist for Software Acquisition
Quick Answer: A technical due diligence checklist for software acquisition should cover six domains: codebase quality and technical debt, system architecture and scalability, security and compliance posture, infrastructure and operational costs, third-party dependencies and licensing, and team knowledge concentration. Expect the process to take two to four weeks and surface findings that affect valuation by 10 to 30 percent in complex deals.
Most acquisition failures do not announce themselves at signing. They show up six months later when the engineering team inherits a monolith written in an unsupported framework, discovers the entire authentication layer was built by a contractor who left in 2021, or finds out the product is sitting on a cloud bill that triples every time customer count grows by 20 percent.
And honestly? Nobody warns you about this part in the pitch meeting.
Technical due diligence is not a checkbox exercise. It is the process of building an honest picture of what a software asset actually is versus what its pitch deck says it is. Done well, it either validates the deal or gives you the numbers to renegotiate. Done badly, it turns into a post-close autopsy where everyone is pointing at each other.
This guide is written for founders, CFOs, and product leaders approaching a software acquisition, whether that is a full company purchase, an acqui-hire, or buying a standalone SaaS product. The work is harder than most expect. The findings are almost always more complicated than the seller believes they will be.
1. Where Deals Actually Get Complicated: The Codebase
So where do you start? Most people assume the code review is a formality, a box to tick before the lawyers take over. It is not. This is where deals quietly fall apart.
The goal is not to find perfect code. Perfect code does not exist. The goal is to understand the cost and timeline of getting the codebase to a state where your team can actually operate and extend it without burning out the first three engineers who touch it.
What to assess:
- Language versions and framework currency. A Rails 4 app or a Python 2 codebase is not just outdated. It is a security liability with a remediation timeline measured in quarters, not sprints.
- Test coverage and test quality. Coverage percentage is a starting point, not a conclusion. A codebase with 80 percent coverage but no integration tests and brittle unit tests is not well-tested. It just looks well-tested on a dashboard.
- Static analysis results. Run tools like SonarQube, CodeClimate, or Semgrep. Look at cyclomatic complexity, duplication rates, and critical issue counts.
- Commit history patterns. Irregular commits, long-lived branches, and no code review history are signals about team discipline. You are reading the culture in the version control.
- Documentation quality. Inline comments, README files, and architectural decision records tell you whether this codebase was built to be handed off or built to be hoarded.
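A lightweight way to start the currency check is a mechanical manifest scan before any human reads code. The sketch below is illustrative only: the end-of-life patterns are a hardcoded stand-in for a real EOL database (e.g. endoflife.date), and the manifest file names are assumptions about the stack.

```python
import re
from pathlib import Path

# Illustrative end-of-life markers; a real audit would query a maintained
# EOL database rather than rely on this hardcoded list.
EOL_PATTERNS = {
    "requirements.txt": [(r"^Django==1\.", "Django 1.x is end-of-life")],
    "Gemfile.lock": [(r"^\s{4}rails \(4\.", "Rails 4 is end-of-life")],
    ".python-version": [(r"^2\.", "Python 2 is end-of-life")],
}

def scan_manifests(root):
    """Walk a checkout and flag dependency manifests that pin EOL versions."""
    findings = []
    for name, patterns in EOL_PATTERNS.items():
        for path in Path(root).rglob(name):
            text = path.read_text(errors="ignore")
            for pattern, message in patterns:
                if re.search(pattern, text, flags=re.MULTILINE):
                    findings.append((str(path), message))
    return findings
```

A scan like this takes minutes and tells you whether the "framework currency" conversation is going to be short or long.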
When Salesforce acquired Slack in 2021 for $27.7 billion, one of the post-close engineering challenges was integrating a codebase built at startup speed with Salesforce's enterprise architecture standards. I keep thinking about this example because people misread it. That is not a criticism of either company. It is an illustration of the gap between what a product does in market and what it actually costs to maintain and evolve over years.
For smaller acquisitions, technical debt findings routinely justify price reductions of 15 to 25 percent when the remediation cost is modeled properly. That math is not speculative. It shows up in deal after deal.
2. Architecture Review: A Completely Different Question
Code review and architecture review are not the same thing. Not even close.
Code review asks whether the software is well-built. Architecture review asks whether it was built for the right problem at the right scale. You need both. And honestly, teams that skip the architecture review tend to regret it faster.
What to assess:
- Monolith versus services breakdown. Neither is automatically better. The question is whether the architecture matches the product's current scale and your roadmap going forward.
- Database design. Schema complexity, indexing strategy, and the presence of stored procedures indicate how tightly business logic is coupled to the database layer. Tightly coupled means expensive to change.
- API design and versioning. If the product has external API consumers, how are breaking changes managed? Absence of versioning is a future customer problem you are about to inherit.
- Scalability bottlenecks. Ask for load test results. If none exist, that is itself a finding. Request production metrics around peak traffic events.
- Multi-tenancy implementation. For SaaS products, how tenant data is isolated matters for security and for the cost of serving large enterprise customers.
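When no load test results exist and you fall back to raw production metrics, a quick percentile summary of response-time samples is often the first useful artifact. A minimal sketch using the nearest-rank method:

```python
def percentile(samples, p):
    """Nearest-rank percentile: the smallest value with at least p percent
    of samples at or below it. Adequate for eyeballing latency spread."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = max(1, -(-len(ordered) * p // 100))  # ceiling division
    return ordered[int(rank) - 1]

def latency_summary(samples_ms):
    """p50/p95/p99 summary of response times in milliseconds."""
    return {p: percentile(samples_ms, p) for p in (50, 95, 99)}
```

A wide gap between p50 and p99 during peak traffic is exactly the kind of signal that points at the scalability bottlenecks listed above.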
One pattern worth watching, and we see this more than you would expect: products built by strong individual engineers sometimes have elegant code sitting on fragile architectural decisions. The code quality passes review. The system cannot handle a 5x growth event without a partial rewrite. The architecture review catches what the code review misses.
Most teams miss this distinction. They look at clean code and assume the system is sound. Clean code makes that assumption tempting; it does not make it true.
3. Security Findings Are the Ones That Restructure Deals
Look, security is where things get structural fast. Technical debt you can price in. Some security issues carry regulatory exposure that no price adjustment fully covers. That is a different category of problem.
What to assess:
- Authentication and authorization implementation. Are credentials stored correctly? Is there role-based access control applied consistently, or is it applied in some places and quietly missing in others?
- Dependency vulnerabilities. Run the dependency manifest through Snyk or OWASP Dependency-Check. Count the high and critical CVEs. Ask how long they have been sitting open.
- Data handling practices. Where is PII stored? Is it encrypted at rest and in transit? Is there a data retention policy that is actually enforced in the code, not just written in a document?
- Compliance certifications. SOC 2 Type II, HIPAA, GDPR, PCI-DSS. Which apply? Which exist? Which are claimed but not certified? The gap between claimed and certified is common. It is also material.
- Penetration test history. When was the last one? What did it find? What was actually remediated versus noted and ignored?
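Scanner output varies by tool, so a useful first step is normalizing findings into a common shape and asking the two questions above: how many serious issues, and how long have they sat open. A sketch, assuming findings have already been mapped into dicts with a severity string and an opened date (the field names here are our convention, not any scanner's format):

```python
from datetime import date

def triage(findings, today):
    """Count high/critical findings and measure how long they have been open.
    `findings` is assumed to be a normalized export: dicts with a `severity`
    string and an `opened` date. Real scanner formats differ; map them first."""
    serious = [f for f in findings if f["severity"] in ("high", "critical")]
    ages = [(today - f["opened"]).days for f in serious]
    return {
        "serious_count": len(serious),
        "oldest_open_days": max(ages, default=0),
    }
```

A critical CVE that has been open for 400 days says more about the engineering culture than the CVE itself does.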
Companies selling into healthcare or financial services often discover mid-diligence that their compliance posture is thinner than they believed. Not fraudulent. Just optimistic. That distinction matters, but it still needs to be priced into the deal.
My advice? Do not let the seller's legal team frame compliance gaps as minor administrative issues. Get your own assessment.
4. Cloud Costs: The Most Frequently Misrepresented Line Item
Personally, I think infrastructure cost is where acquirers get burned most quietly. It does not show up as a crisis. It shows up as a margin problem six months post-close when the cloud bill comes in and nobody can explain it.
Sellers often do not understand their own unit economics at the infrastructure level. Not always intentionally. They just have not looked at it the way a buyer needs to look at it.
What to assess:
- Cloud spend breakdown. Request 12 months of AWS, GCP, or Azure billing. Map spend to product lines and customer segments. Look for cost spikes that correlate with customer growth. A bill that scales faster than revenue is a structural problem.
- Infrastructure as code maturity. Is infrastructure defined in Terraform, Pulumi, or CloudFormation? Or is it managed manually through console clicks? Manual infrastructure is fragile and expensive to migrate. You know how that goes.
- Disaster recovery and backup posture. What is the RPO and RTO? Has recovery actually been tested? Many companies have backups that have never been restored. That is not a backup. That is a hope.
- On-call and operational burden. How many production incidents occurred in the last 12 months? What does the on-call rotation look like? Operational burden that depends on two specific engineers is a risk that needs pricing.
- Vendor lock-in assessment. Deep reliance on proprietary managed services can limit architectural flexibility and create cost exposure if pricing changes.
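The billing analysis above can be reduced to one number per month: spend per customer. A minimal sketch, assuming the billing export has already been joined against customer counts (the tuple shape is ours, not any cloud provider's):

```python
def unit_cost_trend(monthly):
    """Infrastructure spend per customer, month over month.
    `monthly` is a list of (month, cloud_spend, customer_count) tuples,
    assumed to come from the billing export mapped against the CRM.
    Flags the structural problem: spend growing faster than customers."""
    per_customer = [(m, spend / customers) for m, spend, customers in monthly]
    first, last = per_customer[0][1], per_customer[-1][1]
    return {
        "unit_costs": per_customer,
        "scales_worse_than_linear": last > first,
    }
```

If unit cost climbs month over month while the seller's deck shows flat gross margin, one of those two numbers is wrong.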
Right. So now you have a picture of what it actually costs to run the thing, which is often different from what the seller told you it costs.
5. Dependencies and Licensing: Always Rushed, Always Worth It
To be fair, this section gets less attention than it deserves because it feels administrative. It is not. An open source licensing problem can threaten the entire commercialization model. A dependency on a third-party API that is getting deprecated can threaten a core product feature.
What to assess:
- Open source license audit. GPL-licensed components in a commercial product can create legal exposure. MIT, BSD, and Apache 2.0 are generally safe; LGPL depends on how the component is linked. GPL and AGPL require careful legal review before you assume anything.
- Critical third-party service dependencies. Payment processors, email providers, identity providers, mapping APIs. What happens if any of these change terms, deprecate a product, or go down for 48 hours?
- SaaS tool contracts. What software licenses is the engineering team using? Are those transferable post-acquisition, or do they reset at the acquiring entity?
- Data vendor agreements. If the product depends on licensed data sets, what are the usage rights? Can those rights be transferred to you?
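The first pass over a dependency manifest can be mechanical: bucket each license into "generally permissive" versus "needs legal review" and hand the review bucket to counsel. A sketch; the buckets below are illustrative shorthand, not legal advice, and anything unknown goes to review:

```python
# Buckets are illustrative, not legal advice; actual risk depends on how
# each component is linked and distributed.
PERMISSIVE = {"MIT", "BSD-3-Clause", "Apache-2.0", "ISC"}

def classify_licenses(deps):
    """Split a dependency -> license mapping into audit buckets.
    Unknown licenses go to `review`: in an audit, unknown means review."""
    report = {"permissive": [], "review": []}
    for dep, license_id in sorted(deps.items()):
        bucket = "permissive" if license_id in PERMISSIVE else "review"
        report[bucket].append((dep, license_id))
    return report
```

The output is not a legal conclusion. It is a prioritized reading list for the lawyers, which is what actually speeds this section up.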
My take? This section surfaces something worth knowing in almost every deal we have reviewed. Almost every one. Rushing it is how you find out too late.
6. Team Knowledge: The Risk That Does Not Show Up in the Code
Software is not just code. It is the knowledge held by the people who built it. An acquisition that loses three key engineers in the first 90 days may have technically acquired a codebase it cannot actually operate.
This is not a soft concern. It is a structural risk.
What to assess:
- Bus factor analysis. How many engineers need to leave before critical systems become unmaintainable? A bus factor of one or two on core systems is a retention risk that belongs in the deal structure itself.
- Documentation and knowledge transfer artifacts. Is institutional knowledge written down or does it live in specific people's heads? Both are real situations. Only one of them survives an acquisition intact.
- Team sentiment. If key engineers are not excited about the acquisition, that is a finding. Exit interviews from recent departures, when you can get them, are worth requesting.
- Contractor and vendor dependencies. Is critical work being done by contractors who may not transition? Who owns those relationships post-close?
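Bus factor can be approximated from version control before any interviews happen. A sketch, assuming commit counts per component have already been extracted (for example from `git log --format=%ae` per directory); the 75 percent threshold is a judgment call, not a standard:

```python
def knowledge_concentration(commits_by_area):
    """Flag components where one author dominates the commit history.
    `commits_by_area` maps a component name to {author: commit_count}."""
    flagged = {}
    for area, authors in commits_by_area.items():
        total = sum(authors.values())
        top_author, top_count = max(authors.items(), key=lambda kv: kv[1])
        share = top_count / total
        if share >= 0.75:  # threshold is a judgment call, not a standard
            flagged[area] = (top_author, round(share, 2))
    return flagged
```

Cross-reference the flagged names against the retention plan. A component owned 90 percent by someone who is not staying belongs in the deal structure, not just the report.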
Retention bonuses and earnouts tied to engineering team stability are common mechanisms for managing this risk. Worth considering for any acquisition where product complexity is high relative to documentation quality.
Especially in year two, when the original team has largely moved on.
How to Actually Structure the Process
Most technical due diligence engagements run two to four weeks depending on codebase size and documentation quality. A typical engagement breaks down roughly like this:
- Days 1 to 3: Document and access requests. Repository access, infrastructure dashboards, dependency manifests, compliance certifications, incident logs.
- Days 4 to 10: Automated analysis and manual code review. Architecture mapping. Security scan results.
- Days 11 to 18: Engineering team interviews. Infrastructure cost modeling. Dependency licensing audit.
- Days 19 to 25: Findings synthesis. Risk scoring. Remediation cost estimation. Report delivery.
The output should include a risk register, a remediation cost estimate with high and low ranges, and clear recommendations on deal-breakers versus negotiating points. Deal-breakers are rare. Negotiating points are almost always present. Expecting otherwise is how you walk into a bad deal feeling confident.
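The risk register itself can stay simple. A sketch of the two calculations that anchor it, likelihood-times-impact scoring and a high/low remediation range; the 1-to-5 scales and the blended hourly rate are common conventions to calibrate per deal, not standards:

```python
def risk_score(likelihood, impact):
    """Likelihood x impact on 1-5 scales. A common register convention,
    not a standard; calibrate both scales to the deal."""
    return likelihood * impact

def remediation_range(hours_low, hours_high, blended_rate):
    """Turn an engineering-hours estimate into a (low, high) cost range
    for the findings report."""
    return (hours_low * blended_rate, hours_high * blended_rate)
```

The point of the range is honesty: a single remediation number invites false precision, and false precision is how negotiating points get dismissed.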
One more thing. Engaging an independent technical advisor rather than relying solely on your internal engineering team reduces bias in both directions. Internal teams sometimes minimize risk because they want the deal to close. They also sometimes inflate risk to protect their own workload. Both happen. Independence matters more than most buyers realize until they have been burned by the alternative.
Frequently asked questions
How long does technical due diligence take for a software acquisition?
For a typical SaaS product with a team of 5 to 30 engineers, technical due diligence runs two to four weeks. Larger codebases, multiple products, or significant compliance requirements can extend that to six to eight weeks. Compressed timelines are possible but increase the risk of missing material findings, particularly in security and licensing.
What is the most common finding that affects valuation in software acquisitions?
Unquantified technical debt is the most frequent valuation-affecting finding. Sellers often know their product has debt but have not modeled the remediation cost in engineering hours. When that cost is properly estimated, it commonly justifies a 10 to 25 percent price adjustment depending on the severity and the buyer's roadmap dependency on the affected systems.
Should the buyer or seller commission the technical due diligence report?
Buyers commission their own diligence and should use an independent technical advisor rather than relying solely on the seller's documentation. Some sellers commission a pre-sale technical audit to accelerate buyer confidence, which can be useful as a starting point, but buyers should not treat it as a substitute for independent review. A seller-commissioned report is written to support the sale, not to stress-test it.
What qualifies as a deal-breaker in technical due diligence?
Genuine deal-breakers are uncommon but do occur. The clearest examples are undisclosed data breaches with regulatory exposure, GPL license contamination in commercial code, critical infrastructure with no documentation and a single engineer who is not staying, and active security vulnerabilities in systems that process payment or health data. Most other findings are pricing or structuring issues, not reasons to walk away.
Can technical due diligence be done without source code access?
A partial assessment is possible using API behavior, performance testing, infrastructure documentation, and engineering team interviews. However, without source code access, codebase quality, licensing compliance, and security posture cannot be evaluated with any reliability. If a seller will not provide source code access under a mutual NDA and clean room process, that resistance is itself a finding worth discussing.

