Fixing IntelliSense Performance Issues in Nx and Turborepo
It started innocently. An hour-long video call with Petr, a lead dev from a team we were collaborating with. "Something is very wrong with our monorepo," he began, and in the background, I could hear the faint hum of his laptop's fans running at full throttle. "IntelliSense in VS Code sometimes takes 10 seconds to suggest a type. Saving a TypeScript file can freeze the editor for several seconds. Our CI is screaming that tsc --noEmit takes forever. It's a nightmare; we're losing a ton of time."
This wasn't just typical developer complaining. It was a description of productivity paralysis. Every 10-second delay broke their flow, forced a context switch, and bred frustration. They were seeing "ghost errors" that TypeScript would show for a second, only to remove them moments later once the language server finally caught up with its analysis.
As a hint, Petr mentioned that about two months earlier there had been no such problem. The monorepo was fast and worked great, which was critical for them because the project was meant to serve as a template for other teams. The longer development dragged on, the greater the risk that every team adopting the template would inherit the same issues. It had to be solved quickly.
This was a classic description of a performance problem in a large TypeScript codebase organized as an Nx/Turborepo monorepo. The diagnosis seemed simple, but as is often the case in forensics, the first suspect is rarely the right one. Our list included four hypotheses, each of which we decided to investigate thoroughly.
- Bad Nx configuration: Maybe somewhere in nx.json, package.json, or the project.json files there were incorrect paths or circular dependencies? Common mistakes include path aliases in tsconfig.base.json (paths) that don't correctly reflect the library structure, forcing TS to look for files in the wrong places. Another suspect was implicitDependencies in nx.json, which can create dependencies that aren't visible at first glance, causing nx affected to trigger rebuilds for far too many projects at once (see the sketch after this list).
- Missing references in TypeScript: This was a strong candidate. In a monorepo, references in tsconfig.json files (along with the composite: true and declaration: true flags) let TypeScript understand the boundaries between individual packages. They act like checkpoints, telling the compiler: "Hey, this project has already been checked; you can use its compiled type definitions (.d.ts) instead of analyzing all the source code again." Without them, TypeScript treats the entire monorepo as one giant "program," meaning any small change in one file can potentially force a re-analysis of everything.
- Overly broad include and exclude: The tsconfig.json files let you specify precisely which files the compiler should analyze (include) and which it should ignore (exclude). A misconfiguration could pull unnecessary files into the analysis. Imagine accidentally including the dist or .nx/cache directory: the compiler would then try to parse thousands of already compiled JavaScript files and declaration files, dramatically slowing down the entire process.
- The law of large numbers and type complexity: Maybe the codebase had simply grown so huge that the power of TypeScript's type inference, normally a blessing, had turned against us? This isn't just about the number of lines of code, but about "typological complexity." A single line with a deeply nested generic type or a conditional type can be more computationally expensive for the compiler than a hundred lines of simple business logic.
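To make the first hypothesis concrete, here is a minimal, hypothetical sketch of the kind of entry we were hunting for. The project names are invented; implicitDependencies is a real Nx configuration field, but where it lives (project-level project.json in recent versions, nx.json in older ones) depends on your Nx version.

```jsonc
// libs/shared-utils/project.json (hypothetical project)
{
  "name": "shared-utils",
  // An implicit dependency is invisible to static import analysis.
  // If it points at a large or frequently changing project, `nx affected`
  // starts rebuilding far more than a quick look at the imports suggests.
  "implicitDependencies": ["legacy-monolith"]
}
```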
Before we started the hunt, however, it's worth taking a moment to understand how our "detective"—the TypeScript language server—actually thinks.
How IntelliSense Works: A Quick Look Under the Hood
When you open a .ts file in your editor, the TypeScript Language Server starts up in the background. Its job is to provide you with all those magical suggestions. In simple terms, its work looks like this:
- Parsing and the Abstract Syntax Tree (AST): The server reads the code and builds a tree-like data structure from it called an Abstract Syntax Tree. For example, the line const x = 1; is transformed into a tree whose main node is a VariableDeclaration, with an Identifier (the name x) and a NumericLiteral (the value 1) as its "children." This structure is the foundation for all further analysis (you can inspect this tree yourself with the compiler API; see the sketch after this list).
- Binding and the Symbol Table: Next, the server traverses the AST and links symbols together in a "Symbol Table." When it sees a use of the variable x, it looks up its declaration in the table, creating a semantic link. This is what enables features like "go to definition."
- Type Checking: This is the heart of the operation and the source of our problems. The server analyzes types using advanced techniques, such as:
  - Control Flow Analysis: TypeScript understands that inside an if (typeof x === 'string') block, x is definitely a string.
  - Structural Typing: Instead of relying on explicit declarations (implements), TypeScript checks whether an object has the right "shape" (properties and methods).
  - Type Inference: Deducing types from context. This is the "magic" that means we don't have to write types everywhere, but it's also what can get expensive.
- Caching and the "Pull" Model: The results of this hard work are stored in memory. The server operates on a "pull" model: it doesn't calculate everything at once. Only when you request information (e.g., by hovering over a variable) does it compute "just enough" to provide an answer. The problem is that with very complex types, "just enough" might mean analyzing a long chain of dependencies, which takes time. The .d.ts files from referenced projects act as a firewall here, stopping this cascade at the package boundary.
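To make the parsing stage less abstract, here is a minimal sketch that uses the public TypeScript compiler API to print the AST of a tiny snippet. It assumes only that the typescript package is installed; the snippet is illustrative and is not part of the language server itself.

```typescript
import * as ts from "typescript";

// Parse a tiny snippet in memory. The language server does the same thing
// (plus binding and type checking) for every file you have open in the editor.
const source = ts.createSourceFile(
  "example.ts",
  "const x = 1;",
  ts.ScriptTarget.Latest,
  /* setParentNodes */ true
);

// Walk the AST and print each node's kind, indented by depth. Among the
// printed nodes you'll see a VariableDeclaration whose children include
// an Identifier (x) and a NumericLiteral (1).
function walk(node: ts.Node, depth = 0): void {
  console.log(`${" ".repeat(depth)}${ts.SyntaxKind[node.kind]}`);
  ts.forEachChild(node, (child) => walk(child, depth + 2));
}

walk(source);
```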
The Monorepo Investigation: Debunking Myths and Finding the Truth
With this knowledge, we began a systematic investigation, treating each hypothesis as a serious lead.
Step 1: Configuration Audit – Are the Foundations Sound?
Our first target was the configuration files—nx.json, tsconfig.base.json, and the project.json files for individual applications and libraries. A bad configuration is a common cause of problems in a monorepo. We checked:
- Path aliases (paths) in tsconfig.base.json: We made sure that every alias (@my-org/my-lib) correctly pointed to the src directory of the corresponding library. An incorrect path can force TS to search through unnecessary files.
- include and exclude in tsconfig.json: We carefully analyzed these arrays, looking for the classic mistake: accidentally including folders like dist, node_modules, or .nx/cache in the analysis. A single incorrect entry can pull thousands of files into the compilation process (a sketch of the shape we were verifying follows this list).
- The dependency graph in Nx: The nx graph command allowed us to visualize the entire monorepo. We looked for circular dependencies and "bottlenecks": libraries that too many other projects depend on, which can slow down rebuilds.
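For reference, this is roughly what we were checking against; the library name and paths are invented for illustration:

```jsonc
// tsconfig.base.json (illustrative names and paths)
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      // Each alias should point at the library's source entry, not at dist/
      "@my-org/my-lib": ["libs/my-lib/src/index.ts"]
    }
  },
  // The classic mistake is letting build output leak into the program:
  "exclude": ["node_modules", "dist", ".nx/cache", "tmp"]
}
```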
Verdict: The configuration was solid. All paths matched, exclude correctly ignored unnecessary folders, and the dependency graph was clean. This wasn't where the problem lay.
Step 2: Implementing references – Sealing the Borders
Next, we tackled the hypothesis of missing TypeScript references. This is a key optimization for monorepos. The process involved:
- Adding { "composite": true, "declaration": true } to the tsconfig.lib.json files in all libraries.
- Adding a "references": [{ "path": "../libs/my-lib" }] section to the tsconfig.app.json files of every application that used those libraries (see the sketch below).
After this change, we ran tsc --build and measured the time. The result? There was an improvement! The compilation time in CI was reduced by about 15%.
Let's be clear: this doesn't mean references are useless. They are a critical and non-negotiable best practice for managing large monorepos. They drastically improve incremental build times and create the logical boundaries that tools like Nx and Turborepo leverage for caching.
However, this only solves part of the puzzle. Monorepo tools like Nx and Turborepo are brilliant at caching the output of tsc --build for CI runs, but this caching doesn't help the TypeScript Language Server that powers IntelliSense. The LSP performs semantic analysis in real-time within your editor, and it needs to understand the source types. So, while our CI was faster, the developers' core problem—the agonizing wait for type suggestions—remained.
Step 3: Code Analysis – Hunting for "Monster Types"
Since the configuration was correct and basic optimizations were in place, we had to dive into the code itself. Petr mentioned that the problem had grown over time, suggesting that the culprit was something that "grows" with the project. This led us to the final discovery.
Interestingly, the problem was exacerbated by the presence of AI tools. Assistants like GitHub Copilot and Cursor work by constantly asking the language server questions about types and context. Each such "semantic query" (e.g., "what are the possible properties of this object?") forces the server to fully resolve a type. If that type is complex, the server performs an expensive operation with every keystroke. It was like one already-overloaded analyst having to answer questions from ten other people simultaneously.
Finally, Petr had the idea to test a file with business logic by temporarily removing the imports related to translations. Eureka. The file immediately became responsive again. The culprit had been found.
The Main Suspect: next-intl and the Trap of Generated Types
The next-intl library is a powerful tool for internationalization in Next.js applications. One of its great features is generating types for all translation keys. This way, when you try to use a key that doesn't exist, TypeScript immediately throws an error. Convenient, right?
Yes, but only up to a point. In Petr's project, the translations file contained over 5,000 keys. next-intl generated a single, gigantic union type from this:
```typescript
type TranslationKey = "common.save" | "common.cancel" | "login.title" | "login.error" | ... // (5,000+ more)
```
For the TypeScript server, working with such a type is a computational nightmare. Checking if a given string literal belongs to an N-element union has a computational complexity close to O(N). When such an operation is nested within generic or conditional types, the complexity can grow exponentially. Every time a developer used the translation function, the server had to:
- Check if the provided string matched ONE of the 5,000+ possible literals in the union.
- Prepare a list of 5,000+ options when displaying suggestions.
This operation, repeated in hundreds of files, was what choked IntelliSense. The problem wasn't with the next-intl library itself, but with how it was being used at such a large scale. This pattern of performance degradation isn't unique to next-intl. It's a classic symptom of any tool that generates massive union types from a large input set. For example, a Zod schema using z.discriminatedUnion('type', [ ... dozens of complex sub-schemas ... ]) is incredibly powerful for validation, but for IntelliSense trying to determine the resulting type, it can be deadly. Similarly, a tRPC router with hundreds of procedures can create a massive client-side type that slows down autocompletion.
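To see how such a union comes into existence, here is an illustrative sketch, not next-intl's actual implementation, of the common pattern i18n tooling uses to derive "dot-path" key unions from a messages object:

```typescript
// Illustrative only -- a recursive mapped type like this is a common way
// to turn a nested messages object into a union of "dot-path" keys.
type Messages = {
  common: { save: string; cancel: string };
  login: { title: string; error: string };
  // ...in Petr's project, the real object had thousands of leaves
};

type DotPaths<T, Prefix extends string = ""> = {
  [K in keyof T & string]: T[K] extends string
    ? `${Prefix}${K}`
    : DotPaths<T[K], `${Prefix}${K}.`>;
}[keyof T & string];

// "common.save" | "common.cancel" | "login.title" | "login.error"
type TranslationKeySketch = DotPaths<Messages>;

// Every call site forces the checker to test the argument against the whole
// union -- cheap here, expensive when the union has 5,000+ members.
declare function t(key: TranslationKeySketch): string;
t("login.title");
```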
The Second Patient: "Too Many Actions" Syndrome with @greenonsoftware/vibetest
I observed the same problem pattern in another project that used the @greenonsoftware/vibetest library (this is my own library) for crafting end-to-end tests. The library is based on the Gherkin convention, allowing developers to write declarative tests using a sentence-like structure with methods like Given, When, and Then. Its power lies in providing full TypeScript IntelliSense for these custom-defined test "sentences."
You can read more about this library and its implementation here: Why I Crafted My Own Gherkin Interpreter for E2E Tests.
In this monorepo, developers had created over 500 different test sentences (Actions, Queries, and Tasks):
```typescript
// Example Sentences defined across the monorepo
const authSentences = {
  'User is logged in': (userId: string) => ({ /* implementation */ }),
  'User is on the login page': () => ({ /* implementation */ }),
};

const reportSentences = {
  'I generate a new report': () => ({ /* implementation */ }),
  'I see the report dashboard': () => ({ /* implementation */ }),
};

// ... and so on, resulting in 500+ unique sentence strings
```
The main test creation function was configured with all available sentences from the entire project, leading to a massive type inference challenge. The configuration would aggregate all sentences into a single object, and the test runner's type would be inferred from it:
```typescript
// A type representing all possible test sentences from the project
type AllSentences = keyof typeof authSentences | keyof typeof reportSentences | ...; // a 500+ member string literal union

// Simplified VibeTest creation
function createVibeTest<T extends Record<string, Function>>(
  config: { sentences: T }
): VibeTest<keyof T> {
  // ... returns a test object with typed Given, When, Then methods
}

// In the global test setup:
const allProjectSentences = { ...authSentences, ...reportSentences, ... };
const test = createVibeTest({ sentences: allProjectSentences });

// THE BOTTLENECK: Typing `test.Given.` would be extremely slow,
// as IntelliSense tries to suggest from the massive AllSentences union.
test.Given['']
```
This issue isn't a flaw in the @greenonsoftware/vibetest library's implementation, but rather a computational limit imposed by the nature of TypeScript's structural typing. When you type test.Given[' and expect a list of 500+ valid strings, the language server has to work extremely hard.
The solution turned out to be a strategic split. Instead of creating one global test instance configured with allProjectSentences, they created domain-specific instances. A test file for authentication would now use an instance configured with only authSentences (e.g., 10 sentences), a change that immediately restored IntelliSense responsiveness by drastically reducing the compiler's workload.
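A sketch of that split, reusing the simplified createVibeTest signature from the snippet above (not the library's real API):

```typescript
// Before: one global instance that knows every sentence in the monorepo.
// const test = createVibeTest({ sentences: allProjectSentences }); // 500+ keys

// After: each domain gets its own, narrowly typed instance.
const authTest = createVibeTest({ sentences: authSentences });     // ~10 keys
const reportTest = createVibeTest({ sentences: reportSentences }); // ~10 keys

// IntelliSense on authTest.Given now only has to rank a handful of string
// literals instead of a 500+ member union.
```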
The Third Suspect: Manually-Crafted Monster Types
It's worth emphasizing that the problem isn't limited to generated types. Often, with good intentions, we create types ourselves that become bottlenecks. Imagine a central type for API responses in a large application:
```typescript
interface UserData { entityType: 'user'; id: string; name: string; }
interface ProductData { entityType: 'product'; id: number; price: number; }
interface OrderData { entityType: 'order'; id: string; items: number[]; }
// ... and 20 other interfaces for different entities

type ApiData = UserData | ProductData | OrderData | ... // a 20+ member union

interface ApiResponse {
  status: 'success' | 'error';
  data: ApiData | null;
}
```
At the beginning, this pattern works great. But as ApiData grows, every component or function that operates on ApiResponse forces TypeScript to analyze the entire, massive union. This is another example where a strategic split (e.g., creating specific response types for each domain) is better than a single, central "god type."
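One possible shape of that split, building on the interfaces above (the alias names are illustrative):

```typescript
// A generic envelope plus per-domain response aliases, so a user-facing
// component never drags the whole ApiData union into scope.
interface ApiEnvelope<T> {
  status: 'success' | 'error';
  data: T | null;
}

type UserResponse = ApiEnvelope<UserData>;
type ProductResponse = ApiEnvelope<ProductData>;
type OrderResponse = ApiEnvelope<OrderData>;

// A component in the "users" domain now depends only on UserData; changes to
// ProductData or OrderData no longer widen its type graph.
declare function renderUserCard(res: UserResponse): void;
```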
Practical Toolkit: Measuring, Debugging, and Preventing
So, how can you defend against such problems in the future?
How to Measure TypeScript Performance?
- tsc --diagnostics: This flag displays basic statistics. When analyzing the output, pay attention to Check time. If it's disproportionately high compared to Parse time and Bind time, it's a sign that the compiler is struggling with type complexity.
- tsc --generateTrace <out_dir>: This is the advanced tool. After generating the trace, open it in the Perfetto UI. You'll see a flame chart: look for the longest continuous blocks in the check section. Hovering over such a block will show you the file, and sometimes even the specific type, whose analysis is taking the most time. This is your prime suspect. (Example invocations of both commands follow below.)
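In practice, running the two measurements looks roughly like this; the trace directory name is arbitrary, and the last line uses the optional trace analyzer published by the TypeScript team:

```bash
# Coarse statistics: watch the ratio of "Check time" to "Parse"/"Bind" time.
npx tsc -p tsconfig.json --noEmit --diagnostics   # or --extendedDiagnostics for more detail

# Full trace: open the resulting directory in https://ui.perfetto.dev,
# or summarize the hot spots on the command line.
npx tsc -p tsconfig.json --noEmit --generateTrace ./ts-trace
npx @typescript/analyze-trace ./ts-trace
```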
Strategies for Debugging Performance Issues
- git bisect: This tool is perfect for the task. Type git bisect start, mark the current commit with git bisect bad, and mark a commit from two months ago with git bisect good [hash]. Git will automatically check out commits for you; after each checkout you just verify whether the problem occurs and type git bisect good or git bisect bad. This way you can find the culprit in minutes (a fully automated variant using git bisect run is sketched after this list).
- Isolation by Commenting: This is exactly what Petr did. If you suspect a particular file is problematic, start temporarily removing (or commenting out) imports and observe when performance returns.
- Dependency Graph Analysis (nx graph): In an Nx monorepo, this tool is invaluable. It allows you to visualize how packages depend on each other and identify potential "bottlenecks": packages that too many other projects depend on.
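If checking every commit by hand feels tedious, git bisect run can drive the whole search. The helper script path here is hypothetical; something like the threshold script from the CI section below would do the job:

```bash
# Start the search between the known-bad and known-good commits.
git bisect start
git bisect bad                        # the current, slow commit
git bisect good <hash-from-2-months-ago>

# Let git do the walking: the script must exit 0 when type checking is fast
# enough and non-zero when it is too slow.
git bisect run ./scripts/check-ts-perf.sh

# When the culprit is found, clean up.
git bisect reset
```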
Preventive Strategies and Best Practices
- Consciously Split Types: If you're using a tool that generates types, consider whether you can split its output into smaller, contextual files.
- Be Careful with "Barrel Files": index.ts files that export everything from a directory are convenient, but they can be a trap, forcing TypeScript to load and analyze far more files than needed at any given moment (a small example follows at the end of this section). You can read more about how they work and the kind of damage they can do when used badly in this article: Everything About Barrel Exports In JavaScript.
- Monitoring in CI: Include a tsc time measurement in your Continuous Integration process. If a new Pull Request significantly increases this time, it should be a warning sign for further analysis. To illustrate the concept, you could use a simple bash script that fails the build if a threshold is exceeded:

  ```bash
  #!/bin/bash
  # Set the threshold in seconds (e.g., 10 seconds)
  THRESHOLD=10

  echo "Running tsc --noEmit and measuring time..."

  # Measure the command's execution time, converting the `time` format to seconds
  ELAPSED_TIME=$( (time tsc --noEmit) 2>&1 | grep real | awk '{print $2}' | sed 's/m/ /g' | sed 's/s//g' | awk '{print $1*60 + $2}' )

  # Use `bc` to compare floating-point numbers
  if (( $(echo "$ELAPSED_TIME > $THRESHOLD" | bc -l) )); then
    echo "Error: Type checking time (${ELAPSED_TIME}s) exceeded the threshold (${THRESHOLD}s)!"
    exit 1
  else
    echo "Success: Type checking time is within the limit."
    exit 0
  fi
  ```

  However, a more robust and efficient approach is to parse the logs your CI process already generates. Most CI runners and tools like Nx display task execution times by default. Instead of re-running tsc just for a time check, write a small script that scrapes the build log for the tsc duration. This avoids redundant work and uses data you already have. The same principle applies to monitoring other key metrics: you can measure application build time, the size of generated JavaScript chunks, or the response time of key API endpoints. All of this is a topic for a separate, extensive article—maybe we'll write one someday :)
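And the promised barrel-file example: a sketch with hypothetical module names showing why one convenient import can widen the compiler's working set.

```typescript
// libs/ui/src/index.ts -- a typical barrel file (hypothetical library)
export * from './button';
export * from './modal';
export * from './data-grid';
export * from './charts';

// Elsewhere in the monorepo:
import { Button } from '@my-org/ui';
// To type-check this single import, TypeScript has to load and analyze the
// types of every module re-exported by the barrel, not just './button'.
```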
Conclusion: Think Like a Compiler
Petr's story and the performance issues in large monorepos teach us one crucial thing: the problem is rarely the sheer size of the code. The real culprit is the computational complexity of types. A single, gigantic, generated type can slow down work more than thousands of simple files with business logic.
The power of type inference in TypeScript is incredible, but it doesn't come for free. As developers, we need to learn to "think like a compiler," understand which patterns can lead to performance issues, and consciously maintain the "typological hygiene" of our projects. A strategic division of complexity is the key to maintaining a smooth workflow, even in the largest of monorepos. Sometimes, less is more.
Thanks to Petr Kratochvíl for the contribution and the opportunity to tell this story :D
Currently engaged in mentoring, sharing insights through posts, and working on a variety of full-stack development projects. Focused on helping others grow while continuing to build and ship practical solutions across the tech stack. Visit my LinkedIn or my site for more 🌋🤝