solid
backend
patterns
architecture
google-cloud
aws
micro-services
typescript
node

Crafting SOLID And Type-Safe APIs In Node And TypeScript

If you've been working long enough in web development, you know there is no one-size-fits-all solution. If someone claims otherwise, they're simply not telling the truth. It's crucial to understand the common challenges developers face and have ready solutions for even the smallest, specific issues.

Since every case is unique, breaking it down into smaller chunks using the "Divide and Conquer" methodology will lead to viable solutions.

Some problems can be addressed through design patterns or programming techniques. Others may be mitigated by implementing the right architecture for your application's scale. It's all about breaking down each topic into small, solvable pieces that are easier to manage.

Today, I want to tackle challenges related to API creation, scaling, providing proper maintenance, ensuring changeability, and handling code duplication effectively. All without adding layers of abstraction that risk breaking the application in other areas, often with a significant cost in performance.

Let's address all these aspects and build a SOLID API with TypeScript and Node.

Before We Start

This article briefly mentions some patterns and techniques without deep dives - I want to avoid wasting space. Under some sections, I've linked articles that explain the concepts in more detail.

Additionally, the proposed techniques should not be "forced." Use them only where you see real value. Forcing solutions in programming is a bad idea, especially when the code dictates the approach.

Here is a small dictionary before we start:

Lambda/Cloud Function

A serverless service that lets you run code without managing servers. You usually pay as you go, and it scales automatically based on traffic.

Controller

The code that you're deploying to a lambda/cloud function - the core logic wrapped inside a single function, class, or module.

Endpoint

A specific URL or URI in a web service that defines what logic should be triggered and what resource should be retrieved. For example, GET /users returns a list of users.

Picking The Right Paradigm

Most controllers in an API are stateless. This means they don’t store anything themselves; they simply take input data, perform internal logic, and return a value. Of course, there are exceptions, but most of the time, you're writing code like this:

// Some imagined framework code
controller(`users/{id}`, async (_, rawPayload) => {
  // Validation
  const { id } = parse(rawPayload);
  // Getting user from DB by "id"
  const user = await getUser(id);

  return user;
});

This code does everything within a single function scope - the anonymous one. Nothing is changed outside of it. Here's an example of the opposite case - a stateful function (one that relies on or modifies external data):

let userCache = {};

controller(`users/{id}`, async (_, rawPayload) => {
  const { id } = parse(rawPayload);
  // Using cache to avoid unnecessary calls
  if (userCache[id]) {
    return userCache[id]; 
  }
  
  const user = await getUser(id);
  
  userCache[id] = user;

  return user;
});

The second case is quite rare, especially in a microservices approach, where services should be stateless by nature and should not store data in memory (because lambdas are killed or re-created based on traffic). If you need a cache, it should be implemented externally, like with Redis.
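
As a quick illustration, here's a minimal sketch of such an external cache, assuming the node-redis client and the same imagined controller, parse, and getUser helpers from the snippets above:

import { createClient } from 'redis';

const redis = createClient({ url: process.env.REDIS_URL });
await redis.connect();

controller(`users/{id}`, async (_, rawPayload) => {
  const { id } = parse(rawPayload);

  // The cache lives outside the lambda, so it survives instance recycling
  const cached = await redis.get(`user:${id}`);
  if (cached) {
    return JSON.parse(cached);
  }

  const user = await getUser(id);
  // Keep entries short-lived to avoid serving stale data
  await redis.set(`user:${id}`, JSON.stringify(user), { EX: 60 });

  return user;
});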

Now, let's re-create the in-memory cache example in an OOP way:

class UserService {
  constructor() {
    this.userCache = {};
  }

  async getUserById(rawPayload) {
    const { id } = parse(rawPayload);
    
    if (this.userCache[id]) {
      return this.userCache[id]; 
    }
    
    const user = await getUser(id);
    
    this.userCache[id] = user;

    return user;
  }
}

const userService = new UserService();

This illustrates a key point: object-oriented programming is designed for storing and managing data inside objects. You define a class with some basic data, then add methods to modify that data. OOP is great for game development or stateful applications, but most modern backends are stateless. Thus, using a stateful paradigm in such systems can introduce complexity and unnecessary boilerplate.

Why? Because when you start using OOP, it won't be long before a developer starts adding properties or methods to classes, as that’s the natural progression of OOP.

Soon, more "user-related" methods will be added to the class, making it larger. This happens frequently in OOP-based backends. Experienced developers may block this, but it’s almost inevitable when using OOP.

class UsersService {
  constructor() {}

  async getUserById(rawPayload) {
    // Logic...
  }

  async getUsers(rawPayload) {
    // Logic with some big lib...
  }
}

// The need for instance creation...
const userService = new UsersService();

// Inside user.endpoint.ts
endpoint(`users/{id}`, (req) => userService.getUserById(req.rawPayload));

// Inside users.endpoint.ts
endpoint(`users`, (req) => userService.getUsers(req.rawPayload));

This approach leads to the following problems:

  1. Cold starts may increase for other Lambdas that don’t need certain methods or behaviors.
  2. Instance creation is required due to using OOP.
  3. Using external instances introduces the risk of unexpected behaviors.
  4. Lambdas will take longer to execute.

In contrast, the functional paradigm fits naturally into a stateless approach. It emphasizes what should be done, not how to represent data with the proper abstraction. With functional programming, you focus on separating and decoupling everything.

Functions take arguments and return responses without modifying external state (ideally). Additionally, functional codebases tend to be smaller, which is beneficial in terms of cold starts. You won’t need to import the entire UserService with methods that are only used in specific controllers.

Some may suggest, "Just use a dependency injection framework". Of course, but these problems can be avoided altogether by using the functional paradigm:

// getUserById.ts
export const getUserById = (rawPayload) => {
  // Logic...
};

// getUsers.ts
export const getUsers = (rawPayload) => {
  // Logic with some big lib...
};

// Inside user.endpoint.ts
endpoint(`users/{id}`, (req) => getUserById(req.rawPayload));

// Inside users.endpoint.ts
endpoint(`users`, (req) => getUsers(req.rawPayload));

Others may say, "You don't need to create a class to use methods; you can use static ones." Yes, I can, but everything would still be grouped in the same class, which would still affect cold starts - only instance creation would be skipped.

I’m not advocating against OOP; I’m simply saying that functional programming fits better into the microservices approach. The smaller lambdas are, the faster they’re created, and there’s no need for "pinging" to keep their instances alive (to avoid the natural lambda killing mechanism when not in use).

To summarize, here are the benefits of using the functional paradigm when building microservices APIs:

  1. More predictable behavior of endpoints.
  2. Faster cold starts.
  3. Faster builds.
  4. Lower infrastructure costs.
  5. Lower risk of impacting other endpoints.

In contrast, if you’re crafting an API hosted on-premise, many of these benefits still apply, and what's even better is that transitioning to the cloud will be much easier (and you'll gain the same benefits).

However, if you're working in game development, where certain features need to be abstracted using objects, OOP might be the better choice. Every paradigm has its pros, cons, and appropriate use cases.

The same issue arises when there’s a single or rare case to create something stateful (also on the cloud). You can use OOP where it makes sense - just don’t force it across the entire codebase for the sake of "consistency."

Lastly, some may ask: "But what if I'm using Java or C#?" So, I'll ask you back - what are you doing here, little fella? Why are you interested in JavaScript stuff (~ ̄▽ ̄)~? This article is about building APIs in Node and TypeScript, but if you're still curious, in your case, it makes sense to stick with the base paradigm of the language - OOP.

JavaScript is unique in that it allows us to choose which paradigm to use in different scenarios... It's a great power, but with it comes great responsibility ༼ つ ◕_◕ ༽つ.

Type-Safety For Input Data

Type-safety is simply the alignment between the type of something at runtime and compile time.

So, if you have a variable typed as a string, but at runtime, you're changing it to a number, you'll have a bug.

let myVariable: string = "Hello, World!";

// Prints "HELLO, WORLD!"
console.log(myVariable.toUpperCase()); 

myVariable = 42 as any;
// TypeError at runtime - 42 is a number, so "toUpperCase" does not exist on it
console.log(myVariable.toUpperCase());

This can happen with anything - after all, TypeScript is just JavaScript with some type definitions, but at runtime, you’ve got a wild mess on your hands.

This problem can be entirely solved with type-guards - a magical if statement or function that performs the necessary validation and says: "Everything from now on is of this type at runtime and compile time".

interface User {
  id: number;
  name: string;
  email: string;
}

const isObject = (
  maybeObj: unknown,
): maybeObj is Record<string | number | symbol, unknown> =>
  typeof maybeObj === `object` && maybeObj !== null;

const isUser = (obj: unknown): obj is User => {
  if (isObject(obj)) {
    // Inside this if, the "obj" is a real "object"
    return (
      typeof obj.id === `number` &&
      typeof obj.name === `string` &&
      typeof obj.email === `string`
    );
  }

  return false;
};

And usage:

Type Guards
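
A usage sketch could look roughly like this (the handlePayload function is just an example):

const handlePayload = (rawPayload: unknown) => {
  if (isUser(rawPayload)) {
    // Inside this block "rawPayload" is narrowed to "User"
    return rawPayload.email.toUpperCase();
  }

  throw new Error(`Invalid user payload`);
};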

Using type guards gives you type-safety because everything is verified before assigning a type. However, there’s still a chance of making mistakes. For example, you could introduce an error in the if statement, leading to incorrect casting from unknown to User, simply due to a typo. This could break your code:

const isObject = (
  maybeObj: unknown,
): maybeObj is Record<string | number | symbol, unknown> =>
  // The "maybeObj !== null" is a valid statement!
  typeof maybeObj === `object` && maybeObj === null;

Type guards are fragile and prone to developer errors, and they provide type-safety only if no logical errors have been introduced. While nothing is perfect, there are validation libraries that automatically generate types based on predefined schemas. This is a slightly different approach - from schema to type. It reduces the logical risk described before.

// Superstruct
import { object, number, type Infer } from 'superstruct';

const UserSchema = object({
  id: number(),
});

type User = Infer<typeof UserSchema>;

By combining type guards with the type inference mechanism (auto-deduction of types), this risk is minimized. Schema validators from libraries dynamically generate types based on their structure, ensuring accurate type validation (of course, assuming the library author correctly implemented the type definitions and logic ☜(゚ヮ゚☜)). But still, it's much safer and easier to maintain when done in a "schema-based" way.

Now that we understand type-safety, type guards, and type inference, we can use the Superstruct library to create schemas and types for any kind of endpoint parameters. Thanks to that, we'll avoid unnecessary operations if the payload is invalid.

const schema = object({
  id: number(),
});

type Payload = Infer<typeof schema>;

endpoint(`/user/{id}`, async (payload: unknown) => {
  // Validate the payload against the schema
  if (!schema.is(payload)) {
    throw new Error('Invalid schema!');
  }

  // At this point, you're safe to use the payload.
  // The "id" is guaranteed to be a "number" both at compile-time and runtime
  await getUser(payload.id);
});

If you're interested in other libraries for validation and their benchmarks, see Searching for the Holy Grail in Validation World article.

You know for sure that validation is required, but doing it in a type-safe way via schemas that generate types is simply easier to maintain and more straightforward. Of course, there's a minimal impact on runtime performance or bundle size, but like any abstraction, it comes with a cost.

Don’t overuse this everywhere. This type-safety approach makes sense for areas where validation is truly needed, such as frontend forms, backend endpoints, or other places involving user input.

Modular Monolith And Islands Architecture

In software development, you have various metrics that you can apply to your codebase (I'll list the most important ones from my perspective):

  • Changeability
  • Reusability
  • Scalability
  • Performance
  • Predictability

The challenge is that if you focus too much on one of these metrics, you may negatively impact others. It’s similar to the skill graph in FIFA games, where each player has different strengths.

Different Programming Metrics

The key challenge is picking the right balance for your situation. You can't have everything at a 100% level. For example, take the forEach function. It simplifies your code and makes it more reusable - allowing you to pass a function to handle logic without repeating the iteration process. However, this comes with a small performance cost because each iteration introduces some additional overhead.
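
As a tiny sketch of that trade-off (actual numbers depend on the engine and data size):

const values = [1, 2, 3, 4, 5];

// Declarative and reusable, but each element costs an extra function call
values.forEach((value) => {
  console.log(value);
});

// More verbose, but avoids the per-element callback overhead
for (let i = 0; i < values.length; i += 1) {
  console.log(values[i]);
}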

For more details on loop performance, check out this article: Loops Performance in JavaScript.

The approach known as modular monolith with vertical slicing is a compromise that balances key architectural metrics. While some code duplication may occur, performance is generally not compromised. By dividing the application into isolated, self-contained modules (similar to islands), you can achieve good levels of predictability, scalability, and maintainability.

What’s crucial is that you store these separated modules in a single repository (monorepo), and the cloud provider is responsible for turning these isolated modules into lambda functions or equivalent serverless services. Locally, it resembles a modular monolith, but when deployed to the cloud, it behaves like a microservices architecture. This enables you to maintain simplicity in development while leveraging the scalability and isolation of microservices in production.

Let’s take a look at an example of this structure:

Modular Monolith
Each Module Is A Totally Separated Being

While it's a single application, each controller will be wrapped into a separate lambda, and will be created and destroyed by the cloud provider as needed (based on traffic and cloud settings).

These lambdas must be kept "small" and as encapsulated as possible, exposing only their own functionality. This ensures that each module remains isolated, with minimal dependencies on other parts of the system, optimizing scalability and resource efficiency in the cloud environment.

const payloadSchema = z.object({
  id: validators.id,
});

type Dto = void;

export const deleteDocumentController = protectedController<Dto>(
  async (rawPayload, { uid, db }) => {
    const { id: documentId } = await parse(payloadSchema, rawPayload);
    const documentRef = db.collection(`docs`).doc(uid);
    const documentRateRef = db.collection(`documents-rates`).doc(documentId);

    return await db.runTransaction(async (transaction) => {
      const [documentSnap, documentRateSnap] = await transaction.getAll(
        documentRef,
        documentRateRef,
      );

      const documentData = documentSnap.data();

      if (!documentData) {
        throw errors.notFound(`Document not found`);
      }

      documentData[documentId] = FieldValue.delete();

      transaction.update(documentRef, documentData);

      if (documentRateSnap.exists) {
        transaction.delete(documentRateRef);
      }
    });
  }
);

I've omitted some imports, but the direction should be clear - maximum encapsulation. Each folder under the modules directory will function as its own "island" of files, which should not be imported into any other part of the application, except for the configuration file that generates the lambdas. For the Google Cloud provider, this is typically the main index.ts file.
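
To make it concrete, here's a rough sketch of what such an index.ts could look like, assuming Firebase-style onCall wiring (covered again in the porting section below) and illustrative module paths:

// index.ts - the only place allowed to import from the modules directory
import { onCall } from 'firebase-functions/v2/https';

import { deleteDocumentController } from './modules/delete-document/delete-document.controller';

// Each export becomes a separate lambda/cloud function
export const deleteDocument = onCall(deleteDocumentController);
// ...one export per module, and nothing else leaks outside its folder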

If you're in a situation where 2, 3, or even 10+ controllers need some repetitive logic - such as an authorization check - that’s perfectly fine. However, avoid creating a large class, service, or object. Instead, create a small, simple function that is as agnostic to the application domain as possible, and reuse it in those n places.

// libs/auth/is-authorized.ts
export const isAuthorized = (uid: string | undefined): boolean => {
  return uid !== undefined;
};

// modules/delete-document/delete-document.controller.ts
const deleteDocumentController = protectedController<Dto>(
  async (rawPayload, { uid, db }) => {
    if (!isAuthorized(uid)) {
      throw errors.unauthenticated();
    }
  },
);

If you use tools like Nx to structure your project into separate libraries and application-specific code, it becomes much easier to manage - especially for large APIs with many separate endpoints.

One more thing: if you're considering adding something "reusable" that knows about the details of a specific controller or application domain, it's often better to duplicate the code when it's a small amount (e.g., 2-3 duplications). The DRY (Don't Repeat Yourself) principle has its benefits, but only when you're not forcing it to remove every kind of duplication.

Always aim for maximum isolation, and if reusability is necessary, ensure that it's as agnostic as possible. Each module can have its own internal files - especially if the code inside the controller.ts file becomes too large. For better readability, you may want to split it into smaller files.

However, the files inside the module should never be exported outside of this folder - remember the islands concept, where each module operates like a small, self-contained app.

Here’s a diagram to summarize the idea:

Architecture Overview Diagram

Take a look: we're focused on creating agnostic libraries, but we don't treat it as a strict rule to follow at all costs - it's a guiding principle. The app-specific library can still utilize agnostic libraries, and there's no issue with that.

Additionally, there may be modules that are entirely isolated, like the last one in this diagram, which has no dependencies at all. This demonstrates that some modules can operate independently from the rest of the system, further promoting modularity and isolation.

Dividing an app like that is also called vertical slicing.

Before we continue, let's summarize what we've achieved with this approach:

  1. Changeability: Changing code in one module will not affect any others.
  2. Reusability: Anything that needs to be reused is placed under a library (whether agnostic or app-specific).
  3. Scalability: With this structure, we simply add a new folder and files, and voilà! If using the cloud, the cloud provider will create a microservice and scale it effortlessly.
  4. Performance: Each lambda has only what it needs - nothing more - thanks to the atomic nature of our approach.
  5. Predictability: Code changes are predictable, with no excessive abstractions or situations where changing one feature introduces a huge risk of breaking another.

This will also work well if you're not using the cloud and microservices, but an on-premise solution. It prepares you for both scenarios.

Generic And DRY Code

I love being baffled in software development. I remember a colleague telling me - write generic, reusable code, ensure DRY ("Don't Repeat Yourself") is followed, and you'll have nothing to worry about.

Seven years later, I have a completely different point of view. To be clear about what "shared code" really means here - I'm not talking about libraries exposed on npm - I'm referring to application/domain-aware codebases.

To show my point, let's consider the following example:

  1. We have 5 endpoints.
  2. Each endpoint has some code duplication, such as query logic and "DTO" creation for responses.
  3. Our 5 endpoints return the same "DTO".

DTO stands for Data Transfer Object. Its purpose is to mitigate the risk of returning data directly from the database, improve performance, and provide a response contract between the API and its consumer, or between two system layers (backend/frontend).

A typical developer who follows "DRY" blindly would create the following abstraction:

// @@@ utils/get-users.ts @@@
import { db } from 'application/database';

type UserModel = {
  // Some typical props...
};

type Dto = Pick<UserModel, 'id' | 'name' | 'firstName' | 'lastName'>[];
type Payload = { limit: number };

export const getUsers = async (limit: Payload['limit']): Promise<Dto> => {
  const users = await db('users').limit(limit);

  return users.map(({ id, name, firstName, lastName }) => ({
    id,
    name,
    firstName,
    lastName,
  }));
};

// @@@ Endpoints @@@
import { getUsers } from 'utils/get-users';
import { get } from 'application/rest';

get('users/{limit}', async (req, res) => {
  const users = await getUsers(req.payload.limit);
  // ...Logic goes below...
});

Then, I'm 100% sure they'll think - "Hey, I can parameterize this getUsers function and make it truly generic! But now it's for all cases, so we need to change its name!".

const getMultipleElements = async <
  TQueryResult extends Record<string, unknown>[],
  TDto extends Record<string, unknown>,
>(
  key: string,
  limit: number,
  map: (queryResult: TQueryResult[number]) => TDto,
): Promise<TDto[]> => {
  const records = await db(key).limit(limit);

  return records.map(map);
};

// Usage...

const posts = await getMultipleElements<{ id: number }[], { id: number }>(
  'posts',
  10,
  ({ id }) => ({ id }),
);
const users = await getMultipleElements<{ id: number }[], { id: number }>(
  'users',
  10,
  ({ id }) => ({ id }),
);

Look, so we've reduced duplication in 5 endpoints. But let me tell you what happens next – I want to avoid taking up too much space here. The next developer will realize that getMultipleElements isn't really that generic – it has some generic parameters, but you're only able to set the limit – it's a narrow use case. They'll start making changes and trying to make it even more generic. After several "improvements," you might find that your get10Users endpoint is taking 4 seconds...

It's not that you shouldn't "share" any application-specific code. It's more about making it too generic. Application-specific code is application-specific. It's not a reusable library exposed on npm, meant for every possible application or domain.

Keep it simple, isolated, and don't expose things that aren't needed between modules/islands, as mentioned earlier. Expose things only when they're necessary and visible, as it will provide value. Don't over-abstract code for the "future".

There is one more problem with "too generic" code: the readability of its implementation. Generic code encourages developers to use it because, in theory, you're producing less code in the features, right? Sure, that's fine if it's a library like lodash, where you're consuming a sort function. But for app-specific logic, abstracting it into generic forms usually ends with a huge mess in terms of readability and changeability.

Then, there’s a single edge case that’s not covered, and you want to support it in this generic chunk, and boom 💥. Another feature is destroyed because everything is if-based.

If you do that, your hands will be "DIRTY," not "DRY" ☜(゚ヮ゚☜).

Using Porting To Isolate From Framework

Porting refers to adapting software to work in different environments.

I won't spam with content here. Instead, I'll link a Porting React application article that shares the same idea.

Porting is essentially writing a codebase in such a way that you can copy and paste the entire source code, and the only thing that needs to be changed is the infrastructure or specific environment/framework setup.

When I think about porting, I imagine it like this: let's say you have an old TV and want to connect a PlayStation 5. It won't work at all. But if you buy a port, connect the console to the port, and then connect the port to the TV, it works. The TV's software or hardware hasn't changed; the "stuff in between" has been added.

The same concept applies here. Our application codebase and domain codebase are like the "TV". The only thing that changes is the infrastructure and environment code responsible for using it. Here's an example in the diagram:

Porting Technique On Diagram

In the codebase, it looks like this:

// @@@ Before porting @@@
import { getMultipleElements } from 'application/utils';
import { onCall } from 'firebase-functions/v2/https';

// Google Cloud creates a lambda "getUsers"
export const getUsers = onCall(async (payload: { limit: number }) => {
  // Application logic is bound to "onCall"
  const users = await getMultipleElements<{ id: number }[], { id: number }>(
    `users`,
    payload.limit,
    ({ id }) => ({ id }),
  );

  return users;
});

// @@@ After porting @@@
// Inside get-users.controller.ts file
import { getMultipleElements } from 'application/utils';

export const getUsersController = async (payload: { limit: number }) => {
  const users = await getMultipleElements<{ id: number }[], { id: number }>(
    `users`,
    payload.limit,
    ({ id }) => ({ id }),
  );

  return users;
};

// Inside the main index.ts file
import { onCall } from 'firebase-functions/v2/https';
import { getUsersController } from './get-users.controller';

// This file doesn't know anything about the app 
// - it just provides the setup
export const getUsers = onCall(getUsersController);

Writing code this way for the entry point of every endpoint will save you a lot of time during migration and provides nice separation of concerns. Another benefit is testability. You don't need to mock onCall behavior in tests anymore; just test getUsersController in isolation, and in a separate test, verify if it's integrated with onCall.
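
As a quick sketch of that testability gain, assuming Jest and a stubbed application/utils module:

// get-users.controller.test.ts
import { getUsersController } from './get-users.controller';

// The data-access util is stubbed, so no database and no onCall mocking is needed
jest.mock('application/utils', () => ({
  getMultipleElements: jest.fn().mockResolvedValue([{ id: 1 }]),
}));

describe('getUsersController', () => {
  it('returns users for the given limit', async () => {
    const users = await getUsersController({ limit: 10 });

    expect(users).toEqual([{ id: 1 }]);
  });
});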

Small And Reusable Code Chunks

Developers intuitively start grouping codebases into modules, then create really large modules that get embedded into lambdas, causing longer cold starts. To illustrate this issue, consider the following Zod library schema, which is reused across many different areas.

import { z } from 'zod';
import { UserProfileEntity } from 'entities/user-profile.entity';

// Schema is using another schema.
export const userProfileSchema = z.object({
  mdate: UserProfileEntity.schema.shape.mdate,
  profile: UserProfileEntity.schema.pick({
    displayName: true,
    id: true,
    avatar: true,
    bio: true,
    githubUrl: true,
    linkedInUrl: true,
    blogUrl: true,
    fbUrl: true,
    twitterUrl: true,
  }),
});
// The same happens in other files (skipped here to save space)...

Representation Of Schemas Relationship

Now, what happens when a developer makes a change in the UserEntity schema? It could break other schemas. The same applies to UserProfileEntity. Additionally, the amount of unnecessary code shipped to lambdas during initialization grows significantly - often including code that's not needed at all. For example, the endpoint might only take a single parameter like limit, yet the entire schema is imported to pick one property.

If you think that's crazy, yes, it is. That's why I'm proposing a completely different solution, one that eliminates the risk of breaking things due to hierarchical dependencies. The approach: use only what you need, rather than what you're forced to use because of code structure.

// schemas/core.ts
export const id = z.string().uuid();

// user-validation.ts
export const displayName = z.string().min(2).max(50);

// When attempting to validate
import { id } from 'schemas/core';
import { displayName } from 'user-validation';

const userProfileSchema = z.object({
  id,
  displayName,
});

Look how minimal, predictable, and simple this is. There's no longer any risk of breaking things because someone redefined a schema you're dependent on.

Isolating Types From Implementations

I would also call it "Contract/API design first, implementation later."

To keep things clear and concise, this is simply a situation where you have type definitions stored in one file and the implementation in another. In languages like C#, this is the default, and every developer using the language follows this approach.

However, in the "Mechico" of the JavaScript ecosystem, there are many approaches. This is because TypeScript has a built-in type inference mechanism, so if you have an object, you can create a type from it and save some "boilerplate" - as seen in Zod and other validation libraries, or when using as const to get the exact type from an object.

const APP_CONFIG = { API_URL: "https://" } as const;
type AppConfig = typeof APP_CONFIG;

There’s no problem if you're sure the type you're crafting is something that is used in one or two places. But this tactic won’t work for something highly reusable - like components, utils, or services logic. In these cases, what you want is low coupling and the ability to re-implement using the same contracts (type definitions).

// create-user.controller.defs.ts
// Contracts
type CreateUserController = (payload: { id: string }) => { mdate: string };
export { CreateUserController };

// create-user.controller.ts
const createUserControllerV1: CreateUserController = () => {
  // Implementation 1...
};

const createUserControllerV2: CreateUserController = () => {
  // Implementation 2...
};

Forcing this everywhere does not make sense, but as you can see for controllers, it provides a nice, quick option to change the implementation while keeping the same contract. This way, you won't modify the existing code - you'll just add a new one. Like everything, it has pros and cons, and specific use cases, but I recommend trying it.

Additionally, splitting types from implementation reduces the risk of circular dependency. You can read more about that in the Concerns about separating types from implementation article, which fully covers this topic.

Using Facade To Isolate From Library

We've used porting to isolate from the entire environment and infrastructure. Now it's time to isolate code from risky libraries.

The easiest way to achieve this is by using a facade. You create your own code that provides a wrapper for a library or other developers' code, which is used in many places throughout the application, to avoid direct coupling with it - this gives you easier replacement later and the option to customize some things.

The best candidates for this approach are:

  1. Database connection and query logic.
  2. Storages.
  3. Cloud provider-specific code.
  4. Error object creation.

Here's an example with error handling:

// errors.ts
import { https } from 'firebase-functions';

const error = (
  code: https.FunctionsErrorCode,
  symbol: string,
  content: string | { message: string; key: string }[],
): https.HttpsError =>
  new https.HttpsError(
    code,
    JSON.stringify({
      symbol,
      content,
      message: Array.isArray(content) ? content[0].message : content,
    }),
  );

const exists = (content = `Record already exists`) =>
  error(`already-exists`, `already-exists`, content);

// Usage in other file

throw exists('Record is already here');

By using a facade, you can change or swap the underlying library or external service in the future with minimal impact on the rest of your codebase, reducing the risk of tight coupling.

Performance Monitoring And Testing

I love an approach where every code change is driven by user needs. The performance of a system is a key aspect. Users and clients who pay for your work don’t care what’s under the hood. Just like when you use a TV, you don’t worry about what’s going on inside (unless you’re a TV engineer). You just expect the channel to change as fast as possible when you press a button on the remote.

To see what I mean, read the User first approach in web development.

The same applies to any application. Performance tests are critical, and this is a broad topic with many ways to measure performance. Typically, much of this information is now available in dashboard consoles (if you're using Google Cloud or similar). You can see how long it takes to call specific lambdas, how long cold starts take, and whether to increase the power for higher traffic or toggle the "auto scale" feature.

What’s really cool about cloud services is that you can use tools like AWS X-Ray or more generic solutions like Dynatrace to see how long specific parts of functions are taking. This isn’t the place to dive deep into those tools, but they’re incredibly helpful for identifying problems when real users start interacting with your code.

If you want to test things yourself, you can always add custom logs or test algorithms with libraries like BenchmarkJS. You can also use Postman to verify response times and failure rates for specific endpoints - just set it up to call them in a loop.

I’m sure there are plenty of other tools that allow you to do this from the console. So, to conclude:

  1. Add custom alerts/monitoring in production to detect slow endpoints.
  2. For custom code (algorithms), use BenchmarkJS to see how fast specific operations run - see the sketch after this list.
  3. Use tools like Postman or others to stress-test your lambdas and observe how they behave in real-world scenarios.
  4. Consider tools like X-Ray or Dynatrace to enhance your monitoring experience.
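
Here's roughly what point 2 could look like with Benchmark.js (a sketch - the compared snippets are just placeholders):

import Benchmark from 'benchmark';

const users = Array.from({ length: 1000 }, (_, id) => ({ id, name: `User ${id}` }));

new Benchmark.Suite()
  .add(`map`, () => {
    users.map(({ id, name }) => ({ id, name }));
  })
  .add(`for loop`, () => {
    const dtos: { id: number; name: string }[] = [];

    for (let i = 0; i < users.length; i += 1) {
      dtos.push({ id: users[i].id, name: users[i].name });
    }
  })
  // String(event.target) prints the case name, ops/sec, and margin of error
  .on(`cycle`, (event: Benchmark.Event) => {
    console.log(String(event.target));
  })
  .run();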

The Testing Balance

If you're writing tests just to meet coverage requirements, I’ve got bad news for you - you’re playing with fire. I’ve written a specific article on this topic, Reflections on test coverage in web development, which dives deeper into this issue.

The more tests you create - especially those that mock something - the more gaps you're introducing. I’m not saying don't write tests, just use the right test type for the right case. Here's a list:

  1. Unit tests – Verify algorithmic logic or simple object creation.
  2. Integration tests – Check if the lambda is using core services required to make the feature work, even if those services can't be directly tested.
  3. Black Box API tests (done in Playwright, Postman, or similar) – Focus on entire endpoints, or groups of endpoints, to verify workflows rather than code coverage.

So, of course, you could have 90% test coverage in your entire API codebase. But those tests would mostly be tied to the code’s structure and implementation. It’s much safer and faster to write tests in external tools that don’t depend on implementation details.

What if you migrate your API from Node to Java? The tests will still be valuable. What if you change from OOP to functional programming? The tests still hold up. Do you see the point? It’s much better and less "maintenance-heavy" to create tests that can verify endpoints without caring what’s inside. So, prefer covering the "paths" and verifying whether your endpoint works entirely from the consumer's perspective, rather than focusing on complicated and hard-to-maintain code-related tests with jest or similar tools.

As I mentioned - this doesn’t mean don’t write those tests, just pick the right type for the case.

Here’s an example of a simple Postman syntax for black box API testing:

Some Black Box Tests In Postman
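
The idea, written in Postman's Tests tab, looks roughly like this (the checked fields and thresholds are just examples):

pm.test(`GET /users returns 200 and a list`, () => {
  pm.response.to.have.status(200);

  const users = pm.response.json();
  pm.expect(users).to.be.an(`array`);
});

pm.test(`responds within 500ms`, () => {
  pm.expect(pm.response.responseTime).to.be.below(500);
});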

Some of these cases may already be covered in e2e tests if you're calling real endpoints without mocking them. The hardest cases will still require manual testing and test scenarios. It’s all about balance and choosing the right approach based on the complexity and how difficult it is to automate certain things.

Using An Event-Driven Approach

In more advanced scenarios, you may want to create a stream of events, reacting to specific events and executing logic accordingly. For example, when a user creates a resource and you want to send notifications to others, it’s inefficient to embed heavy logic in each lambda function. Instead, using an Event-Driven approach with a publish/subscribe model is a more scalable solution.

Here’s a scenario: A lambda function creates a user comment, triggers an event that is dispatched to a specific topic, and another mechanism listens for that event to send notifications.

// Inside the Lambda that creates the comment
dispatchEvent({
  topicName: 'sendNotifications',
  type: "SEND_NOTIFICATIONS",
  payload: {
    rate: payload.rate,
  },
});

// Inside the event handler
const sendNotifications = (payload: { rate: number, type: string }) => {
  // Logic for sending notifications...
}

The implementation depends on your server infrastructure. For on-premise setups, a memory-based event-driven approach might be sufficient. However, in the Cloud, relying solely on memory is impractical because each lambda function is created and terminated based on traffic, meaning you could lose data if it’s stored only in memory. External services are usually necessary to ensure reliability and persistence.
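
For example, on Google Cloud, the dispatchEvent helper from the snippet above could be backed by Pub/Sub. A rough sketch, assuming the @google-cloud/pubsub client:

import { PubSub } from '@google-cloud/pubsub';

const pubsub = new PubSub();

type AppEvent = {
  topicName: string;
  type: string;
  payload: Record<string, unknown>;
};

export const dispatchEvent = async ({ topicName, type, payload }: AppEvent) => {
  // The message is persisted by Pub/Sub, so it survives lambda teardown
  await pubsub.topic(topicName).publishMessage({ json: { type, payload } });
};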

Versioning And Contracts

Sometimes, improving certain aspects of your API requires a contract change. In such cases, API versioning becomes crucial. If your clients need a faster version of the API but the contract has changed, they will need to connect to the new version. Over time, you can notify them about the new version and inform them that the old endpoint will be deprecated after a set period. While this can result in some duplicated effort, it minimizes the risk of breaking existing functionality (if you've isolated your codebase).

If you follow an approach with modules and isolated islands, as mentioned earlier in the article, you can reuse small chunks of code where necessary and create multiple versions with minimal risk.

Structure Of Versioned API
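
In code, the wiring can be as simple as exposing both versions as separate lambdas (a sketch with illustrative paths):

import { onCall } from 'firebase-functions/v2/https';

import { getUsersController as getUsersV1Controller } from './modules/v1/get-users/get-users.controller';
import { getUsersController as getUsersV2Controller } from './modules/v2/get-users/get-users.controller';

// Both versions coexist until consumers migrate and v1 gets removed
export const getUsersV1 = onCall(getUsersV1Controller);
export const getUsersV2 = onCall(getUsersV2Controller);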

Using Fail Fast Approach

Sometimes your app may not work under certain conditions. It's good to detect these issues and be informed as early as possible.

For example, let's say you have specific configuration requirements for a lambda function - such as an environment variable. If someone starts calling this endpoint when the setup is missing, you'll likely encounter an internal error because you're not expecting API consumers to interact with it in an incomplete state.

Reality is complex, and perhaps you're asynchronously creating environment data. There's a small chance someone could call the lambda before everything is ready.

To handle such cases, it's wise to throw custom errors to avoid hours of debugging when a random internal error occurs in production.

To achieve this, you can use a simple fail fast technique, performing required checks in a type-safe way. If the data you're relying on is incorrect, you throw a custom error to indicate the issue.

// Example controller code

const isValidUserId = (userId: unknown): userId is string => {
  return typeof userId === 'string' && userId.length > 5;
}

const userId = req.params.userId;

if (!isValidUserId(userId)) {
  throw errors.badRequest('Invalid User ID');
}

// Proceed with the rest of the logic

Using Early Returns

It's a small, but important change. Early returns mean checking conditions at the beginning, validating them, and returning errors early, to avoid deeply nested if statements that can be hard to read.

Instead of this:

function processRequest(data) {
  if (data) {
    if (data.isValid) {
      if (data.hasPermission) {
        // Process the request
      } else {
        return 'No permission';
      }
    } else {
      return 'Invalid data';
    }
  } else {
    return 'No data provided';
  }
}

You use early returns like this:

function processRequest(data) {
  if (!data) return 'No data provided';
  if (!data.isValid) return 'Invalid data';
  if (!data.hasPermission) return 'No permission';
  
  // Process the request
}

This reduces complexity, improves readability, and makes it easier to follow the logic.

Using Dependency Injection To Decouple

Instead of directly importing dependencies, create an interface or type, and pass required components as function parameters. This approach decouples your code from specific implementations. For example, earlier in the article, we avoided referencing a complex authorization object by injecting only the subset of its properties needed for the check.

// libs/auth/is-authorized.ts
export const isAuthorized = (uid: string | undefined): boolean => {
  return uid !== undefined;
};

// modules/delete-document/delete-document.controller.ts
const deleteDocumentController = protectedController<Dto>(
  async (rawPayload, { uid, db }) => {
    if (!isAuthorized(uid)) {
      throw errors.unauthenticated();
    }
  },
);

This way, the function doesn't directly reference the entire authorization object.
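
Here's a slightly bigger sketch of the same idea - the dependency is described by a type and passed in instead of being imported (the UsersDb type and its method are hypothetical):

type UsersDb = {
  findById: (id: string) => Promise<{ id: string; name: string } | null>;
};

// The controller only knows about the narrow UsersDb contract
export const getUserById = (db: UsersDb) => async (id: string) => {
  const user = await db.findById(id);

  if (!user) {
    throw new Error(`User not found`);
  }

  return user;
};

// Composition happens at the edge, e.g. in the module entry point:
// const controller = getUserById(firestoreUsersDb);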

Dependency Injection can seem complex, but I've written an article, Dependency Injection Does Not Need To Be Complex, if you're interested in exploring this further.

Repo And Source Code

I've crafted a small example project that uses all the techniques mentioned above. Feel free to check it out and use it!

Here you have the source code that implements these techniques.

Summary

This was a really big article, and honestly, it’s more suited for a book or a comprehensive course rather than a single piece. But I’ve aimed to highlight the most critical factors, patterns, and techniques that mitigate the biggest risks in backend API development - instability, performance issues, maintenance, changeability, isolation, and predictability.

The backend is a unique area where a single mistake can be costly. It’s crucial not to overcomplicate things and to choose the right tools for the right problems. Measuring and monitoring are extremely important too - how can you know if you’ve improved something without the data?

Also, the DRY concept shouldn’t be applied blindly everywhere. Unnecessary abstractions and the complexity they bring can hurt performance and make code harder to read and change.

If I had to summarize this article in one sentence, it would be: "Divide and Conquer, Keep It Simple Stupid, and Design First".

About The Author - polubis

👋 Hi there! My name is Adrian, and I've been programming for almost 7 years 💻. I love TDD, monorepo, AI, design patterns, architectural patterns, and all aspects related to creating modern and scalable solutions 🧠.