Tags: zod, joi, yup, class-validator, superstruct, validation, libraries, comparison

Searching For The Holy Grail In Validation World

Modern web development oriented around JS is weird. Languages like C# or Java probably have one standardized validation library (I haven't checked that, due to laziness). Thus, there's no need for articles or presentations comparing n robust, modern, or game-changing libraries...

Unfortunately, we're in what feels like a Clown Fiesta of the JavaScript ecosystem. Developers constantly create new libraries that do the same thing as the previous one, just with different syntax. When there is a performance difference, they call it "game-changing", and if the author of the library has connections with the Tech Influencers Mafia, you can be sure that in no time you'll see a cringe thumbnail on YouTube, followed by a video showing how "game-changing" it is.

Of course, I’m joking, but it’s crucial to verify things before using them. I enjoy these kinds of experiments because they reveal how often tech influencers make claims without thoroughly testing things. I love hearing phrases like "performance impact", "maintenance issues", and "scalability problems" - all of which are generic terms that can’t be easily measured or boiled down to simple true or false outcomes.

So, today we’ll determine which option is the most worthwhile to use, considering its trade-offs and advantages.

The Essentials Of Validation Libraries

I’ve identified and explained the core features that, in my opinion, every validation library should include. Each will be detailed in its own section.

Advanced Model

The ability to check whether the form has been submitted, whether it’s dirty, and what errors are present. Additionally, the ability to identify which validator failed, allowing you to display a list of errors with green/red markers if needed.

{
    "errors": {
        "username": {
            "required": ["This field is required", true],
            "minLength": ["The min length is 3", false]
        }
    },
    "values": {
        "username": "John"
    },
    "dirty": "false",
    "valid": "false",
    "invalid": "true",
    "pristine": "true",
    "touch": {
        "username": true
    },
    "touched": true,
    "keys": ["username"]
}
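
If I were to sketch such a model in TypeScript, it could look roughly like this (the shape mirrors the JSON above; it's my own illustration, not any specific library's API):

interface ValidationModel<Values extends Record<string, unknown>> {
  // Per-field errors: validator name -> [message, flag reported by the validator].
  errors: Partial<Record<keyof Values, Record<string, [string, boolean]>>>;
  values: Values;
  dirty: boolean;
  valid: boolean;
  invalid: boolean;
  pristine: boolean;
  // Per-field touch flags plus an aggregated one.
  touch: Partial<Record<keyof Values, boolean>>;
  touched: boolean;
  keys: (keyof Values)[];
}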

Validation Strategies

The option to choose between the fail-fast approach or validate everything and return the full report. This depends on the use case and the desired outcome. It’s often necessary for more complex frontend forms. At this point, we're not discussing specific libraries, so I'll just introduce the concept.

validate(schema, 'report');
// or
validate(schema, 'fail-fast');
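
Neither of these calls comes from a real library - it's just the concept. A bare-bones sketch of both strategies (with a slightly different signature than the pseudo-call above) could look like this:

type Validator<T> = (value: T) => string | null;

const validate = <T>(
  value: T,
  validators: Validator<T>[],
  mode: 'report' | 'fail-fast',
): string[] => {
  const errors: string[] = [];

  for (const validator of validators) {
    const error = validator(value);

    if (error) {
      errors.push(error);
      // Fail-fast: stop at the first broken rule.
      if (mode === 'fail-fast') break;
    }
  }

  // Report: every broken rule has been collected.
  return errors;
};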

Good Enough Size

Anyone who has used MomentJS knows how significantly libraries can affect application performance, especially those that aren’t tree-shakable and are built as monolithic packages. MomentJS, for example, is notably large (67 kB gzipped) for a library that primarily handles date manipulation. In contrast, alternatives like Date-Fns are much smaller (7.46 kB gzipped) and fully tree-shakable, making them far more efficient.

Therefore, library size is a critical factor to consider.

Developer Experience And TypeScript

Auto-hints during schema creation, type-safety, and strong TypeScript support are highly sought after by developers. While not all libraries seamlessly integrate with TypeScript, developers appreciate how it simplifies their workflow, especially when they can rely on pre-defined types from a library instead of manually creating complex ones.

For instance, Zod and a few others (keep reading) offer excellent TypeScript support. They provide complete type-safety for validation logic, eliminating the need to manually define types for schemas. Developers can simply infer types from the configuration object.

const UserSchema = z.object({
  id: z.number(),
  name: z.string(),
  email: z.string().email(),
});
// Type is automatically determined based on schema!
type User = z.infer<typeof UserSchema>;

Eye-Catching API

Whenever I see code like this, I can't help but think: "Interesting ψ(`∇´)ψ".

import { v } from "vortex";
const schema = v({
   username: '1-10*'
})

This comes from an old validation library I worked on a while ago. The boilerplate is small, but it sacrifices readability. In this case, the syntax means the username should have a length between 1 and 10, and the field is required. Developers usually seek a balance - an API that’s both readable and avoids unnecessary boilerplate.

Good Enough Run-Time Performance

The last thing you want is glitches when users are entering data into form inputs. Validation, especially the validation of entire schemas during every interaction, can be slow. That’s why both rendering and validation performance are crucial. Developers consider this, and it becomes a key factor when building large forms. Many libraries with framework adapters include built-in memoization or other mechanisms to prevent unnecessary operations.
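
As a trivial illustration of the idea, here is a sketch with Zod's safeParse in React (the hook is made up for this example, not an adapter from any library):

import { useMemo } from 'react';
import { z } from 'zod';

const FormSchema = z.object({
  username: z.string().min(3),
});

type FormValues = z.infer<typeof FormSchema>;

const useFormValidation = (values: FormValues) => {
  // The schema runs only when the form values object actually changes.
  return useMemo(() => FormSchema.safeParse(values), [values]);
};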

Trends And Popularity

The number of downloads on npm is a somewhat raw metric. Before choosing a library, developers try to predict its longevity to avoid future migrations. They check who is using the library (such as big companies or projects) and investigate the author’s credibility (perhaps a well-known engineer). After that, they review the issues list and roadmap to assess the library’s future prospects.

Features And Utils

You need to ensure the library provides the essential features you need - nothing groundbreaking there. However, some libraries offer more advanced capabilities, such as first-class support for translation mechanisms, observability, or other valuable features. It’s always a good idea to explore a library’s unique features alongside its core functionality before making a decision. For example, a unique feature of the Class-Validator library is its use of decorators:

import { IsEmail, IsNotEmpty, IsBoolean, Equals, MinLength } from 'class-validator';

class UserForm {
  @IsNotEmpty({ message: 'Login is required' })
  login: string;

  @MinLength(8, { message: 'Password must be at least 8 characters long' })
  password: string;
}

Additionally, the availability of built-in validation functions, such as email(), username(), and others, is crucial. Many libraries either lack pre-defined validators or offer only basic ones by default. A rich set of built-in validators saves time and effort, reducing the need to write custom validation logic for common scenarios.
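
For example, Yup ships an email() rule out of the box, while in Superstruct you would compose one yourself from a pattern refinement (a quick sketch; the regex is deliberately simplified):

import * as yup from 'yup';
import { pattern, string } from 'superstruct';

// Yup: built-in email validator.
const yupEmail = yup.string().email();

// Superstruct: no email() helper, so you build one from a refinement.
const superstructEmail = pattern(string(), /^[^@\s]+@[^@\s]+\.[^@\s]+$/);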

Paradigm Alignment

Mixed paradigms aren’t always a major issue - many technologies combine object-oriented, functional, and reactive paradigms. However, many engineers prefer to maintain consistency within their codebase. That’s why they often seek libraries that align with the paradigm they’re using. For example, if your codebase is primarily object-oriented and relies on classes, using the Class-Validator library might be a better fit.
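
For contrast, the same UserForm from the decorator example maps naturally onto Superstruct's functional composition (a rough sketch; the 64-character upper bound is arbitrary, and the custom error messages are omitted):

import { object, string, size, nonempty } from 'superstruct';

// The functional counterpart of the decorated UserForm class.
const UserFormSchema = object({
  login: nonempty(string()),
  password: size(string(), 8, 64),
});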

Integration With Frameworks

This is the final and perhaps most important point - ensuring that the library integrates smoothly with the frontend and backend frameworks you're using. For example, using Class-Validator in a React project might force you to write a lot of custom code, leading to ongoing maintenance and potentially poor runtime performance.
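
Schema-based libraries, on the other hand, usually plug into form libraries through ready-made adapters. Here is a sketch of what that looks like with react-hook-form and its Zod resolver (assuming the @hookform/resolvers package is installed):

import { useForm } from 'react-hook-form';
import { zodResolver } from '@hookform/resolvers/zod';
import { z } from 'zod';

const LoginSchema = z.object({
  login: z.string().min(1),
  password: z.string().min(8),
});

type LoginForm = z.infer<typeof LoginSchema>;

const useLoginForm = () => {
  // The resolver wires Zod validation into react-hook-form for us.
  return useForm<LoginForm>({ resolver: zodResolver(LoginSchema) });
};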

Comparison Candidates

By now, you know what I was considering during my research and selection process for this comparison. I’ve taken many factors into account - features, community opinion, number of downloads, and insights from various articles. Based on this, I’ve chosen the following candidates:

  1. Zod
  2. Yup
  3. Joi
  4. Superstruct
  5. Class-Validator

The order is entirely random for now. Let's keep going, and by the end, we'll choose the best one!

Comparing Popularity And Usage

Let's see what it looks like on Npm Trends.

Trends Libs On Npm

Setup Impact

Developers prefer simple setups, so I’ve compiled a list of steps for each library, along with brief descriptions. I've focused on setups that work well with TypeScript. Fewer steps, fewer dependencies, and minimal configuration mean less maintenance in the future.

Superstruct

  1. Install the library via npm i superstruct.

Zod

  1. Install the library via npm i zod.

Yup

  1. Install the library via npm i yup.

Class-Validator

  1. Install the library via npm i class-validator.
  2. Install class-transformer for handling JSON-to-class transformation via npm i class-transformer.
  3. Update your TypeScript configuration to enable decorators: set experimentalDecorators and emitDecoratorMetadata to true (see the snippet after this list).
  4. Add the Babel plugin for decorators: npm i @babel/plugin-proposal-decorators.
  5. Update your Babel configuration to support decorators.
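
For step 3, the relevant fragment of tsconfig.json boils down to two flags (a minimal sketch - the rest of your configuration stays untouched):

{
  "compilerOptions": {
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true
  }
}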

Joi

  1. Install the library via npm i joi.
  2. Install the type definitions via npm i @types/joi.
  3. Explore solutions for schema type inference (see this thread).

Comparing TypeScript Support

Let's take a quick look at the table that sums up the differences, and then dive into each library:

| Library | Type Inference From Schemas | Type-Safety | Type Definitions |
| --- | --- | --- | --- |
| Yup | ✅ Out of the box | ✅ By Design | ✅ Included |
| Zod | ✅ Out of the box | ✅ By Design | ✅ Included |
| Joi | ⚠️ Via separate lib | ⚠️ Via separate lib | ⚠️ Via separate lib |
| Class-Validator | ❌ Not supported | ❌ Not supported | ✅ Included |
| Superstruct | ✅ Out of the box | ✅ By Design | ✅ Included |

Yup, Zod, and Superstruct support TypeScript as first-class citizens, and all of them provide a dedicated utility type or built-in type inference from schemas (so you don't need to add types manually and then assign them). All you need to do is:

// Zod
import { z } from 'zod';

const UserSchema = z.object({
  id: z.number(),
});

type User = z.infer<typeof UserSchema>;

// Yup
import * as yup from 'yup';

const UserSchema = yup.object({
  id: yup.number().required(),
});

type User = yup.InferType<typeof UserSchema>;

// Superstruct
import { object, number, type Infer } from 'superstruct';

const UserSchema = object({
  id: number(),
});

type User = Infer<typeof UserSchema>;

This makes these three special - they provide type-safety for us. If we declare a string, we know it's a string at both compile time and runtime. Without type-safety, you never know. Here is an example of violated type-safety in the Class-Validator library (it's unavoidable, because decorators have limited type inference).

import { IsInt, validate } from 'class-validator';

class User {
  @IsInt() // The decorator can't tell that the property type is a string...
  id: string;
}

const user = new User();
// TypeScript is fine with this, because `id` is typed as a string...
user.id = "123";
// ...but at runtime the @IsInt rule fails.
validate(user).then((errors) => console.log(errors.length)); // 1

The most interesting case is Joi. It's written in plain JavaScript, but after installing a separate types package you get roughly the same as Yup, Zod, and Superstruct. To get TypeScript support you need to install: npm install --save-dev @types/joi. However, schema type inference will still require much more work (as mentioned before) - again, here is the thread.
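
To illustrate the duplication, here is a rough sketch - the interface has to be written and kept in sync with the schema by hand:

import Joi from 'joi';

// The schema describes the shape at runtime...
const userSchema = Joi.object({
  id: Joi.number().required(),
  email: Joi.string().email().required(),
});

// ...but the static type has to be maintained manually next to it.
interface User {
  id: number;
  email: string;
}

const { error, value } = userSchema.validate({ id: 1, email: 'john@doe.com' });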

Here is the link to all schemas.

Load Time Check

A smaller size is better for both developers and users. Let's examine and compare the size of each library and its potential impact on application load time (the data has been taken from Bundlephobia):

| Library | Size (Gzipped + Minified) | Download Time (3G) | Download Time (4G) | Tree-Shakable Design |
| --- | --- | --- | --- | --- |
| Superstruct | ~3.4 kB | ~0.069 s | ~0.004 s | ✅ Yes |
| Yup | ~12.8 kB | ~0.256 s | ~0.015 s | ⚠️ Limited |
| Zod | ~14.2 kB | ~0.284 s | ~0.016 s | ⚠️ Limited |
| Joi | ~42.6 kB | ~0.85 s | ~0.049 s | ⚠️ Limited |
| Class-Validator + Class-Transformer | ~94.1 kB + ~4 kB | ~1.88 s + ~0.08 s | ~0.108 s + ~0.005 s | ⚠️ Limited |

Class-Validator is usually used together with Class-Transformer, which programmatically transforms JSON data into JavaScript class instances. It's required because the decorator logic runs on class instances, not on plain objects.
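
A minimal sketch of that pairing (the UserForm class and the payload are just for illustration):

import 'reflect-metadata';
import { plainToInstance } from 'class-transformer';
import { IsNotEmpty, MinLength, validate } from 'class-validator';

class UserForm {
  @IsNotEmpty()
  login!: string;

  @MinLength(8)
  password!: string;
}

// A plain object (e.g. a parsed request body) has no prototype,
// so the decorator metadata can't be applied to it directly.
const payload = { login: 'john', password: 'short' };

// class-transformer turns it into a real UserForm instance first...
const form = plainToInstance(UserForm, payload);

// ...and only then class-validator can run the decorator-based rules.
validate(form).then((errors) => console.log(errors.length)); // 1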

Boilerplate And Syntax Check

It's difficult to discuss boilerplate in this article since all of these libraries aim to reduce it. However, we can still compare them to see which one offers the cleanest syntax. To save space, I've included a GIF with linked examples (imagine how much space it would take if I added them all here).

Comparing Libraries On GIF

| Library | Lines Of Code |
| --- | --- |
| Joi | 53 |
| Zod | 58 |
| Yup | 64 |
| Superstruct | 67 |
| Class-Validator + Class-Transformer | 124 |

To be honest, it depends on the formatting and the APIs you're using. However, it's clear that Class-Validator is quite boilerplate-heavy. Additionally, having to manually declare the property types alongside the decorators can be frustrating and time-consuming.

class Mindmap {
  @IsString()
  id!: string;
  // or...
  @IsString()
  id: string | undefined;
  // Other...
}

Here is the link to all schemas.

Bundle Size Benchmark

The described size of a library and its actual size after installation and usage in a project are two completely different topics. Many factors come into play. To clarify this, let's conduct an experiment: install each validation library, embed its code and schema into an application, and observe how it impacts the bundle size for the same feature using different libraries.

For testing purposes, I followed these steps:

  1. Installed the library and configured it.
  2. Imported the schema into the client-side code and executed it inside a useEffect hook (a rough sketch follows this list).
  3. Measured the bundle size before and after for each library.
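
Step 2 boiled down to something like this (a rough sketch assuming the Zod variant and a React client; the import path is made up):

import { useEffect } from 'react';
import { zodMindmapSchema } from './validation/zod';

const ValidationProbe = () => {
  useEffect(() => {
    // Running the schema once ensures the bundler can't tree-shake it away.
    zodMindmapSchema.safeParse({});
  }, []);

  return null;
};

export { ValidationProbe };

The measured results: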

| Validation Library | Bundle Size Sum (kB) |
| --- | --- |
| None | 841.24 |
| Superstruct | 1278.64 |
| Yup | 1311.47 |
| Zod | 1330.38 |
| Joi | 1419.58 |
| Class-Validator | 1515.52 |

As you can see, the actual size can be quite different from what is declared by the library authors. Many factors influence this, and it’s difficult to predict the final outcome. It depends on various things such as the bundler you are using, how it’s configured, whether you're using gzip or not, and the minification algorithm applied.

Here you can find all assets.

Runtime Benchmark

It's important to have a sufficiently complex case to compare the impact on runtime performance, so I'll write validation for my mindmap creator (which I'm working on and will release soon).

Mindmap Creator Demo

import { z } from 'zod';

const id = z.string();
const position = z.object({
  x: z.number(),
  y: z.number(),
});
const nodeName = z
  .string()
  .min(2)
  .max(100)
  .regex(/^[A-Za-zÀ-ÖØ-öø-ÿ' -]+$/);
const nodeDescription = z.string().min(25).max(300).optional();

const zodMindmapSchema = z.object({
  id,
  name: z.string().min(2).max(100),
  description: z.string().min(25).max(300).optional(),
  orientation: z.enum([`x`, `y`]),
  cdate: z.string().date(),
  mdate: z.string().date(),
  nodes: z.array(
    z.union([
      z.object({
        type: z.literal(`internal`),
        id,
        position,
        data: z
          .object({
            id,
          })
          .nullable(),
      }),
      z.object({
        type: z.literal(`external`),
        id,
        position,
        data: z.object({
          name: nodeName,
          description: nodeDescription,
          path: z.string().url(),
        }),
      }),
    ]),
  ),
  edges: z.array(
    z.object({
      id,
      source: id,
      target: id,
      type: z.enum([`curved`, `linear`]),
    }),
  ),
});

export { zodMindmapSchema };

We'll use a probing technique explained in the article Loops in JavaScript and Their Performance, via the BenchmarkJS library. In short, we'll wrap each validation function invocation, run it n times, and take the average. This process is repeated for each library. Here are the interesting results:

| Library | Ops/Sec | Margin of Error | Samples | Total Time (s) |
| --- | --- | --- | --- | --- |
| Superstruct | 9,136 | ±5.55% | 121 | 0.013 |
| Class-Validator | 347 | ±4.22% | 116 | 0.334 |
| Joi | 203 | ±2.95% | 115 | 0.566 |
| Zod | 131 | ±3.74% | 116 | 0.883 |
| Yup | 62.84 | ±7.56% | 113 | 1.798 |

Yes, the Superstruct numbers are not a mistake. I was confused when I saw them for the first time too :D. It seems it's outstanding not only in code size but also at runtime.

Margin of error represents the range within which the true performance result (operations per second) is likely to fall. For example, if a benchmark shows Ops/Sec: 62.84 ± 7.56%, it means the actual performance could vary by ±7.56% from the measured result. This variation can be influenced by background processes, system load, environmental factors, and other variables that affect test stability. A higher margin of error suggests greater variability or instability during test execution, while a lower margin of error indicates more consistent and reliable results.

Here is a link to the Source Code to see how I've measured that.
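
If you don't feel like digging through the source, the harness boiled down to roughly this shape (a sketch - only the Zod case is shown, and validMindmap as well as the import paths are assumed fixtures):

import Benchmark from 'benchmark';
import { zodMindmapSchema } from './schemas/zod';
import { validMindmap } from './fixtures';

const suite = new Benchmark.Suite();

suite
  .add('zod', () => {
    // Each library validates the same mindmap payload.
    zodMindmapSchema.safeParse(validMindmap);
  })
  .on('cycle', (event: Benchmark.Event) => {
    // Prints ops/sec, the margin of error, and the number of samples.
    console.log(String(event.target));
  })
  .run();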

My Thoughts And Rank

In my opinion, Zod is definitely not the top choice, contrary to what many web developers say. For me, the top choice right now is Superstruct. The performance is excellent, and since I care a lot about loading times, this is a critical factor for me.

The setup of Superstruct is straightforward, the validation model offers everything I need, and the schemas are not overloaded with boilerplate. The number of downloads is significant enough, and it has garnered a good level of trust in the community.

Additionally, the runtime validation in Superstruct is excellent, making it the best candidate for any kind of validation task.

The TypeScript support is more than sufficient and offers the same schema inference as Zod, but Superstruct handles other aspects much better. In my opinion, it feels like a better version of Zod, and it fits perfectly into my favorite programming paradigm - functional programming.

I'm using a user-first approach in web development daily. This means that user requirements and app quality come first, with the developer experience being secondary. That's why Superstruct is my top choice now.

  1. Superstruct
  2. Zod
  3. Yup
  4. Joi
  5. Class-Validator

Summary

As you can see, there are many good validation tools available. To be honest, all of them will probably meet your needs. Balancing the required features, a smaller package size and better performance is key.

Superstruct, Zod, Yup, and Joi are all solid choices. However, I would caution against using Class-Validator for greenfield projects. It can be risky because of its added complexity, dependencies, and performance overhead. If you're already using it in existing projects and it's working fine without issues, there's no reason to replace it - unless you start encountering visible problems.

The most important factor is not to make your life harder. If a library is heavier, performs slower, or requires a lot of extra work, it can lead to more effort and headaches down the road. Choosing something lightweight and efficient from the start is crucial.

I continuously evaluate and update my tech stack. Sometimes, I replace one tool with another to ensure I have the best foundation for both myself and the users. It's an investment in the long-term health of the project, and it's worth it.

Anyway, it’s time to wrap up this long article. I hope it has been helpful to you. Take all these factors into account (I’m sure I’ve missed something), and choose the tools that fit your needs best.

About Author: polubis

👋 Hi there! My name is Adrian, and I've been programming for almost 7 years 💻. I love TDD, monorepo, AI, design patterns, architectural patterns, and all aspects related to creating modern and scalable solutions 🧠.