Welcome to the Schema Benchmarks project. It compares the performance of schema validation libraries in detail, benchmarking each step of the process separately.
Download
We first measure the bundle size of each library. This matters for browser usage, where bundle size affects download time.
We do this by compiling example usage files with Rolldown, and measuring the size of the output, both minified and unminified.
With minification and tree shaking:
Without minification (but still tree shaken):
Benchmarks
Runtime benchmarks are run in sequence, on a GitHub runner.
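As a rough illustration of how per-operation timings can be taken, here is a minimal micro-benchmark loop. This is not the project's actual harness; the warm-up and iteration counts are arbitrary placeholders.

```ts
// Illustrative micro-benchmark loop - not the project's actual harness,
// which runs on a GitHub runner. Warm-up and iteration counts are arbitrary.
function bench(name: string, fn: () => void, iterations = 100_000): number {
  // Warm up so the engine's JIT has settled before measuring.
  for (let i = 0; i < 1_000; i++) fn();

  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  const elapsedMs = performance.now() - start;

  // Average cost per operation, in nanoseconds.
  return (elapsedMs / iterations) * 1e6;
}
```

A real harness would additionally guard against dead-code elimination and report statistical spread, not just a mean.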
Steps benchmarked include:
Initialization
Creating the schema itself. This is usually a one-time cost.
schemas.ts

```ts
import * as v from "valibot";

export const personSchema = v.object({
  name: v.string(),
  age: v.number(),
});

export type Person = v.InferOutput<typeof personSchema>;
```
Note
For graphs on this page, the best result for each library is shown.
Validation
Checking if a given value matches the schema. Crucially, this is different to parsing because it doesn't return a new value.
```ts
import * as v from "valibot";
import { personSchema } from "./schemas";

if (v.is(personSchema, data)) {
  // data is narrowed to Person
}
```
Validating valid data:
Validating invalid data:
Note
Some libraries only support validation (e.g. ajv) or parsing (e.g. zod). In these cases, we categorise them accordingly.
Parsing
Checking if a given value matches the schema, and returning a new value. This will include any transformations.
```ts
import * as v from "valibot";
import { personSchema } from "./schemas";

const person = v.parse(personSchema, data);
// person is of type Person
```
Info
Libraries with an asterisk (*) throw an error when parsing invalid data (and have no non-throwing equivalent), so the benchmark wraps them in a try/catch, which may itself affect performance.
Results with a dagger (†) abort early when parsing invalid data, so will tend to be faster.
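To make the try/catch point concrete, here is a sketch of how a throwing parser can be wrapped so it is comparable with non-throwing ones. `parsePerson` is a hand-rolled stand-in for a library's throwing parse function, not the benchmark's actual code.

```ts
// `parsePerson` stands in for a library parse function that throws on
// invalid data - it is illustrative, not any real library's implementation.
type Person = { name: string; age: number };

function parsePerson(data: unknown): Person {
  const obj = data as Record<string, unknown>;
  if (typeof obj?.name !== "string" || typeof obj?.age !== "number") {
    throw new Error("Invalid person");
  }
  return { name: obj.name, age: obj.age };
}

// The kind of try/catch wrapper implied for libraries marked with an
// asterisk (*): convert the thrown error into a plain failure value.
function tryParsePerson(data: unknown): Person | undefined {
  try {
    return parsePerson(data);
  } catch {
    return undefined; // invalid data
  }
}
```

The wrapper is what lets a throwing parser be timed on invalid inputs without aborting the benchmark run.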
Parsing valid data:
Parsing invalid data:
Tags
Optimizations
Some libraries utilise specific optimizations to improve performance. We specifically track:
- JIT: Libraries that use Just-In-Time compilation (usually via `new Function`) to generate optimized code at runtime, e.g. arktype
- Precompiled: Libraries that generate optimized code at build time, e.g. typia
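A minimal sketch of the JIT idea: generate a specialized validator for one schema shape as a string of code, then compile it once with `new Function`. Real libraries such as arktype generate far more sophisticated code; this only demonstrates the principle.

```ts
// Minimal JIT illustration: the schema shape is baked into straight-line
// generated code, so the compiled validator does no schema lookups at
// validation time. Not any real library's implementation.
type Shape = Record<string, "string" | "number">;

function compileValidator(shape: Shape): (data: any) => boolean {
  const checks = Object.entries(shape)
    .map(([key, type]) => `typeof data[${JSON.stringify(key)}] === "${type}"`)
    .join(" && ");

  // Compiled once per schema; reused for every validation call.
  return new Function(
    "data",
    `return typeof data === "object" && data !== null && ${checks};`
  ) as (data: any) => boolean;
}

const isPerson = compileValidator({ name: "string", age: "number" });
```

Because `new Function` evaluates code at runtime, this technique can be blocked by strict Content Security Policies, which is one reason some libraries offer precompiled output instead.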
Error handling
Some libraries support different error handling strategies. We specifically track:
- All errors: Parse the entire value before returning/throwing an error.
- Abort early: Return/throw an error as soon as an issue is found.
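The difference between the two strategies can be sketched with a hand-rolled validator; the field checks here are illustrative, not any particular library's implementation.

```ts
// Two error-handling strategies over the same set of field checks.
type Issue = { path: string; message: string };

const checks: Array<[string, (v: unknown) => boolean]> = [
  ["name", (v) => typeof v === "string"],
  ["age", (v) => typeof v === "number"],
];

// All errors: walk the entire value, collecting every issue.
function validateAll(data: Record<string, unknown>): Issue[] {
  const issues: Issue[] = [];
  for (const [path, check] of checks) {
    if (!check(data[path])) issues.push({ path, message: `invalid ${path}` });
  }
  return issues;
}

// Abort early: stop at the first issue - less work on invalid data.
function validateAbortEarly(data: Record<string, unknown>): Issue | null {
  for (const [path, check] of checks) {
    if (!check(data[path])) return { path, message: `invalid ${path}` };
  }
  return null;
}
```

On valid data the two do the same work; the gap only shows up on invalid data, which is why abort-early results tend to look faster in the "invalid data" graphs.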
Standard Schema
Many libraries implement the Standard Schema interface, which lets consuming libraries accept any compliant schema without writing library-specific integrations.
```ts
import { personSchema } from "./schemas";

const person = await upfetch(url, { schema: personSchema });
```
We benchmark the time taken to parse using a standard schema.
Info
Some libraries require an adapter before they can be used as a standard schema. The time to convert the schema is not measured, only the time to parse using it.
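For reference, a minimal sketch of the interface itself: a Standard Schema exposes its validation logic under a `~standard` key, so a consumer only needs that key, never the producing library. The key names follow the published spec; the person check and vendor name below are illustrative.

```ts
// Minimal hand-rolled Standard Schema (version 1). The `~standard`
// property names follow the spec; the validation logic and "example"
// vendor are placeholders, not a real library.
interface StandardResult<T> {
  value?: T;
  issues?: Array<{ message: string }>;
}

const personStandardSchema = {
  "~standard": {
    version: 1,
    vendor: "example", // hypothetical vendor name
    validate(value: unknown): StandardResult<{ name: string }> {
      const obj = value as { name?: unknown };
      return typeof obj?.name === "string"
        ? { value: { name: obj.name } }
        : { issues: [{ message: "name must be a string" }] };
    },
  },
};

// A consumer calls validate through the interface alone:
const result = personStandardSchema["~standard"].validate({ name: "Ada" });
```

Note that the spec also permits `validate` to return a Promise, which consumers must be prepared to await.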
Parsing valid data:
Parsing invalid data:
Codec
Some libraries support two-way conversion of data, often referred to as "encoding" and "decoding".
We benchmark the time taken to encode and decode a Date to and from a string (usually an ISO 8601 string, but we don't enforce this).
```ts
import * as z from "zod";

const dateFromString = z.codec(z.iso.datetime(), z.date(), {
  encode: (date) => date.toISOString(),
  decode: (str) => new Date(str),
});

dateFromString.encode(new Date(0)); // "1970-01-01T00:00:00.000Z"
dateFromString.decode("1970-01-01T00:00:00.000Z"); // Date
```
Invalid data
We don't benchmark codecs with invalid data, as many libraries require the input to be correctly typed before passing it to the codec.
Codecs that do accept unknown input are marked with an asterisk (*), as they may be slower.
Encoding (Date → string):
Decoding (string → Date):