Currently the fastest JSON Schema validator for node.js and browser.
It uses doT templates to generate super-fast validating functions.
NB: Upgrading to version 2.0.0.
- ajv implements the full JSON Schema draft 4 standard:
  - all validation keywords (see JSON-Schema validation keywords)
  - full support of remote refs (remote schemas have to be added with addSchema or compiled to be available)
  - support of circular dependencies between schemas
  - correct string lengths for strings with unicode pairs (can be turned off)
  - formats defined by JSON Schema draft 4 standard and custom formats (can be turned off)
  - validates schemas against meta-schema
- supports browsers and nodejs 0.10-5.0
- asynchronous loading of referenced schemas during compilation
- "All errors" validation mode with option allErrors
- error messages with parameters describing error reasons to allow creating custom error messages
- i18n error messages support with the ajv-i18n package (version >= 1.0.0)
- filtering data from additional properties
- NEW: custom keywords
- NEW: keywords constant and contains from JSON-schema v5 proposals with option v5
Currently ajv is the only validator that passes all the tests from the JSON Schema Test Suite (according to json-schema-benchmark), apart from the test requiring that 1.0 is not an integer, which is impossible to satisfy in JavaScript.
ajv generates code to turn JSON schemas into javascript functions that are efficient for v8 optimization.
Currently ajv is the fastest validator according to these benchmarks:
- json-schema-benchmark - 70% faster than the second place
- jsck benchmark - 20-190% faster
- z-schema benchmark
- themis benchmark
npm install ajv
Try it in the node REPL: https://tonicdev.com/npm/ajv
The fastest validation call:
var Ajv = require('ajv');
var ajv = Ajv(); // options can be passed, e.g. {allErrors: true}
var validate = ajv.compile(schema);
var valid = validate(data);
if (!valid) console.log(validate.errors);
or with less code
// ...
var valid = ajv.validate(schema, data);
if (!valid) console.log(ajv.errors);
// ...
or
// ...
ajv.addSchema(schema, 'mySchema');
var valid = ajv.validate('mySchema', data);
if (!valid) console.log(ajv.errorsText());
// ...
ajv compiles schemas to functions and caches them in all cases (using the stringified schema as a key, via json-stable-stringify), so that the next time the same schema is used (not necessarily the same object instance) it won't be compiled again.
The best performance is achieved when using compiled functions returned by the compile or getSchema methods (there is no additional function call).
Please note: every time the validation function or ajv.validate is called, the errors property is overwritten. You need to copy the errors array reference to another variable if you want to use it later (e.g., in a callback). See Validation errors.
You can require ajv directly from the code you browserify - in this case ajv will be a part of your bundle.
If you need to use ajv in several bundles you can create a separate browserified bundle using bin/create-bundle
script (thanks to siddo420).
Then you need to load ajv in the browser:
<script src="ajv.bundle.js"></script>
Now you can use it as shown above - require will be global and you can call require('ajv').
Ajv was tested with these browsers:
The following formats are supported for string validation with "format" keyword:
- date: full-date from http://tools.ietf.org/html/rfc3339#section-5.6
- date-time: date-time from the same source. Both date and date-time validate ranges in full mode and only the regexp in fast mode (see options).
- uri: full uri with optional protocol.
- email: email address.
- hostname: host name according to http://tools.ietf.org/html/rfc1034#section-3.5
- ipv4: IP address v4.
- ipv6: IP address v6.
- regex: tests whether a string is a valid regular expression by passing it to the RegExp constructor.
There are two modes of format validation: fast and full, which affect all formats but ipv4 and ipv6. See Options for details.
You can add additional formats and replace any of the formats above using addFormat method.
You can find patterns used for format validation and the sources that were used in formats.js.
Starting from version 2.0.0 ajv supports custom keyword definitions.
WARNING: The main drawback of extending JSON-schema standard with custom keywords is the loss of portability of your schemas - it may not be possible to support these custom keywords on some other platforms. Also your schemas may be more challenging to read for other people. If portability is important you may prefer using additional validation logic outside of JSON-schema rather than putting it inside your JSON-schema.
The advantages of using custom keywords are:
- they allow you to keep a larger portion of your validation logic in the schema
- they make your schemas more expressive and less verbose
- they are fun to use
You can define custom keywords with addKeyword method. Keywords are defined on the ajv
instance level - new instances will not have previously defined keywords.
Ajv allows defining keywords with:
- validation function
- compilation function
- macro function
- inline compilation function that should return code (as string) that will be inlined in the currently compiled schema.
The validation function will be called during data validation. It will be passed schema, data and parentSchema (if it has 3 arguments) at validation time, and it should return the validation result as a boolean. It can return an array of validation errors via the .errors property of itself (otherwise a standard error will be used).
This way of defining keywords is provided as a way to quickly test your keyword; it is not recommended because it performs worse than compiled keywords.
Example: draft5 constant keyword (equivalent to the enum keyword with a single item):
ajv.addKeyword('constant', { validate: function (schema, data) {
  // deepEqual can be any deep-equality check, e.g. the deep-equal npm package
  return typeof schema == 'object' && schema !== null
    ? deepEqual(schema, data)
    : schema === data;
} });
var schema = { "constant": 2 };
var validate = ajv.compile(schema);
console.log(validate(2)); // true
console.log(validate(3)); // false
var schema = { "constant": { "foo": "bar" } };
var validate = ajv.compile(schema);
console.log(validate({foo: 'bar'})); // true
console.log(validate({foo: 'baz'})); // false
The compilation function will be called during schema compilation. It will be passed the schema and the parent schema, and it should return a validation function. This validation function will be passed data during validation; it should return the validation result as a boolean, and it can return an array of validation errors via the .errors property of itself (otherwise a standard error will be used).
In some cases this is the best approach to define keywords, but it has the performance cost of an extra function call during validation. If the keyword logic can be expressed via some other JSON-schema, then a macro keyword definition is more efficient (see below).
Example: range and exclusiveRange keywords using a compiled schema:
ajv.addKeyword('range', { type: 'number', compile: function (sch, parentSchema) {
  var min = sch[0];
  var max = sch[1];
  return parentSchema.exclusiveRange === true
    ? function (data) { return data > min && data < max; }
    : function (data) { return data >= min && data <= max; };
} });
var schema = { "range": [2, 4], "exclusiveRange": true };
var validate = ajv.compile(schema);
console.log(validate(2.01)); // true
console.log(validate(3.99)); // true
console.log(validate(2)); // false
console.log(validate(4)); // false
The "macro" function is called during schema compilation. It is passed the schema and the parent schema, and it should return another schema that will be applied to the data in addition to the original schema (if the schemas have different keys they are merged, otherwise the allOf keyword is used).
It is the most efficient approach (in cases when the keyword logic can be expressed with another JSON-schema) because it is usually easy to implement and there is no extra function call during validation.
The range and exclusiveRange keywords from the previous example defined with a macro:
ajv.addKeyword('range', { macro: function (schema, parentSchema) {
return {
minimum: schema[0],
maximum: schema[1],
exclusiveMinimum: !!parentSchema.exclusiveRange,
exclusiveMaximum: !!parentSchema.exclusiveRange
};
} });
Example: draft5 contains keyword that requires that the array has at least one item matching the schema (see https://github.com/json-schema/json-schema/wiki/contains-(v5-proposal)):
ajv.addKeyword('contains', { macro: function (schema) {
return { "not": { "items": { "not": schema } } };
} });
var schema = {
"contains": {
"type": "number",
"minimum": 4,
"exclusiveMinimum": true
}
};
var validate = ajv.compile(schema);
console.log(validate([1,2,3])); // false
console.log(validate([2,3,4])); // false
console.log(validate([3,4,5])); // true, number 5 matches schema inside "contains"
See the example of defining the recursive macro keyword deepProperties in the test.
The inline compilation function is called during schema compilation. It is passed three parameters: it (the current schema compilation context), schema and parentSchema, and it should return the code (as a string) that will be inlined in the code of the compiled schema. This code can be either an expression that evaluates to the validation result (boolean), or a set of statements that assign the validation result to a variable.
While it can be more difficult to define keywords with "inline" functions, it can have the best performance.
Example: even keyword:
ajv.addKeyword('even', { type: 'number', inline: function (it, schema) {
  var op = schema ? '===' : '!==';
  return 'data' + (it.dataLevel || '') + ' % 2 ' + op + ' 0';
} });
var schema = { "even": true };
var validate = ajv.compile(schema);
console.log(validate(2)); // true
console.log(validate(3)); // false
'data' + (it.dataLevel || '') in the example above is the reference to the currently validated data. Also note that schema (the keyword schema) is the same as it.schema.even, so schema is not strictly necessary here - it is passed for convenience.
Example: range keyword defined using a doT template:
var doT = require('dot');
var inlineRangeTemplate = doT.compile("\
{{ \
var $data = 'data' + (it.dataLevel || '') \
, $min = it.schema.range[0] \
, $max = it.schema.range[1] \
, $gt = it.schema.exclusiveRange ? '>' : '>=' \
, $lt = it.schema.exclusiveRange ? '<' : '<='; \
}} \
var valid{{=it.level}} = {{=$data}} {{=$gt}} {{=$min}} && {{=$data}} {{=$lt}} {{=$max}}; \
");
ajv.addKeyword('range', {
type: 'number',
inline: inlineRangeTemplate,
statements: true
});
'valid' + it.level in the example above is the expected name of the variable that should be set to the validation result. The property statements in the keyword definition should be set to true if the validation code sets the variable instead of evaluating to the validation result.
All custom keywords except macro keywords can create custom error messages.
Validating and compiled keywords should define errors by assigning them to the .errors property of the validation function.
Inline custom keywords should increase the error counter errors and add the error to the vErrors array (it can be null). See the example range keyword.
When an inline keyword performs validation, Ajv checks whether it created errors by comparing the error count before and after validation. To skip this check, add the option errors to the keyword definition:
ajv.addKeyword('range', {
type: 'number',
inline: inlineRangeTemplate,
statements: true,
errors: true // keyword should create custom errors when validation fails
});
Each error object should have the properties keyword, message and params; other properties will be added.
Inlined keywords can optionally define the dataPath property in error objects.
If a custom keyword doesn't create errors, the default error will be created in case the keyword fails validation (see Validation errors).
Starting from version 1.3 ajv supports asynchronous compilation when remote references are loaded using a supplied function. See the compileAsync method and the loadSchema option.
Example:
var ajv = Ajv({ loadSchema: loadSchema });
ajv.compileAsync(schema, function (err, validate) {
if (err) return;
var valid = validate(data);
});
function loadSchema(uri, callback) {
  // "request" here stands for an HTTP client with a JSON helper (not part of ajv)
  request.json(uri, function(err, res, body) {
    if (err || res.statusCode >= 400)
      callback(err || new Error('Loading error: ' + res.statusCode));
    else
      callback(null, body);
  });
}
With the option removeAdditional (added by andyscott) you can filter data during validation.
This option modifies the original object.
TODO: example
Create ajv instance.
All the instance methods below are bound to the instance, so they can be used without the instance.
Generate validating function and cache the compiled schema for future use.
The validating function returns a boolean and has the properties errors, with the errors from the last validation (null if there were no errors), and schema, with the reference to the original schema.
Unless the option validateSchema is false, the schema will be validated against the meta-schema, and if the schema is invalid an error will be thrown. See options.
Asynchronous version of the compile method that loads missing remote schemas using the asynchronous function in options.loadSchema. The callback will always be called with 2 parameters: error (or null) and validating function. The error will not be null in the following cases:
- the missing schema can't be loaded (loadSchema calls the callback with an error).
- the schema containing the missing reference is loaded, but the reference cannot be resolved.
- the schema (or some referenced schema) is invalid.
The function compiles the schema and loads the first missing schema, repeating this process until all missing schemas are loaded.
See example in Asynchronous compilation.
Validate data using the passed schema (it will be compiled and cached).
Instead of the schema you can use a key that was previously passed to addSchema, the schema id if it was present in the schema, or any previously resolved reference.
Validation errors will be available in the errors property of the ajv instance (null if there were no errors).
Please note: every time this method is called the errors are overwritten, so you need to copy them to another variable if you want to use them later.
Add schema(s) to validator instance. From version 1.0.0 this method does not compile schemas (but it still validates them). Because of that change, dependencies can be added in any order and circular dependencies are supported. It also prevents unnecessary compilation of schemas that are containers for other schemas but not used as a whole.
An array of schemas can be passed (the schemas should have ids), in which case the second parameter will be ignored.
A key can be passed that can be used to reference the schema; it will be used as the schema id if there is no id inside the schema. If the key is not passed, the schema id will be used as the key.
Once the schema is added, it (and all the references inside it) can be referenced in other schemas and used to validate data.
Although addSchema does not compile schemas, explicit compilation is not required - the schema will be compiled when it is used for the first time.
By default the schema is validated against the meta-schema before it is added, and if the schema does not pass validation an exception is thrown. This behaviour is controlled by the validateSchema option.
Adds a meta-schema that can be used to validate other schemas. This function should be used instead of addSchema because there may be instance options that would compile a meta-schema incorrectly (at the moment it is the removeAdditional option).
There is no need to explicitly add the draft 4 meta-schema (http://json-schema.org/draft-04/schema and http://json-schema.org/schema) - it is added by default, unless the option meta is set to false. You only need to use it if you have a changed meta-schema that you want to use to validate your schemas. See validateSchema.
Validates a schema. This method should be used to validate schemas rather than validate, due to the inconsistency of the uri format in the JSON-Schema standard.
By default this method is called automatically when the schema is added, so you rarely need to use it directly.
If the schema doesn't have a $schema property, it is validated against the draft 4 meta-schema (the option meta should not be false).
If the schema has a $schema property, then the schema with this id (which should be previously added) is used to validate the passed schema.
Errors will be available at ajv.errors.
Retrieve a compiled schema previously added with addSchema, by the key passed to addSchema or by its full reference (id). The returned validating function has a schema property with the reference to the original schema.
Remove an added/cached schema. Even if the schema is referenced by other schemas it can be safely removed, as dependent schemas have local references.
A schema can be removed using the key passed to addSchema, its full reference (id), or the actual schema object, which will be stable-stringified to remove the schema from the cache.
Add a custom format to validate strings. It can also be used to replace pre-defined formats for the ajv instance.
Strings are converted to RegExp.
A function should return the validation result as true or false.
Custom formats can also be added via the formats option.
Add a custom validation keyword to the ajv instance.
The keyword should be a valid JavaScript identifier.
The keyword should be different from all standard JSON schema keywords and from previously defined keywords. There is no way to redefine keywords or to remove a keyword definition from the instance.
Keyword definition is an object with the following properties:
- type: optional string or array of strings with the data type(s) the keyword will apply to. If the keyword is validating another type, the validation function will not be called, so there is no need to check the data type inside the validation function if the type property is used.
- validate: validating function
- compile: compiling function
- macro: macro function
- inline: compiling function that returns code (as a string)
validate, compile, macro and inline are mutually exclusive; only one should be used at a time.
With a macro function, type must not be specified - the types that the keyword will be applied to are determined by the final schema.
See Defining custom keywords for more details.
Returns the text with all errors in a single string.
Options can have properties separator (string used to separate errors, ", " by default) and dataVar (the variable name that dataPaths are prefixed with, "data" by default).
Defaults:
{
  allErrors: false,
  removeAdditional: false,
  verbose: false,
  format: 'fast',
  formats: {},
  schemas: {},
  meta: true,
  validateSchema: true,
  inlineRefs: true,
  missingRefs: true,
  loadSchema: function(uri, cb) { /* ... */ cb(err, schema); },
  uniqueItems: true,
  unicode: true,
  beautify: false,
  cache: new Cache,
  errorDataPath: 'object',
  jsonPointers: false,
  messages: true,
  v5: true
}
- allErrors: check all rules collecting all errors. Default is to return after the first error.
- removeAdditional: remove additional properties. Default is not to remove. If the option is 'all', then all additional properties are removed, regardless of the additionalProperties keyword in the schema (and no validation is performed for them). If the option is true (or truthy), only additional properties with the additionalProperties keyword equal to false are removed. If the option is 'failing', then additional properties that fail schema validation will also be removed (where the additionalProperties keyword is a schema).
- verbose: include the reference to the part of the schema and the validated data in errors (false by default).
- format: formats validation mode ('fast' by default). Pass 'full' for more correct but slower validation, or false to not validate formats at all. E.g., 25:00:00 and 2015/14/33 will be invalid time and date in 'full' mode but valid in 'fast' mode.
- formats: an object with custom formats. Keys and values will be passed to the addFormat method.
- schemas: an array or object of schemas that will be added to the instance. If the order is important, pass an array; in this case schemas must have IDs in them. Otherwise an object can be passed - addSchema(value, key) will be called for each schema in this object.
- meta: add the meta-schema so it can be used by other schemas (true by default).
- validateSchema: validate added/compiled schemas against the meta-schema (true by default). The $schema property in the schema can be http://json-schema.org/schema, http://json-schema.org/draft-04/schema, absent (the draft-4 meta-schema will be used) or a reference to a schema previously added with the addMetaSchema method. If the validation fails, an exception is thrown. Pass "log" in this option to log the error instead of throwing an exception. Pass false to skip schema validation.
- inlineRefs: by default referenced schemas that don't have refs in them are inlined, regardless of their size - this substantially improves performance at the cost of bigger compiled schema functions. Pass false to not inline referenced schemas (they will be compiled as separate functions). Pass an integer to limit the maximum number of keywords a schema may have to be inlined.
- missingRefs: by default, if a reference cannot be resolved during compilation an exception is thrown. The thrown error has properties missingRef (with hash fragment) and missingSchema (without it). Both properties are resolved relative to the current base id (usually the schema id, unless it was substituted). Pass 'ignore' to log the error during compilation and pass validation. Pass 'fail' to log the error and successfully compile the schema, but fail validation if this rule is checked.
- loadSchema: asynchronous function that will be used to load remote schemas when the compileAsync method is used and some reference is missing (the option missingRefs should not be 'fail' or 'ignore'). This function should accept 2 parameters: the remote schema uri and a node-style callback. See example in Asynchronous compilation.
- uniqueItems: validate the uniqueItems keyword (true by default).
- unicode: calculate the correct length of strings with unicode pairs (true by default). Pass false to use .length of strings, which is faster but gives "incorrect" lengths of strings with unicode pairs - each unicode pair is counted as two characters.
- beautify: format the generated function with js-beautify (the validating function is generated without line breaks). npm install js-beautify to use this option. true or js-beautify options can be passed.
- cache: an optional instance of a cache to store compiled schemas, using the stable-stringified schema as a key. For example, the set-associative cache sacjs can be used. If not passed, a simple hash is used, which is good enough for the common use case (a limited number of statically defined schemas). The cache should have methods put(key, value), get(key) and del(key).
- errorDataPath: set dataPath to point to 'object' (default) or to 'property' (the default behavior in versions before 2.0) when validating the keywords required, additionalProperties and dependencies.
- jsonPointers: set the dataPath property of errors using JSON Pointers instead of JavaScript property access notation.
- messages: include human-readable messages in errors. true by default. messages: false can be used when custom messages are used (e.g. with ajv-i18n).
- v5: add the keywords constant and contains from JSON-schema v5 proposals.
In case of validation failure, Ajv assigns the array of errors to the .errors property of the validation function (or to the .errors property of the ajv instance in case the validate or validateSchema methods were called).
Each error is an object with the following properties:
- keyword: validation keyword. For user-defined validation keywords it is set to "custom" (with the exception of macro keywords, and unless the keyword definition defines its own errors).
- dataPath: the path to the part of the data that was validated. By default dataPath uses JavaScript property access notation (e.g., ".prop[1].subProp"). When the option jsonPointers is true (see Options), dataPath will be set using the JSON Pointer standard (e.g., "/prop/1/subProp").
- params: the object with additional information about the error that can be used to create custom error messages (e.g., using the ajv-i18n package). See below for the parameters set by all keywords.
- message: the standard error message (can be excluded with the option messages set to false).
- schema: the schema of the keyword (added with the verbose option).
- data: the data validated by the keyword (added with the verbose option).
Properties of the params object in errors depend on the keyword that failed validation.
- maxItems, minItems, maxLength, minLength, maxProperties, minProperties - property limit (number, the schema of the keyword).
- additionalItems - property limit (the maximum number of allowed items in case when the items keyword is an array of schemas and additionalItems is false).
- additionalProperties - property additionalProperty (the property not used in the properties and patternProperties keywords).
- dependencies - properties: property (dependent property), missingProperty (required missing dependency - only the first one is reported currently), deps (required dependencies, comma-separated list as a string), depsCount (the number of required dependencies).
- format - property format (the schema of the keyword).
- maximum, minimum - properties: limit (number, the schema of the keyword), exclusive (boolean, the schema of exclusiveMaximum or exclusiveMinimum), comparison (string, the comparison operation to compare the data to the limit, with the data on the left and the limit on the right; can be "<", "<=", ">" or ">=").
- multipleOf - property multipleOf (the schema of the keyword).
- pattern - property pattern (the schema of the keyword).
- required - property missingProperty (the required property that is missing).
- type - property type (the required type(s), a string, can be a comma-separated list).
- uniqueItems - properties i and j (indices of duplicate items).
- $ref - property ref with the referenced schema URI.
- custom keywords (in case the keyword definition doesn't create errors) - property keyword (the keyword name).
Simple JSON-schema validation can be done from the command line using the ajv-cli package. At the moment it does not support referenced schemas.
npm install
git submodule update --init
npm test
All validation functions are generated using doT templates in dot folder. Templates are precompiled so doT is not a run-time dependency.
npm run build - compiles templates to the dotjs folder.
npm run watch - automatically compiles templates when files in the dot folder change.