Executing generation parsers
When it comes to generating code from your SDTF client with the SDK, you can leverage Specify's built-in parsers and/or add your own custom implementation.
The SDK provides two main approaches, depending on whether you need flexibility or increased performance.
By default, the generation runs locally, using the host machine's resources. To work around environment limitations, any of Specify's built-in parsers can be executed remotely, on Specify's servers.
Create parsers pipelines
With your SDTF client, you can create as many parsers pipelines as you need to generate your outputs.
executePipelines is an asynchronous function that actually executes the generation. Hence:
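As an illustration, here is a minimal sketch of that flow. Everything below is a mock: the real createParsersPipelines and executePipelines come from your SDTF client, and the parser stand-in is hypothetical.

```typescript
// Mock parser function type: the real SDK has richer input/output types.
type ParserFunction = (tokenTree: string) => Promise<string>;

// Stand-in for the client's createParsersPipelines: it returns the
// asynchronous executePipelines executor described above.
function createParsersPipelines(...pipelines: ParserFunction[]) {
  return async function executePipelines(tokenTree: string): Promise<string[]> {
    // Run every pipeline against the same initial token tree.
    return Promise.all(pipelines.map((run) => run(tokenTree)));
  };
}

const executePipelines = createParsersPipelines(
  async (tree) => `/* CSS generated from ${tree} */`,
);

// executePipelines is async, so the generation must be awaited.
executePipelines('my-token-tree').then((results) => {
  console.log(results[0]); // the accumulated pipeline outputs
});
```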
The results object is a ParsersEngineResults instance, which comes with its own set of helper methods to work with the outputs and messages issued during the generation. Note that results is plural, since it can accumulate the outputs of more than one parsers pipeline for a single execution.
Write the outputs to the file system
While executing, the parsers engine produces outputs that get returned within the ParsersEngineResults instance, which comes with a few helper methods such as writeToDisk. writeToDisk takes an optional base path and returns a promise containing a report of the written files and errors, if any.
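As a sketch of what writeToDisk does with the accumulated outputs (the class below is a mock; only the optional base path and the written-files/errors report come from the description above, and the report's field names are assumptions):

```typescript
import * as fs from 'node:fs';
import * as path from 'node:path';

type FileOutput = { path: string; content: string };

// Stand-in for ParsersEngineResults: it holds the outputs produced
// during generation and flushes them with writeToDisk.
class MockParsersEngineResults {
  constructor(private files: FileOutput[]) {}

  // Takes an optional base path; resolves with a report of the
  // written files and any errors encountered.
  async writeToDisk(basePath = '.') {
    const report = { writtenFiles: [] as string[], errors: [] as string[] };
    for (const file of this.files) {
      try {
        const target = path.join(basePath, file.path);
        fs.mkdirSync(path.dirname(target), { recursive: true });
        fs.writeFileSync(target, file.content);
        report.writtenFiles.push(target);
      } catch (error) {
        report.errors.push(String(error));
      }
    }
    return report;
  }
}
```

A call like `await results.writeToDisk('./dist')` would then tell you which files landed on disk and which failed.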
Run many concurrent parsers pipelines
All parsers pipelines passed to createParsersPipelines are run concurrently out of the box. This means you can write the following:
And have both pipelines executed concurrently from the same initial token tree.
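For instance, a sketch with two hypothetical pipelines (only the concurrent behaviour of createParsersPipelines is taken from the text above; everything else is a stand-in):

```typescript
type Pipeline = (tokenTree: string) => Promise<string>;

// Two hypothetical pipelines; in practice these would be built from
// Specify's built-in parsers (e.g. a CSS one and a JSON one).
const cssPipeline: Pipeline = async (tree) => `:root { /* from ${tree} */ }`;
const jsonPipeline: Pipeline = async (tree) => JSON.stringify({ source: tree });

// Mocked createParsersPipelines: all pipelines run concurrently and
// each one receives the same initial token tree.
function createParsersPipelines(...pipelines: Pipeline[]) {
  return (tokenTree: string) => Promise.all(pipelines.map((run) => run(tokenTree)));
}

const executePipelines = createParsersPipelines(cssPipeline, jsonPipeline);

executePipelines('my-token-tree').then(([css, json]) => {
  console.log(css);
  console.log(json);
});
```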
Chain parsers to run specific pre-generation steps
In some cases, you need to chain parser functions to act like A -> B -> C, where you are only interested in the output of C. For that, you can leverage the chainParserFunctions utility.
In this example, we want to optimize the content of the vector tokens with SVGO, and then, generate JSX components.
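A sketch of that chain (chainParserFunctions is mocked here, and the SVGO / JSX steps are simplified stand-ins for Specify's actual parsers):

```typescript
// A simplified data box passed between chained parsers.
type DataBox = { type: string; payload: string };
type ParserFn = (input: DataBox) => Promise<DataBox>;

// Mocked chainParserFunctions: pipes each parser's return value into
// the next one, so A -> B -> C behaves as a single parser function.
function chainParserFunctions(...parsers: ParserFn[]): ParserFn {
  return async (input) => {
    let current = input;
    for (const parse of parsers) {
      current = await parse(current);
    }
    return current;
  };
}

// Stand-in for the SVGO optimization step.
const optimizeWithSvgo: ParserFn = async (box) => ({
  type: 'SVG',
  payload: box.payload.trim(), // pretend this is an SVGO pass
});

// Stand-in for the JSX component generation step.
const generateJsxComponents: ParserFn = async (box) => ({
  type: 'custom',
  payload: `const Icon = () => (${box.payload});`,
});

// We only care about the final JSX output (the "C" of A -> B -> C).
const svgoThenJsx = chainParserFunctions(optimizeWithSvgo, generateJsxComponents);
```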
Execute a built-in parser function remotely
In some cases, you might have limited host machine resources to work with (as in many CI environments). To help with this, any of Specify's built-in generation parsers can be executed remotely by passing the shouldExecuteRemotely: true option.
Doing so, the SVGO process runs on Specify's servers and the results are returned to the SDK to be further processed or written to disk.
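Conceptually, the engine decides between local and remote execution from that flag. A minimal sketch (the dispatcher function is hypothetical; only shouldExecuteRemotely comes from the SDK):

```typescript
type RemoteCapableOptions = { shouldExecuteRemotely?: boolean };

// Hypothetical dispatcher illustrating the decision the engine makes
// for any built-in parser carrying the shouldExecuteRemotely option.
function resolveExecutionTarget(options: RemoteCapableOptions): 'local' | 'remote' {
  // Local execution is the default; the flag opts into Specify's servers.
  return options.shouldExecuteRemotely ? 'remote' : 'local';
}

console.log(resolveExecutionTarget({ shouldExecuteRemotely: true })); // "remote"
```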
Create pipelines from parser rules configuration
A parser rule is a JSON object representing a parsers pipeline in which all parsers run sequentially. Rules configurations are primarily used within the configuration file for the CLI or GitHub.
With the SDK, using parser rules configurations reduces the interoperability with custom code, but can significantly increase the speed of a remote execution.
To build parsers pipelines from the SDTF client, we call the createParsersPipelinesFromRules method. Doing so creates an async parsers engine executor in the exact same manner as for parser functions.
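A sketch of the shape (the rule fields and the mocked executor below are illustrative assumptions; only createParsersPipelinesFromRules and the sequential-parsers semantics come from the text above):

```typescript
// Illustrative parser rule shape: a named pipeline whose parsers run
// sequentially. Field names approximate the JSON rules configuration.
type ParserRule = {
  name: string;
  parsers: Array<{ name: string; options?: Record<string, unknown> }>;
};

// Mocked createParsersPipelinesFromRules: like for parser functions,
// it returns an async parsers engine executor.
function createParsersPipelinesFromRules(rules: ParserRule[]) {
  return async function executePipelines(): Promise<string[]> {
    // Stand-in "execution": report which parsers each rule would run.
    return rules.map(
      (rule) => `${rule.name}: ${rule.parsers.map((p) => p.name).join(' -> ')}`,
    );
  };
}

const executePipelines = createParsersPipelinesFromRules([
  { name: 'Generate CSS variables', parsers: [{ name: 'to-css-custom-properties' }] },
]);
```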
Run faster remote executions
Every built-in generation parser has a shouldExecuteRemotely: boolean option to mark its execution as remote. Rules configurations also implement this option, allowing the SDK to collect all the remote rules and parsers, then send out a single HTTP request for the whole execution.
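For illustration, a rule carrying the flag might look like this (the surrounding field names are an illustrative guess at the rules schema; only shouldExecuteRemotely is documented above):

```typescript
// Illustrative rules configuration: every rule flagged with
// shouldExecuteRemotely can be batched by the SDK into one request
// instead of one round trip per parser.
const rules = [
  {
    name: 'Optimize vectors remotely',
    shouldExecuteRemotely: true,
    parsers: [{ name: 'svgo' }],
  },
];
```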
Create your custom parser function
If the parsers Specify provides are not enough for your use case, you can create your own parser function!
Since parsers execute locally, they are simple functions, and creating a custom parser is only a matter of writing a function. But before creating your own parser, you have to understand how a parser works.
The next part describes how parsers work, but you'll quickly notice that they don't look like the parsers above, e.g. with an output option and a parser option. The reason is that all our parsers are actually functions that return a parser function. So don't worry if it doesn't look like the examples above; in the end, it's all the same thing.
The anatomy of a parser
A parser is a function that takes 3 parameters:
An input, which will be one of the ParsersEngineDataBox types:
SDTFEngineDataBox: { type: 'SDTF Engine'; engine: SDTFEngine; }
SDTFDataBox: { type: 'SDTF'; graph: SpecifyDesignTokenFormat; }
JSONDataBox: { type: 'JSON'; json: Record<string, unknown>; }
SVGDataBox: { type: 'SVG'; svg: Array<{ ... }> }
UrlDataBox: { type: 'urls'; files: Array<{...}> }
BitmapDataBox: { type: 'bitmap'; files: Array<{...}> }
CustomDataBox: { type: 'custom'; custom: unknown }
The ParserToolbox, which helps accumulate the output that will be written to your file system
An important thing to understand is that a parser has 2 outputs:
The return type of the function, that can be passed to another parser if chained
The output that you want to write to the file system (files, text, JSON, SDTF), which is accumulated into the ParserToolbox
There are actually 2 reasons for this choice:
There's only 1 return value, but you can append as much output as you want to an accumulator
We need to distinguish between the output of a parser and what we want to send to the next parser
Let's have a look at the output itself.
The parser output
First, let's focus on the return value. It will be the input of the next parser if you use it inside a ParserChainer. Since the return is the input of the next parser, you probably guessed it: it has the same shape as the input, which means one of the ParsersEngineDataBox types:
SDTFEngineDataBox: { type: 'SDTF Engine'; engine: SDTFEngine; }
SDTFDataBox: { type: 'SDTF'; graph: SpecifyDesignTokenFormat; }
JSONDataBox: { type: 'JSON'; json: Record<string, unknown>; }
SVGDataBox: { type: 'SVG'; svg: Array<{ ... }> }
BitmapDataBox: { type: 'bitmap'; files: Array<{...}> }
UrlDataBox: { type: 'urls'; files: Array<{...}> }
CustomDataBox: { type: 'custom'; custom: unknown }
Now, let's see how we can output files. To do so, we need to push one of the ParserOutput types into the outputsAccumulator:
TextOutput: { type: 'text'; text: string }
SDTFOutput: { type: 'SDTF'; graph: SpecifyDesignTokenFormat }
JSONOutput: { type: 'JSON'; graph: string }
FilesOutput: { type: 'files'; files: Array<{ path: string; content: { type: 'text'; text: string; } | { type: 'url'; url: string; }}> }
Most parsers take an SDTFDataBox as input and return it unchanged as output, since they don't modify anything and only emit some files. So if you're not sure about what to return, just return the input.
Now that we know what a parser is, let's look at an example of a parser that creates a file listing all the token names:
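Here is a self-contained sketch of such a parser. The types are simplified mocks of the boxes described above, and the engine's getAllTokenNames accessor is an assumption standing in for the real SDTF engine query API:

```typescript
// Simplified mocks of the SDK types described above.
type SDTFEngine = { getAllTokenNames: () => string[] }; // assumed accessor
type SDTFEngineDataBox = { type: 'SDTF Engine'; engine: SDTFEngine };

type FilesOutput = {
  type: 'files';
  files: Array<{ path: string; content: { type: 'text'; text: string } }>;
};

// Simplified toolbox: the accumulator receives the parser outputs.
type ParserToolbox = { outputsAccumulator: FilesOutput[] };

// A custom parser that writes one file listing every token name.
async function listTokenNamesParser(
  input: SDTFEngineDataBox,
  options: undefined, // this parser takes no options
  toolbox: ParserToolbox,
): Promise<SDTFEngineDataBox> {
  // 1. Use the engine to get all the token names.
  const names = input.engine.getAllTokenNames();

  // 2. Populate the output into the accumulator.
  toolbox.outputsAccumulator.push({
    type: 'files',
    files: [
      {
        path: 'token-names.txt',
        content: { type: 'text', text: names.join('\n') },
      },
    ],
  });

  // 3. Return the input untouched: nothing was modified.
  return input;
}
```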
Let's break down the example:
We use the engine to get all the token names
We populate the output into the accumulator
Finally, we return the input, as we didn't modify anything and don't need to return anything else
Now that we have our custom parser, we can use it freely in the ParserPipeline or ParserChainer.