Executing parsers

With the SDK, there are two ways of executing parsers: locally or remotely (there's also a way to mix both, more on that later).

Executing parsers remotely is very similar to doing it with the CLI. So if you want a smooth transition from the CLI to the SDK, executing parsers remotely is the way to go.

Execute a parser remotely

To execute a parser remotely, pass an object that closely mirrors the configuration file to the transformWithRemoteParsers function:

const pipeEngineOutput = await sdtfClient.transformWithRemoteParsers([
  {
    name: 'to-css-custom-properties',
    output: { type: 'file', filePath: 'filtered-colors.css' },
  },
]);

Write the output to the file system

To write the output of your parsers to your file system, use the PipelineOutput object returned when executing parsers:

const pipelineOutput = await sdtfClient.transformWithRemoteParsers([
  {
    name: 'to-css-custom-properties',
    output: { type: 'file', filePath: 'tokens.css' },
  },
]);

const { errors, outputPaths } = await pipelineOutput.writeToDisk('./public')

console.log(errors) // []
console.log(outputPaths) // ['public/tokens.css']

Execute a parser locally

Now that we know how to execute parsers remotely, let's have a look at how to do it locally. To do so, we'll create a ParserPipeline and give it the parsers we want to execute.

import { toCssCustomProperties, toTailwind } from '@specifyapp/sdk/next';

const pipeline = sdtfClient.createParserPipeline(
  toCssCustomProperties({ filePath: 'myFile.css' }),
);

const output = await pipeline
  .pipe(toTailwind({ filePath: 'tailwind.conf.js' }))
  .execute();

What is the ParserPipeline?

The ParserPipeline is a helper that wraps all your parsers and executes them when you call the execute method. Its purpose is to help you create a pipeline for each of your destinations without a lot of repetition:

import {
  toCssCustomProperties,
  toTailwind,
  svgo,
  toTypescript,
  toFlutter,
} from '@specifyapp/sdk/next';

const basePipeline = sdtfClient.createParserPipeline(svgo({ directoryPath: 'svgs' }));

const websitePipeline = basePipeline.pipe(
  toCssCustomProperties(
    { filePath: 'css/myFile.css' },
    { tokenNameTemplate: '{{token}}-{{groups}}' },
  ),
  toTypescript({ filePath: 'generated/tokens.ts' }),
);

const website1Output = await websitePipeline
  .pipe(toTailwind({ filePath: 'tailwind.conf.js' }))
  .execute();

const website2Output = await websitePipeline.execute();

const mobileOutput = await basePipeline
  .pipe(toFlutter({ filePath: 'generated/tokens.dart' }))
  .execute();

It's important to note that the pipe method creates a new instance of the ParserPipeline every time you call it. It means that the following code would have no effect:

import {
  toCssCustomProperties,
  toTailwind,
  svgo,
  toTypescript,
  toFlutter,
} from '@specifyapp/sdk/next';

const pipeline = sdtfClient.createParserPipeline(
  svgo({ directoryPath: 'svgs' }),
  toTypescript({ filePath: 'file.ts' }),
);

// 🚨: The `toCssCustomProperties` parser won't be added to `pipeline`
pipeline.pipe(toCssCustomProperties({ filePath: 'myFile.css' }));

// ✅: `pipelineWithCss` contains the `toCssCustomProperties` parser
const pipelineWithCss = pipeline.pipe(
  toCssCustomProperties({ filePath: 'myFile.css' })
);

Still want to execute a particular parser remotely?

Some parsers can be quite heavy to run locally. For example, the SVGO parser optimizes all your SVG files, which can require a lot of CPU, especially if there are many files to optimize. To avoid this, you can choose to run any parser remotely by adding the shouldExecuteRemotely option.

import { toTailwind, svgo, toTypescript, toFlutter } from '@specifyapp/sdk/next';

const pipeline = sdtfClient.createParserPipeline(
  // Will be executed remotely
  svgo({ directoryPath: 'svgs' }, { shouldExecuteRemotely: true }),
  toTailwind({ filePath: 'tailwind.theme.js' }),
  toFlutter({ filePath: 'tokens.dart' }),
  // Will be executed remotely
  toTypescript({ filePath: 'tokens.ts' }, { shouldExecuteRemotely: true }),
);

How are the parsers executed?

Because all the parsers are essentially pure functions that don't depend on one another, we can run them all concurrently. It also means the order doesn't matter, since we're not passing one parser's output into the next one. Although this behaviour is fine most of the time, feeding the output of one parser into another can still be useful. Let's look at an example:

  • SDTF -> SVGToTSX

  • SDTF -> SVGO -> SVGToTSX

In the first case, we simply take the SVG tokens from our SDTF and transform them into TSX. In the second case, we first optimize our SVGs and then generate TSX from the result. So the second case is more desirable than the first one.

How to pass the result of a parser to another one?

To achieve this behaviour, we'll need to create an instance of ParserChainer. This class helps you build a chain of parsers that produces a single parser while ensuring that:

  1. The output of a parser will be the input of the next one

  2. Everything is type checked to make sure the parsers are compatible

Let's implement the previous example:

import { createParserChainer, svgo, SVGToTSX } from '@specifyapp/sdk/next';

const svgs = createParserChainer(svgo())
  .chain(SVGToTSX({ directoryPath: 'components/svgs' }))
  .build();

const output = await sdtfClient.createParserPipeline(svgs).execute();

Write the output to your file system

Now that we can generate our content, we still need to write it to the file system. We can do it the same way we did for remote execution:

import {
  createParserChainer,
  svgo, 
  SVGToTSX,
} from '@specifyapp/sdk/next';

const svgs = createParserChainer(svgo())
  .chain(SVGToTSX({ directoryPath: 'components/svgs' }))
  .build();

const pipelineOutput = await sdtfClient.createParserPipeline(svgs).execute();

const { errors, outputPaths } = await pipelineOutput.writeToDisk('./public')

console.log(errors) // []
console.log(outputPaths) // ['components/svgs/Cross.tsx', ...]

Create your custom parser

If the parsers that we are providing are not enough for your use case, you can create your own parser!

Now that we can execute parsers locally, we know that parsers are simple functions, so creating a custom parser is just a matter of writing a function. But before creating your own parser, you have to understand how a parser works.

The next part describes how parsers work, but you'll quickly notice that they don't look like the parsers above (e.g. with an output option and a parser option). The reason is that all our parsers are actually functions that return a parser function. So don't worry if it doesn't look like the examples above; in the end it's the same thing.

The anatomy of a parser

A parser is a function that takes 3 parameters:

  1. An input, which is one of the PipeEngineDataBox types:

    1. SDTFDataBox: { type: 'SDTF'; graph: SpecifyDesignTokenFormat; }

    2. JSONDataBox: { type: 'JSON'; json: Record<string, unknown>; }

    3. SVGDataBox: { type: 'SVG'; svg: Array<{ ... }> }

    4. CustomDataBox: { type: 'custom'; custom: unknown }

  2. A rawTokenTree, which is the SDTF as a plain object representation

  3. Finally, the PipelineToolbox, which is the way to accumulate the output that will be written to your file system
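Putting the three parameters together, a parser's signature can be sketched in TypeScript. The type definitions below are illustrative stand-ins written from the descriptions on this page, not the SDK's actual exports:

```typescript
// Stand-in types modeled on the shapes documented on this page.
// The real exports from '@specifyapp/sdk' may differ in detail.
type SpecifyDesignTokenFormat = Record<string, unknown>;

type PipeEngineDataBox =
  | { type: 'SDTF'; graph: SpecifyDesignTokenFormat }
  | { type: 'JSON'; json: Record<string, unknown> }
  | { type: 'SVG'; svg: Array<Record<string, unknown>> }
  | { type: 'custom'; custom: unknown };

interface PipelineToolbox {
  // Accumulates an output that will later be written to the file system
  populateOutput(output: unknown): void;
}

// A parser receives a DataBox, the raw token tree, and the toolbox,
// and returns the DataBox that the next parser in a chain will receive.
type Parser = (
  input: PipeEngineDataBox,
  rawTokenTree: SpecifyDesignTokenFormat,
  toolbox: PipelineToolbox,
) => PipeEngineDataBox | Promise<PipeEngineDataBox>;
```

Any function matching this shape can then be handed to the pipeline or chained, as described below.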

An important thing to understand is that a parser has 2 outputs:

  1. The return type of the function

  2. The output that you want to write to the file system (files, text, JSON, SDTF), which is accumulated into the PipelineToolbox

There are actually 2 reasons for this choice:

  1. There's only 1 return value, but you can append as many outputs as you want to an accumulator

  2. We need to distinguish between the output of a parser and what we want to send to the next parser

Let's have a look at the output itself.

The parser output

First, let's focus on the return value. It will be the input of the next parser if you use the parser inside a ParserChainer. And since the return value is the input of the next parser, you probably guessed it: it has the same type as the input, which means one of the PipeEngineDataBox types:

  • SDTFDataBox: { type: 'SDTF'; graph: SpecifyDesignTokenFormat; }

  • JSONDataBox: { type: 'JSON'; json: Record<string, unknown>; }

  • SVGDataBox: { type: 'SVG'; svg: Array<{ ... }> }

  • CustomDataBox: { type: 'custom'; custom: unknown }

Now, let's see how we can output files. To do so, we need to accumulate one of the PipeEngineRuleOutput types through the PipelineToolbox:

  • TextOutput: { type: 'text'; text: string }

  • SDTFOutput: { type: 'SDTF'; graph: SpecifyDesignTokenFormat }

  • JSONOutput: { type: 'JSON'; json: Record<string, unknown> }

  • FilesOutput: { type: 'files'; files: Array<{ path: string; content: { type: 'text'; text: string; } | { type: 'url'; url: string; }}> }

Most parsers take an SDTFDataBox as input and return it as output, since they don't modify anything and only output some files. So if you're not sure what to return, just return the input.
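To make this pass-through pattern concrete, here is a minimal hypothetical parser that emits a TextOutput and returns its input untouched. The type definitions are stand-ins modeled on the shapes listed above, not the SDK's actual exports:

```typescript
// Stand-in types modeled on the shapes listed on this page.
type SDTFDataBox = { type: 'SDTF'; graph: Record<string, unknown> };
type TextOutput = { type: 'text'; text: string };
interface PipelineToolbox {
  populateOutput(output: TextOutput): void;
}

// A pass-through parser: accumulates a text output, then returns its
// input unchanged so the next parser in a chain still receives the SDTF.
function groupCountParser(
  input: SDTFDataBox,
  _rawTokenTree: Record<string, unknown>,
  toolbox: PipelineToolbox,
): SDTFDataBox {
  toolbox.populateOutput({
    type: 'text',
    text: `/* Generated from ${Object.keys(input.graph).length} top-level token groups */`,
  });
  return input; // nothing was modified, so forward the same DataBox
}
```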

So now that we know what a parser is, let's have a look at an example of a parser that creates a file containing all the token names:

import { SDTFDataBox } from '@specifyapp/specify-design-token-format';
import {
  createSDTFEngine,
  PipeEngineRuleOutput,
  SpecifyDesignTokenFormat,
} from '@specifyapp/sdk';
import { PipelineToolbox } from '@specifyapp/sdk/next';

function nameParser(
  input: SDTFDataBox,
  _rawTokenTree: SpecifyDesignTokenFormat,
  toolbox: PipelineToolbox,
) {
  // Create an engine from the input graph to query the token tree
  const engine = createSDTFEngine(input.graph);
  const names = engine
    .query
    .getAllTokenStates()
    .map(tokenState => tokenState.name);

  toolbox.populateOutput({
    type: 'files',
    files: [{ path: 'names.txt', content: { type: 'text', text: names.join('\n') } }],
  });

  return input;
}

Let's break down the example:

  1. We create an engine from the input:

const engine = createSDTFEngine(input.graph);

You may wonder why we use the input over the rawTokenTree. In this case, it doesn't matter, but if the input were an SVGDataBox, then the rawTokenTree would be the only way to create an engine.

  2. We use the engine to get all the names:

const names = engine
  .query
  .getAllTokenStates()
  .map(tokenState => tokenState.name);

  3. We populate the output into the accumulator:

toolbox.populateOutput({
  type: 'files',
  files: [{ path: 'names.txt', content: { type: 'text', text: names.join('\n') } }],
});

  4. Finally, we return the input, as we didn't modify anything and don't need to return something else:

return input;

Now that we have our custom parser, we can use it freely in the ParserPipeline or ParserChainer.
