# leac


Lexer / tokenizer.

## Features

## Install

### Node

```shell
npm i leac
yarn add leac
```

```ts
import { createLexer, Token } from 'leac';
```

### Deno

```ts
import { createLexer, Token } from 'https://deno.land/x/leac@.../leac.ts';
```

## Examples

```typescript
const lex = createLexer([
  { name: '-', str: '-' },
  { name: '+' },
  { name: 'ws', regex: /\s+/, discard: true },
  { name: 'number', regex: /[0-9]|[1-9][0-9]+/ },
]);

const { tokens, offset, complete } = lex('2 + 2');
```
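For illustration, here is roughly what the call above produces (the exact values and `Token` properties are inferred from this example rather than stated here, so verify them against the API section):

```typescript
// The `ws` rule has `discard: true`, so whitespace tokens are dropped
// and only three tokens remain.
console.log(tokens.map((t) => t.name)); // [ 'number', '+', 'number' ]

// `complete` signals whether the whole input was consumed;
// `offset` is the position where lexing stopped.
console.log(complete, offset); // true 5
```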

## API

## A word of caution

It is often tempting to rewrite tokens on the go, but doing so can be dangerous unless you are absolutely mindful of all edge cases.

For example, who needs to carry string quotes around, right? The parser will only need the string content...

We'll have to consider the following things:

- If we allow a token with zero length, it will cause an infinite loop, as the same rule will be matched at the same offset again and again (see the sketch after this list).
- Strings can be empty, which means the token can be absent. With no content and no quotes, the tokens array will most likely make no sense to a parser.

When put together, these things plus some intuition traps can lead to a broken array of tokens.
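A minimal sketch of the zero-length trap, with hypothetical rules made up for illustration:

```typescript
import { createLexer } from 'leac';

// Dangerous: `*` lets this regex match the empty string. A zero-length
// token never advances the offset, so the same rule keeps matching at
// the same position.
const broken = createLexer([
  { name: 'digits', regex: /[0-9]*/ },
]);

// Safe: `+` requires at least one character per token.
const safe = createLexer([
  { name: 'digits', regex: /[0-9]+/ },
]);
```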

How to avoid potential issues:

Another note about quotes: if the grammar allows different kinds of quotes and you still want to get rid of them early, think about how you're going to unescape the string later. At the very least, make sure you carry the information about the exact string kind in the token name - you will need it later.
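One way to do that, sketched with hypothetical rule names and using only the rule options shown in the examples above:

```typescript
import { createLexer } from 'leac';

// The quote kind survives in the token name, so a later unescaping pass
// knows which escape conventions apply to each string token.
const lex = createLexer([
  { name: 'string-double', regex: /"(?:\\.|[^"\\])*"/ },
  { name: 'string-single', regex: /'(?:\\.|[^'\\])*'/ },
  { name: 'ws', regex: /\s+/, discard: true },
]);
```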

## What about ...?

## Some other lexer / tokenizer packages