What it is

grep, but for the Abstract Syntax Tree (AST). 🤯

Where to find it

Source: https://github.com/ast-grep/ast-grep

Site: https://ast-grep.github.io

My thoughts

I love it so much. I’ve used it to (kind of) quickly find all kinds of things in a codebase I’m not very familiar with. Mostly, I’ve used it to find code smells that would be hard to track down with ripgrep, but it’s capable of so much more.

An example

Let’s say you want to get an idea of what the level of effort would be to remove all the type assertions in your TypeScript project. Those type assertions are the symptoms of a different problem, and you want to address the root cause by removing the assertions and fixing the underlying reason they were there in the first place.

How would you go about finding all the type assertions?

You could do it with rg, but not easily. You could search for “as”, but then you’d get matches inside other words. Okay, so “\bas\b”, right? But then you’ve got matches inside comments. And import { something as somethingElse } matches too.

With grep, rg, ag, etc., you’d need to check that you’re not in a // or /* comment, and that you’re not in an import statement. Multiline greps are a pain.

And that’s not even counting the <SomeType> type assertions!
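For the record, these are the two syntaxes we’re hunting (SomeType and value are just placeholder names):

interface SomeType {
  id: number;
}

declare const value: unknown;

const a = value as SomeType; // the `as` form
const b = <SomeType>value;   // the angle-bracket form (not allowed in .tsx files)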

With ast-grep, though, it’s so simple.

id: no-type-assertion # any id you like; it's required if you save this as a rule file
language: TypeScript # also required, so ast-grep knows which grammar to parse with
rule:
  any:
    - kind: as_expression # the `as` form
    - kind: type_assertion # the angle-bracket form

That’s it. The end. That’ll find you all your type assertions.

Then, you can get even more clever with the JSON output by piping to jq:

sg scan --json -r rules/my-custom-no-type-assertion-rule.yaml ./wherever/ | jq "length"

Or maybe you want to know which files have offending code, or how many times each file offends. Just a little jq magic and all that information is at your fingertips.
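Each match object in the JSON output should have a file field, so a couple of filters along these lines will get you there:

# which files have matches
sg scan --json -r rules/my-custom-no-type-assertion-rule.yaml ./wherever/ \
  | jq '[.[].file] | unique'

# how many matches per file, worst offenders first
sg scan --json -r rules/my-custom-no-type-assertion-rule.yaml ./wherever/ \
  | jq 'group_by(.file) | map({file: .[0].file, count: length}) | sort_by(-.count)'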

Another example

Not convinced? Let me try again.

In one project I was working on, I noticed we had a lot of tests that were just getting copy-pasta’d all over the place and weren’t actually asserting any behavior. We were getting increased code coverage (which was required for CI to pass and for PRs to be merged), but these tests could basically never fail. They looked something like this:

it("should do a thing", () => {
  doSomeSetup();
  screen.findByRole("button", { name: /whatever/i });
});

Did you catch it?

Exactly. See? You’re smart; I knew you’d get it. 🤓

The test would finish before it had a chance to fail because findBy* queries are asynchronous and we’re not awaiting the result here.
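The fix is tiny: make the callback async and await the query. Roughly (doSomeSetup being the same stand-in as above):

it("should do a thing", async () => {
  doSomeSetup();
  // findBy* rejects if nothing matches, so awaiting it is the assertion
  await screen.findByRole("button", { name: /whatever/i });
});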

I wanted to know how often—and where—we were doing this, so I wrote a rule for it:

rule:
  kind: call_expression
  has:
    regex: ^screen\.findBy # findByRole, findByText, etc.
  not:
    inside:
      kind: await_expression

And as easy as that, I was able to find every test that forgot to await this async function.

Moarrrrr

Stop flaky tests: Wait for getBy* (or, y’know, just use findBy* 🤷‍♂️)

rule:
  kind: call_expression
  has:
    regex: ^screen\.getBy # getByRole, getByText, etc.
  not:
    inside:
      pattern: waitFor($$$)
      stopBy: end
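To make that concrete, here’s roughly what the rule nudges you toward (the button name is made up):

import { screen, waitFor } from "@testing-library/react";

it("shows the save button", async () => {
  // flagged by the rule: getBy* is synchronous and throws immediately if the button isn't there yet
  screen.getByRole("button", { name: /save/i });

  // fine: waitFor retries the callback until it stops throwing (or times out)
  await waitFor(() => screen.getByRole("button", { name: /save/i }));

  // also fine: findBy* is the built-in async version of the same query
  await screen.findByRole("button", { name: /save/i });
});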

Don’t mess up the mocks for all the other tests: This one was weird; a test would run fine by itself, but not with other tests. I had a hunch that some test somewhere was overriding a Jest mock but not cleaning up after itself. This rule found that test.

rule:
  kind: member_expression
  has:
    all:
      - kind: property_identifier
      - regex: ^mockReturnValue$
  inside:
    any:
      - pattern: it($$$)
      - pattern: test($$$)
    stopBy: end
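For context, the shape of the problem it flags looks roughly like this (the module and mock names are made up):

import { getUser } from "./api";

jest.mock("./api");
const getUserMock = getUser as jest.Mock;

it("renders the logged-out state", () => {
  // flagged by the rule: overrides the shared mock inside a test...
  getUserMock.mockReturnValue(null);
  // ...and never restores it, so every test that runs after this one sees a null user
});

// one way to stop the leak: reset mock state between tests
afterEach(() => {
  jest.resetAllMocks();
});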