A high-throughput parser for the Zig programming language (github.com)
ww520 3 days ago [-]
This is very cool. An extremely fast lexical tokenizer is the basis of a fast compiler. Zig has good integration and support for SIMD operations, which is perfect for this kind of thing. It's definitely doable. I did a proof of concept a while back using SIMD to operate on 32-byte chunks to parse identifiers.

https://github.com/williamw520/misc_zig/blob/main/identifier...
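For illustration, a minimal sketch of the same idea using SWAR (SIMD within a register) on 8-byte chunks rather than 32-byte vectors; the function name and byte classes are illustrative, not taken from the linked code, and it assumes a little-endian target:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define ONES 0x0101010101010101ULL
#define HIGH 0x8080808080808080ULL

/* MSB of each byte set where the (7-bit) byte is >= n (n <= 0x7F).
   Masking to 7 bits first guarantees no borrow crosses byte lanes. */
static uint64_t swar_ge(uint64_t low7, uint8_t n) {
    return ((low7 | HIGH) - ONES * n) & HIGH;
}

/* MSB of each byte set where the (7-bit) byte is <= n. */
static uint64_t swar_le(uint64_t low7, uint8_t n) {
    return (((ONES * n) | HIGH) - low7) & HIGH;
}

/* Number of leading identifier bytes [A-Za-z0-9_] in an 8-byte chunk. */
static int ident_prefix_len(const char *p) {
    uint64_t x;
    memcpy(&x, p, 8);                       /* unaligned load */
    uint64_t low7   = x & ~HIGH;            /* strip high bits */
    uint64_t folded = low7 | (ONES * 0x20); /* case-fold A-Z to a-z */
    uint64_t alpha  = swar_ge(folded, 'a') & swar_le(folded, 'z');
    uint64_t digit  = swar_ge(low7, '0') & swar_le(low7, '9');
    uint64_t under  = swar_ge(low7, '_') & swar_le(low7, '_');
    uint64_t ident  = (alpha | digit | under) & ~x; /* bytes >= 0x80 excluded */
    uint64_t stop   = ~ident & HIGH;  /* MSB set at each non-identifier byte */
    /* little-endian: lowest set bit marks the first non-identifier byte */
    return stop ? __builtin_ctzll(stop) / 8 : 8;
}
```

The vector version replaces the 64-bit word with a 32-byte register and the same range comparisons with vector compares, so the structure carries over directly.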

norir 3 days ago [-]
When I run a profiler on a compiler I wrote (which parses somewhere between 500K and 1M lines per second without a separate lexer), parsing barely shows up. I'd be very surprised if the zig compiler is spending more than 5% of its time tokenizing.

I assume there is some other use case that is motivating this work.

tarix29 3 days ago [-]
I imagine it would be quite useful for building a responsive language server, where parsing is a more significant portion of the work.
seanmcdirmid 2 days ago [-]
No, the problem for a language server is incremental performance, not batch performance. Although there are a lot of bad implementations out there that just reparse the entire buffer on each edit (without the error recovery benefits an incremental parser would give you).
adev_ 2 days ago [-]
> No, the problem for a language server is incremental performance, not batch performance

"When something is fast enough, people start to use it differently" - Linus Torvalds.

Make your parser able to parse the current file at 30 FPS and you no longer need incremental parsing or error recovery. That is probably part of the idea here.

dzaima 2 days ago [-]
Here that can go both ways - SIMD parsing can allow handling arbitrary changes in reasonable time for files below like maybe 100MB (i.e. everything non-insane), whereas incremental parsing can allow handling small changes in truly-arbitrary-size files in microseconds. A trade-off between better average-case and worst-case time. (of course the ideal thing would be both, but that's even more non-trivial)
awson 2 days ago [-]
Absolutely.

Quite a long time ago I was working on a business application's reporting facility.

Reports used to take about an hour, and my work reduced that to a 1-2 second ballpark.

This was HUGE. And changed the way users create these reports forever.

seanmcdirmid 2 days ago [-]
It’s not good enough. Incremental parsers can save trees across edits, and you can hang type information off of those trees, so you aren’t just saving parsing time, you are saving type-checking time as well. Even with a super fast batch parser, you are screwing yourself in other areas that are actually much more expensive.
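To make that saving concrete, here is a hedged sketch (hypothetical node layout, not any real compiler's AST) of the span test an incremental parser can use to decide whether a subtree, and the type information hung off it, survives an edit:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical incremental-parser node: a source span plus analysis
   results cached on the tree (illustrative, not any real compiler). */
typedef struct Node {
    size_t start, end;        /* byte span in the pre-edit source */
    const char *cached_type;  /* type info hung off the subtree */
} Node;

/* The edit replaced bytes [edit_start, edit_end) with `inserted` bytes.
   Subtrees entirely before the edit are untouched; subtrees entirely
   after it are reused with shifted offsets; anything overlapping must
   be reparsed and re-typechecked. Returns 1 when the node (and its
   cached type) is kept. */
static int try_reuse(Node *n, size_t edit_start, size_t edit_end,
                     size_t inserted) {
    if (n->end <= edit_start)
        return 1;                              /* before the edit */
    if (n->start >= edit_end) {                /* after the edit: shift */
        long delta = (long)inserted - (long)(edit_end - edit_start);
        n->start = (size_t)((long)n->start + delta);
        n->end   = (size_t)((long)n->end + delta);
        return 1;
    }
    n->cached_type = NULL;                     /* stale: must reparse */
    return 0;
}
```

For a small edit, only the handful of nodes whose spans intersect it fail this test, which is why the type-checking savings usually dwarf the parsing savings.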
adev_ 2 days ago [-]
Agreed. But all things considered:

The runtime cost of type checking is highly dependent on the type system / meta-programming complexity of your language.

For simple languages (Golang?) with a pretty well designed module system, it should be doable to reach ~500 KLOC/sec (probably even 1 MLOC/sec in some cases), so more than enough for interactive usage.

And for complex languages with meta-programming capabilities: they are indeed slow to type check, but they are also a giant pain in the butt to cache without side effects for incremental parsing. It is 2025 and clangd / IntelliSense still fail to do that reliably for C++ codebases that rely heavily on templates.

So it does not seem a so-crazy approach to me: It is trading a complexity problem for a performance one.

dreamoffire 3 days ago [-]
The talks that Niles gave at the Utah Zig meetups (linked in the repo) were great; I just wish the AV setup had been a little smoother. It seemed like there were some really neat visualizations that Niles prepared that flopped. Either way, I recommend them. They inspired me to read a lot more machine code these days.
neerajsi 3 days ago [-]
Very interesting project!

I wonder if there's a way to make this set of techniques less brittle and more applicable to any language. I guess you'd be looking at a new backend or some enhancements to one of the parser generator tools.

adev_ 3 days ago [-]
I have applied a subset of these techniques in a C++ tokenizer for a language syntactically similar to Swift: no inline assembly, no intrinsics, no SWAR, but reduced branching, cache optimization, and SIMD parsing via explicit vectorization.

I get:

- ~4 MLOC/sec/core on a laptop

- ~8-9 MLOC/sec/core on a modern AMD server-grade CPU with AVX-512.

So yes, it is definitely possible.
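As an illustration of the branch-reduction part (my own sketch, not adev_'s actual code), a 256-entry character-class table turns the per-character if/else ladder of a naive lexer into a single indexed load; the class set and token rule are simplified, e.g. runs of adjacent punctuation count as one token:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

enum { C_OTHER, C_WS, C_WORD, C_PUNCT };

/* One load per byte replaces a chain of range compares. */
static uint8_t cls[256];

static void init_cls(void) {
    for (int c = 0; c < 256; c++) cls[c] = C_OTHER;
    cls[' '] = cls['\t'] = cls['\n'] = cls['\r'] = C_WS;
    for (int c = 'a'; c <= 'z'; c++) cls[c] = C_WORD;
    for (int c = 'A'; c <= 'Z'; c++) cls[c] = C_WORD;
    for (int c = '0'; c <= '9'; c++) cls[c] = C_WORD;
    cls['_'] = C_WORD;
    for (const char *p = "+-*/=<>!&|^%~.,;:()[]{}"; *p; p++)
        cls[(uint8_t)*p] = C_PUNCT;
}

/* Count tokens by detecting class changes. The inner loop has no
   per-character-kind branches, only the predictable loop branch;
   the comparison results feed straight into the counter. */
static size_t count_tokens(const char *s, size_t n) {
    size_t count = 0;
    uint8_t prev = C_WS;
    for (size_t i = 0; i < n; i++) {
        uint8_t c = cls[(uint8_t)s[i]];
        count += (c != C_WS) & (c != prev);  /* branchless token start */
        prev = c;
    }
    return count;
}
```

The same table also feeds the SIMD version naturally: a vectorized classify step computes 16-32 classes at once, and the class-change trick becomes a vector compare plus a popcount.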

matu3ba 3 days ago [-]
It would be very cool if, once finished, the techniques were applied to user-schedulable languages: https://www.hytradboi.com/2025/7d2e91c8-aced-415d-b993-f6f85....

I guess they are too tailored to the actual memory layout and memory-access latency of the architecture, but I would like to be shown that I am wrong and that it is feasible.

asdfman123 3 days ago [-]
This really moves Zig