> It uses the pcre-heavy Haskell library which in turn calls out to the system-wide pcre C library for actually compiling and running the regex.
That's not a true regular-expression engine, but a backtracker with eclectic features.
The regex used in the regex solution is just:
mul\((\d+),(\d+)\)
not requiring PCRE.
If Haskell happens to provide bindings to the POSIX regcomp/regexec stuff, that would be something to try instead. \d has to be replaced with [0-9], needless to say.
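For what it's worth, the regex-posix package does wrap the POSIX regcomp/regexec API. A minimal sketch, assuming the [[String]] result instance that collects each match together with its capture groups:

  -- sketch: POSIX ERE via regex-posix; \d is spelled out as [0-9]
  import Text.Regex.Posix ((=~))

  muls :: String -> [[String]]   -- each element: [whole match, first group, second group]
  muls s = s =~ "mul\\(([0-9]+),([0-9]+)\\)"

  main :: IO ()
  main = print (muls "xmul(2,4)%mul[3,7]!mul(11,8)")
  -- expected: [["mul(2,4)","2","4"],["mul(11,8)","11","8"]]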
austin-cheney 8 hours ago [-]
There are numerous posts, comments, articles and so forth about when to use regex versus using a parser. The general rule is this:
If you need to search for something in a string, such as needle(s) in a haystack, a regex is probably a better fit than a parser. If you need anything more intelligent than a list of search results, you probably want a full formal parser.
Most languages allow a form of nested regex that allows for increased search precision. This occurs when a method that makes use of a regex returns to a function whose argument is a matching string result, which is why a regex is probably enough when the business logic is primitive. There is a tremendously higher cost to using a full parser, considering the lexer and tokenizer plus rules, but it's so much more intelligent that it's not even comparable.
kleiba 2 hours ago [-]
Of course you could also pull out the good old Chomsky hierarchy and make an argument against regexes based on whatever the nature of your data is.
But the thing is: the beauty of regexes lies in their compactness (which, in turn, can make them quite difficult to debug). So, of course, if you want to optimize for some other measure, you'd use an alternative approach (e.g. a parser). And there are a number of worthwhile measures, such as the already-mentioned debuggability, appropriateness for the problem at hand in terms of complexity, processing speed, ease of using the match result, the ability to get multiple alternative matches, support for regexes in your language of choice, etc.
But simply stating "X beats regexes" without saying in what respect leaves something to be desired.
giraffe_lady 7 hours ago [-]
The key thing for me in making this decision is of course predicting the future.
Parsers are more work up front but in my experience much easier to debug, maintain and extend. So if I predict something will be around for a while I'll want to do a parser and if it's just a one-off I usually go with regex.
Of course this predictably leads to me having a collection of parsers that I've never needed to touch again and a handful of monstrously complex regex grammars that I'll rewrite into a parser "the next time" I need to modify them.
I still think it's the right thing to base the decision on; I just need to keep improving my prophesying.
asQuirreL 21 minutes ago [-]
This is a false dichotomy -- regexes and parsers both have their place, even when solving the same problem.
The trouble starts when you try to solve the whole thing in one step, using just regular expressions or just parsers.
Regular expressions are good at tokenizing input (converting a stream of bytes into a stream of other things, e.g. picking out numbers, punctuation, keywords).
Parsers are good at identifying structure in a token stream.
Neither are good at evaluation. Leave that as its own step.
Applying this rule to the example in the article (Advent of Code 2024 Day 3), I would still use regular expressions to identify mul(\d+,\d+), do(), and don't(). I don't think I need a parser, because there is no extra structure beyond that token stream, and I would leave it up to the evaluator to track the state of whether multiplication is enabled or not.
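As a sketch of that split, the evaluator can be a plain fold over the token stream (Token here is a made-up type, not anything from the article):

  data Token = Mul Int Int | Do | Dont

  evaluate :: [Token] -> Int
  evaluate = snd . foldl step (True, 0)
    where
      -- carry (enabled?, running total) through the stream
      step (_, acc)       Do   = (True,  acc)
      step (_, acc)       Dont = (False, acc)
      step (enabled, acc) (Mul a b)
        | enabled   = (enabled, acc + a * b)
        | otherwise = (enabled, acc)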
arn3n 8 hours ago [-]
“In other languages, it would be considered overkill to write a full parser when a simple regex can do the same thing. In Haskell, writing a parser is no big deal. We just do it and move on with our lives.”
I see a long code file filled with comments and long paragraph-level explanations. I think I’d rather just learn and use regex.
layer8 6 hours ago [-]
Whenever I write a regex, I end up with comments roughly ten times longer than the regex itself. That being said, regular expressions are often the right tool for the job (i.e. parsing a regular language, as opposed to a context-free language or whatever); it's just that the syntax becomes unreadable rather quickly. I'm sure you could build a nicer regular-expression syntax in Haskell.
CBLT 3 hours ago [-]
I love the verbose flag[0] to regex, so I can write comments inline.

[0] https://docs.python.org/3/library/re.html#re.VERBOSE
Yes. Regexes tend to become write-only rather fast. One solution is commenting, but that's still complex. What I like to do now (in C) is define parts of it. Just a crude example to get the idea:
// STRing: matches anything inside quotes (single or double)
#define STR "[\"'](.*)[\"']"
// NUMber: matches decimal or hexadecimal numbers
#define NUM "([[:digit:]]x?[[:xdigit:]]*)"
regcomp(®_exp, STR NUM , REG_EXTENDED | REG_ICASE);
So at the end I compose the RE with the various parts, which are documented separately.
OneDeuxTriSeiGo 7 hours ago [-]
I mean that's the nature of article code no? You are writing for a generic audience and want to be very explicit as to what your code is doing to teach the audience.
For your average Haskell programmer, you could drop all of those comments, since the code isn't really doing anything that couldn't be determined by just reading it.
codebje 6 hours ago [-]
The main advantage of recursive descent parsing is readability. Forget the machinery of the parser, which is (a) trivial enough that AI will generate it correctly for you with next to no prompting, and (b) widely available in libraries anyway. The win is writing a parser that reads like the grammar it implements, which means changes to your grammar are easy to apply to your parser.
Regexes are effectively write-only. If you change the grammar parsed by a regex, you're probably going to discard the regex and make a new one.
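To illustrate "reads like the grammar": each function mirrors one grammar rule, so a grammar change is a local code change (a sketch in parsec style; any combinator library reads much the same):

  import Text.Parsec
  import Text.Parsec.String (Parser)

  -- configLine ::= ident '=' value
  configLine :: Parser (String, String)
  configLine = (,) <$> ident <* char '=' <*> value

  -- ident ::= letter+
  ident :: Parser String
  ident = many1 letter

  -- value ::= digit+ | '"' (anything but '"')* '"'
  value :: Parser String
  value = many1 digit <|> (char '"' *> many (noneOf "\"") <* char '"')

  -- parse configLine "" "width=80"  ==>  Right ("width","80")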
yen223 7 hours ago [-]
Parser combinators are one of those ideas that are really powerful in practice but will never be mainstream, because they had the bad fortune of coming from the functional programming world. It's a shame too, because parser combinators are what made parsers click for me.
Take the hard problem of parsing a language, break it down into the much simpler problem of reading tokens, then write simple "combinators" (and, or, one_or_many, etc etc) to combine these simple parsers into larger more complex parsers.
You could almost build an entire parser combinator library from scratch from memory, because none of the individual components are complicated.
mrkeen 18 minutes ago [-]
> You could almost build an entire parser combinator library from scratch from memory, because none of the individual components are complicated.
I do this every now and then. Usually in the days leading up to advent of code.
"A parser is a function from an input to either failure or a (remaining input, result) pair" is all you need to remember to build the rest.
egonschiele 7 hours ago [-]
Hey! I love parser combinators and wish they were more mainstream. That's why I wrote Tarsec, which is Parsec for TypeScript:
https://github.com/egonSchiele/tarsec

I have a short and a long intro, both illustrated: https://github.com/egonSchiele/tarsec/blob/main/tutorials/5-... and https://github.com/egonSchiele/tarsec/blob/main/tutorials/th...
It also has a neat debug mode, and a way to benchmark performance. I've loved using it, hopefully someone reading will use it too! Use this as a chance to learn something cool (parser combinators)!
furyofantares 3 hours ago [-]
These are great.
I accidentally invented parser combinators for a project once. I was quite fond of the approach but it ended up being pretty nearly unusable for anyone else on the team. This was problematic because the project was such that we were always adding new things to be parsed. The whole reason I took the approach was that I wanted people to be able to freely add parsers as new requirements came in.
It being my first shot at it, it was for sure not as usable as it could be. But looking at your stuff, which is delightful, I am filled with doubt that my goal was even possible. Your stuff looks about as clean as possible, and yet seems likely to be almost as impenetrable to someone not versed in parsing.
That said, I am gonna look for a chance to use this in appropriate side projects.
yen223 6 hours ago [-]
Love the intro articles you wrote!
Tabular-Iceberg 4 hours ago [-]
I’ve seen it in practice. I decided to use a Java parser combinator library for one problem. As soon as I was off the project they ripped out my code and replaced it with regexes.
That what was to be parsed wasn’t regular apparently did not matter.
fooker 2 hours ago [-]
Parser combinators are already mainstream. All/most real languages are implemented with hand written recursive descent parsers that closely resemble parser combinators, or at the very least borrow a bunch of concepts.
It's the name that's unlikely to become mainstream. That's squarely upon FP folks giving sci-fi names to common sense concepts.

https://github.com/llvm/llvm-project/blob/main/flang/lib/Par...
I remember logstash having a quite intuitive parser language, called grok I think, where recursive descent was built on top of regex. We should use something like that!
skybrian 6 hours ago [-]
Never is a long time. Other ideas from functional languages have become mainstream.
yen223 6 hours ago [-]
I would be very happy to be proven wrong here.
But I view parser combinators in the same lens as lenses, another powerful approach to solving a specific problem that is unlikely to receive mainstream recognition because of all the Haskell.
wk_end 6 hours ago [-]
FWIW lenses are lovely in Typescript with monocle-ts, and extremely useful when - speaking of functional ideas hitting the mainstream! - writing Redux reducers to provide updates for immutable data.
BoiledCabbage 6 hours ago [-]
> But I view parser combinators in the same lens as lenses
Hmm, anything else you have in that list I should be looking into? As I also agree they are two of the most interesting things I got from learning Haskell. (Along with type driven reasoning, monoids and functors).
yen223 5 hours ago [-]
The big one that springs to mind, though it's not a Haskell idea, is algebraic effects and effect handlers.
thaumasiotes 7 hours ago [-]
> Take the hard problem of parsing a language, break it down into the much simpler problem of reading tokens, then write simple "combinators" (and, or, one_or_many, etc etc) to combine these simple parsers into larger more complex parsers.
That's a terrible example for "parser combinators beat regexes"; those are the three regular operations. (Except that you use zero_or_many in preference to one_or_many.)
mrkeen 13 minutes ago [-]
Can I see the (implied) counter-example?
A regex which turns a language's source code into an AST?
torginus 1 hours ago [-]
Functional programming languages are uniquely suitable for writing parsers for grammars; their whole feature set (discriminated unions, pattern matching, etc.) is very suited to turning text into ASTs. It's not often that one needs to parse a custom DSL grammar.
That said, I'd say it's a pretty rare occasion to have that need, and other languages have great libraries for accomplishing the same, I'm partial to Chevrotain for Javascript (https://chevrotain.io/docs/), which has a small DSL for describing grammars.
Parser combinators are both less efficient and less featureful than bespoke solutions, which is a problem when you're writing a parser for your custom language, and you want to add extra features, like resilience, or rich error reporting.
To add a bit of snark, functional languages seem to be best suited for writing parsers/compilers for other functional programming languages, which is pretty telling of the inward-facing functional culture.
mrkeen 33 minutes ago [-]
> Parser combinators are both less efficient and less featureful than bespoke solutions, which is a problem when you're writing a parser for your custom language, and you want to add extra features, like resilience, or rich error reporting.
This is untrue. PCs are the bespoke solutions. The article shows how to introduce a feature like state. Parser generators are the technique which just dump fixed-functionality source code on you.
torginus 1 minutes ago [-]
Parser combinators define a recursive parsing implementation, as well as a declaration syntax. While they're very flexible in terms of what they can parse, they have no fixed guarantees about making progress on the token stream or about having a bounded lookahead size before they can parse anything.
A hand-written LL(1) parser (for a suitable grammar) is much more efficient than a parser combinator; afaik most production languages use a hand-written parser, with parser combinators not really being used in high-effort implementations.
I have no extensive experience in surveying parser generators (I've only used them, and they seemed to be fast enough). I'd guess it's up to the implementation how efficient the parsing code is, with my ballpark assumption being that it's worse than handwritten code.
o11c 8 hours ago [-]
Note that most implementations of both parser combinators and regexes can fail very badly (exponential time). Never use either on untrusted input, unless you can prove your implementation lets you stay linear.
internet_points 1 hours ago [-]
This is one of the reasons I've been afraid to use parser combinators too heavily. With regular (finite-state) languages I know their time usage; with parser combinators I have no clue, and there are so many different libraries and implementations, some monadic and some applicative, and few mention worst-case behavior. There are benchmarks https://gitlab.com/FinnBender/haskell-parsing-benchmarks#byt... but what kinds of "dangerous operations" do I have to watch out for when writing with parser combinators?
(I guess part of the problem is just that regular languages have been studied since Kleene had a full head of hair, while parser combinators were more or less unknown until the 80's.)
mrkeen 10 minutes ago [-]
> I've been afraid to use parser combinators
> With regular (finite-state) languages I know their time usage
Are you talking about parsers or grammars?
giraffe_lady 7 hours ago [-]
PEGs don't have this problem right? Linear time over length of input? Though I guess most peg libraries probably aren't "pure" pegs to get around its limitations and may reintroduce this issue?
o11c 7 hours ago [-]
No, PEG is prone to exactly this in its default form. Parser combinator usually means something PEG-ish anyway.
If you memoize (packrat), it improves to polynomial time (not sure what, but it's not linear; that's false advertising and a fraudulent claim), but you're still stuck with the vulnerability to bugs in the grammar.
A better idea is to create a parser-combinator-style API on top of an LL or LR backend, for guaranteed linear time and only enough stack space for the deepest part of the AST.
yen223 7 hours ago [-]
PEGs don't have this problem only because they place restrictions on the grammar.
In practice, this isn't a problem, but it does require you write grammar rules in a specific way (e.g. no left-recursion)
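For instance, a rule like "expr ::= expr '-' num | num" written literally as a combinator calls itself before consuming any input and never terminates; the usual rewrite is iteration plus a left fold (a sketch in parsec style):

  import Text.Parsec
  import Text.Parsec.String (Parser)

  num :: Parser Int
  num = read <$> many1 digit

  -- rewritten rule: expr ::= num ('-' num)*, folded left so that
  -- subtraction stays left-associative
  expr :: Parser Int
  expr = foldl (-) <$> num <*> many (char '-' *> num)

  -- parse expr "" "10-3-2"  ==>  Right 5, i.e. (10-3)-2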
thaumasiotes 6 hours ago [-]
Only PCREs are exponential time, in service of a feature you basically never need. Regexes are always linear time.
They can take exponential space, though, so I'm not sure why knowing you'll be able to process the data in linear time is supposed to keep you safe.
moregrist 5 hours ago [-]
It’s not only PCRE. Any dialect with back references will require a backtracking engine that can go exponential with the wrong expression and input.
This includes most flavors of regexp you find in the wild: Python’s re module, JavaScript regular expressions, Ruby’s regular expressions, Perl, PCRE, and even basic and extended REs used in grep.
Russ Cox has written some very accessible posts on the linear (in input) properties of NFAs and the equivalence of Thompson regular expressions [0]. There’s also quite a bit of literature on the Glushkov construction of regular expressions (and its NFA equivalence) [1] that’s worth reading if you find the topic interesting.
Both Go and Rust have non-backtracking regular expression libraries, and you can find solid non-backtracking C and C++ libraries for regular expressions (eg: libfsm and Hyperscan).
0: https://swtch.com/~rsc/regexp/
1: My favorite is _Flexible Pattern Matching in Strings: Practical On-Line Search Algorithms for Texts and Biological Sequences_ by Gonzalo Navarro
masklinn 3 hours ago [-]
> Only PCREs are exponential time, in service of a feature you basically never need. Regexes are always linear time.
Any re dialect which supports backtracking necessarily has a non-linear worst case, and while a select few have very high resilience against exponential backtracking (e.g. never managed to make postgres fall over) most can be made to fail with a pattern a few characters long.
FA-based engines are getting more popular, but they’re far from universal.
pjscott 6 hours ago [-]
This depends on the implementation of the regex engine; most are potentially superlinear in time, since that’s the easiest way of doing it, and quite fast until suddenly it isn’t. I always check the algorithm being used before I use regular expressions in production. I was surprised how many use a recursive descent strategy!
(Also, BTW, you can deal with the exponential space issue by using an NFA instead of a DFA – it’s slower that way, but the memory space required is reliably linearly proportional to the size of the regex.)
dullcrisp 6 hours ago [-]
I don’t believe space > time is possible on a Turing machine?
pjscott 6 hours ago [-]
You’re right, of course, but there was a minor miscommunication: the exponential space is exponentially proportional to the size of the regular expression, and the linear time is linearly proportional to the length of the string being searched through.
dullcrisp 6 hours ago [-]
Thanks! Though I imagine in most cases the regular expression itself would be a fixed part of the codebase and not given as input.
maxbond 5 hours ago [-]
I've been working a lot with parser combinators lately, they are a really nice compromise between using grammar languages (I'd put regexes in this category, as well as parser generators) and hand-rolled imperative parsers. The parser combinator library ends up being a DSL for describing grammars in your preferred language, but the boundary is fluid, and you can move back and forth between writing grammars and writing code in your preferred language.
The bold implementer could integrate regexes into their parser combinator library, and you could move between all three.
DadBase 6 hours ago [-]
Parser combinators are great until you need to parse something real, like CSV with embedded newlines and Excel quotes. That’s when you reach for the reliable trio: awk, duct tape, and prayer.
iamevn 3 hours ago [-]
I don't follow why parser combinators would be a bad tool for CSV. It seems like one would specify a CSV parser as (pardon the pseudocode):
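Something along these lines (a rough sketch in parsec; one plausible shape rather than exact code):

  import Text.Parsec
  import Text.Parsec.String (Parser)

  csvFile :: Parser [[String]]
  csvFile = record `sepBy` newline

  record :: Parser [String]
  record = field `sepBy` char ','

  -- a field is either quoted (and may then contain commas, newlines,
  -- and "" as an escaped quote) or bare
  field :: Parser String
  field = quoted <|> bare
    where
      quoted     = char '"' *> many quotedChar <* char '"'
      quotedChar = noneOf "\"" <|> try (string "\"\"" *> pure '"')
      bare       = many (noneOf ",\"\n")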
Seems fairly natural to me, although I'll readily admit I haven't had to write a CSV parser before so I'm surely glossing over some detail.
kqr 2 hours ago [-]
I think GP was sarcastic. We have these great technologies available but people end up using duct tape and hope anyway.
masklinn 3 hours ago [-]
Seems like exactly the situation where you’d want parsers, because they can do any form of quoting \ escaping you want and have no reason to care about newlines any more than they do any other character.
henning 9 hours ago [-]
You can also use simple string search functions for this Advent of Code problem.
I used this approach for implementing orthography rules in a stenography application (implementing rules like `artistic + ly = artistically`) and in the benchmarks I did, a few lines of searching for character indexes was an order of magnitude faster than regexes. Each of my functions is about 3-10 ish lines of simple code compared to a 1-line regex.
You do have to handle cases like the character not being found, but I've had zero issues with the code and, in terms of what actually executes, it is vastly simpler. This is why the saying goes that if you make something twice as fast, you may have done something clever, but if you make it 10x faster, you stopped doing something dumb.
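For illustration, rules of that shape can be plain string/suffix checks (a sketch with made-up rules, not the actual stenography code):

  import Data.List (isSuffixOf)

  addLy :: String -> String
  addLy word
    | "ic" `isSuffixOf` word = word ++ "ally"    -- artistic -> artistically
    | "le" `isSuffixOf` word = init word ++ "y"  -- subtle   -> subtly
    | otherwise              = word ++ "ly"      -- quick    -> quickly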
kqr 2 hours ago [-]
Basic string search and indexing operations (especially when there is language/library support for cheap spans) are definitely underappreciated.
I wouldn't reach for them as a first step though, because they'd take more automated tests to give me confidence of correctness, and most problems I end up solving are not CPU bound but programmer time bound.