Functional Programming Lessons Conclusion (jerf.org)
BWStearns 13 hours ago [-]
> I consider [having a big benefit at 100% vs an 80/20 rule] a characteristic of type systems in general; a type system that you can rely on is vastly more useful than a type system you can almost rely on, and it doesn’t take much “almost” to greatly diminish the utility of a given type system.

This! This is why I don't particularly care for gradual typing in languages like Python. It's a lot of extra overhead, but you still can't really rely on it for much. Typescript types are just barely over the hump of being reliable "always" enough to really lean on them.

dimal 13 hours ago [-]
I agree with the 100% rule. The problem with Typescript is how many teams allow “any”. They’ll say, “We’re using TypeScript! The autocomplete is great!” And on the surface, it feels safe. You get some compiler errors when you make a breaking change. But the `any`s run through the codebase like holes in Swiss cheese, and you never know when you’ll hit one until you’ve caused a bug in production. And then they try to deal with it by writing more tests. Having 100% type coverage is far more important.
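
A minimal TypeScript sketch of that Swiss-cheese effect (hypothetical names, my illustration rather than anything from the thread): once a value passes through an `any`, a breaking rename compiles cleanly and only blows up at runtime.

    interface User {
      fullName: string;            // renamed from `name` in a breaking change
    }

    function greet(user: any) {    // the `any` punches the hole
      return "Hello, " + user.name.toUpperCase();  // still compiles, throws at runtime
    }

    greet({ fullName: "Ada" } as User);  // TypeError: cannot read properties of undefined
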
klabb3 13 hours ago [-]
In my rather small code base I’ve been quite happy with ”unknown” instead of any. It makes me use it less because of the extra checks, and catches the occasional bug, while still having an escape hatch in cases of extensive type wrangling.
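
For example (a small sketch, hypothetical names): `unknown` forces a narrowing step before use, where `any` would silently allow anything.

    function parse(json: string): unknown {
      return JSON.parse(json);     // JSON.parse returns any; surface it as unknown instead
    }

    const value = parse('{"id": 1}');
    // value.id;                   // error: 'value' is of type 'unknown'
    if (typeof value === "object" && value !== null && "id" in value) {
      console.log((value as { id: number }).id);  // fine after an explicit check
    }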

The other approach, having an absolutist view of types, can be very constraining and complex even for relatively simple domain problems. Rust, for instance, is imo in diminishing-returns territory. Enums? Everyone loves them, uses them daily, and even writes their own out of joy. OTOH, it took years of debate to get GATs implemented (have they landed yet? I haven’t checked), and not because people like and need them, but because they are a necessary technicality to do fundamental things (especially with async).

WorldMaker 11 hours ago [-]
Typescript's --strict is sometimes a very different ballgame from default. I appreciate why in a brownfield you start with the default, but I don't understand how any project starts greenfield work without strict in 2025. (But also I've fought to get brownfield projects to --strict as fast as possible. Explicit `any` at least is code searchable with the most basic grep skills and gives you a TODO burndown chart for after the fastest conversion to --strict.)
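
As a small (hypothetical) illustration of the gap: both functions below are accepted with default settings but rejected under --strict, via noImplicitAny and strictNullChecks respectively.

    function firstChar(s) {               // --strict: parameter 's' implicitly has an 'any' type
      return s.charAt(0);
    }

    function needleLength(xs: string[], needle: string) {
      const hit = xs.find(x => x === needle);
      return hit.length;                  // --strict: 'hit' is possibly 'undefined'
    }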

Typescript's --strict still isn't technically Sound, in the functional programming sense, but that gets back to the pragmatism mentioned in the article of trying to get that 80/20 benefit of enough FP purity to reap as many benefits without insisting on the investment to get 100% purity. (Arguably why Typescript beat Flow in the marketplace.)

sevensor 8 hours ago [-]
> you still can't really rely on it for much

And yet, type annotations in Python are a tremendous improvement and they catch a lot of bugs before they ever appear. Even if I could rely on the type system for nothing it would still catch the bugs that it catches. In fact, there are places where I rely on the type system because I know it does a good job: pure functions on immutable data. And this leads to a secondary benefit: because the type checker is so good at finding errors in pure functions on immutable data, you end up pushing more of your code into those functions.

d0mine 12 hours ago [-]
It may be the exact opposite. You can't express all the desired constraints for your problem domain using just the type system (at least you shouldn't try, to avoid Turing-tarpit-like issues); you need a readable general-purpose programming language for that.

If you think your type system is both readable and powerful, then why would you need yet another programming language? (Haskell comes to mind as an example of such a language--I don't know how true that is.) The opposite (the runtime language used at compile time) may also be successful, e.g. Zig.

Gradual typing in Python provides the best of both worlds: things that are easy to express as types, you express as types. On the other hand, you don't need to bend over backwards and refactor half your code just to satisfy your compiler (Rust comes to mind). You can choose the trade-off suitable for your project and be dynamic where it is beneficial. Different projects may require a different boundary. There is no one-size-fits-all.

P.S. As I understand it, the article itself is about "pragmatism beats purity."

aabhay 10 hours ago [-]
On the other hand, if you think of a programming language as a specialized tool then you choose the tool for the job and don’t reach for your swiss army knife to chop down a tree.

The problem with gradually typed languages is that there are few such trees that should be chopped by their blunt blades. At least Rust is the best for a number of things instead of mediocre at all of them.

One counterpoint to this is local, self-exploratory programming. For that a swiss army knife is ideal, but in those cases who cares about functional programming or abstractions?

taylorallred 14 hours ago [-]
I mostly agree with the sentiments in this article. I was once an extremely zealous FP acolyte, but eventually I realized that there are a few lessons worth taking from FP and applying to more conventional programming:

1. Pure functions are great to use when you have the opportunity.

2. Recursion works great for certain problems and with pure functions.

3. Higher order functions can be a useful shorthand (but please only use them with pure functions).

Otherwise, I think simple, procedural programming is generally a better default paradigm.
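
To illustrate point 3 with a tiny TypeScript sketch (my example, not the commenter's): a pure callback keeps `map` easy to reason about, while an impure one smuggles a side effect into what looks like a simple transformation.

    const prices = [10, 20, 30];

    // Pure callback: output depends only on the input
    const withTax = prices.map(p => p * 1.2);

    // Impure callback: the "map" now also mutates external state
    let runningTotal = 0;
    const alsoWithTax = prices.map(p => {
      runningTotal += p;          // hidden side effect
      return p * 1.2;
    });
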
Tainnor 8 hours ago [-]
> Otherwise, I think simple, procedural programming is generally a better default paradigm.

I think this is almost the opposite of the conclusion drawn by TFA (and in particular the longer-form article linked elsewhere here), which is more like: most standard imperative programming is bad and standard (pure) FP is at least slightly better, but people generally don't draw the right conclusions about how to apply FP lessons to imperative languages.

brandonspark 13 hours ago [-]
I was hoping this article would be a little more concrete, but it seems that it's largely talking about the takeaways from functional programming in a philosophical, effort-in vs value-out kind of way. This is valuable, but for people unfamiliar with functional programming I'm not sure that it gives much context for understanding.

I agree with the high-level, though. I find that people (with respect to programming languages) focus far too much on the specific, nitpicky details (whether I use the `let` keyword, whether I have explicit type annotations or type inference). I find the global, project-level benefits to be far more tangible.

jerf 13 hours ago [-]
This is the conclusion of https://jerf.org/iri/blogbooks/functional-programming-lesson... . The concreteness precedes it; this is just the wrap-up and summary.
brandonspark 9 hours ago [-]
I see. This is indeed the in-depth breakdown I was looking for, thank you.
Tainnor 10 hours ago [-]
It's a shame that this is not the link that was submitted because that (long) article was a really interesting read that gave me some food for thought and also articulated a bunch of things rather clearly that I've been thinking in a similar form for a while. I'm not sure that I agree with all of it, though (I still prefer maps, folds etc. over explicit loops in many cases, but do agree that this is less important than the overall code architecture).
zactato 14 hours ago [-]
I've always thought that there should be mutability of objects within the function that created them, but immutability once the object is returned.

Ultimately one of the major goals of immutability is isolation of side effects.
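
A rough TypeScript approximation of that idea (just a sketch; the compiler can only partially enforce it): mutate freely inside the function that builds the value, then hand back a read-only view.

    interface Histogram { readonly counts: ReadonlyMap<string, number>; }

    function buildHistogram(words: string[]): Histogram {
      const counts = new Map<string, number>();   // freely mutable in here
      for (const w of words) {
        counts.set(w, (counts.get(w) ?? 0) + 1);
      }
      return { counts };                          // exposed as read-only from now on
    }

    const h = buildHistogram(["a", "b", "a"]);
    // h.counts.set("c", 1);  // error: 'set' does not exist on type 'ReadonlyMap<string, number>'

Of course this is a compile-time promise only; runtime freezing or real language support is what the sibling comments about Haskell and Rust get at.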

tremon 13 hours ago [-]
How does this work out for functions in the middle of the call stack? Can the objects a function creates be mutated by the functions it calls? Phrased differently, can functions modify their input parameters? If a function returns one of its input parameters (modified or not), does that mean the calling function can no longer mutate it?

Maybe I'm discarding this too readily, but I don't think this idea of "local mutability" has much value -- if an object is mutable, the compiler and runtime have to support mutation, and many optimizations become impossible because every object is mutable somewhere during its lifetime (and objects created in main are mutable for the lifetime of the program).

jerf 13 hours ago [-]
If we include as an axiom, for the purposes of this conversation, that we must be able to refactor out any part of the "constructor" and that the refactored function must have the same mutation "privileges" as the original creator, which I think is fairly reasonable, this leads you in the direction of something like Rust, I think, which can construct objects and lend bits and pieces of them out in a controlled manner to auxiliary functions, while the value being returned can still have constraints on it.
williamdclt 9 hours ago [-]
I have to say I don't understand your point! The parent's comment is both clear and a reasonable, common approach to programming.

> Can the objects a function creates be mutated by functions they call?

No

> can functions modify their input parameters?

No

> If a function returns one of their input parameters (modified or not), does that mean the calling function can no longer mutate it?

No. Because the called function isn’t allowed to mutate its inputs, there’s no problem for the caller to mutate it. It’s irrelevant whether the input was also an output of the called function as it cannot mutate it anyway.

I suppose you can get into race conditions between caller and callee if your language provides asynchronicity and no further immutability guarantees. Still, you've eliminated a whole lot of potential bugs.

chowells 12 hours ago [-]
Local mutability is fantastic and practical... In a language like Haskell where the type system tracks exactly what values are mutable and all mutation is precisely scoped by functions that freeze the value they generate in a way that prevents leaking.

In a language that isn't so precise, it's a lot harder to get value from the idea.

Tainnor 11 hours ago [-]
You can get value out of local mutability in languages like Scala or Kotlin. Variables can be declared mutable in the scope of a function, and their value can then later be assigned to a non-mutable variable, for example. Collections also come in mutable and immutable variants (although this has some pitfalls in Kotlin).
taeric 13 hours ago [-]
I mean, this isn't that different from any number of things you do in real life? You took a car to some destination. It is assumed you didn't change the engine. Took it to a mechanic, maybe they did?

More, many modifications are flat out expected. If you fill out a job application, you probably don't expect to change the job it is for. But you do expect that you can fill out your pieces, such that it is a modifiable document while you have it. (Back to the car example: it is expected that you used fuel and caused wear on the tires.)

As annoying as they were to deal with, the idea of having "frozen" objects actually fits really well with how many people want to think of things. You open an object to get it ready for use. This will involve a fair bit of setup. When done, you expect that you can freeze it and pass it off to something else.

Transactions can also get into this. Not surprising, as they are part of the vocabulary we have built on how to deal with modifications. Same for synchronization and plenty of other terms. None of them go away if you just choose to use immutable objects.

kikimora 10 hours ago [-]
I realized I don't understand the idea of pure functions anymore. If a function fetches a web page, it is not pure because it modifies some state. But if a function modifies the EAX register, it is still pure. How is creating a socket or changing a buffer different from changing a register value, considering that in all cases outside observers would never know?
williamdclt 10 hours ago [-]
Let’s put your uncertainty to rest: at the extreme, any function execution spends both time and energy, both of which are observable side-effects.

So yes, you're right that there is no such thing as an absolutely pure function. "Pure" always assumes that all dependencies are pure themselves. Whether that's a reasonable assumption, and whether it's still useful, depends on your program: assuming an API call to be pure is certainly not reasonable for many use cases, but it is reasonable enough to be useful for others.

entropicdrifter 10 hours ago [-]
If a function is pure, it can take in and output 100% unmodifiable values. There will never be any side-effects in a pure function.

In other words, if you need to modify the contents of a variable for a function to run, that's not a pure function. Taking something in and outputting something just like it but with some modifications is allowed, so long as the input is unmodified.

Does that make more sense? You can't modify anything inside a function that originated from outside of the function's scope.
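
For instance (a minimal sketch, my own names): the first version below mutates state that came from outside and is impure; the second returns a modified copy and leaves its input untouched.

    interface Cart { readonly items: readonly string[]; }

    // Impure: mutates an object that originated outside the function
    function addItemImpure(cart: { items: string[] }, item: string): void {
      cart.items.push(item);
    }

    // Pure: same inputs always give the same output, nothing outside is touched
    function addItem(cart: Cart, item: string): Cart {
      return { items: [...cart.items, item] };
    }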

betenoire 10 hours ago [-]
I think they understand that, and are referring to more nuanced side effects. Logging, for an example, is a side effect, same with even using a date function. Hitting an API endpoint without cache may be functional if the response never changes, but do you want that? Usually we want a cache, which is skirting idempotency. The closer you look, the more side effects you see
jerf 9 hours ago [-]
It so happens that this was the topic of one of the posts in this series: https://jerf.org/iri/post/2025/fp_lessons_purity/#purity-is-...

I'm assuming from your post you haven't come from there, and we just coincidentally picked a similar example...

tines 10 hours ago [-]
Purity is relative to a given level of abstraction?
your_fin 9 hours ago [-]
The author addresses this nicely in an earlier part of the blog book: https://jerf.org/iri/post/2025/fp_lessons_purity/
ninetyninenine 13 hours ago [-]
web development today is literally just massive massive mutation operations on databases.

Functional programming can't stop it; it just sort of puts a fence around it. The fence makes sense if it's just 10% of your program that you want to fence off. But when the database is literally the core of your application, it's like putting a fence around 90% of the house and you have 10% of pure functional programming.

Most operations are located at the database level. That's where the most bugs occur and that's where most operations occur. You can't really make that functional and pure.

This high level stuff is actually wrong. At the highest level web apps are NOT functional.

I get where he's coming from but he missed the point. Why do functional paradigms fail or not matter so much at the micro level? Because web applications are PRIMARILY not functional by nature. That's why the stuff doesn't matter. You're rewriting for loops into recursion/maps in the 10% of your pure code that's fenced off from the 90% core of the application.

You want to make functional patterns that are applicable at the high level? You need to change the nature of reality to do that. Make it so we don't need to mutate a database ever, and create a functional DSL around that. SQL is not really functional. Barring that, you can keep a mutating database but create a DSL around it that hides the mutation.

meltyness 10 hours ago [-]
Would you consider TLA+ functional? It sounds like the tension you're describing might be how most distributed consensus protocols are implemented as imperative code, and part of the Raft excursion involved writing a TLA+ proof of the protocol.

https://github.com/ongardie/raft.tla

Tainnor 13 hours ago [-]
There must be something wrong in the way FP is taught if the takeaway that people have is that it prevents or is somehow opposed to mutation.

On the one hand you have a bunch of FP languages that don't care in the least bit about "purity" (i.e. being side-effect free) or are more pragmatic around it, such as various LISPs, OCaml, Scala or even parts of the JS ecosystem. And on the other hand, there's a lot of research and discussion in e.g. the Haskell community about how to properly deal with side effects. The idea here is not that side effects are bad, but that we want to know where they happen and combine them safely.

Joker_vD 12 hours ago [-]
> The idea here is not that side effects are bad, but that we want to know where they happen and combine them safely.

Yeah, the idea is not that people gathering together in groups of more than two and/or past 21:00 is bad, but that we want to know where it happens and ensure safety for all. Now, your papers, please, or we'll apply the type checker (we'll apply it to y'all anyhow, of course, but we'd like you to cooperate with the inference process).

chowells 12 hours ago [-]
I don't understand why people get so angry when a compiler points out that their code is broken. Is it better if it runs and does the wrong thing instead?
ninetyninenine 11 hours ago [-]
No you missed the point. I completely get the meaning of segregating IO/mutation away from pure logic.

And my point is, what is the purpose of all of this if 90% of what your app does is mutation and side effects? Functional shell, imperative core indeed, but the shell is literally just a thin layer of skin. The imperative core is a massive black hole.

Functional programming can't save you from a black hole.

Tainnor 11 hours ago [-]
Obviously the answer to "this doesn't solve my problems" is "don't use it then". If your problem domain literally is nothing but API calls and DB updates, then you may not benefit from this.

OTOH, in my experience a lot of people underestimate how much pure business logic exists (or can be extracted) in many applications. In apps I've worked on I've found a lot of value in isolating these parts more cleanly. The blogbook series by the author of TFA (linked further upthread) goes into some detail about how to do that even without going fully down the "pure functional programming" rabbithole.
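
A toy sketch of what "extracting the pure part" can look like in an ordinary handler (hypothetical names, not from the article): the pricing rule is a pure function you can unit-test exhaustively, while the thin outer function does the fetching and saving.

    // Pure core: trivially testable, no IO
    function applyDiscount(total: number, loyaltyYears: number): number {
      const rate = loyaltyYears >= 5 ? 0.1 : loyaltyYears >= 2 ? 0.05 : 0;
      return Math.round(total * (1 - rate) * 100) / 100;
    }

    // Imperative shell: all the side effects live here (hypothetical repository interface)
    interface Repo {
      getTotal(id: string): Promise<number>;
      getLoyaltyYears(id: string): Promise<number>;
      saveTotal(id: string, total: number): Promise<void>;
    }

    async function checkout(repo: Repo, customerId: string): Promise<void> {
      const total = await repo.getTotal(customerId);
      const years = await repo.getLoyaltyYears(customerId);
      await repo.saveTotal(customerId, applyDiscount(total, years));
    }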

ninetyninenine 7 hours ago [-]
>Obviously the answer to "this doesn't solve my problems" is "don't use it then". If your problem domain literally is nothing but API calls and DB updates, then you may not benefit from this.

This is like 99% of web development today. And web development is like 99% of development. It's all very IO heavy and mutation heavy. You can't run from it.

It's why FP is mostly ineffective on the smaller scale because you're already walled off from doing anything that matters in web. Your server is stateless anyway so anything you do in this arena doesn't even matter.

Tainnor 53 minutes ago [-]
> This is like 99% of web development today.

You've claimed this several times now and I fundamentally disagree. In my 11 years of experience working across some 7 companies, web development has always been more than just 99% side effects. Obviously your experiences may be different, but this generalisation is silly.
