This was a really compelling article Dan, and I say that as a long-time advocate of "traditional" server-side rendering like Rails of old.
I think your checklist of characteristics frames things well. It reminds me of Remix's introduction to the library:
https://remix.run/docs/en/main/discussion/introduction
> Building a plain HTML form and server-side handler in a back-end heavy web framework is just as easy to do as it is in Remix. But as soon as you want to cross over into an experience with animated validation messages, focus management, and pending UI, it requires a fundamental change in the code. Typically, people build an API route and then bring in a splash of client-side JavaScript to connect the two. With Remix, you simply add some code around the existing "server side view" without changing how it works fundamentally
It was this argument (and a lot of playing around with challengers like htmx and JSX-like syntax for Python / Go) that brought me round to the idea that RSCs or something similar might well be the way to go.
Bit of a shame seeing how poor some of the engagement has been on here and Reddit though. I thought the structure and length of the article was justified and helpful. Concerning how many people's responses are quite clearly covered in TFA, which they didn't read...
swyx 3 days ago [-]
It's absolutely ridiculous and sad, the level of responses failing basic comprehension, and this is a topic I happen to know well... makes you wonder how much to trust the avg HN comment where I am NOT knowledgeable...
There are a couple of "red flag" quips that if I hear them coming out of my mouth (or feel the urge to do so), I have to do a quick double take and reconsider my stance. "Everything old is new again" is one of them — usually, that means I'm missing some of the progress that has happened in the meantime.
specialist 2 days ago [-]
Sometimes I imagine "progress" as movement along a coil.
In 2D, it seems like you're just reinventing the wheel. But in 3D, you can see that some hack or innovation allowed you to take a new stab at the problem.
Other times I imagine trilemmas, as depicted in Scott McCloud's awesome book Understanding Comics.
There's a bounded design (solution) space, with concerns anchoring each corner. Like maybe fast, simple, and correct. Or functional, imperative, and declarative. Or weight, durability, and cost. Or...
Our job is to divine a solution that lands somewhere in that space, balancing those concerns, as best appropriate for the given context.
By extension, there's no one-size fits all perfect solution. (Though there are "good enough" general purpose solutions.)
The beauty of experiencing many, many different cuts at a problem is that one can start to intuit things: quickly understanding how a new product fits in the space, quickly narrowing the likely solution space for the current project, comparing and contrasting stuff in an open-minded, semi-informed way.
Blah, blah, blah.
parthdesai 2 days ago [-]
Not aware of Remix, but how do you manage connection pooling and read-vs-write queries in these use cases?
esprehn 3 days ago [-]
The big challenge with the approach not touched on in the post is version skew. During a deploy you'll have some new clients talk to old servers and some old clients talk to new servers. The ViewModel is a minimal representation of the data and you can constrain it with backwards-compatibility guarantees (e.g. Protos or Thrift), while the UI component JSON and its associated JS must be compatible with the running client.
I do wonder how many people will use the new React features and then have short outages during deploys, like the FOUC of the past. Even Vercel's Pro plan has only 12 hours of protection, so if you leave a tab open for 24 hours and then click a button, it might hit a server where the server components and functions are incompatible.
yawaramin 3 days ago [-]
Wouldn't this be easy to fix by injecting a version number field in every JSON payload, and if the expected version doesn't match the received one, just force a redirect/reload?
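Something like this, as a rough sketch (BUILD_ID and the payload shape are placeholders, not from the article):

    const BUILD_ID = "abc123"; // baked into the bundle at build time

    async function fetchScreen(url) {
      const res = await fetch(url);
      const payload = await res.json();
      if (payload.version !== BUILD_ID) {
        // Client and server are skewed; reload to pick up matching code.
        window.location.reload();
        return null;
      }
      return payload.data;
    }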
pfhayes 3 days ago [-]
Forcing a reload is a regression compared to the "standard" method proposed at the start of the article. If you have a REST API that serves attributes of a model, and the client is responsible for the presentation of that model, then it is much easier to support outdated clients (perhaps outdated by weeks or months, in the case of mobile apps) without interruption, because their pre-existing logic continues to work.
yawaramin 2 days ago [-]
Arguable that it's a 'regression'...loading pages is kinda the normal behaviour in a web browser. You can try to paper over that basic truth but you can't abstract it away forever. Also, the original comment I replied to said it would be a 'big challenge', but if you accept that the web is the web and sometimes pages can load or even reload, then it's not really a 'challenge' any more at all.
presentation 2 days ago [-]
Vercel's skew protection feature keeps old versions alive for a while and routes requests that come from an old client to that old version, with some API endpoints to forcibly kill old versions if need be, etc. I find it works reasonably well.
yawaramin 2 days ago [-]
Wouldn't a solution that works perfectly be better than one that works 'reasonably well'?
tantalor 3 days ago [-]
Thrashing is why
yawaramin 2 days ago [-]
Sorry what do you mean by 'thrashing' in this context?
tantalor 1 day ago [-]
Reload causes skew causes reload
yawaramin 1 day ago [-]
How does reload cause skew? Reload will just load the latest version of the webapp. That's the point.
tantalor 1 day ago [-]
If you force a reload before the rollout is complete, the user will still experience skew, because you haven't finished the rollout. The website will be completely unusable for a significant fraction of users. You might as well turn off the website during the rollout. This is the main concern of skew - how to keep the website usable at all times for all users across versions.
If your rollout times are very short then skew is not a big concern for you, because it will impact very few users. If it lasts hours, then you have to solve it.
After the rollout is complete, reload is fine. It's a bit user-hostile, but they will reload into a usable state.
ricardobeat 1 day ago [-]
Stickiness at the load balancer level helps mitigate these issues.
hu3 3 days ago [-]
Random JSX nugget:
JSX is a descendant of a PHP extension called XHP [1] [2]
Internally at Facebook you could also just call React components from XHP. Not very relevant to what you see on Facebook now as a user, but in older internal tools built with XHP it made it very easy to just throw in React components.
When you'd annotate a React component with ReactXHP (if I remember correctly), some codegen would generate an equivalent XHP component that takes the same props and can just be used anywhere in XHP. It worked very well when I last used it!
Slightly less related but still somewhat, they have an extension to GraphQL as well that allows you to call/require React components from within GraphQL. If you look at a random GraphQL response there's a good chance you will see things like `"__dr": "GroupsCometHighlightStoryAlbumAttachmentStyle.react"`. I never looked into the mechanics of how these worked.
lioeters 2 days ago [-]
> you could also just call React components from XHP
Fascinating, I didn't know there was such a close integration between XHP and React. I imagined the history like XHP being a predecessor or prior art, but now I see there was an overlap of both being used together, long enough to have special language constructs to "bind" the two worlds.
"ReactXHP" didn't turn up anything, but XHP-JS sounds like it.
> We have a rapidly growing library of React components, but sometimes we’ll want to render the same thing from a page that is mostly static. Rewriting the whole page in React is not always the right decision, but duplicating the rendering code in XHP and React can lead to long-term pain.
> XHP-JS makes it convenient to construct a thin XHP wrapper around a client-side React element, avoiding both of these problems. This can also be combined with XHPAsync to prefetch some data, at the cost of slightly weakening encapsulation.
This is from ten years ago, and it's asking some of the same big questions as the posted article, JSX over the Wire. How to efficiently serve a mixture of static and dynamic content, where the same HTML templates and partials are rendered on server and client side. How to fetch, refresh data, and re-hydrate those templates.
With this historical context, I can understand better the purpose of React Server Components, what it's supposed to accomplish. Using the same language for both client/server-side rendering solves a large swath of the problem space. I haven't finished reading the article, so I'll go enjoy the rest of it.
zarzavat 3 days ago [-]
I'm annoyed to learn that even the original PHP version had `class=` working.
MrJohz 3 days ago [-]
In fairness, `className` makes a lot of sense given that the native DOM uses the `className` property rather than `class`. In that sense, it's a consistent choice, just a consistent choice with the DOM rather than with HTML.
The bigger issue is the changes to events and how they get fired, some of which make sense, others of which just break people's expectations of how Javascript should work when they move to non-React projects.
littlecranky67 3 days ago [-]
Preact fixed that years ago and you can just use class=
MrJohz 2 days ago [-]
It's not about "fixing" it, it's about choosing what you want to be consistent with. You can either be consistent with the DOM API (e.g. `document.getElementById().className = "hello"`) or with HTML (i.e. `class=...`). Both are valid choices — I personally prefer className because this is Javascript, so consistency with the DOM makes more sense, but JSX is designed to be an HTML-like syntax so I can see both ways.
The bigger difference that React makes from other frameworks, and from the DOM, is when it comes to events, in particular with events like `onChange` actually behaving more like the `onInput` event.
littlecranky67 15 hours ago [-]
To be fair, the choice would be to allow both in your JSX like Preact does. Usually I wouldn't bother, as I get your point about consistency. But from a practical standpoint, whenever I paste some HTML code from somewhere else, the first thing I need to do is search/replace class= to className=. Probably more relevant for Tailwind/Bootstrap users than MUI.
nsonha 1 day ago [-]
The only reason I can think of is the dot-notation assignment (not clashing with the class keyword). No one cares about consistency with the DOM API in this context. Given the syntax, they most definitely expect consistency with HTML.
bastawhiz 3 days ago [-]
This article doesn't mention "event handlers" a single time. Even if you get past the client and server getting out of sync and addressing each component by a unique id that's stable between deploys (unless it's been updated), this article doesn't show how you might make any of these components interactive. You can't add an onClick on the server. The best I can figure, you pass these in with a context?
Ultimately this really just smooshes the interface around without solving the problem it sets out to solve: it moves the formatting of the markup to the server, but you can't move all of it unless your content is entirely static (and if you're getting it from the server, SOMETHING has to be interactive).
wonnage 3 days ago [-]
you put interactivity in client components, that seemed pretty clear to me
bastawhiz 2 days ago [-]
And you just never have any handlers on the server components? The problem is that if a component is populated with data from the server, it's sending the data down as JSX. Which means that component can't react to interactivity of client components within it. Unless, of course, you draw the line further up and make more stuff client components.
Consider making a list of posts from some sort of feed. If each item in the list is a server component, you can't have the component representing the item be a server component if you need to handle any events in that item. So now you're limited to just making the list component itself a server component. Well what good is that?
The whole point of this is to move stuff off of the client. But it's not even clear that you're saving any bytes at all in this scenario, because if there are any props duplicated across items in the list, you've got to duplicate the data in the JSON: the shallower the returned JSX, the more raw data you send instead of JSX data. Which completely defeats the point of going through all this trouble in the first place.
SebastianKra 2 days ago [-]
You can...
...have a client component inside the post. For example, for each post, have a server component that contains a <ClientDeleteButton postId={...} />.
...have a wrapper client component that takes a server component as a child, e.g. if you want to show a hover-card for each post:
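(The original code block didn't survive formatting here; a plausible reconstruction, with names taken from the JSON shown downthread:)

    <ClientHoverCard preview={<PostPreview post={post} />}>
      <Post post={post} />
    </ClientHoverCard>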
> props duplicated across items in the list, you've got to duplicate the data in the JSON
I'm pretty sure gzip would just compress that.
bastawhiz 2 days ago [-]
> I'm pretty sure gzip would just compress that.
Bytes on the wire aren't nearly as important in this case. That value still has to be decompressed into a string and that string needs to be parsed into objects and that's all before you pump it into the renderer.
> have a wrapper client component that takes a server components as a child.
That doesn't work for the model defined in this post. Because now each post is a request to the server instead of one single request that returns a rendered list of posts. That's literally the point of doing this whole roundabout thing: to offload as much work as possible to the server.
> For example, for each post, have a server component, that contains a <ClientDeleteButton postId={...} />.
And now only the delete button reacts to being pressed. You can't remove the post from the page. You can't make the post semi transparent. You can't disable the other buttons on the post.
Without making a mess with contexts, state and interactivity can only happen in the client component islands.
And you know what? If you're building a page that's mostly static on a site that sees almost no code changes or deployments, this probably works great for certain cases. But it's far from an ideal practice for anything that's even mildly interactive.
Even just rendering the root of your render tree is problematic, because you probably want to show loading indicators and update the page title or whatever, and that means loading client code to load server code that runs more client code. At least with good old fashioned SSR, by the time code in the browser starts running, everything is already ready to be fully interactive.
SebastianKra 2 days ago [-]
> That doesn't work for the model defined in this post. Because now each post is a request to the server instead of one single request that returns a rendered list of posts.
That's where you're wrong. The JSX snippet that I posted above gets turned into:
    {
      type: "src/ClientHoverCard.js#ClientHoverCard",
      props: {
        preview: // this is already rendered on the server
        children: // this is already rendered on the server
      }
    }
If you wanted to fade the entire post when pressing the delete button without contexts, you’d create a client component like this:
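(This snippet is also missing from the formatting here; roughly, assuming deletePost is a server action:)

    'use client';
    import { useTransition } from 'react';
    import { deletePost } from './actions'; // assumed server action

    function FadingPost({ postId, children }) {
      const [isPending, startTransition] = useTransition();
      return (
        <div style={{ opacity: isPending ? 0.5 : 1 }}>
          {children /* the server-rendered post */}
          <button
            onClick={() =>
              startTransition(async () => {
                await deletePost(postId);
              })
            }
          >
            Delete
          </button>
        </div>
      );
    }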
It's not really the scope of the article, but what about adding a client directive [0] and dropping in your event handler? Just like that, you're back in a familiar CSR React world, like in the "old" days.
Really like this pattern; it's a new location on the curve of "how much rendering do you give the client". In the described architecture, JSX-as-JSON provides versatility once you've already shipped all the behavior to the client (a bunch of React components in a static JS that can be cached; the React Native example really demonstrated this well).
One way to decide if this architecture is for you is to consider where your app lands on the curve of "how much rendering code should you ship to the client vs. how much unhydrated data should you ship". On that curve you can find everything from fully server-rendered HTML to REST APIs and everything in between, plus some less common examples too.
Fully server-rendered HTML is among the fastest to usefulness, relying only on the browser to render HTML. By contrast, in traditional React, server rendering is only half of the story, since after the layout is sent, a great many API calls have to happen to provide a fully hydrated page.
Your sweet spot on that curve is different for every app and depends on a few factors - chiefly, your app’s blend of rate-of-change (maintenance burden over time) and its interactivity.
If the app will not be interactive, take advantage of fully-backend rendering of HTML since the browser’s rendering code is already installed and wicked fast.
If it’ll be highly interactive with changes that ripple across the app, you could go all the way past plain React to a Redux/Flux-like central client-side data store.
And if it’ll be extremely interactive client-side (eg. Google Docs), you may wish to ship all the code to the client and have it update its local store then sync to the server in the background.
But this React Server Components paradigm is surprisingly suited to a great many CRUD apps. Definitely will consider it for future projects - thanks for such a great writeup!
_heimdall 1 day ago [-]
> from fully server-rendered HTML to REST APIs and everything in between
Fully server-rendered HTML is the REST API. Anything feeding back JSON is a form of RPC call; the consumer has to be deeply familiar with what is in the response and how it can be used.
modal-soul 3 days ago [-]
I like this article a lot more than the previous one; not because of length.
In the previous article, I was annoyed a bit by some of the fluffiness and redefinition of concepts that I was already familiar with. This one, however, felt much more concrete and grounded in the history of the space, showing the tradeoffs and improvements in certain areas between them.
The section that amounted to "I'm doing all of this other stuff just to turn it into HTML. With nice, functional, reusable JSX components, but still." really hit close to how I've felt.
My question is: When did you first realize the usefulness of something like RSC? If React had cooked a little longer before gaining traction as the client-side thing, would it have been for "two computers"?
I'm imagining a past where there was some "fuller stack" version that came out first, then there would've been something that could've been run on its own. "Here's our page-stitcher made to run client-side-only".
acemarke 3 days ago [-]
Sounds like another one of Dan's talks, "React from Another Dimension", where he imagines a world in which server-side React came first and then extracted client functionality:
Great talk, thanks for reminding me about this Mark!
hcarvalhoalves 3 days ago [-]
> REST (or, rather, how REST is broadly used) encourages you to think in terms of Resources rather than Models or ViewModels. At first, your Resources start out as mirroring Models. But a single Model rarely has enough data for a screen, so you develop ad-hoc conventions for nesting Models in a Resource. However, including all the relevant Models (e.g. all Likes of a Post) is often impossible or impractical, so you start adding ViewModel-ish fields like friendLikes to your Resources.
So, let's assume the alternative universe, where we did not mess up and get REST wrong.
There's no constraint saying a resource (in the hypermedia sense) has to have the same shape as your business data, or anything else really. A resource should have whatever representation is most useful to the client. If your language is "components" because you're making an interactive app – sure, go ahead and represent this as a resource. And we did that for a while, with xmlhttprequest + HTML fragments, and PHP includes on the server side.
What we were missing all along was a way to decouple the browser from a single resource (the whole document), so we could have nested resources, and keep client state intact on refresh?
yawaramin 3 days ago [-]
And this is exactly what we get with htmx.
h14h 3 days ago [-]
Excellent read! This is the first time I feel like I finally have a good handle on the "what" & "why" of RSCs.
It has also sparked a strong desire to see RSCs compared and contrasted with Phoenix LiveView.
The distinction between RSCs sending "JSX" over the Wire, and LiveViews sending "minimal HTML diffs"[0] over the wire is fascinating to me, and I'm really curious how the two methodologies compare/contrast in practice.
It'd be especially interesting to see how client-driven mutations are handled under each paradigm. For example, let's say an "onClick" is added to the `<button>` element in the `LikeButton` client component -- it immediately brings up a laundry list of questions for me:
1. Do you update the client state optimistically?
2. If you do, what do you do if the server request fails?
3. If you don't, what do you do instead? Intermediate loading state?
4. What happens if some of your friends submit likes the same time you do?
5. What if a user accidentally "liked", and tries to immediately "unlike" by double-clicking?
6. What if a friend submitted a like right after you did, but theirs was persisted before yours?
(I'll refrain from adding questions about how all this would work in a globally distributed system (like BlueSky) with multiple servers and DB replicas ;))
Essentially, I'm curious whether RSCs offer potential solutions to the same sorts of problems Jose Valim identified here[1] when looking at Remix Submission & Revalidation.
Overall, LiveView & RSCs are easily my top two most exciting "full stack" application frameworks, and I love seeing how radically different their approaches are to solving the same set of problems.
I have used RSCs only in Next.js, but to answer your questions:
1./2.: You can update it optimistically. [0]
3.: Depends on the framework's implementation. In Next.js, you'd invalidate the cache. [1][2]
4.: In the case of the like button, it would be a "form button" [3] which would have different ways [4] to show a pending state. It can be done with useFormStatus, useTransition or useActionState depending on your other needs in this component.
5.: You block the double request with useTransition [5] to disable the button.
6.: In Next, you would invalidate the cache and would see your like and the like of the other user.
React offers a useOptimistic Hook that is designed for client-side optimistic updates and automatically handles reverting the update upon failure, etc: https://react.dev/reference/react/useOptimistic
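A minimal sketch for the like case (likePost is an assumed server action, not from the thread):

    'use client';
    import { useOptimistic, startTransition } from 'react';
    import { likePost } from './actions'; // assumed server action

    function LikeButton({ post }) {
      const [liked, setLiked] = useOptimistic(
        post.isLiked,
        (_current, next) => next
      );
      return (
        <button
          onClick={() =>
            startTransition(async () => {
              setLiked(true);          // shown immediately
              await likePost(post.id); // reverts automatically if this throws
            })
          }
        >
          {liked ? 'Liked' : 'Like'}
        </button>
      );
    }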
kassner 3 days ago [-]
I feel the article could have ended after Step 1. It makes the point that you don’t have to follow REST and can build your own session-dependent API endpoints, and use them to fetch data from a component.
I don’t see a point in making that a server-side render. You are now coupling backend to frontend, and forcing the backend to do something that is not its job (assuming you don’t do SSR already).
One can argue that it's useful if you would use the endpoint for ESI/SSI (I loved it in my Varnish days), but that's only a sane option if you are doing server-side renders for everything. Mixing CSR and SSR is OK, but that's a huge amount of extra complexity that you could avoid by just picking one, and adding SSR is mostly for SEO purposes, from which session-dependent content is excluded anyway.
My brain much prefers the separation of concerns. Just give me a JSON API, and let the frontend take care of representation.
barrkel 3 days ago [-]
The point of doing a server-side render follows from two other ideas:
* that the code which fetches data required for the UI is much more efficiently executed on the server side, especially when there are data dependencies, i.e. when a later bit of data needs to be fetched using keys loaded in a previous load (see the sketch after this list)
* that the code which fetches and assembles data for the UI necessarily has the same structure as the UI itself; it is already tied to the UI semantically. It's made up of front end concerns, and it changes in lockstep with the front end. Logically, if it makes life easier / faster, responsibility may migrate between the client and server, since this back end logic is part of the UI.
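A minimal sketch of that first point, with the db calls as stand-ins for real data fetching:

    // Two dependent fetches: one hop on the server,
    // but two full round trips if issued from the client.
    async function PostWithAuthor({ postId }) {
      const post = await db.posts.byId(postId);          // first load
      const author = await db.users.byId(post.authorId); // needs a key from it
      return <Post post={post} author={author} />;
    }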
The BFF thing is a place to put this on the server. It's specifically a back end service which is owned by the front end UI engineers. FWIW, it's also a pattern that you see a lot in Google. Back end services serve up RPC endpoints which are consumed by front end services (or other back end services). The front end service is a service that runs server-side, and assembles data from all the back end services so the client can render. And the front end service is owned by the front end team.
moqizhengz 2 days ago [-]
BFF is in practice a pain in the ass. It is a compromise for large enterprises like Google, but many people are trying to follow what Google does without Google's problem scope and well-developed infra.
Dan's post somehow reinforces the opinion that SSR frameworks are not full-stack; they can at most do some BFF work, and you still need an actual backend.
barrkel 2 days ago [-]
The alternative really is as Dan says: you end up with a bunch of REST endpoints that either serve up too much, or have configuration flags to control how much they serve, simply to satisfy front end concerns while avoiding adding round trip latency. You see this in much smaller apps than Google scale. It's a genuine tension.
Usually the endpoints get too fat, then there's a performance push to speed them up, then you start thinking about fat and thin versions. I've seen it happen repeatedly.
kassner 2 days ago [-]
> The front end service is a service that runs server-side, and assembles data from all the back end services so the client can render. And the front end service is owned by the front end team.
Congratulations, you reinvented GraphQL. /s
Jokes apart, I don't care much about the technology, but what exactly are we optimizing here? Does this BFF connect directly to the (relational / source-of-truth) DB to fetch the data with a massaged query, or does it just use the REST API that the backend team provides? If the latter, we're just shifting complexity around; and if the former, even if it connects to a read replica, you still have to coordinate schema upgrades (which is harder than coordinating a JSON endpoint).
Just let the session-dependent endpoint live in the backend. If the data structure needs changes, the backend team is in the best position to keep it up to date, and they can do it without waiting for the front end team to be ready to handle it on their BFF. A strong contract between both ends (ideally with an OpenAPI spec) goes a really long way.
jauntywundrkind 3 days ago [-]
Dan Abramov (author) also recently wrote a related post, React for Two Computers.
Very well written (as expected) argument for RSC. It's interesting to see the parallels with Inertia.js.
(a bit sad to see all the commenters that clearly haven't read the article though)
jeppester 3 days ago [-]
I was immediately thinking of inertia.js.
Inertia is "dumb" in that a component can't request data, but must rely on the API knowing which data it needs.
RSC is "smarter", but also to its detriment in my opinion. I have yet to see a "clean" Next project using RSC. Developers end up confused about which components should be what (and that some can be both), and "use client" becomes a crutch of sorts, making the projects messy.
Ultimately I think most projects would be better off with Inertia's (BFF) model, because of its simplicity.
dzonga 2 days ago [-]
Inertia is the 'pragmatic' way: your controller endpoints in your backend just pass the right amount of data to your Inertia view, and every interaction is server-driven.
android521 3 days ago [-]
Very well written. It is rare to see these kinds of high quality articles these days.
danabramov 3 days ago [-]
Thanks!
skydhash 3 days ago [-]
Everything old is new again, and I'm not even that old, yet I remember that you could return HTML fragments from an AJAX call. But this is worse from any architectural point of view. Why?
The old way was to return HTML fragments and add them to the DOM. There was still a separation of concerns, as the presentation layer on the server didn't care about the interface presented on the client. It was just data, generally composed by a template library. The advent of SPAs made it so that we can reunite the presentation layer (with the template library) on the frontend and just send down the data to be composed with the request's response.
The issue with this approach is that it splits the frontend again, but now you have two template libraries to take care of (in this case one library, but used on both sides). The main advantage of having a boundary is that you can have the best representation of data for each side's logic, converting only when needed. And the conversion layer needs to be simple enough not to introduce complexity of its own. JSON is fine, as it's easy to audit a parser; HTML is fine because it's mostly used as-is on the other layer. We also have binary representations, but those have strong arguments for their use as well.
With JSX on the server side, it's abstraction where there's no need for it. And in the wrong place, to boot.
danabramov 3 days ago [-]
It feels like you haven't read the article and commented on the title.
>The old way was to return HTML fragments and add them to the DOM.
>JSON is fine [..] With JSX on the server side, it's abstraction when there's no need to be. And in the wrong place to boot.
I really don't know what you mean; the transport literally is JSON. We're not literally sending JSX anywhere. That's also in the article. The JSON output is shown about a dozen times throughout, especially in the third part. You can search for "JSON" on the page. It appears 97 times.
skydhash 3 days ago [-]
From the article:
Replacing innerHTML wasn’t working out particularly well—especially for the highly interactive Ads product—which made an engineer (who was not me, by the way) wonder whether it’s possible to run an XHP-style “tags render other tags” paradigm directly on the client computer without losing state between the re-renders.
HTML is still a document format, and while there have been a lot of features added to browsers over the years, we still have this at the core of any web page. It's always a given that state doesn't survive renders. In desktop software, the process is alive while the UI is shown, so that's great for having state, but web pages started as documents, and the API reflects that. So saying that it's an issue is the same as saying a fork is not great for cutting.
React is an abstraction over the DOM that gives a better API when you're trying to avoid full re-renders. And you can then simplify the format for transferring data between server and client. Net win on both sides.
But the technique described in the article is like having a hammer and seeing nails everywhere. I don't see the advantages of having JSX representation of JSON objects on the server side.
danabramov 3 days ago [-]
>I don't see the advantages of having JSX representation of JSON objects on the server side.
That's not what we're building towards. I'm just using "breaking JSON apart" as a narrative device to show that Server Components componentize the UI-specific parts of the API logic (which previously lived in ad-hoc ViewModel-like parts of REST responses, or in the client codebase where REST responses get massaged).
It blends the previous "JSON-building" into components.
skydhash 3 days ago [-]
I'm pointing out that this particular pattern (Server Components) is engendering more complexity than necessary.
If you have a full-blown SPA on the client side, you shouldn't use ViewModels, as that will tie your backend API to the client. If you go for a mixed approach, then your presentation layer is on the server and it's not an API.
HTMX is cognizant of this fact. What it adds are useful and nice abstractions on the basis that the interface is constructed on one end and used on the other. RSC is a complex solution for a simple problem.
danabramov 3 days ago [-]
>you shouldn't use ViewModels as that will ties your backend API to the client.
Note “instead of replacing your existing REST API, you can add…”. It’s a thing people do these days! Recognizing the need for this layer has plenty of benefits.
As for HTMX, I know you might disagree, but I think it’s actually very similar in spirit to RSC. I do like it. Directives are like very limited Client components, server partials of your choice are like very limited Server components. It’s a good way to get a feel for the model.
megaman821 3 days ago [-]
With morphdom (or one day native DOM diffing), wouldn't HTMX fulfill 80% of your wishlist?
I personally find HTMX pairs well with web components for client components since their lifecycle runs automatically when they get added to the DOM.
pier25 3 days ago [-]
What if the internal state of the web component has changed?
Wouldn't an HTMX update stomp over it and reset the component to its initial state?
megaman821 2 days ago [-]
Not when using morphdom or the new moveBefore method. Although you would have to give your element a stable id.
pier25 2 days ago [-]
Even when using shadow DOM?
What about internal JS state that isn't reflected in the DOM?
megaman821 2 days ago [-]
If it is new to the DOM it will get added. If it is present in the DOM (based on id and other attributes when the id is not present) it will not get recreated. It may be left alone or it may have its attributes merged. There are a ton of edge cases though, which is why there is no native DOM diffing yet.
pier25 2 days ago [-]
Thanks I need to look closer at this!
whalesalad 3 days ago [-]
To be fair, this post is enormous. If I were to try to print it on 8.5x11, it comes out to 71 pages.
danabramov 3 days ago [-]
I mean sure but not commenting is always an option. I don't really understand the impulse to argue with a position not expressed in the text.
phpnode 3 days ago [-]
It happens because people really want to participate in the conversation, and that participation is more important to them than making a meaningful point.
pier25 3 days ago [-]
Maybe add a TLDR section?
danabramov 3 days ago [-]
I don't think it would do justice to the article. If I could write a good tldr, I wouldn't need to write a long article in the first place. I don't think it's important to optimize the article for a Hacker News discussion.
That said, I did include recaps of the three major sections at their end:
Look, it's your article Dan, but it would be in your best interest to provide a tldr with the general points. It would help so that people don't misjudge your article (this has already happened). It could maybe make the article more interesting to people who initially dismissed reading something so long, too. And providing some kind of initial framework might help those who are actually reading it follow along.
rwieruch 3 days ago [-]
Feed it to a LLM and let it give you the gist :)
yanndinendal 3 days ago [-]
The 3 tl;drs he just linked seem fine.
pier25 3 days ago [-]
The fact that he needed to link to those in an HN comment proves my point...
swyx 3 days ago [-]
It really doesn't. Stop trying to dumb him down for your personal tastes. He's much better at this than the rest of us.
owebmaster 3 days ago [-]
> he's much better at this than the rest of us
That is not a good reason to make the content unnecessarily difficult for its target audience. Being smart also means being able to communicate with those who aren't as brilliant (or just don't have the time).
retropragma 1 day ago [-]
The content isn't difficult. People are just lazy
pier25 3 days ago [-]
> stop trying to dumb him down for your personal tastes
That's unfair.
If anything you're the one dumbing down what I wrote for your personal taste.
pixl97 3 days ago [-]
Yet because of that, the issue they were concerned about was shown to the thread's readers without their having to read 75 pages of text.
Quite often people read the forum thread first before wasting their life on some large corpus of text that might be crap. High-quality discussions can point out poor-quality (or at least fundamentally incorrect) posts and the reasons behind them, enlightening the rest of the readers.
retropragma 1 day ago [-]
Delegate your thinking to HN comments at your own peril
aylmao 3 days ago [-]
> The main advantages of having a boundary is that you can have the best representation of data for each side's logic, converting only when needs.
RSC doesn't impede this. In fact it improves it. Instead of having your ORM's objects converted to JSON, sent, parsed, and finally manipulated to your UI's needs, you skip the whole "convert to JSON" part. You can go straight from your ORM objects (best for data operations) to UI (best for rendering) and skip having to think about how the heck you'll serialize this over the wire.
> With JSX on the server side, it's abstraction when there's no need to be. And in the wrong place to boot.
JSX is syntactic sugar for a specific format of JavaScript object. It's a pretty simple format really. From ReactJSXElement.js, L242 [1]:
    element = {
      // This tag allows us to uniquely identify this as a React Element
      $$typeof: REACT_ELEMENT_TYPE,
      // Built-in properties that belong on the element
      type,
      key,
      ref,
      props,
    };
As far as I'm aware, TC39 hasn't yet specified which shape of literal is "ok" and which one is "wrong" to run on a computer, depending on whether that computer has a screen or not. I imagine this is why V8, JSC, SpiderMonkey, etc. let you create objects of any shape you want in any environment. I don't understand what's wrong about using this shape on the server.
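For illustration, `<LikeButton postId={42} />` (the component name is just an example) is sugar for roughly this; the exact tag symbol varies across React versions:

    const element = {
      $$typeof: Symbol.for('react.element'),
      type: LikeButton,
      key: null,
      ref: null,
      props: { postId: 42 },
    };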
> Instead of having your ORM's objects, to be converted to JSON, sent, parsed, and finally manipulated to your UIs needs, you skip the whole "convert to JSON" part. You can go straight from your ORM objects (best for data operations) to UI (best for rendering) and skip having to think about how the heck you'll serialize this to be serialized over the wire.
I can't let go of the fact that you get the exact same thing if you just render HTML on the server. It's driving me crazy. We've really gone full circle, and I'm not sure for what benefit.
tshaddox 3 days ago [-]
> The old way was to return HTML fragments and add them to the DOM. There was still a separation of concern as the presentation layer on the server didn't care about the interface presented on the client.
I doubt there were many systems where the server-generated HTML fragments were generic enough that the server and client HTML documents didn't need to know anything about each other's HTML. It's conceivable to build such a system, particularly if it's intended for a screen-reader or an extremely thinly-styled web page, but in either of those cases HTML injection over AJAX would have been an unlikely architectural choice.
In practice, all these systems that did HTML injection over AJAX were tightly coupled. The server made strong assumptions about the HTML documents that would be requesting HTML fragments, and the HTML documents made strong assumptions about the shape of the HTML fragments the server would give it.
skydhash 3 days ago [-]
> where the server-generated HTML fragments were generic enough that the server and client HTML documents didn't need to know anything about each other's HTML.
> all these systems that did HTML injection over AJAX were tightly coupled
That's because the presentation layer originated on the server. What the server didn't care about was the transformation that alters the display of the HTML on the client. So you can add an extension to your browser that translates the text to another language, and it wouldn't matter to the server. Or inject your own styles. Even when you do an AJAX request, you can add JS code that discards the response.
rapnie 3 days ago [-]
> Everything old is new again
An age ago I took interest in KnockoutJS, based on Model-View-ViewModel, and found it pragmatic and easy to use. It was however at the beginning of the mad JavaScript framework-hopping marathon, so it was considered 'obsolete' after a few months. I just peeked; Knockout still exists.
Knockout was a huge leap in developer experience at the time. It's worth noting that Ryan Carniato, the creator of SolidJS, was a huge fan of Knockout. It's a major influence on SolidJS.
kilroy123 3 days ago [-]
I was a big fan of knockoutjs back in the day! An app I built with it is still in use today.
brap 3 days ago [-]
Dan, I've been reading some of your posts and watching some of your talks since Redux, and I really love how passionate you are about this stuff. I think the frontend world is lucky to have someone like you who spends a lot of time thinking about these things enthusiastically.
Anyway, it's hard to deny that React dev nowadays is an ugly mess. Have you given any thought to what a next-gen framework might look like (I'm sure you have)?
csbartus 3 days ago [-]
What happened to the very elegant GraphQL? Where the client _declares_ its data needs, and _that's all_, all the rest is taken care of by the framework?
Compared to GraphQL, Server Components are a big step back: you have to do manually on the server what GraphQL gave you by default.
eadmund 3 days ago [-]
> the very elegant GraphQL
The GraphQL which ‘elegantly’ returns a 200 on errors? The GraphQL which ‘elegantly’ encodes idempotent reads as mutating POSTS? The GraphQL which ‘elegantly’ creates its own ad hoc JSON-but-not-JSON language?
The right approach, of course, is HTMX-style real REST (incidentally there needs to be a quick way to distinguish real REST from fake OpenAPI-style JSON-as-a-service). E.g., the article says: ‘your client should be able to request all data for a specific screen at once.’ Yes, of course: the way to request a page is to (wait for it, JavaScript kiddies): request a page.
The even better approach is to advance the state of the art beyond JavaScript, beyond HTML and beyond CSS. There is no good reason for these three to be completely separate syntaxes. Fortunately, there is already a good universal syntax for trees of data: S-expressions. The original article mentions SDUI as ‘essentially it’s just JSON endpoints that return UI trees’: in a sane web development model the UI trees would be S-expressions macro-expanded into SHTML.
N+1 is a solved problem at the framework level
If GraphQL actually affects your performance, congratulations, your application is EXTREMELY popular, more so than Facebook, and they use GraphQL. There are also persisted queries, etc.
Not sure about caching; if anything, GraphQL offers a more granular level of caching, so it can be reused even more?
The only issue I see with GraphQL is that the tooling makes it much harder to get started on a new project, but recent projects such as gql.tada make it much easier, though it still could be easier.
rwieruch 3 days ago [-]
I have been out of touch with the GraphQL ecosystem for a while. What are the status quo solutions to the problems stated above?
What about the other things? I remember that Stitching and E2E type safety, for example, were pretty brittle in 2018.
YuukiRey 3 days ago [-]
We use the dataloader pattern (albeit an in-house Golang implementation) and it has solved all our N+1 problems.
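For reference, a sketch of the canonical JS version of the pattern, using the dataloader package (the batch query is a stand-in):

    import DataLoader from 'dataloader';

    const userLoader = new DataLoader(async (ids) => {
      const rows = await db.users.findByIds([...ids]); // one batched query
      const byId = new Map(rows.map((u) => [u.id, u]));
      return ids.map((id) => byId.get(id) ?? null);
    });

    // Resolvers call userLoader.load(id); loads issued in the same tick
    // are coalesced into a single findByIds call, eliminating the N+1.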
E2E type safety in our case is handled by TypeScript code generation. It works very well. I also happen to have to work in a NextJS codebase, which is the worst piece of technology I have ever had the displeasure of working with, and I don't really see any meaningful difference on a day-to-day basis between the type sharing in the NextJS codebase (where server/client is a very fuzzy boundary) and the other codebase that just uses code generation and is a client-only SPA.
For stitching we use Nautilus and I've never observed any issues with it. We had one outage because of some description that was updated in some dependency and that sucked but for the most part it just works. Our usage is probably relatively simple though.
On top of HTTP-level caching, you can do any type of caching (Redis / FS / etc.) just like regular REST, but at a granular level. For example: `user { comments(threadId: abc, page: 1, limit: 20) { body, postedAt } }` is requested and then cached; when another request comes in for `thread(id: abc) { comments(page: 1, limit: 20) { body, postedAt } }`, you can share the cache.
But of course, there is always the classic dataloader as well.
I am not saying that using GraphQL will solve all the problems, but I am saying that the problem OP describes has been solved in an arguably "better" way, as it does not tie the presentation (HTML) to the data, which matters for multiplatform apps like web plus native apps.
csbartus 3 days ago [-]
That's a backend issue I guess ...
5Qn8mNbc2FNCiVV 1 day ago [-]
That's the thing, this brings the benefits of GraphQL without requiring GraphQL (+Relay). This was one of the main drivers of RSC (afaik).
Obviously if you have a GraphQL backend, you couldn't care less, and the only benefit you'd get is reduced bundle size, e.g. for content-heavy static pages. But you'll lose client-side caching, so you can't have your cake and eat it too.
Just a matter of trade-offs
anentropic 3 days ago [-]
Couldn't you have both?
I assumed RSC was more concerned with which end did the rendering, and GraphQL with how to fetch just the right data in one request
hyuuu 3 days ago [-]
I was just going to say, all of this has been solved with GraphQL, elegantly.
chacham15 3 days ago [-]
The main thing that confuses me is that this seems to be PHP implemented in React... and it talks about how to render the first page without a waterfall, and all that makes sense, but the main issue with PHP was that reactivity was much harder. I didn't see / I don't understand how this deals with that.
When you have a post with a like button and the user presses the like button, how do the like button props update? I assume that it would be a REST request to update the like model. You could make the like button refetch the like view model when the button is clicked, but then how do you tie that back to all the other UI elements that need to update as a result? E.g. what if the UI designer wants to put a highlight around posts which have been liked?
On the server, you've already lost the state of the client after that first render, so doing some sort of reverse dependency trail seems fragile. So the only option would be to have the client do it, but then you're back to the waterfall (unless you somehow know the entire state of the client on the server for the server to be able to fully re-render the sub-tree, and what if multiple separate subtrees are involved in this?). I suppose that it is do-able if there exists NO client side state, but it still seems difficult. Am I missing something?
danabramov 3 days ago [-]
>When you have a post with a like button and the user presses the like button, how do the like button props update?
Right, so there's actually a few ways to do this, and the "best" one kind of depends on the tradeoffs of your UI.
Since Like itself is a Client Component, it can just hit the POST endpoint and update its state locally. I.e. without "refreshing" any of the server stuff. It "knows" it's been liked. This is the traditional Client-only approach.
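Roughly, as a sketch (the endpoint shape is made up):

    'use client';
    import { useState } from 'react';

    function LikeButton({ postId, initialLiked }) {
      const [liked, setLiked] = useState(initialLiked);
      async function toggle() {
        setLiked(!liked); // local state is the source of truth here
        await fetch(`/api/posts/${postId}/like`, { method: 'POST' });
      }
      return <button onClick={toggle}>{liked ? 'Liked' : 'Like'}</button>;
    }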
Another option is to refetch UI from the server. In the simplest case, refetching the entire screen. Then yes, new props would be sent down (as JSON) and this would update both the Like button (if it uses them as its source of truth) and other UI elements (like the highlights you mentioned). It'll just send the entire thing down (but it will be gracefully merged into the UI instead of replacing it). Of course, if your server always returns an unpredictable output (e.g. a Feed that's always different), then you don't want to do that. You could get more surgical with refreshing parts of the tree (e.g. a subroute) but going the first way (Client-only) in this case would be easier.
In other words, the key thing that's different is that the client-side things are highly dynamic so they have agency in whether to do a client change surgically or to do a coarse roundtrip.
spellboots 3 days ago [-]
This feels a lot like https://inertiajs.com/ which I've really been enjoying using recently
danabramov 3 days ago [-]
Yeah, there is quite a bit of overlap!
tillcarlos 3 days ago [-]
This. We started using it with Rails and it’s been great.
I do like scrappy Rails views that can be assembled fast, but the React views our FE dev is putting on top of existing Rails controllers have a much better UX.
chrisvenum 3 days ago [-]
I am a huge fan of Inertia.
I always felt limited by Blade but drained by the complexity of SPAs.
Inertia makes using React/Vue feel as simple as old-school Laravel app.
Long live the monolith.
altbdoor 3 days ago [-]
IMO this feels like Preact "render to string" with Express, though I might be oversimplifying things, and granted it wouldn't have all the niceties that React offers.
Feels like HTMX, feels like we've come full circle.
danabramov 3 days ago [-]
In my checklist (https://overreacted.io/jsx-over-the-wire/#dans-async-ui-fram...), that would satisfy only (2), (3) if it supports async/await in components, and (4). It would not satisfy (1) or (5) because then you'd have to hydrate the components on the client, which you wouldn't be able to do with Preact if they had server-only logic.
altbdoor 3 days ago [-]
Thanks for the reply Dan. That was a great write up, if I might add.
And yeap, you're right! If we need a lot more client side interactivity, just rendering JSX on server side won't cut it.
low_tech_punk 3 days ago [-]
The X in JSX stands for HTMX.
danabramov 3 days ago [-]
Yes
recursivedoubts 3 days ago [-]
unfathomably based
scop 3 days ago [-]
I can't help but read this in a baritone blustering-with-spittle transatlantic voice.
WuxiFingerHold 2 days ago [-]
Nothing can replace good engineering. Good engineering means using the right methods or tools for the given situation.
If you're fetching tens of raw models (each corresponding to a table) and extracting (or even joining!) the data needed for display in the view, that's clearly not the best engineering decision. But fetching 2 or 3 well-shaped views in your component and doing the last bit of correlation in the component is acceptable.
Same for deciding a render strategy: Traditional SSR (maybe with HTMX) vs. isomorphic (Next and friends) vs. SPA. Same for Redux vs MobX. Or what I think is often neglected by the frontend folks: Running Node on the backend vs. Java vs. Go vs. C# vs. Rust.
If you're already in the spot where React Server Components are a good fit, the ideas in the article are compelling. But IMO not enough to be convincing to switch to or choose React / Next when you're better off with traditional SSR or an SPA, which IME are the best fits for the vast majority of apps.
nop_slide 3 days ago [-]
Just use Django/HTMX, Rails/Hotwire, or Laravel/Livewire
cpursley 3 days ago [-]
LiveView is the OG and absolutely smokes those in terms of performance (and DX), but the ecosystem is lacking. Anyways, I'd rather use full-stack React/TypeScript over slow and untyped Rails or Python and their inferior ORMs.
pier25 3 days ago [-]
Phoenix/Liveviews
Fresh/Partials
Astro/HTMX with Partials
jonathanhefner 3 days ago [-]
RSC is indeed very cool. It also serves as a superior serialization format compared to JSON. For example, it can roundtrip basic types such as `Date` and `Map` with no extra effort.
One thing I would like to see more focus on in React is returning components from server functions. Right now, using server functions for data fetching is discouraged, but I think it has some compelling use cases. It is especially useful when you have components that need to fetch data dynamically, but you don't want the fetch / data tied to the URL, as it would be with a typical server component. For example, when fetching suggestions for a typeahead text input.
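Something along these lines, as a sketch (searchPosts is a stand-in data call):

    'use server';
    import { searchPosts } from './data'; // assumed data layer

    export async function getSuggestions(query) {
      const posts = await searchPosts(query);
      return (
        <ul>
          {posts.map((p) => (
            <li key={p.id}>{p.title}</li>
          ))}
        </ul>
      );
    }

On the client you'd call it inside a transition as the user types and render the returned element straight from state.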
I would love to see something like it integrated into React proper.
exceptione 2 days ago [-]
Thought-provoking article @danabramov, thanks.
I am wondering: What are the gains of RSC over a Fat Resource (with expand, sort, select and filter) where responses for (expand,sort,select) are cached? Most applications are READ-heavy, so even a fat response is easily returned to the client and might not need a refetch that often.
The article briefly mentions that you need $expand and $select then, but why/when is that not a valid approach?
The other point I have is that I really do not like to have JS on my server. If my business logic runs on a better runtime, we have 3 (actually 4) layers to pass:
Storage layer (DB)
-> business logic in C# (server)
-> ViewModel layer in TS/JS (server)
-> React in TS/JS (client).
Managing changes gets really complex, with each layer needing type safety.
aabbcc1241 3 days ago [-]
Hey, thanks for sharing "JSX Over the Wire"! As the creator of ts-liveview, I’m thrilled to see Dan’s ideas on server-side JSX rendering and minimal client updates—they mesh so well with my work.
ts-liveview is a TypeScript framework I built (grab it as a starter project on GitHub[1]) for real-time, server-rendered apps. It uses JSX/TSX to render HTML server-side and, in WebSocket mode, updates the DOM by targeting specific CSS selectors (document.querySelector) over WebSockets or HTTP/2 streaming. This keeps client-side JavaScript light, delivering fast, SEO-friendly pages and reactive UIs, much like Dan’s “JSX over the wire” vision.
What’s your take on this server-driven approach? Could it shake up how we build apps compared to heavy client-side frameworks? Curious if you’ve tried ts-liveview yet—it’s been a fun project to dig into these ideas!
> But putting ViewModels in Resources also doesn’t work very well. ViewModels are not abstract concepts like “a post”; each ViewModel describes a specific piece of UI. As a result, the shape of your “Post” Resource grows to encompass the needs of every screen displaying a post.
I don't see the issue with adding an endpoint per viewmodel. Treating viewmodels as resources seems perfectly fine. Then again, I'm already on the HATEOAS and HTMX bandwagon, so maybe that just seems obvious, as it's no worse than returning HTML or JSX that could be constantly changing. If you actually need stable API endpoints for others to consume for other purposes, that's a separate consideration. This seems to be the direction the rest of the article goes.
bk496 3 days ago [-]
Another great post!
I like the abstraction of server components but some of my co-workers seem to prefer HTMX (sending HTML rather than JSON) and can't really see any performance benefit from server components.
Maybe OP could clear up
- Whether HTML could be sent instead (depending on platform). There is a brief point about not losing state, but if your component does not have input elements, or can have its state thrown away, then maybe raw HTML could work?
- Prop size vs. markup/component size. If you send down a component with a 1:9 dynamic-to-static content ratio, wouldn't it be better to have the 90% static part preloaded on the client, and only transmit the 10% of dynamic data? Any good heuristic options here?
- "It’s easy to make HTML out of JSON, but not the inverse". What is intrinsic about HTML/XML?
--
Also, is Dan the only maintainer on the React team who does these kinds of posts? Do other members write long-form? It would be interesting to have a second angle.
tbeseda 3 days ago [-]
A second angle from the same team?
Or reference the 2+ decades written about the same pattern in simpler, faster, less complex implementations.
curtisblaine 3 days ago [-]
Or you can have your "backend for frontend"... on the frontend, so you don't have an additional layer, it's always written in the frontend language, and it's always synced to the frontend's needs. The lengths we go to, reinventing the square wheel.
AstroBen 3 days ago [-]
These are all relatively simple concepts for an experienced programmer to understand, but they're being communicated in a very complex way due to the React world of JSX and Components.
What if we just talked about it only in terms of simple data structures and function composition?
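Stripped of the branding, the core of it is something like this sketch:

    type UINode =
      | string
      | { tag: string; props: Record<string, unknown>; children: UINode[] };

    // A "server component" is just an async function from params to a tree:
    async function likeCount(postId: string): Promise<UINode> {
      const count = 42; // stand-in for a database call
      return { tag: 'span', props: {}, children: [`${count} likes`] };
    }

    // Composition is plain function calls; the client receives the JSON tree.
    async function post(postId: string): Promise<UINode> {
      return {
        tag: 'article',
        props: {},
        children: ['Hello world', await likeCount(postId)],
      };
    }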
jacobobryant 3 days ago [-]
The framework checklist[1] makes me think of Fulcro: https://fulcro.fulcrologic.com/. To a first approximation you could think of it like defining a GraphQL query alongside each of your UI components. When you load data for one component (e.g. a top-level page component), it combines its own query with the queries from its children UI components.
[1] https://overreacted.io/jsx-over-the-wire/#dans-async-ui-fram...
Yes, another case of old school web dev making a comeback. “HTML over the wire” is basically server-rendered templates (php, erb, ejs, jinja), sent asynchronously as structured data and interpreted by React to render the component tree.
What’s being done here isn’t entirely new. Turbo/Hotwire [1], Phoenix LiveView, even Facebook’s old Async XHP explored similar patterns. The twist is using JSX to define the component tree server-side and send it as JSON, so the view model logic and UI live in the same place. Feels new, but super familiar, even going back to CGI days.
[1] https://hotwired.dev
Right, that's why it's in the post: https://overreacted.io/jsx-over-the-wire/#async-xhp
Likewise with CGI: https://overreacted.io/jsx-over-the-wire/#html-ssi-and-cgi
Agree there's echoes of "old" in "new" but there are also distinct new things too :)
gavmor 3 days ago [-]
Right? Right. I had similar thoughts (API that's the parent of the view? You mean a controller?), and quit very early into the post. Didn't realize it was Dan Abramov, or I might've at least skimmed the 70% and 99% marks, but there's no going back now.
Who is this written for? A junior dev? Or, are we minting senior devs with no historical knowledge?
I still can't get over how the "API" in "REST API" apparently originally meant "a website".
gherkinnn 3 days ago [-]
There is a part of my brain that is intrigued by React Server Components. I kinda get it.
And yet, I see nothing but confusion around this topic. For two years now. I see Next.js shipping foot guns, I see docs on these rendering modes almost as long as those covering all of Django, and I see lengthy blog posts like this.
When the majority of problems can be solved with Django, why tie yourself into knots like this? At what point is it worth it?
danabramov 3 days ago [-]
I think the rollout is a bit messy (especially because it wasn't introduced as a new thing but kind of replaced an already highly used but different thing). There are pros and cons to that kind of rollout. The tooling is also yet to mature. And we're still figuring out how to educate people on it.
That said, I also think the basic concepts of RSC itself (not "rendering modes" which are a Next thing) are very simple and "up there" with closures, imports, async/await and structured programming in general. They deserve to be learned and broadly understood.
grncdr 3 days ago [-]
> Since XHP executes on a server that emits HTML, the most that you can do relatively seamlessly is to replace parts of an existing markup with the newly generated HTML markup from the server by updating innerHTML of some DOM node.
It’s a very long post so maybe I missed it, but does Dan ever address morphdom and its descendants? I feel like that’s a very relevant point in the design space explored in the article.
_heimdall 2 days ago [-]
I always come back to the idea that I want to render HTML where the state lives rather than shipping both a rendering engine and all the necessary data to a client.
In most cases that means rendering HTML on the server, where most of the data lives, and using a handful of small components in the frontend for state that never goes to the backend.
yawaramin 3 days ago [-]
I skimmed over this and imho it would be better to cut like 30% of the exposition and split it up into a series of articles tackling each style separately. Just my 2c.
danabramov 3 days ago [-]
I'm hoping someone will do something like that. I try to write with the audience of writers in mind.
andrewstuart 2 days ago [-]
IMO:
1: APIs should return JSON because endpoints do often get reused throughout an application.
2: it really is super easy to get the JSON into client side HTML with JSX
3: APIs should not return everything needed for a component, APIs should return one thing only. Makes back and front end more simple and flexible and honestly who cares about the extra network requests
valtism 3 days ago [-]
Really appreciate the quality you put into expressing these things. It was nice just to see a well laid-out justification of how trying to tie a frontend to a backend can get messy quickly. I'm definitely going to remember the "ungrounded abstraction" as a useful concept here.
alejalapeno 3 days ago [-]
I've represented JSX/the component hierarchy as JSON for CMS composition of React components. If you think of props as CMS inputs and children as nesting components then all the CMS/backend has to do is return the JSON representation and the frontend only needs to loop over it with React.createElement().
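A minimal sketch of that loop, assuming nodes shaped like { type, props, children } and a registry mapping CMS type names to components (all names here are illustrative, not from the comment above):

  import { createElement, type ReactNode } from 'react';

  // Components the CMS is allowed to reference by name
  const HeroBanner = ({ title }: { title: string }) => createElement('h1', null, title);
  const registry: Record<string, unknown> = { Hero: HeroBanner };

  type CmsNode =
    | string
    | { type: string; props?: Record<string, unknown>; children?: CmsNode[] };

  function renderNode(node: CmsNode): ReactNode {
    if (typeof node === 'string') return node;                    // text passes through
    const component = (registry[node.type] as any) ?? node.type;  // fall back to host tags like 'div'
    const children = (node.children ?? []).map(renderNode);       // recurse over nested components
    return createElement(component, node.props ?? null, ...children);
  }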
cstew913 3 days ago [-]
It reminds me of when I sent HTML back from my Java Servlets.
It's exciting to see server side rendering come back around.
no_wizard 3 days ago [-]
I believe there is a project (not sure if it’s active) called JSX2 that treated this exact problem as a first-class concern. It was pretty fast too. Emulated the React API for the time quite well. This was 4-5 years ago at least
_heimdall 2 days ago [-]
Is Dan reinventing Astro?
The biggest draw that pulled me to Astro early on was the fact that it uses JSX for a, in my opinion, better server side templating system.
yawaramin 2 days ago [-]
From the final code, this is called for every rendered post:
const post = await getPost(postId);
But...we should basically never be doing this. This is totally inefficient. Suppose this is making a network call to your Postgres database to get the post data. It will make the network call N number of times. You are right back at the N+1 query problem.
Of course if you're using SQLite on a local disk then you're good. If you have some data loader middleware that batches and combines all these requests then you're good. But if you're just naively making these requests directly...then you're setting up your app for massive performance problems in the near future.
The known solution to the N+1 query problem is to bulk load all the data you need. So you need to render a list of posts, you bulk load all their data with a single query. Now you can just pass the data in directly to the rendering components. They don't load their own data. And the need for RSC is gone.
I'm sure RSC is good for some narrow set of cases where the data loading efficiency problems are already taken care of, but that's definitely not most cases.
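For illustration, the contrast described above looks roughly like this (getPost, db and postIds are stand-ins, not code from the article):

  // N+1: one round-trip per post, which is what the warning is about
  const posts = await Promise.all(postIds.map((id) => getPost(id)));

  // Bulk: one query for the whole list, then pass plain data down as props
  const rows = await db.query(
    'SELECT id, title, content FROM posts WHERE id = ANY($1)', // Postgres-style placeholder
    [postIds]
  );
  const postsById = new Map(rows.map((row) => [row.id, row]));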
agos 2 days ago [-]
some of the presented problems of the "good ol' REST endpoint" approach feel a tiny bit of a strawman, like "you can't add endpoints because /get/posts is taken", but having to cobble together a response from multiple calls (and all it entails, such as loading states) is a very real and, I feel, very shared pain. And in my experience, too, GraphQL has been an unsatisfactory solution.
A BFF is indeed a possible solution and yeah if you have a BFF made in JS for your React app the natural conclusion is that you might as well start returning JSX.
But. BUT. "if you have a BFF made in JS" and "if your BFF is for your React app" are huge, huge ifs. Running another layer on the server just to solve this specific problem for your React app might work but it's a huge tradeoff and a non-starter (or at least a very hard sell) for many teams; and this tradeoff is not stated, acknowledged, explored in any way in this writing (or in most writing pushing RSCs, in my experience).
And a minor point but worth mentioning nonetheless, writing stuff like "Directly calling REST APIs from the client layer ignores the realities of how user interfaces evolve" sounds like the author thinks people using REST APIs are naive simpletons who are so unskilled they are missing a fundamental point of software development. People directly calling REST APIs are not cavemen, they know about the reality of evolving UI, they just chose a different approach to the problem.
We don't have to go crazy. Let's just meet at MVC and call it a day, deal?
icemelt8 3 days ago [-]
I knew this post would eventually peddle me nextJS, and it did!
revskill 3 days ago [-]
The power is in React context for children to refer to parent state. RSC completely solved the RESTful thesis. An RSC returns a SPA with streaming data. It also solved microfrontends architecturally. It is the end game.
SPA developers missed the point totally by reinventing broken abstractions in their frameworks. The missing point is in code over convention. Stop enforcing your own broken convention and let developers use their own abstractions. Things are interpreted at runtime, not compile time. Bundler is for bundling, do not cross its boundary.
nsonha 2 days ago [-]
The "Pay what you like" button right under the title when I have not read any thing is really off putting
lerp-io 2 days ago [-]
isn’t this the same thing as graphql?
wild_egg 3 days ago [-]
Deja vu with this blog. Another overengineered abstraction recreating things that already exist.
Misunderstanding REST only to reinvent it in a more complex way. If your API speaks JSON, it's not REST unless/until you jump through all of these hoops to build a hypermedia client on top of it to translate the bespoke JSON into something meaningful.
Everyone ignores the "hypermedia constraint" part of REST and then has to work crazy magic to make up for it.
Instead, have your backend respond with HTML and you get everything else out of the box for free with a real REST interface.
danabramov 3 days ago [-]
>Another overengineered abstraction recreating things that already exist.
This section is for you: https://overreacted.io/jsx-over-the-wire/#html-ssi-and-cgi
>Everyone ignores the "hypermedia constraint" part of REST and then has to work crazy magic to make up for it.
Right, that's why I've linked to https://htmx.org/essays/how-did-rest-come-to-mean-the-opposi... the moment we started talking about this. The post also clarifies multiple times that I'm talking about how REST is used in practice, not its "textbook" interpretation that nobody refers to except in these arguments.
Strawmanning the alternative as CGI with shell scripts really makes the entire post that much weaker.
> nobody refers to except in these arguments.
Be the change, maybe? People use REST like this because people write articles like this which uses REST this way.
danabramov 3 days ago [-]
>Strawmanning the alternative as CGI with shell scripts really makes the entire post that much weaker.
I wasn't trying to strawman it--I was genuinely trying to show the historical progression. The snark was intended for the likely HN commenter who'd say this without reading, but the rest of the exploration is sincere. I tried to do it justice but lmk if I missed the mark.
>Be the change, maybe?
That's what I'm trying to do :-) This article is an argument for hypermedia as the API. See the shape of response here: https://overreacted.io/jsx-over-the-wire/#the-data-always-fl...
I think I've sufficiently motivated why that response isn't HTML originally; however, it can be turned into HTML which is also mentioned in the article.
timw4mail 3 days ago [-]
The hypermedia constraint is crazy magic itself. It's not like HATEOAS is fewer steps on the application and server side.
nsonha 2 days ago [-]
I have yet to see these mythical HATEOAS compliant applications, they must be so amazing and simple, example anyone?
And if it turns out that there is no such thing, should I conclude that all these people talking about it really just base their opinion on some academic talking points and are actually full of shit?
nsonha 2 days ago [-]
> have your backend respond with HTML and you get everything else out of the box for free with a real REST interface.
Spoken like someone who's never made a real product. Please enlighten us on how you add interactivity to your client, which flavour of spaghetti js? How do you handle client state, conveniently everything's on the backend?
aylmao 3 days ago [-]
We already have one way to render things on the browser, everyone. Wrap it up, there's definitely no more to explore here.
And while we're at it, I'd like to know, why are people still building new and different game engines, programming languages, web browsers, operating systems, shells, etc, etc. Don't they know those things already exist?
/s
Joking aside, what's wrong with finding a new way of doing something? This is how we learn and discover things.
williamcotton 2 days ago [-]
We’re in the nineties, okay?
Whee!
#!/usr/bin/perl
$ENV{'REQUEST_METHOD'} =~ tr/a-z/A-Z/;
if ($ENV{'REQUEST_METHOD'} eq "GET") {
    $buffer = $ENV{'QUERY_STRING'};
}
print "Content-type: text/html\n\n";
$post_id = $buffer;
$post_id =~ s/&.*//;               # Get first parameter (before any &)
$post_id =~ s/^[^=]*=//;           # Drop the parameter name, keep its value
$post_id =~ s/[^a-zA-Z0-9\._-]//g; # Sanitize input
$truncate = ($buffer =~ /truncateContent=true/) ? 1 : 0;
$title = `mysql -u admin -p'password' -D blog --skip-column-names -e "SELECT title FROM posts WHERE url='$post_id'"`;
chomp($title);
$content = `mysql -u admin -p'password' -D blog --skip-column-names -e "SELECT content FROM posts WHERE url='$post_id'"`;
chomp($content);
if ($truncate) {
    # Extract first paragraph (everything before the first blank line)
    $first_paragraph = $content;
    $first_paragraph =~ s/\n\n.*//s;
    print "<h3><a href=\"/$post_id.html\">$title</a></h3>\n";
    print "<p>$first_paragraph [...]</p>\n";
} else {
    print "<h1>$title</h1>\n";
    print "<p>\n";
    print "$content\n";
    print "</p>\n";
}
whalesalad 3 days ago [-]
[flagged]
yawaramin 3 days ago [-]
It's the standard dose of Abramov.
danabramov 3 days ago [-]
This is what happens when I don't write for a few years
emmanueloga_ 3 days ago [-]
Hey, thanks for sharing your thoughts! I appreciate you putting this out there.
One bit of hopefully constructive feedback: your previous post ran about 60 printed pages, this one's closer to 40 (just using that as a rough proxy for time-to-read). I’ve only skimmed both for now, but I found it hard to pin down the main purpose or takeaway. An abstract-style opening and a clear conclusion would go a long way, like in academic papers. I think that makes dense material way more digestible.
I don't think I can compress it further. Generally speaking I'm counting on other people carrying useful things out of my posts and finding more concise formats for those.
emmanueloga_ 3 days ago [-]
From my perspective, the article seems primarily focused on promoting React Server Components, so you could mention that at the very top. If that’s not the case, then a clearer outline of the article’s objectives would help. In technical writing, it’s generally better to make your argument explicit rather than leave it open to reader interpretation or including a "twist" at the end.
An outline doesn't have to be a compressed version, I think more like a map of the content, which tells me what to expect as I make progress through the article. You might consider using a structure like SCQA [1] or similar.
I appreciate the suggestions but that’s just not how I like to write. There’s plenty of people who do so you might find their writing more enjoyable. I’m hoping some of them will pick something useful in my writing too, which would help it reach a wider audience.
https://en.m.wikipedia.org/wiki/Gell-Mann_amnesia_effect
Vercel fixes this for a fee: https://vercel.com/docs/skew-protection
I do wonder how many people will use the new React features and then have short outages during deploys like the FOUC of the past. Even their Pro plan has only 12 hours of protection so if you leave a tab open for 24 hours and then click a button it might hit a server where the server components and functions are incompatible.
If your rollout times are very short then skew is not a big concern for you, because it will impact very few users. If it lasts hours, then you have to solve it.
After the rollout is complete, then reload is fine. It's a bit user hostile but they will reload into a usable state.
JSX is a descendant of a PHP extension called XHP [1] [2]
[1] https://legacy.reactjs.org/blog/2016/09/28/our-first-50000-s...
[2] https://www.facebook.com/notes/10158791323777200/
When you'd annotate a React component with ReactXHP (if I remember correctly), some codegen would generate an equivalent XHP component that takes the same props, and can just be used anywhere in XHP. It worked very well when I last used it!
Slightly less related but still somewhat, they have an extension to GraphQL as well that allows you to call/require React components from within GraphQL. If you look at a random GraphQL response there's a good chance you will see things like `"__dr": "GroupsCometHighlightStoryAlbumAttachmentStyle.react"`. I never looked into the mechanics of how these worked.
Fascinating, I didn't know there was such a close integration between XHP and React. I imagined the history like XHP being a predecessor or prior art, but now I see there was an overlap of both being used together, long enough to have special language constructs to "bind" the two worlds.
"ReactXHP" didn't turn up anything, but XHP-JS sounds like it.
> We have a rapidly growing library of React components, but sometimes we’ll want to render the same thing from a page that is mostly static. Rewriting the whole page in React is not always the right decision, but duplicating the rendering code in XHP and React can lead to long-term pain.
> XHP-JS makes it convenient to construct a thin XHP wrapper around a client-side React element, avoiding both of these problems. This can also be combined with XHPAsync to prefetch some data, at the cost of slightly weakening encapsulation.
https://engineering.fb.com/2015/07/09/open-source/announcing...
This is from ten years ago, and it's asking some of the same big questions as the posted article, JSX over the Wire. How to efficiently serve a mixture of static and dynamic content, where the same HTML templates and partials are rendered on server and client side. How to fetch, refresh data, and re-hydrate those templates.
With this historical context, I can understand better the purpose of React Server Components, what it's supposed to accomplish. Using the same language for both client/server-side rendering solves a large swath of the problem space. I haven't finished reading the article, so I'll go enjoy the rest of it.
The bigger issue is the changes to events and how they get fired, some of which make sense, others of which just break people's expectations of how Javascript should work when they move to non-React projects.
The bigger difference that React makes from other frameworks, and from the DOM, is when it comes to events, in particular with events like `onChange` actually behaving more like the `onInput` event.
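A small illustration of the difference:

  import { useState } from 'react';

  function Demo() {
    const [value, setValue] = useState('');
    // React's onChange fires on every keystroke, like the native "input" event
    return <input value={value} onChange={(e) => setValue(e.target.value)} />;
  }

  // The native "change" event, by contrast, fires only when the value is committed (e.g. on blur)
  document.querySelector('input')?.addEventListener('change', (e) => {
    console.log((e.target as HTMLInputElement).value);
  });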
Ultimately this really just smooshed around the interface without solving the problem it sets out to solve: it moves the formatting of the main markup to the server, but you can't move all of it unless your content is entirely static (and if you're getting it from the server, SOMETHING has to be interactive).
Consider making a list of posts from some sort of feed. If each item in the list is a server component, you can't have the component representing the item be a server component if you need to handle any events in that item. So now you're limited to just making the list component itself a server component. Well what good is that?
The whole point of this is to move stuff off of the client. But it's not even clear that you're saving any bytes at all in this scenario, because if there are any props duplicated across items in the list, you've got to duplicate the data in the JSON: the shallower the returned JSX, the more raw data you send instead of JSX data. Which completely defeats the point of going through all this trouble in the first place.
...have a client component inside the post. For example, for each post, have a server component, that contains a <ClientDeleteButton postId={...} />.
...have a wrapper client component that takes a server component as a child. Eg. if you want to show a hover-card for each post:
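Something in this spirit (a sketch; HoverCard and PostPreview are made-up names):

  // HoverCard.tsx, a Client Component that receives server-rendered children
  'use client';
  import { useState, type ReactNode } from 'react';

  export function HoverCard({ preview, children }: { preview: ReactNode; children: ReactNode }) {
    const [open, setOpen] = useState(false);
    return (
      <span onMouseEnter={() => setOpen(true)} onMouseLeave={() => setOpen(false)}>
        {children}
        {open && <div className="hover-card">{preview}</div>}
      </span>
    );
  }

  // In a Server Component: both pieces render on the server and pass through as props
  <HoverCard preview={<PostPreview post={post} />}>
    <a href={post.url}>{post.title}</a>
  </HoverCard>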
https://nextjs.org/docs/app/building-your-application/render...
> props duplicated across items in the list, you've got to duplicate the data in the JSON
I'm pretty sure gzip would just compress that.
Bytes on the wire aren't nearly as important in this case. That value still has to be decompressed into a string and that string needs to be parsed into objects and that's all before you pump it into the renderer.
> have a wrapper client component that takes a server components as a child.
That doesn't work for the model defined in this post. Because now each post is a request to the server instead of one single request that returns a rendered list of posts. That's literally the point of doing this whole roundabout thing: to offload as much work as possible to the server.
> For example, for each post, have a server component, that contains a <ClientDeleteButton postId={...} />.
And now only the delete button reacts to being pressed. You can't remove the post from the page. You can't make the post semi transparent. You can't disable the other buttons on the post.
Without making a mess with contexts, state and interactivity can only happen in the client component islands.
And you know what? If you're building a page that's mostly static on a site that sees almost no code changes or deployments, this probably works great for certain cases. But it's far from an ideal practice for anything that's even mildly interactive.
Even just rendering the root of your render tree is problematic, because you probably want to show loading indicators and update the page title or whatever, and that means loading client code to load server code that runs more client code. At least with good old fashioned SSR, by the time code in the browser starts running, everything is already ready to be fully interactive.
That’s where you’re wrong. The JSX snippet that I posted above gets turned into:
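Roughly this kind of JSON, sketching the idea rather than the exact RSC wire format; "$L1" stands for a module reference to the client-side delete button:

  {
    "type": "div",
    "props": {
      "children": [
        { "type": "h2", "props": { "children": "Post title" } },
        { "type": "$L1", "props": { "postId": 42 } }
      ]
    }
  }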
If you wanted to fade the entire post when pressing the delete button without contexts, you’d create a client component that accepts the server-rendered post as children, and pass it in from a server component:
[0] https://react.dev/reference/rsc/use-client
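A sketch of that pair (DeletablePost, PostContent and deletePost are illustrative names):

  // DeletablePost.tsx, a Client Component that wraps server-rendered children
  'use client';
  import { useTransition, type ReactNode } from 'react';
  import { deletePost } from './actions'; // assumed server function

  export function DeletablePost({ postId, children }: { postId: number; children: ReactNode }) {
    const [isPending, startTransition] = useTransition();
    return (
      <div style={{ opacity: isPending ? 0.5 : 1 }}>
        {children}
        <button disabled={isPending} onClick={() => startTransition(() => deletePost(postId))}>
          Delete
        </button>
      </div>
    );
  }

  // In a Server Component: the post body stays server-rendered, the fade happens client-side
  <DeletablePost postId={post.id}>
    <PostContent post={post} />
  </DeletablePost>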
One way to decide if this architecture is for you, is to consider where your app lands on the curve of “how much rendering code should you ship to client vs. how much unhydrated data should you ship”. On that curve you can find everything from fully server-rendered HTML to REST APIs and everything in between, plus some less common examples too.
Fully server-rendered HTML is among the fastest to usefulness - only relying on the browser to render HTML. By contrast in traditional React server rendering is only half of the story. Since after the layout is sent a great many API calls have to happen to provide a fully hydrated page.
Your sweet spot on that curve is different for every app and depends on a few factors - chiefly, your app’s blend of rate-of-change (maintenance burden over time) and its interactivity.
If the app will not be interactive, take advantage of fully-backend rendering of HTML since the browser’s rendering code is already installed and wicked fast.
If it’ll be highly interactive with changes that ripple across the app, you could go all the way past plain React to a Redux/Flux-like central client-side data store.
And if it’ll be extremely interactive client-side (eg. Google Docs), you may wish to ship all the code to the client and have it update its local store then sync to the server in the background.
But this React Server Components paradigm is surprisingly suited to a great many CRUD apps. Definitely will consider it for future projects - thanks for such a great writeup!
Fully server-rendered HTML is the REST API. Anything feeding back json is a form of RPC call, the consumer has to be deeply familiar with what is in the response and how it can be used.
In the previous article, I was annoyed a bit by some of the fluffiness and redefinition of concepts that I was already familiar with. This one, however, felt much more concrete, and grounded in the history of the space, showing the tradeoffs and improvements in certain areas between them.
The section that amounted to "I'm doing all of this other stuff just to turn it into HTML. With nice, functional, reusable JSX components, but still." really hit close to how I've felt.
My question is: When did you first realize the usefulness of something like RSC? If React had cooked a little longer before gaining traction as the client-side thing, would it have been for "two computers"?
I'm imagining a past where there was some "fuller stack" version that came out first, then there would've been something that could've been run on its own. "Here's our page-stitcher made to run client-side-only".
- https://www.youtube.com/watch?v=zMf_xeGPn6s
So, let's assume the alternative universe, where we did not mess up and got REST wrong.
There's no constraint saying a resource (in the hypermedia sense) has to have the same shape as your business data, or anything else really. A resource should have whatever representation is most useful to the client. If your language is "components" because you're making an interactive app – sure, go ahead and represent this as a resource. And we did that for a while, with xmlhttprequest + HTML fragments, and PHP includes on the server side.
What we were missing all along was a way to decouple the browser from a single resource (the whole document), so we could have nested resources, and keep client state intact on refresh?
It has also sparked a strong desire to see RSCs compared and contrasted with Phoenix LiveView.
The distinction between RSCs sending "JSX" over the Wire, and LiveViews sending "minimal HTML diffs"[0] over the wire is fascinating to me, and I'm really curious how the two methodologies compare/contrast in practice.
It'd be especially interesting to see how client-driven mutations are handled under each paradigm. For example, let's say an "onClick" is added to the `<button>` element in the `LikeButton` client component -- it immediately brings up a laundry list of questions for me:
1. Do you update the client state optimistically?
2. If you do, what do you do if the server request fails?
3. If you don't, what do you do instead? Intermediate loading state?
4. What happens if some of your friends submit likes the same time you do?
5. What if a user accidentally "liked", and tries to immediately "unlike" by double-clicking?
6. What if a friend submitted a like right after you did, but theirs was persisted before yours?
(I'll refrain from adding questions about how all this would work in a globally distributed system (like BlueSky) with multiple servers and DB replicas ;))
Essentially, I'm curious whether RSCs offer potential solutions to the same sorts of problems Jose Valim identified here[1] when looking at Remix Submission & Revalidation.
Overall, LiveView & RSCs are easily my top two most exciting "full stack" application frameworks, and I love seeing how radically different their approaches are to solving the same set of problems.
[0]: <https://www.phoenixframework.org/blog/phoenix-liveview-1.0-r...>
[1]: <https://dashbit.co/blog/remix-concurrent-submissions-flawed>
1./2.: You can update it optimistically. [0]
3.: Depends on the framework's implementation. In Next.js, you'd invalidate the cache. [1][2]
4.: In the case of the like button, it would be a "form button" [3] which would have different ways [4] to show a pending state. It can be done with useFormStatus, useTransition or useActionState depending on your other needs in this component.
5.: You block the double request with useTransition [5] to disable the button.
6.: In Next, you would invalidate the cache and would see your like and the like of the other user.
[0] https://react.dev/reference/react/useOptimistic
[1] https://nextjs.org/docs/app/api-reference/functions/revalida...
[2] https://nextjs.org/docs/app/api-reference/directives/use-cac...
[3] https://www.robinwieruch.de/react-form-button/
[4] https://www.robinwieruch.de/react-form-loading-pending-actio...
[5] https://react.dev/reference/react/useTransition
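To make 1/2/5 concrete, a rough sketch using those hooks (LikeButton and likePost are illustrative names, not from the article):

  'use client';
  import { useOptimistic, useTransition } from 'react';
  import { likePost } from './actions'; // assumed server action

  export function LikeButton({ postId, likes }: { postId: number; likes: number }) {
    // The optimistic value shows immediately and reverts automatically if the action throws (1, 2)
    const [optimisticLikes, addOptimisticLike] = useOptimistic(likes, (current) => current + 1);
    const [isPending, startTransition] = useTransition();
    return (
      <button
        disabled={isPending} // guards against accidental double-clicks (5)
        onClick={() =>
          startTransition(async () => {
            addOptimisticLike(null);
            await likePost(postId); // afterwards, revalidation brings in everyone's likes (4, 6)
          })
        }
      >
        ♥ {optimisticLikes}
      </button>
    );
  }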
I don’t see a point in making that a server-side render. You are now coupling backend to frontend, and forcing the backend to do something that is not its job (assuming you don’t do SSR already).
One can argue that its useful if you would use the endpoint for ESI/SSI (I loved it in my Varnish days) but that’s only a sane option if you are doing server-side renders for everything. Mixing CSR and SSR is OK, but that’s a huge amount of extra complexity that you could avoid by just picking one, and generally adding SSRs is mostly for SEO-purposes, which session-dependent content is excluded anyway.
My brain much prefers the separation of concerns. Just give me a JSON API, and let the frontend take care of representation.
* that the code which fetches data required for UI is much more efficiently executed on the server-side, especially when there's data dependencies - when a later bit of data needs to be fetched using keys loaded in a previous load
* that the code which fetches and assembles data for the UI necessarily has the same structure as the UI itself; it is already tied to the UI semantically. It's made up out of front end concerns, and it changes in lockstep with the front end. Logically, if it makes life easier / faster, responsibility may migrate between the client and server, since this back end logic is part of the UI.
The BFF thing is a place to put this on the server. It's specifically a back end service which is owned by the front end UI engineers. FWIW, it's also a pattern that you see a lot in Google. Back end services serve up RPC endpoints which are consumed by front end services (or other back end services). The front end service is a service that runs server-side, and assembles data from all the back end services so the client can render. And the front end service is owned by the front end team.
Dan's post somehow reinforces the opinion that SSR frameworks are not full-stack, they can at most do some BFF jobs and you need an actual backend.
Usually the endpoints get too fat, then there's a performance push to speed them up, then you start thinking about fat and thin versions. I've seen it happen repeatedly.
Congratulations, you reinvented GraphQL. /s
Jokes apart, I don’t care much about the technology, but what exactly are we optimizing here? Does this BFF connect directly to the (relational/source of truth) DB to fetch the data with a massaged up query, or does it just use the REST API that the backend team provides? If the latter, we’re just shifting complexity around, and if the former, even if it connects to a read-replica, you still have to coordinate schema upgrades (which is harder than coordinating a JSON endpoint).
Just let the session-dependent endpoint live in the backend. If data structure needs changes, backend team is in the best position to keep it up to date, and they can do it without waiting for the front end team to be ready to handle it on their BFF. A strong contract between both ends (ideally with an OpenAPI spec) goes a really long way.
https://overreacted.io/react-for-two-computers/
https://news.ycombinator.com/item?id=43631004 (66 points, 6 days ago, 54 comments)
(a bit sad to see all the commenters that clearly haven't read the article though)
Inertia is "dumb" in that a component can't request data, but must rely on the API knowing which data it needs.
RSC is "smarter", but also to its detriment in my opinion. I have yet to see a "clean" Next project using RSC. Developers end up confused about which components should be what (and that some can be both), and "use client" becomes a crutch of sorts, making the projects messy.
Ultimately I think most projects would be better off with Inertia's (BFF) model, because of its simplicity.
& every interaction is server driven.
The old way was to return HTML fragments and add them to the DOM. There was still a separation of concern as the presentation layer on the server didn't care about the interface presented on the client. It was just data generally composed by a template library. The advent of SPA makes it so that we can reunite the presentation layer (with the template library) on the frontend and just send the data to be composed down with the request's response.
The issue with this approach is to again split the frontend, but now you have two template libraries to take care of (in this case one library, but on both sides). The main advantage of having a boundary is that you can have the best representation of data for each side's logic, converting only when needed. And the conversion layer needs to be simple enough to not introduce complexity of its own. JSON is fine as it's easy to audit a parser, and HTML is fine because it's mostly used as-is on the other layer. We also have binary representations, but they also have strong arguments for their use.
With JSX on the server side, it's abstraction when there's no need to be. And in the wrong place to boot.
>The old way was to return HTML fragments and add them to the DOM.
Yes, and the problem with that is described at the end of this part: https://overreacted.io/jsx-over-the-wire/#async-xhp
>JSON is fine [..] With JSX on the server side, it's abstraction when there's no need to be. And in the wrong place to boot.
I really don't know what you mean; the transport literally is JSON. We're not literally sending JSX anywhere. That's also in the article. The JSON output is shown about a dozen times throughout, especially in the third part. You can search for "JSON" on the page. It appears 97 times.
React is an abstraction over the DOM for having a better API when you're trying not to re-render. And you can then simplify the format for transferring data between server and client. Net win on both side.
But the technique described in the article is like having a hammer and seeing nails everywhere. I don't see the advantage of having a JSX representation of JSON objects on the server side.
That's not what we're building towards. I'm just using "breaking JSON apart" as a narrative device to show that Server Components componentize the UI-specific parts of the API logic (which previously lived in ad-hoc ViewModel-like parts of REST responses, or in the client codebase where REST responses get massaged).
The change-up happens at this point in the article: https://overreacted.io/jsx-over-the-wire/#viewmodels-revisit...
If you're interested in the "final" code, it's here: https://overreacted.io/jsx-over-the-wire/#final-code-slightl....
It blends the previous "JSON-building" into components.
If you have a full blown SPA on the client side, you shouldn't use ViewModels, as that will tie your backend API to the client. If you go for a mixed approach, then your presentation layer is on the server and it's not an API.
HTMX is cognizant of this fact. What it adds are useful and nice abstractions on the basis that the interface is constructed on one end and used on the other. RSC is a complex solution for a simple problem.
It doesn’t because you can do this as a layer in front of the backend, as argued here: https://overreacted.io/jsx-over-the-wire/#backend-for-fronte...
Note “instead of replacing your existing REST API, you can add…”. It’s a thing people do these days! Recognizing the need for this layer has plenty of benefits.
As for HTMX, I know you might disagree, but I think it’s actually very similar in spirit to RSC. I do like it. Directives are like very limited Client components, server partials of your choice are like very limited Server components. It’s a good way to get a feel for the model.
I personally find HTMX pairs well with web components for client components since their lifecycle runs automatically when they get added to the DOM.
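For example, an illustrative sketch (like-counter is a made-up element):

  // Defined once; connectedCallback re-runs each time HTMX swaps
  // a <like-counter> element into the DOM
  class LikeCounter extends HTMLElement {
    connectedCallback() {
      let count = Number(this.getAttribute('count') ?? 0);
      const render = () => { this.textContent = `♥ ${count}`; };
      this.addEventListener('click', () => { count += 1; render(); });
      render();
    }
  }
  customElements.define('like-counter', LikeCounter);

  // A server partial can then simply return: <like-counter count="3"></like-counter>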
Wouldn't an HTMX update stomp over it and reset the component to its initial state?
What about internal JS state that isn't reflected in the DOM?
That said, I did include recaps of the three major sections at their end:
- https://overreacted.io/jsx-over-the-wire/#recap-json-as-comp...
- https://overreacted.io/jsx-over-the-wire/#recap-components-a...
- https://overreacted.io/jsx-over-the-wire/#recap-jsx-over-the...
That is not a good reason to make the content unnecessarily difficult for its target audience. Being smart also means being able to communicate with those who aren't as brilliant (or just don't have the time).
That's unfair.
If anything you're the one dumbing down what I wrote for your personal taste.
Quite often people read the forum thread first before wasting their life on some large corpus of text that might be crap. High quality discussions can point out poor quality (or at least fundamentally incorrect) posts and the reasons behind them, enlightening the rest of the readers.
RSC doesn't impede this. In fact it improves it. Instead of having your ORM's objects converted to JSON, sent, parsed, and finally manipulated to your UI's needs, you skip the whole "convert to JSON" part. You can go straight from your ORM objects (best for data operations) to UI (best for rendering) and skip having to think about how the heck you'll serialize this to send it over the wire.
> With JSX on the server side, it's abstraction when there's no need to be. And in the wrong place to boot.
JSX is syntactic sugar for a specific format of JavaScript object. It's a pretty simple format really. From ReactJSXElement.js, L242 [1]:
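The shape is roughly this (paraphrasing from memory, not an exact quote of that file):

  const element = {
    // Tag that lets React identify this object as a React element
    $$typeof: REACT_ELEMENT_TYPE,

    // Built-in properties that live on the element
    type,  // e.g. 'div', or a component function
    key,
    ref,
    props,
  };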
As far as I'm aware, TC39 hasn't yet specified which shape of literal is "ok" and which one is "wrong" to run on a computer, depending on whether that computer has a screen or not. I imagine this is why V8, JSC, SpiderMonkey, etc let you create objects of any shape you want on any environment. I don't understand what's wrong about using this shape on the server.
[1] https://github.com/facebook/react/blob/e71d4205aed6c41b88e36...
I can't let go of the fact that you get the exact same thing if you just render html on the server. It's driving me crazy. We've really gone full circle, and I'm not sure for what benefit.
I doubt there were many systems where the server-generated HTML fragments were generic enough that the server and client HTML documents didn't need to know anything about each other's HTML. It's conceivable to build such a system, particularly if it's intended for a screen-reader or an extremely thinly-styled web page, but in either of those cases HTML injection over AJAX would have been an unlikely architectural choice.
In practice, all these systems that did HTML injection over AJAX were tightly coupled. The server made strong assumptions about the HTML documents that would be requesting HTML fragments, and the HTML documents made strong assumptions about the shape of the HTML fragments the server would give it.
> all these systems that did HTML injection over AJAX were tightly coupled
That's because the presentation layer originated on the server. What the server didn't care about was the transformation that alters the display of the HTML on the client. So you can add an extension to your browser that translates the text to another language and it wouldn't matter to the server. Or inject your own styles. Even when you do an AJAX request, you can add JS code that discards the response.
An age ago I took interest in KnockoutJS based on Model-View-ViewModel and found it pragmatic and easy to use. It was however at the beginning of the mad javascript framework-hopping marathon, so it was considered 'obsolete' after a few months. I just peeked, Knockout still exists.
https://knockoutjs.com/
Btw, I wouldn't hop back, but better hop forward, like with Datastar that was on HN the other day: https://news.ycombinator.com/item?id=43655914
Anyway, it's hard to deny that React dev nowadays is an ugly mess. Have you given any thought to what a next-gen framework might look like (I'm sure you have)?
Compared to GraphQL, Server Components are a big step back: you have to do manually on the server what was given by default by GraphQL
The GraphQL which ‘elegantly’ returns a 200 on errors? The GraphQL which ‘elegantly’ encodes idempotent reads as mutating POSTS? The GraphQL which ‘elegantly’ creates its own ad hoc JSON-but-not-JSON language?
The right approach, of course, is HTMX-style real REST (incidentally there needs to be a quick way to distinguish real REST from fake OpenAPI-style JSON-as-a-service). E.g., the article says: ‘your client should be able to request all data for a specific screen at once.’ Yes, of course: the way to request a page is to (wait for it, JavaScript kiddies): request a page.
The even better approach is to advance the state of the art beyond JavaScript, beyond HTML and beyond CSS. There is no good reason for these three to be completely separate syntaxes. Fortunately, there is already a good universal syntax for trees of data: S-expressions. The original article mentions SDUI as ‘essentially it’s just JSON endpoints that return UI trees’: in a sane web development model the UI trees would be S-expressions macro-expanded into SHTML.
Not sure about caching, if anything, graphql offers a more granular level of caching so it can be reused even more?
The only issue I see with graphql is the tooling makes it much harder to get it started on a new project, but the recent projects such as gql.tada makes it much easier, though still could be easier.
N+1 I just remember the dataloader https://github.com/graphql/dataloader Is it still used?
What about the other things? I remember that Stitching and E2E type safety, for example, were pretty brittle in 2018.
E2E type safety in our case is handled by Typescript code generation. It works very well. I also happen to have to work in a NextJS codebase, which is the worst piece of technology I have ever had the displeasure of working with, and I don't really see any meaningful difference on a day to day basis between the type sharing in the NextJS codebase (where server/client is a very fuzzy boundary) and the other code base that just uses code generation and is a client only SPA.
For stitching we use Nautilus and I've never observed any issues with it. We had one outage because of some description that was updated in some dependency and that sucked but for the most part it just works. Our usage is probably relatively simple though.
or prisma
https://pothos-graphql.dev/docs/plugins/prisma/relations
N+1 and nested queries etc were still problems last I checked (few years ago).
I’m sure there are solutions. Just that it’s not as trivial as “use graphql” and your problems are solved.
Please correct me if I’m wrong
on top of HTTP-level caching, you can do any type of caching (redis / fs / etc) just like regular REST, but at a granular level. For example, if
user { comments(threadId: abc, page: 1, limit: 20) { body, postedAt } }
is requested and then cached, another request like
thread(id: abc) { comments(page: 1, limit: 20) { body, postedAt } }
can share the cache.
N+1 is solved by a tighter integration with the database, for ex: https://www.graphile.org/postgraphile/performance/#how-is-it... or maybe a more popular flavor using prisma: https://pothos-graphql.dev/docs/plugins/prisma/relations#:~:...
but of course, there is always the classic dataloader as well.
I am not saying that use graphql and all the problems will be solved, but i am saying that the problem that OP proposed has been solved in an arguably "better" way, as it does not tie the presentation (HTML) with the data for cases of multiplatform apps like web, or native apps.
Obviously if you have a GraphQL backend, you couldn't care less, and the only benefit you'd get is reducing bundle size, e.g. for content-heavy static pages. But you'll lose client-side caching, so you can't have your cake and eat it too.
Just a matter of trade-offs
I assumed RSC was more concerned with which end did the rendering, and GraphQL with how to fetch just the right data in one request
When you have a post with a like button and the user presses the like button, how do the like button props update? I assume that it would be a REST request to update the like model. You could make the like button refetch the like view model when the button is clicked, but then how do you tie that back to all the other UI elements that need to update as a result? E.g. what if the UI designer wants to put a highlight around posts which have been liked?
On the server, you've already lost the state of the client after that first render, so doing some sort of reverse dependency trail seems fragile. So the only option would be to have the client do it, but then you're back to the waterfall (unless you somehow know the entire state of the client on the server for the server to be able to fully re-render the sub-tree, and what if multiple separate subtrees are involved in this?). I suppose that it is do-able if there exists NO client side state, but it still seems difficult. Am I missing something?
Right, so there's actually a few ways to do this, and the "best" one kind of depends on the tradeoffs of your UI.
Since Like itself is a Client Component, it can just hit the POST endpoint and update its state locally. I.e. without "refreshing" any of the server stuff. It "knows" it's been liked. This is the traditional Client-only approach.
Another option is to refetch UI from the server. In the simplest case, refetching the entire screen. Then yes, new props would be sent down (as JSON) and this would update both the Like button (if it uses them as its source of truth) and other UI elements (like the highlights you mentioned). It'll just send the entire thing down (but it will be gracefully merged into the UI instead of replacing it). Of course, if your server always returns an unpredictable output (e.g. a Feed that's always different), then you don't want to do that. You could get more surgical with refreshing parts of the tree (e.g. a subroute) but going the first way (Client-only) in this case would be easier.
In other words, the key thing that's different is that the client-side things are highly dynamic so they have agency in whether to do a client change surgically or to do a coarse roundtrip.
I do like scrappy rails views that can be assembled fast - but the React views our FE dev is putting on top of existing rails controllers have a much better UX.
Feels like HTMX, feels like we've come full circle.
And yeap, you're right! If we need a lot more client side interactivity, just rendering JSX on server side won't cut it.
If you're fetching tens of raw models (each corresponding to a table) and extracting (or even joining!) the data needed for display in the view, it's clearly not the best engineering decision. But fetching 2 or 3 well-shaped views in your component and doing the last bit of correlation to the view in the component is acceptable.
Same for deciding a render strategy: Traditional SSR (maybe with HTMX) vs. isomorphic (Next and friends) vs. SPA. Same for Redux vs MobX. Or what I think is often neglected by the frontend folks: Running Node on the backend vs. Java vs. Go vs. C# vs. Rust.
If you're already in the spot where React Server Components are a good fit, the ideas in the article are compelling. But IMO not enough to be convincing to switch to or chose React / Next when you're better of with traditional SSR or SPA, which IME are the best fits for the vast majority of apps.
Fresh/Partials
Astro/HTMX with Partials
One thing I would like to see more focus on in React is returning components from server functions. Right now, using server functions for data fetching is discouraged, but I think it has some compelling use cases. It is especially useful when you have components that need to fetch data dynamically, but you don't want the fetch / data tied to the URL, as it would be with a typical server component. For example, when fetching suggestions for a typeahead text input.
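For instance, something along these lines (a sketch; searchPosts is an assumed data helper):

  // suggestions.tsx: a server function that returns JSX
  'use server';
  import { searchPosts } from './data'; // assumed data-access helper

  export async function getSuggestions(query: string) {
    const posts = await searchPosts(query);
    return (
      <ul>
        {posts.map((post) => (
          <li key={post.id}>{post.title}</li>
        ))}
      </ul>
    );
  }

  // In a client component: onChange calls getSuggestions(value) and
  // stores the returned element in state for rendering.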
(Self-promotion) I prototyped an API for consuming such components in an idiomatic way: https://github.com/jonathanhefner/next-remote-components. You can see a demo: https://next-remote-components.vercel.app/.
To prove the idea is viable beyond Next.js, I also ported it to the Waku framework (https://github.com/jonathanhefner/twofold-remote-components) and the Twofold framework (https://github.com/jonathanhefner/twofold-remote-components).
I would love to see something like it integrated into React proper.
Managing changes gets really complex, with each layer needing type safety.ts-liveview is a TypeScript framework I built (grab it as a starter project on GitHub[1]) for real-time, server-rendered apps. It uses JSX/TSX to render HTML server-side and, in WebSocket mode, updates the DOM by targeting specific CSS selectors (document.querySelector) over WebSockets or HTTP/2 streaming. This keeps client-side JavaScript light, delivering fast, SEO-friendly pages and reactive UIs, much like Dan’s “JSX over the wire” vision.
What’s your take on this server-driven approach? Could it shake up how we build apps compared to heavy client-side frameworks? Curious if you’ve tried ts-liveview yet—it’s been a fun project to dig into these ideas!
[1] https://github.com/beenotung/ts-liveview
I don't see the issue with adding an endpoint per viewmodel. Treating viewmodels as resources seems perfectly fine. Then again, I'm already on the HATEOAS and HTMX bandwagon, so maybe that just seems obvious, as it's no worse than returning HTML or JSX that could be constantly changing. If you actually need stable API endpoints for others to consume for other purposes, that's a separate consideration. This seems to be the direction the rest of the article goes.
I like the abstraction of server components but some of my co-workers seem to prefer HTMX (sending HTML rather than JSON) and can't really see any performance benefit from server components.
Maybe OP could clear up - Whether HTML could be sent instead (depending on platform), there is a brief point about not losing state but if your component does not have input elements or can have it state thrown away then maybe raw HTML could work? - prop size vs markup/component size. If you send a component down with a 1:9 dynamic to static content component. Then wouldn't it be better to have the the 90% static preloaded in the client, then only 10% of the data transmitted? Any good heuristic options here? - "It’s easy to make HTML out of JSON, but not the inverse". What is intrinsic about HTML/XML?
--
Also is Dan the only maintainer on the React team who does these kind of posts? do other members write long form. would be interesting to have a second angle.
Or reference the 2+ decades written about the same pattern in simpler, faster, less complex implementations.
What if we just talked about it only in terms of simple data structures and function composition?
[1] https://overreacted.io/jsx-over-the-wire/#dans-async-ui-fram...
What’s being done here isn’t entirely new. Turbo/Hotwire [1], Phoenix LiveView, even Facebook’s old Async XHP explored similar patterns. The twist is using JSX to define the component tree server-side and send it as JSON, so the view model logic and UI live in the same place. Feels new, but super familiar, even going back to CGI days.
[1] https://hotwired.dev
Right, that's why it's in the post: https://overreacted.io/jsx-over-the-wire/#async-xhp
Likewise with CGI: https://overreacted.io/jsx-over-the-wire/#html-ssi-and-cgi
Agree there's echoes of "old" in "new" but there are also distinct new things too :)
Who is this written for? A junior dev? Or, are we minting senior devs with no historical knowledge?
I still can't get over how the "API" in "REST API" apparently originally meant "a website".
And yet, I see nothing but confusion around this topic. For two years now. I see Next.js shipping foot guns, I see docs on these rendering modes almost as long as those covering all of Django, and I see blog lengthy blog posts like this.
When the majority of problems can be solved with Django, why tie yourself in to knots like this? At what point is it worth it?
That said, I also think the basic concepts or RSC itself (not "rendering modes" which are a Next thing) are very simple and "up there" with closures, imports, async/await and structured programming in general. They deserve to be learned and broadly understood.
It’s a very long post so maybe I missed it, but does Dan ever address morphdom and its descendants? I feel like that’s a very relevant point in the design space explored in the article.
In most cases that means rendering HTML on the server, where most of the data lives, and using a handful of small components in the frontend for state that never goes to the backend.
1: APIs should return JSON because endpoints do often get reused throughout an application.
2: it really is super easy to get the JSON into client side HTML with JSX
3: APIs should not return everything needed for a component, APIs should return one thing only. Makes back and front end more simple and flexible and honestly who cares about the extra network requests
It's exciting to see server side rendering come back around.
The biggest draw that pulled me to Astro early on was the fact that it uses JSX for a, in my opinion, better server side templating system.
Of course if you're using SQLite on a local disk then you're good. If you have some data loader middleware that batches and combines all these requests then you're good. But if you're just naively making these requests directly...then you're setting up your app for massive performance problems in the near future.
The known solution to the N+1 query problem is to bulk load all the data you need. So you need to render a list of posts, you bulk load all their data with a single query. Now you can just pass the data in directly to the rendering components. They don't load their own data. And the need for RSC is gone.
I'm sure RSC is good for some narrow set of cases where the data loading efficiency problems are already taken care of, but that's definitely not most cases.
A BFF is indeed a possible solution and yeah if you have a BFF made in JS for your React app the natural conclusion is that you might as well start returning JSX.
But. BUT. "if you have a BFF made in JS" and "if your BFF is for your React app" are huge, huge ifs. Running another layer on the server just to solve this specific problem for your React app might work but it's a huge tradeoff and a non starter (or at least a very hard sale) for many teams; and this tradeoff is not stated, acknowledged, explored in any way in this writing (or in most writing pushing RSCs, in my experience).
And a minor point but worth mentioning nonetheless, writing stuff like "Directly calling REST APIs from the client layer ignores the realities of how user interfaces evolve" sounds like the author thinks people using REST APIs are naive simpletons who are so unskilled they are missing a fundamental point of software development. People directly calling REST APIs are not cavemen, they know about the reality of evolving UI, they just chose a different approach to the problem.
SPA developers totally missed the point by reinventing broken abstractions in their frameworks. The missing point is code over convention. Stop enforcing your own broken conventions and let developers use their own abstractions. Things are interpreted at runtime, not compile time. A bundler is for bundling; do not cross its boundary.
Misunderstanding REST only to reinvent it in a more complex way. If your API speaks JSON, it's not REST unless/until you jump through all of these hoops to build a hypermedia client on top of it to translate the bespoke JSON into something meaningful.
Everyone ignores the "hypermedia constraint" part of REST and then has to work crazy magic to make up for it.
Instead, have your backend respond with HTML and you get everything else out of the box for free with a real REST interface.
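Concretely, something like this (Express; the route, loader, and markup are all illustrative). The HTML itself advertises the next available actions, which is the hypermedia constraint at work:

    // The server responds with HTML, not JSON; the links and forms
    // in the markup are the API. (Everything here is illustrative.)
    import express from "express";
    const app = express();

    app.get("/posts/:id", async (req, res) => {
      const post = await getPost(req.params.id); // hypothetical loader
      res.send(`
        <article>
          <h1>${post.title}</h1>
          <p>${post.body}</p>
          <a href="/posts/${post.id}/edit">Edit</a>
          <form method="post" action="/posts/${post.id}/likes">
            <button>Like</button>
          </form>
        </article>`);
    });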
This section is for you: https://overreacted.io/jsx-over-the-wire/#html-ssi-and-cgi
>Everyone ignores the "hypermedia constraint" part of REST and then has to work crazy magic to make up for it.
Right, that's why I've linked to https://htmx.org/essays/how-did-rest-come-to-mean-the-opposi... the moment we started talking about this. The post also clarifies multiple times that I'm talking about how REST is used in practice, not its "textbook" interpretation that nobody refers to except in these arguments.
Strawmanning the alternative as CGI with shell scripts really makes the entire post that much weaker.
> nobody refers to except in these arguments.
Be the change, maybe? People use REST like this because people write articles like this which use REST this way.
I wasn't trying to strawman it--I was genuinely trying to show the historical progression. The snark was intended for the likely HN commenter who'd say this without reading, but the rest of the exploration is sincere. I tried to do it justice but lmk if I missed the mark.
>Be the change, maybe?
That's what I'm trying to do :-) This article is an argument for hypermedia as the API. See the shape of response here: https://overreacted.io/jsx-over-the-wire/#the-data-always-fl...
I think I've sufficiently motivated why that response isn't HTML originally; however, it can be turned into HTML, which is also mentioned in the article.
And if it turns out that there is no such thing, should I conclude that all these people talking about it really just base their opinion on some academic talking points and are actually full of shit?
You speak like someone who's never made a real product. Please enlighten us: how do you add interactivity to your client, and with which flavour of spaghetti JS? How do you handle client state? Conveniently, everything's on the backend?
And while we're at it, I'd like to know, why are people still building new and different game engines, programming languages, web browsers, operating systems, shells, etc, etc. Don't they know those things already exist?
/s
Joking aside, what's wrong with finding a new way of doing something? This is how we learn and discover things.
Whee!
One bit of hopefully constructive feedback: your previous post ran about 60 printed pages; this one's closer to 40 (just using that as a rough proxy for time to read). I’ve only skimmed both for now, but I found it hard to pin down the main purpose or takeaway. An abstract-style opening and a clear conclusion would go a long way, like in academic papers. I think that makes dense material way more digestible.
- https://overreacted.io/jsx-over-the-wire/#recap-json-as-comp...
- https://overreacted.io/jsx-over-the-wire/#recap-components-a...
- https://overreacted.io/jsx-over-the-wire/#recap-jsx-over-the...
I don't think I can compress it further. Generally speaking I'm counting on other people carrying useful things out of my posts and finding more concise formats for those.
An outline doesn't have to be a compressed version; I think of it more as a map of the content, which tells me what to expect as I make progress through the article. You might consider using a structure like SCQA [1] or similar.
--
1: https://analytic-storytelling.com/scqa-what-is-it-how-does-i...