The *.home.arpa domain in RFC 8375 has been approved for local use since 2018, which is long enough ago that most hardware and software currently in use should be able to handle it.
johnmaguire 5 days ago [-]
RFC 8375 seems to have approved it specifically to use in Home Networking Control Protocol, though it also states "it is not intended that the use of 'home.arpa.' be restricted solely to networks where HNCP is deployed. Rather, 'home.arpa.' is intended to be the correct domain for uses like the one described for '.home' in [RFC7788]: local name service in residential homenets."
The OpenWrt wiki on Homenet suggests the project might be dead: https://openwrt.org/docs/guide-user/network/zeroconfig/hncp_...
Anyone familiar with HNCP? Are there any concerns of conflicts if HNCP becomes "a thing"? I have to say, .home.arpa doesn't exactly roll off the tongue like .internal. Some macOS users seem to have issues with .home.arpa too: https://www.reddit.com/r/MacOS/comments/1bu62do/homearpa_is_...
onre 5 days ago [-]
> I have to say, .home.arpa doesn't exactly roll off the tongue like .internal.
In my native language (Finnish) it's even worse, or better, depending on personal preference - it translates directly to .mildew.lottery-ticket.
morjom 5 days ago [-]
It would be more like .mold.ticket
onre 5 days ago [-]
Thanks, I always mix up mold and mildew. However, "arpa" is specifically a lottery ticket, whereas there are tickets for concerts, tickets to ride, tickets in Jira etc...
morjom 4 days ago [-]
Arpa is used for all kinds of random-chance things, not specifically the lottery. I feel like "ticket" would still be the equivalent, but I guess that would be more transliteration and opinion than direct translation? Also, my view of lottery may be skewed by Finnish lottery culture, and "lottery" has more meanings in English. Sorry, turned ranty.
DrBazza 4 days ago [-]
Arguably, Jira ‘has issues’.
AndyMcConachie 5 days ago [-]
Check the errata for RFC 7788. .home being listed in it is a mistake. .home has never been designated for this purpose.
home.arpa is for HNCP.
Use .internal.
johnmaguire 4 days ago [-]
I simply quoted RFC 8375. It specifically called out that while RFC 7788 mentions ".home" (quoted below), it wasn't reserved, which ".home.arpa" aims to fix. But while you say "home.arpa is for HNCP", I also quoted RFC 8375 stating it's available for other uses as well.
> A network-wide zone is appended to all single labels or unqualified zones in order to qualify them. ".home" is the default; [...]
fc417fc802 4 days ago [-]
I have been commandeering .home for the boxes on my LAN since forever. Why change it?
If I were going to do a bunch of extra work messing with configs I'd be far more inclined to switch all my personal stuff over to GNS for security and privacy reasons.
Mountain_Skies 5 days ago [-]
It's ugly and clunky, which is why after seven years it's had very little adoption. Home users aren't network engineers so these things actually do matter even if it seems silly in a technical sense.
styfle 5 days ago [-]
Why use that over *.localhost, which has been available since 1999 (introduced in RFC 2606)?
bravetraveler 5 days ago [-]
From RFC 2606:
The ".localhost" TLD has traditionally been statically defined in
host DNS implementations as having an A record pointing to the
loop back IP address and is reserved for such use
The RFC 8375 suggestion (*.home.arpa) allows for more than a single host in the domain, not just in name/feeling but under the strictest readings [and adherence] too.
alexvitkov 5 days ago [-]
Too much typing, and Chromium-based browsers don't understand it yet and try to search for mything.internal instead, which is annoying - you have to type out the whole http://mything.internal.
This can be addressed by hijacking an existing TLD for private use, e.g. mything.bb :^)
thaumasiotes 4 days ago [-]
> Chromium-based browsers don't understand it yet and try to search for mything.internal instead, which is annoying
That's hardly the only example of annoying MONOBAR behavior.
This problem could have been avoided if we had different widgets for doing different things. Someone should have thought of that.
nsteel 5 days ago [-]
Isn't just typing the slash at the end enough to avoid it searching? e.g. mything/
jeroenhd 4 days ago [-]
mything/ will make the OS resolve various hosts: mything., mything.local (mDNS), mything.whateverdomainyourhomenetworkuses. (which may be what you wanted).
If you want to be sure, use mything./ : the . at the end makes sure no further domains are appended during DNS lookup, and the / makes the browser try to access to resource without Googling it.
tepmoc 5 days ago [-]
eh, you can just add a search domain via DHCP or static configuration and just type out http://mything/; no need to enter the whole domain unless you need to do SSL
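For instance, if dnsmasq happens to be your DHCP server, a sketch of that option (home.arpa is just a placeholder choice of domain):

    # push DHCP option 119 (domain search list) to clients, so
    # typing "mything" gets tried as mything.home.arpa
    dhcp-option=option:domain-search,home.arpa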
codetrotter 5 days ago [-]
In that case I would prefer naming as
    <virtual>.<physical-host>.internal
So for example
    phpbb.mtndew.internal
And I'd probably still add
    phpbb.localhost
To /etc/hosts on that host like OP does
nodesocket 5 days ago [-]
I wrote a super basic DNS server in Go (mostly for fun and Go practice) which allows you to specify hosts and IPs in a JSON config file: https://github.com/nodesocket/godns. This eliminates the need to edit your /etc/hosts file. If a query matches a host in the JSON config file it returns that IP; otherwise it falls back to the Cloudflare public DNS resolver. Please, go easy on my Go code :-). I am a total beginner with Go.
*.localhost is reserved for accessing the loopback interface. It is literally the perfect use for it. In fact on many operating systems (apparently not macOS) anything.localhost already resolves to the loopback address.
candiddevmike 5 days ago [-]
It would be great if there was an easy way to get trusted certificates for reserved domains without rolling out a CA. There are a number of web technologies that don't work without a trusted HTTPS origin, and it's such a pain in the ass to add root CAs everywhere.
GoblinSlayer 5 days ago [-]
You can configure them to send requests through an HTTP proxy.
MaKey 5 days ago [-]
It seems like it has not been standardized yet:
> As of March 7, 2025, the domain has not been standardized by the Internet Engineering Task Force (IETF), though an Internet-Draft describing the TLD has been submitted.
https://en.wikipedia.org/wiki/.internal
> Resolved (2024.07.29.06), the Board reserves .INTERNAL from delegation in the DNS root zone permanently to provide for its use in private-use applications.
https://www.icann.org/en/board-activities-and-meetings/mater...
g0db1t 4 days ago [-]
> Resolved (2024.07.29.06) ...
I'm too tired; I read it as an IPv4 address...
sdwolfz 5 days ago [-]
Note: browsers also give you a Secure Context for .localhost domains.
https://developer.mozilla.org/en-US/docs/Web/Security/Secure...
So you don't need self-signed certs for HTTPS locally if you want to, for example, have a backend API and a frontend SPA running at the same time talking to each other on your machine (authentication, for example, requires a secure context if doing OAuth2).
c-hendricks 4 days ago [-]
> if you want to, for example, have a backend API and a frontend SPA running at the same time talking to eachother on your machine
Won't `localhost:3000` and `localhost:3001` also both be secure contexts? Just started a random Vite project, which opens `localhost:3000`, and `window.isSecureContext` returns true.
sdwolfz 4 days ago [-]
This is used for scenarios where you don't want to hardcode port numbers, like when running multiple projects on your machine at the same time.
Usually you'd have a reverse proxy running on port 80 that forwards traffic to the appropriate service, and an entry in /etc/hosts for each domain, or a catch-all in dnsmasq.
Example: a docker compose setup using traefik as a reverse proxy can have all internal services running on the same port (eg. 3000) but have a different domain. The reverse proxy will then forward traffic based on Host. As long as the host is set up properly, you could have any number of backends and frontends started like this, via docker compose scaling, or by starting the services of another project. Ports won't conflict with each other as they're only exposed internally.
Now, whether you have a use for such a setup or not is up to you.
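As a rough sketch of the setup described above (image tags, service names, and myapp.localhost are placeholders, not anything specific from this thread):

    services:
      traefik:
        image: traefik:v3
        command: --providers.docker        # watch Docker for labeled containers
        ports:
          - "80:80"                        # the only port exposed on the host
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
      myapp:
        image: myapp:dev
        labels:
          # route by Host header; no host port mapping needed
          - "traefik.http.routers.myapp.rule=Host(`myapp.localhost`)"
          - "traefik.http.services.myapp.loadbalancer.server.port=3000"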
bolognafairy 5 days ago [-]
Well shit. TIL. Time to go reduce the complexity of our dev environment.
jrvieira 4 days ago [-]
you should never trust browsers' default behavior
1. not all browsers are the same
2. there is no official standard
3. even if there was, standards are often ignored
4. what is true today can be false tomorrow
5. this is mitigation, not security
wutwutwat 4 days ago [-]
1. not all browsers are the same
they are all aiming to implement the same html spec
2. there is no official standard
there literally is
> A context is considered secure when it meets certain minimum standards of authentication and confidentiality defined in the Secure Contexts specification
https://w3c.github.io/webappsec-secure-contexts/
3. even if there was, standards are often ignored
major browsers wouldn't be major browsers if this was the case
4. what is true today can be false tomorrow
standards take a long time to become standard and an even longer time to be phased out. this wouldn't sneak up on anyone
5. this is mitigation, not security
this is a spec that provides a feature called "secure context". this is a security feature. it's in the name. it's in the spec.
jrvieira 3 days ago [-]
Secure contexts are not part of the HTML spec; they are described in the W3C candidate recommendation, which I assume you are calling the official standard, and which states:
> 5.1. Incomplete Isolation
>
> The secure context definition in this document does not completely isolate a "secure" view on an origin from a "non-secure" view on the same origin. Exfiltration will still be possible via increasingly esoteric mechanisms such as the contents of localStorage/sessionStorage, storage events, BroadcastChannel, and others.

> 5.2. localhost
>
> Section 6.3 of [RFC6761] lays out the resolution of localhost. and names falling within .localhost. as special, and suggests that local resolvers SHOULD/MAY treat them specially. For better or worse, resolvers often ignore these suggestions, and will send localhost to the network for resolution in a number of circumstances.
>
> Given that uncertainty, user agents MAY treat localhost names as having potentially trustworthy origins if and only if they also adhere to the localhost name resolution rules spelled out in [let-localhost-be-localhost] (which boil down to ensuring that localhost never resolves to a non-loopback address).

> 6. Privacy Considerations
>
> The secure context definition in this document does not in itself have any privacy impact. It does, however, enable other features which do have interesting privacy implications to lock themselves into contexts which ensures that specific guarantees can be made regarding integrity, authenticity, and confidentiality.
>
> From a privacy perspective, specification authors are encouraged to consider requiring secure contexts for the features they define.
This does not qualify as the "this" in my original comment.
kbolino 4 days ago [-]
Notably, assuming conformance to this standard, a browser might still not treat localhost domains as trustworthy if it has reason to believe they can be resolved remotely. However, I'm not sure in what environments this is likely to be the case, especially with browsers implementing their own DNS over HTTPS.
TingPing 4 days ago [-]
Browsers have recently hardcoded localhost to never resolve over DNS.
sigil 5 days ago [-]
This nginx local dev config snippet is one-and-done:
    # Proxy to a backend server based on the hostname.
    if (-d vhosts/$host) {
        proxy_pass http://unix:vhosts/$host/server.sock;
        break;
    }
Your local dev servers must listen on a unix domain socket, and you must drop a symlink to them at e.g. /var/lib/nginx/vhosts/inclouds.localhost/server.sock.
Not a single command, and you still have to add hostname resolution. But you don't have to programmatically edit config files or restart the proxy to stand up a new dev server!
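Concretely, standing up a new dev server then looks something like this (the app name, its socket flag, and the vhosts path are stand-ins for whatever your server provides):

    # start a dev server listening on a unix domain socket
    ./myapp --listen unix:/tmp/myapp.sock &
    # make it reachable as http://myapp.localhost via the snippet above
    mkdir -p /var/lib/nginx/vhosts/myapp.localhost
    ln -s /tmp/myapp.sock /var/lib/nginx/vhosts/myapp.localhost/server.sock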
hn92726819 5 days ago [-]
I'm not that familiar with nginx config. Does this protect against path traversal? Ex: host=../../../docker.sock
Chrome and I think Firefox resolve all <name>.localhost domains to localhost by default, so you don't have to add them to the hosts file. I set up a Docker proxy on port 80 that resolves all requests from <containername>.localhost to the first exposed port of that container (in order of appearance in the docker compose file) automatically, which makes everything smooth without manual steps for docker compose based setups.
globular-toast 4 days ago [-]
Source for this? Are you sure it's not your system resolver doing it?
TingPing 4 days ago [-]
There is a draft spec for it, I'll find it later, but they do hardcode it now and never touch DNS.
kbolino 4 days ago [-]
It's probably both. Browsers now have built-in DoH so they usually do their own resolving. Only if you disable "secure DNS" (or you use group policies) will you fall back to the system resolver anymore.
jFriedensreich 4 days ago [-]
Pretty sure it's hardcoded in the browser and never touches any resolvers. It does not work the same in Safari, for example.
peterldowns 5 days ago [-]
If you’re interested in doing local web development with “real” domain names, valid ssl certs, etc, you may enjoy my project Localias. It’s built on top of Caddy and has a nice CLI and config file format that you can commit to your team’s shared repo. It also has some nice features like making .local domain aliases available to any other device on your network, so you can more easily do mobile device testing on a real phone. It also syncs your /etc/hosts so you never need to edit it manually.
Check it out and let me know what you think! (Free, MIT-licensed, single-binary install)
Basically, it wraps up the instructions in this blogpost and makes everything easy for you and your team.
Yup, mkcert is used by caddy which is used by localias :)
CodesInChaos 5 days ago [-]
How do valid certs for localhost work? Does that require installing an unconstrained root certificate to sign the dev certs? Or is there a less risky way (name constraints?)
sangeeth96 5 days ago [-]
It's mentioned in the README:
- If Caddy has not already generated a local root certificate:
- Generate a local root certificate to sign TLS certificates
- Install the local root certificate to the system's trust stores, and the Firefox certificate store if it exists and can be accessed.
But is this an unconstrained root, or does it use name constraints to limit it to localhost domains/IPs? And how does it handle/store the private key associated with that root?
peterldowns 4 days ago [-]
What's your threat model here? The way this works is that on your development machine, localias (through caddy/mkcert) generates a root cert and the per-site certs and installs them to your development machine's trust store. All of the certs live entirely on your device and never leave. You have full control over them and can remove them at any time.
The certs and keys live in the localias application state directory on your machine:
The whole nicety of localias is that you can create domain aliases for any domain you can think of, not just ".localhost". For instance, on my machine right now, the aliases are:
> Install the local root certificate to the system's trust stores
I really wish there was a safer way to do this, i.e. a way to tag a trusted CA as "valid for localhost use only". The article mentions this in passing
> The sudo version of the above command with the -d flag also works but it adds the certificate to the System keychain for all users. I like to limit privileges wherever possible.
Maybe this could be done using the name constraint extension marked as critical?
worewood 5 days ago [-]
I think an alternative to local root certs would be to use a public cert + dnsmasq on your LAN to resolve the requests to a local address.
WhyNotHugo 5 days ago [-]
Any subdomain of .localhost works out-of-the-box on Linux, OpenBSD and plenty of other platforms.
Of note, it doesn't work on macOS. I recall having delivered a coding assignment for a job interview long ago, and the reviewer said it didn't work for them, although the code all seemed correct to them.
It turned out on macOS, you need to explicitly add any subdomains of .localhost to /etc/hosts.
I'm still surprised by this; I always thought that localhost was a highly standard thing covered in the RFC long long ago… apparently it isn't, and macOS still doesn't handle this TLD.
telotortium 5 days ago [-]
It's easy to be tricked into thinking macOS supports it, because both Chrome and curl support it. However, ping does not, nor do more basic tools like Python's requests library (and I presume urllib as well).
jwilk 5 days ago [-]
> Any subdomain of .localhost works out-of-the-box on Linux
No, not here.
jchw 5 days ago [-]
This usually happens because you have a Linux setup that doesn't use systemd-resolved and it also doesn't have myhostname early enough in the list of name resolvers. Not sure how many Linux systems default to this, but if you want this behavior, adjust your NSS configuration, most likely.
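For reference, on a glibc system without systemd-resolved, the relevant line in /etc/nsswitch.conf would look something like this; the key part is that myhostname (which answers for *.localhost) comes before dns:

    hosts: files myhostname dns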
oulipo 5 days ago [-]
Just did that on my mac and it seems to work?
    $ ping hello.localhost
    PING hello.localhost (127.0.0.1): 56 data bytes
    64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.057 ms
    64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms
tedunangst 5 days ago [-]
That's because your DNS server sends back 127.0.0.1. The query isn't resolved locally.
I think I had mixed results on Mac depending on the network I was connected to. I think it has something to do with IPv4 vs IPv6.
g0db1t 4 days ago [-]
Ironically, for me it works on my macOS laptop but not on my Debian 12 VPS.
parasti 5 days ago [-]
I am doing this on macOS with no problem.
octagons 5 days ago [-]
Against much well-informed advice, I use a vanity domain for my internal network at home. Through a combination of Smallstep CA, CoreDNS, and Traefik, any service I host in my Docker Swarm cluster is automatically issued a signed SSL certificate, load-balanced, and made resolvable. Traefik also allows me to configure authentication for any services that I may not wish to expose without it.
That said, I do recommend the use of the internal. zone for any such setup, as others have commented. This article provides some good reasons why (at least for .local) you should aim to use a standards-compliant internal zone: https://community.veeam.com/blogs-and-podcasts-57/why-using-...
hobo_mark 5 days ago [-]
I added a fake .com record in my internal DNS that resolves to my development server. All development clients within that network have an mkcert-generated CA installed.
Not so different from you, but without even registering the vanity domain. Why is this such a bad idea?
szszrk 5 days ago [-]
For home it's not that bad, but there could be conflicts at some point. Your clients will send data to the Internet unknowingly when DNS is misconfigured.
It's better to use a domain you control.
I'm a fan of buying the cheapest domain to renew (like .ovh, great value) and using real Let's Encrypt certificates (via DNS challenge) for any subdomain/wildcard. That way any device gets a "green padlock" for a totally local service.
octagons 5 days ago [-]
To be clear, I didn’t register anything. I just have a configuration that serves records for a zone like “artichoke.” on my DNS server. Internal hosts are then accessible via https://gitlab.artichoke, for example.
thot_experiment 5 days ago [-]
I alias home.com to my local house stuff. I don't really understand why anyone thinks it's a bad idea either.
matthewaveryusa 5 days ago [-]
It's not a terrible idea. On a large scale it can lead to the corp.com issue:
Honestly, for USD 5/year, why don't you just buy yourself a domain and never have to deal with the problem?
kreetx 5 days ago [-]
I run a custom (unused) tld with mkcert the same way, with nginx virtual hosts set up for each app.
tbyehl 5 days ago [-]
What's the argument against using one's own actual domain? In these modern times where every device and software wants to force HTTPS, being able to get rid of all the browser warnings is nice.
waynesonfire 5 days ago [-]
I think this is ideal. You make a great point that even if you were to use the .internal TLD that is reserved for internal use, you wouldn't be able to use Let's Encrypt to get an SSL certificate for it. Not sure if there are other SSL options for .internal. But self-signed is a PITA.
I guess the lesson is to deploy a self-signed root ca in your infra early.
octagons 5 days ago [-]
Check out Smallstep’s step-ca server [0]. It still requires some work, but it allows you to run your own CA and ACME server. I have nothing against just hosting records off of a subdomain and using LE as mentioned, but I personally find it satisfying to host everything myself.
OP: If you're already using Caddy, why not just use a purchased domain (you can get some for a few dollars) with a DNS-01 challenge? This way you don't need to add self-signed certificates to your trust store and browsers/devices don't complain. You'll still keep your services private to your internal network, and Caddy will automatically keep all managed certificates renewed so there's no manual intervention once everything is set up.
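A sketch of what that looks like in a Caddyfile, assuming a Caddy build that includes the Cloudflare DNS plugin and an API token in the CF_API_TOKEN environment variable (the domain and port are placeholders):

    *.home.example.com {
        # DNS-01: ownership is proven via a TXT record, so nothing
        # on the local network needs to be reachable from the internet
        tls {
            dns cloudflare {env.CF_API_TOKEN}
        }
        reverse_proxy 127.0.0.1:3000
    }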
whatevaa 5 days ago [-]
So basically pay protection money? We have engineered a system where the only way to use your own stuff is to pay a tax for it and rely on a centralized system, even though you don't need to be public at all?
smjburton 5 days ago [-]
If you really want to keep things local without paying any fees, you could also use Smallstep (https://smallstep.com/) to issue certificates for your services. This way you only need to add one CA to your trust store on your devices, and the certificates still renew periodically and satisfy the requirements for TLS.
I suggested using a domain given they already have Caddy set up and it's inexpensive to acquire a cheap domain. It's also less of a headache in my experience.
egoisticalgoat 5 days ago [-]
If you're already adding a CA to your trust store, you can just use caddy! [0] Add their local CA to your store (CA cert is valid for 10 years), and it'll generate a new cert per local domain every day.
Actually, now that I've linked the docs, it seems they use smallstep internally as well haha
I was on a similar thought process, but this leaves you only with the option to set the A record of the public DNS entry to 127.0.0.1, if you want to use it on the go.
Though you could register a name like ch.ch and get a wildcard certificate for *.ch.ch, and insert local.ch.ch in the hosts file and use the certificate in the proxy, that would even work on the go.
shadowpho 5 days ago [-]
> You'll still keep your services private to your internal network,
Is that a new thing? I heard previously that if you wanted to do DNS/domains for a local network you had to expose the list externally.
smjburton 5 days ago [-]
It's not, just a different way of satisfying the certificate challenge. Look into a DNS-01 challenge vs a HTTP-01 challenge. Let's Encrypt has a good breakdown: https://letsencrypt.org/docs/challenge-types/.
shadowpho 3 days ago [-]
Gotcha, and that lets us avoid exposing internals? That seems like a win-win-win, I should totally do this!
BTW you can actually give every locally-hosted app a separate IP address if you want. The entire 127.0.0.0/24 is yours, so you can resolve 127.0.0.2, 127.0.0.3, etc. as separate "hosts" in /etc/hosts or in your dnsmasq config.
Yes, this also works under macOS, but I remember there used to be a need to explicitly add these addresses to the loopback interface. Under Linux and (IIRC) Windows these work out of the box.
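For illustration, the matching /etc/hosts entries could look like this (names and addresses are arbitrary examples):

    127.0.0.2   app-one.internal
    127.0.0.3   app-two.internal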
justin_oaks 5 days ago [-]
I'd recommend using some other reserved IP address block like 169.254.0.0/16 or 100.64.0.0/16 and assigning it to your local loopback interface. (Nitpick: you can actually use all of 127.0.0.0/8 instead of just 127.0.0.0/24).
I previously used differing 127.0.0.0/8 addresses for each local service I ran on my machine. It worked fine for quite a while but this was in pre-Docker days.
Later on I started using Docker containers. Things got more complicated if I wanted to access an HTTP service both from my host machine and from other Docker containers. Instead of having your services exposed differently inside a Docker network and outside of it, you can consistently use the IPs and ports you expose/map.
If you're using 127.0.0.0/8 addresses then this won't work. The local loopback addresses aren't routed to the host computer when sent from a Docker container; they're routed to the container. In other words, 127.0.0.1 inside Docker means "this container", not "this machine".
For that reason I picked some other unused IP block [0] and assigned that block to the local loopback interface. Now I use those IPs for assigning to my docker containers.
I wouldn't recommend using the RFC 1918 IP blocks since those are frequently used in LANs and within Docker itself. You can use something like the link-local IP block (169.254.0.0/16), which I've never seen used outside of the AWS EC2 metadata service. Or you can use the carrier-grade NAT IP block (100.64.0.0/10). Or even some IP block that's assigned for public use but is never used, although that can be risky.
I use Debian Bookworm. I can bind 100.64.0.0/16 to my local loopback interface by creating a file under /etc/network/interfaces.d/ with the following
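Something along these lines, as a sketch (the address within the block is an arbitrary choice):

    auto lo:0
    iface lo:0 inet static
        address 100.64.0.1
        netmask 255.255.0.0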
I have no idea why this is not the default solution, nor why Docker doesn't support it out of the box.
lima 5 days ago [-]
On my Linux machine with systemd-resolved, this even works out of the box:
    $ resolvectl query foo.localhost
    foo.localhost: 127.0.0.1 -- link: lo
                   ::1 -- link: lo
Another benefit is being able to block CSRF using the reverse proxy.
jchw 5 days ago [-]
Yeah, I've been using localhost domains on Linux for a while. Even on machines without systemd-resolved, you can still usually use them if you have the myhostname module in your NSS DNS module list.
(There are lots of other useful NSS modules, too. I like the libvirt ones. Not sure if there's any good way to use these alongside systemd-resolved.)
aib 5 days ago [-]
I ended up writing a similar plugin[1] after searching in vain for a way to add temporary DNS entries.
The ability to add host entries via an environment variable turned out to be more useful than I'd expected, though mostly for MITM(proxy) and troubleshooting.
I use .localhost for all my projects. Just one annoying note: Safari doesn't recognize the TLD localhost so it will try to perform a search. Adding a slash at the end will fix this, i.e. example.localhost/
tapete 5 days ago [-]
Luckily the easy fix is available: Do not use Safari.
subculture 5 days ago [-]
When Apple's MobileMe came out I snagged the localhost@me.com email address, thinking how clever I was. But filtering tools weren't as good back then, and I was never able to use it because of the truly massive amount of spam and test emails I'd get.
leshokunin 5 days ago [-]
Thanks for the laugh. I wonder what test@gmail.com gets hahaha
isleyaardvark 5 days ago [-]
For anyone unaware, the domain 'example.com' is specifically reserved for the purpose of testing, so you don't have to worry about some rando reading emails sent to "test@gmail.com"
watusername 5 days ago [-]
I don't get it. What does gmail.com have to do with example.com?
mbreese 4 days ago [-]
It's a public service announcement. If you're using test@gmail.com to actually test something, you should probably be using test@example.com. Not everyone knows that example.com exists for these purposes.
That is of course unless you really intend to send an email to someone at test@gmail.com.
thorvaldsson 5 days ago [-]
In my case I just set up a subdomain 'local.<domain>' under my personal domain and had Let's Encrypt create valid certificates for it via Traefik.
Each service is then exposed via '<service>.local.<domain>'.
This has been working flawlessly for me for some time.
mohsen1 5 days ago [-]
Avoid using `.local`. In my experience Chrome does not like it with HTTPS. It takes much much longer to resolve. I found a Chrome bug relating to this but do not have it handy to share. `.localhost` makes more sense for local development anyways.
rcarmo 5 days ago [-]
.local is mDNS/Rendezvous/Bonjour territory. In some cases it takes longer to resolve because your machine will multicast a query for the owner of the name.
I use it extensively on my LAN with great success, but I have Macs and Linux machines with Avahi. People who don't shouldn't mess with it...
zamadatix 5 days ago [-]
The reason is that .local is a special-case TLD for link-local networking, with name resolution through things like mDNS; if you try to hijack it for other uses, things might not go as you intend. Alternatively, .localhost is just a reserved TLD, so it has no other usage to check.
Actually, macOS usually gives your computer a .local domain via DHCP and Bonjour.
Pxtl 5 days ago [-]
Honestly, if I had my druthers there would be a standardized exception for .local domains so that self-signed HTTPS certs would be accepted without known roots. It's insane how there's no good workflow for HTTPS on LAN-only services.
Spooky23 5 days ago [-]
It's actually gotten worse: you need to run a CA, or use a public domain, in which case it's easy for your internal naming schemes to end up in a transparency log.
jeroenhd 5 days ago [-]
The easy workaround I've seen companies use for that is a basic wildcard certificate (*.local.mydomain.biz).
jeroenhd 5 days ago [-]
Technically speaking you could use DANE with mDNS. Nobody does it, browsers don't implement it, but you can follow the spec if you'd like.
Practically speaking, HTTPS on LAN is essentially useless, so I don't see the benefits. If anything, the current situation allows the user to apply TOFU to local devices by adding their unsigned certs to the trust store.
1wheel 5 days ago [-]
Browsers won't use http2 unless https is on — chrome only allows six concurrent requests to the same domain if you're not using https!
mohsen1 5 days ago [-]
Some more modern browser APIs only work in HTTPS. That's why I had to do it.
jeroenhd 4 days ago [-]
Modern browsers only enable those APIs because of security concerns, and those security concerns aren't lifted just because you're connected locally.
The existing exception mechanisms already work for this, all you need to do is click the "continue anyway" button.
Pxtl 5 days ago [-]
> HTTPS on LAN is essentially useless
Public wifi isn't a thing? Nobody wants to admin the router on a wifi network where there might be untrusted machines running around?
jeroenhd 4 days ago [-]
Sure, but you can connect those devices to a real domain and use Let's Encrypt on them, or you can TOFU and add the self-signed cert to your browser; after you've verified that you're not being MitM'd by one of those untrusted devices, of course (I dunno, by printing the public key on the side of the device or something?).
In practice, you probably want an authorized network for management, and an open network with the management interface locked out, just in case there's a vulnerability in the management interface allowing auth bypass (which has happened more often than anyone would like).
Pxtl 4 days ago [-]
The former just isn't practical for small businesses and home consumers, though. Browsers just don't have a good workflow for TOFU.
I agree on the latter, but that means your IoT devices must be accessible through both networks and be able to discriminate between requests coming from the insecure interface and those coming from the secure admin one, which isn't practical for lay users to configure either. I mean, a router admin screen can handle that, but what about other devices?
I know it seems pedantic, but this UI problem is one of many reasons why everything goes through the Cloud instead of our own devices living on our own networks, and I don't like that controlling most IoT devices (except router admin screens) involves going out to the Internet and then back to my own network. It's insecure and stupid and violates basic privacy sensibilities.
Ideally I want end users to be able to buy a consumer device, plug it into their router, assign it a name and admin-user credentials (or notify it about their credential server if they've got one), and it's ready and secure without having to do elaborate network topology stuff or having to install a cert onto literally every LAN client who wants to access its public interface.
justin_oaks 5 days ago [-]
I recommend using the .test TLD.
* It's reserved so it's not going to be used on the public internet.
* It is shorter than .local or .localhost.
* On QWERTY keyboards "test" is easy to type with one hand.
jeroenhd 5 days ago [-]
I use .local all the time and it works just fine. For TLS I use my existing personal CA, but HTTP links don't cause issues for me.
That said, I do use mDNS/Bonjour to resolve .local addresses (which is probably what breaks .local if you're using it as a placeholder for a real domain). Using .local as an imaginary LAN domain is a terrible idea. These days, .internal is reserved for that.
defraudbah 5 days ago [-]
If you add your CA to the list of trusted certificates, everything will be fine. I do not recommend using custom certificates and would stick to HTTP, unless you really know what you are doing.
silvanocerza 5 days ago [-]
I went a different way for my internal network: I use tv.it for my server and rt.it for the router. All two-character .it domains are non-registrable, so you risk no clash; the only existing one is q8.it.
For internal networks the `internal` tld is reserved
silvanocerza 5 days ago [-]
I know.
Though I wanted a short URL; that's why I used .it anyway.
austin-cheney 5 days ago [-]
My preference for local TLDs is just .x because it takes less time to enter on mobile devices. An example is www.x or video.x.
Yes, it does require a cert for TLS, and that cert will not be trusted by default. I have found that with OpenSSL and a proper script you can spin up a cert chain on the fly, and you can make these certs trusted in both Windows and Linux with an additional script. A script cannot make certs trusted in Safari on macOS, though.
I figured all this out in a prior personal app. In my current web server app I just don’t bother with trust. I create the certs and just let the browser display its page about high risk with the accept the consequences button. It’s a one time choice.
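As a rough sketch of that kind of script (bash, for the process substitution; the filenames and the www.x name are placeholders):

    # one-time: create a local root CA
    openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
      -keyout ca.key -out ca.crt -subj "/CN=Local Dev CA"
    # per site: key + CSR, then sign with the CA, including a SAN
    openssl req -newkey rsa:2048 -nodes \
      -keyout www.x.key -out www.x.csr -subj "/CN=www.x"
    openssl x509 -req -in www.x.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -days 365 -out www.x.crt \
      -extfile <(printf "subjectAltName=DNS:www.x")

Getting ca.crt into the OS and browser trust stores is then the separate, platform-specific step described above.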
rockmeamedee 5 days ago [-]
I too made a version of this (just a small Go DNS resolver + port forwarding proxy) that lets you do a similar thing: https://gitlab.com/amedeedabo/zoxy
I used the .z domain because it's quick to type and it looks "unusual" on purpose. The dream was to set up a web UI so you wouldn't need to configure it in the terminal and could see which apps are up and running.
Then I stopped working the job where I had to remember 4 different port numbers for local dev and stopped needing it lol.
Ironically, for once it's easier to set this kind of thing up on macOS than on Linux, because configuring a local DNS resolver on Linux is a mess (cf. the Tailscale blog post "The Sisyphean Task Of DNS Client Config on Linux": https://tailscale.com/blog/sisyphean-dns-client-linux). Whereas on a Mac it's a couple of commands.
I think Tailscale should just add this to their product, they already do all the complicated DNS setup with their Magic DNS, they could sprinkle in port forwarding and be done. It'd be a real treat.
globular-toast 5 days ago [-]
You might not need the hosts file hack if your DNS supports *.localhost as a wildcard. I think most GNU/Linux distros (in particular the systemd ones) and Mac OS do. You can test it by seeing if `host test.localhost` already resolves to 127.0.0.1 (or ::1).
If you are using other systems then you can set this up fairly easily in your network DNS resolver. If you use dnsmasq (used by pihole) then the following config works:
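Presumably a single line along these lines, which maps localhost and everything under it to the loopback address:

    address=/localhost/127.0.0.1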
Rather than do all this manually each time and worry about port numbers, you just add labels to docker containers. No ports, just names (at least for HTTP stuff).
joey_spaztard 5 days ago [-]
This is an OK way of doing things, but you don't need Caddy (or similar); you can put all the HTTP servers on different localhost IP addresses, e.g. 127.0.0.1, 127.0.0.2, etc. They can all use port 80 but on different IP addresses.
A possible disadvantage is that specifying a single IP to listen on means the HTTP server won't listen on your LAN IP address, which you might want.
jrockway 5 days ago [-]
The last time I tried this, it worked on Linux but not on macOS. Given all the discussion about launchd, I'm guessing they need it to work on Macs.
riffic 5 days ago [-]
> I then run and configure Caddy to redirect traffic from 127.0.0.1 to the right port for the domain.
That's not redirection per se, a word that's needlessly overloaded to the point of confusion. It's a smart use of a reverse proxy.
It would be nice if you all reserved the word "redirect" for something like HTTP 3xx behavior.
andrewstuart2 5 days ago [-]
I haven't done it for a while (I've mostly just used 127.*), but I found the best one to use for dev purposes was the IETF-reserved `.test` TLD [0]. The main benefit at the time I was messing with this (10ish years ago now) was that all the browsers I needed to test on would actually attempt to resolve `.test`. If I remember correctly, firefox seemed to have issues with `localhost` being anything other than 127.0.0.1 (and would simply go to that address ignoring whatever was in /etc/hosts or DNS, again IIRC). It's been a while, though, so that behavior might have changed.
.test seems like an excellent choice for testing/debugging/developing applications, but for running services you want to use I'd stick to .internal these days, as it was reserved for local domains last year.
numbsafari 5 days ago [-]
This is what I've done for years: doing app development using the .test TLD, and .internal for, well, internal services that are more like "production IT".
I've had nothing but trouble with .local and .localhost. Specifically, .local is intended for other purposes (multicast DNS) and .localhost has a nasty habit of turning into just "localhost" and in some cases, resolvers don't like to allow that to point to anything other than 127.0.0.1.
More recently, I've stuck to following the advice of RFC 6762 and use an actual registered domain for internal use, and then sub-domain from there. I don't use my "production" domain, but some other, unrelated domain. For example, if my company is named FOO and our corporate homepage is at foo.com, I'll have a separate domain like bar.com that I'll use for app development (and sub-domain as dev.bar.com, qa.bar.com, and maybe otherqa.bar.com as needed for different environments). That helps avoid any weirdness around .localhost, works well in larger dev/qa environments that aren't running on 127.0.0.1, and allows me to do things like have an internal CA for TLS instead of using self-signed certs with all of their UX warts.
For "local network" stuff, I stick to ".internal" because that's now what IANA recommends. But I would distinguish between how I use ".internal" to mean "things on my local network" from "this is where my team deploys our QA environment", because we likely aren't on the same network as where that QA environment is located and my ".internal" might overlap with your ".internal", but the QA environment is shared.
lima 5 days ago [-]
For development, "localhost" has a convenience bonus: it has special treatment in browsers. Many browser APIs like Service Workers are only available on pages with a valid WebPKI cert, except for localhost.
djanowski 5 days ago [-]
Recently I started to work on a very simple tool to do this with a single command: it'll start all your projects in a given directory and expose them via HTTPS on https://[project].localhost
I've been using it for myself so it's lacking documentation and features. For example, it expects to run each project using `npm run dev`, but I want to add Procfile support.
Hopefully other people find it useful. Contributions very much welcome!
jdprgm 5 days ago [-]
I do something similar with Caddy but add dns-sd to broadcast on mDNS, so I can just hit myapp.local from anywhere on my network and don't have to do anything with hosts, and it just works. Been on my todo list to wrap this into a tiny Mac menubar app.
mmanfrin 5 days ago [-]
The comments here, all suggesting different arcane and complicated stacks of devops solutions and certificates and configurations and services, have me somewhat despairing that such a COMMON use case is still so annoyingly obtuse.
staticelf 5 days ago [-]
It's easy to set up Caddy in a way that you can have your API and SPA app under the same domain in order to avoid CORS issues.
    myapp.localhost {
        tls internal
        # Serve /api from localhost:3000 (your API)
        @api path /api/*
        handle @api {
            # Remove the leading "/api" portion of the path
            uri strip_prefix /api
            reverse_proxy 127.0.0.1:3000
        }
        # Fallback: proxy everything else to Vite's dev server on 5173
        handle {
            reverse_proxy 127.0.0.1:5173
        }
    }
You're welcome.
flowerthoughts 5 days ago [-]
I just use https://myapp.localhost:2948 so I don't need to remember the port number. The browser autocomplete handles that, so I don't really see the need for server-side help.
This is cool, but it only seems to work on the host that has the /etc/hosts loopback redirect to *.localhost. I run my app on a home server and access it from multiple PCs on the LAN. I have several apps, each associated with a different port number. Right now, I rely on a start page (https://github.com/dh1011/start-ichi) to keep track of all those ports. I’m wondering if there’s an easy way to resolve custom domains to each of those apps?
globular-toast 5 days ago [-]
You need to install a DNS server in your network and configure everything to use it (probably via DHCP). Then you can configure whatever you like. Dnsmasq is quite easy to get started with. There are handy integrated solutions like pihole that combine DHCP and DNS. I run things like this on my router (opnsense).
AStonesThrow 5 days ago [-]
I pondered the question of local-only domains since long ago, and in consultation with Chris Siebenmann, I determined that the most courteous way was actually to subdomain from my ISP.
That’s right: I invented a fictitious subdomain under one my ISP controlled and I never registered it or deployed public DNS for it. It worked great, for my dumb local purposes.
Thus it was easy to remember, easy to add new entries, and was guaranteed to stay out of the way from any future deployments, as long as my ISP never chose to also use the unique subdomain tag I invented...
I have a small go binary that uses caddy and dns-sd on mac to have any kind of domain names on my local network (uses mdns) with https. Really nice for accessing websites from my phone.
I think you don't really need the /etc/hosts entry. I've used this since Google started using .dev domains and switched to .localhost for everything local.
Never needed the entry.
oulipo 5 days ago [-]
and you're still setting up a proxy for the port forwarding?
_def 5 days ago [-]
I recently tried to use the dnsmasq method mentioned at the very end but had some issues with fallthrough, as most of my DNS traffic then went through my dev setup first, which I didn't want. In the end I configured a "real" domain and let it point to 127.0.0.1, because I need arbitrary wildcard subdomains... but I'm still not very happy with it because it feels like an unnecessary dependency.
accrual 5 days ago [-]
Neat setup! I do something similar on OpenBSD. I have a CSV file that maps IP, MAC address, and hostname for various devices on my LAN. A shell script reads the file and creates a matching hosts file, dhcpd config, and unbound config, then restarts dhcpd and unbound (caching DNS server).
Whenever a host requests a DHCP lease it receives its assigned IP which matches the unbound record, then I can always access it by hostname.
VikingCoder 5 days ago [-]
Tailscale is another neat way. You can have ephemeral nodes. I want to learn how to do it with Docker, but apparently it's not too bad.
politelemon 5 days ago [-]
This feels like more work than necessary, I'm not seeing an advantage. Ideally any kind of dev setup should be as contained/localized as possible, but if I'm having to modify OS components, then that feels like sprawl. Or, it's like /etc/hosts but with extra steps.
sabslikesobs 4 days ago [-]
Yes, I felt like this too. I like that it's cute and how it removes the need to inject ports into any "this is my URL" environment variables, but I agree it does sound like "sprawl." Browser bookmarks serve the same purpose.
pwdisswordfishz 5 days ago [-]
Or you could have /etc/hosts resolve them to other addresses in the 127.0.0.0/8 block.
mrweasel 5 days ago [-]
We have a separate domain registered where you can add any wildcard subdomain, so webui.company-test.com will resolve to 127.0.0.1. Then we can do pretty much the same.
I'm not entirely sure how I feel about it, but at least it's on a completely separate domain.
whalesalad 5 days ago [-]
I have a public domain that resolves to a static lease in my internal network, which is running nginx proxy manager.
When I add a new site to my local setup, I just define a CNAME in Cloudflare and add an entry in Nginx proxy manager. It handles SSL via wildcard cert.
EGreg 5 days ago [-]
Wow! Today I learned that you can have subdomains of localhost. Never realized it!
It's a neat trick, but it comes with some caveats. For instance, `localhost` often resolves to both 127.0.0.1 and ::1, but `.localhost` is described in RFC2606 as "traditionally been statically defined in host DNS implementations as having an A record pointing to the loop back IP address and is reserved for such use". In other words, your server may be binding to ::1 but your browser may be resolving 127.0.0.1. I'm sure later RFCs rectify the lack of IPv6 addressing, but I wouldn't assume everyone has updated to support those.
Another neat trick to combine with .localhost is using 127.0.0.0/8. There's nothing preventing you from binding server/containers to 127.0.0.2, 127.1.2.3, or 127.254.254.1. Quite useful if you want to run multiple different web servers together.
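For example, with Python's built-in server (any server that accepts a bind address works the same way):

    $ python3 -m http.server 8080 --bind 127.1.2.3
    Serving HTTP on 127.1.2.3 port 8080 (http://127.1.2.3:8080/) ...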
EGreg 5 days ago [-]
But is "foo.localhost" a valid domain name, for cookies and such?
jeroenhd 5 days ago [-]
The RFC treats .localhost as a full TLD. I believe Windows does as well, as does Ubuntu (using default systemd-resolved), but macOS doesn't seem to resolve .localhost by default, necessitating the host file trickery.
Of course, in the early internet, the difference between a TLD and a host name wasn't quite as clear as it is right now.
sebazzz 5 days ago [-]
> I believe Windows does as well
I cannot ping xyz.localhost because it doesn't resolve it.
EGreg 5 days ago [-]
We still need the Public Suffix List because of how inconsistent it was
stuaxo 4 days ago [-]
Nice, I've been wanting this - was just today talking about it.
Would be good to have the config all somewhere in my user's dir too.
Per user subdomains for their apps on localhost.
vlod 5 days ago [-]
Anyone care to shed light on why myapp.localhost:3000 (for the webapp I'm developing) is something that's useful for me rather than localhost:3000?
EDIT: I'm on Linux and don't use launchd, so I'd still need the port number
jeroenhd 5 days ago [-]
Using real domain names lets you experience the web as it is in production. Localhost has a bunch of exceptions (e.g. HTTP URLs are treated as secure, CORS acts funny, etc.). Using domain names disables the special handling of localhost URLs, which helps you spot problems before they hit production.
ghoshbishakh 5 days ago [-]
Trick:
edit your /etc/hosts file and add a domain name.
Self-sign a certificate and add it to your trusted certificate list.
Thanks. This is the reason I wanted, rather than the convenience of not typing a port number (which I'd use a bookmark for, so I really don't care).
cyral 5 days ago [-]
Note the use of the Caddy webserver (you could also use nginx or whatever), which proxies to the port, so it's just myapp.localhost. I like this because it mirrors our production site. We can have subdomain.myapp.localhost and subdomain.myapp.com so links and everything work properly in both environments (assuming you have an env variable for the base domain)
oulipo 5 days ago [-]
Could there be a way to set up the Caddy server dynamically, using e.g. direnv, so that it's only launched when I'm in my dev folder?
mrweasel 5 days ago [-]
Maybe you have a stack of applications that need to communicate. Seeing db.localhost is a little easier to read than db:3360, and especially so if you have multiple web applications: it's easier to read sso.localhost, api.localhost, and www.localhost.
They also show having the webserver do the TLS, which might be helpful.
csciutto 5 days ago [-]
The comparison is `myapp.localhost` vs `localhost:3000`. This is especially useful when you have web servers permanently running on your computer on ports, not just for momentary local development.
gwd 5 days ago [-]
It's `myapp.localhost` (without the port number). It's more useful because it's easier to allocate and remember a unique name than a unique port number.
tgpc 5 days ago [-]
maybe you're running a reverse proxy? it can direct you differently depending on how you refer to it
jbverschoor 5 days ago [-]
Orbstack does all that pretty automatically. It also understands project structure / compose.
I hope Orbstack is also advertising those hostnames on mDNS, because using .local (or, seemingly worse, _relying_ on .local) will conflict with resolver logic on all kinds of devices.
jbverschoor 4 days ago [-]
I'm not seeing any broadcasts.
What it does is it has a private network for the containers and itself (all containers get their own unique IP, so there's no port mapping needed).
The hostnames are automatically derived from the container name / directory name / compose project, but you can add other hostnames as well by adding docker labels.
It works really well.
zsoltkacsandi 5 days ago [-]
It was an example of how OrbStack puts together the domain.
egberts1 3 days ago [-]
I am old enough to remember
    localhost.localdomain
mholt 5 days ago [-]
When using `.localhost` in the Caddyfile, you don't even need the `tls internal` line since that's assumed for that TLD.
donatj 5 days ago [-]
We add a real actual DNS record for the local. subdomain pointing to 127.0.0.1
It works really well and means no setup on our developers' machines.
oulipo 5 days ago [-]
Can you expand on this? You redirect *.local.example.com to 127.0.0.1, and then how do you set up the local machine so that e.g. myservice.local.example.com hits the correct port? I guess you still need a proxy somewhere? Or do you specify e.g. myservice.local.example.com:3000?
donatj 5 days ago [-]
It's not a redirect. It's an actual A record on the domain for local.example.com -> 127.0.0.1
Then we just have an entry for local.example.com in our vhosts and bam everything works as expected. No need to mess with /etc/hosts
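In zone-file terms that's just the following (example.com standing in for the real domain; the wildcard variant covers per-service names):

    ; the record described above, plus a wildcard variant
    local.example.com.    300  IN  A  127.0.0.1
    *.local.example.com.  300  IN  A  127.0.0.1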
dd_xplore 5 days ago [-]
Why do you need gzip when it's running locally and being accessed on the same system?
threatofrain 5 days ago [-]
How do JS local dev setups use nice names like `local.drizzle.studio`?
Jnr 5 days ago [-]
That is an external domain. Javascript on that site connects to the locally running drizzle service.
delduca 5 days ago [-]
I do the same at work (on premise machines on a private LAN).
bootcat 5 days ago [-]
Wow, this is hitting the Hacker News front page?
If we can change the hosts file, then you can have any domain name for local services.
ipkstef 5 days ago [-]
you guys don't just memorize all your local ip's?
justin_oaks 5 days ago [-]
In my day we memorized IPs AND ports. 10.24.67.22:7834 was to access our bug tracker and 192.168.240.17:21282 was for our CVS repository!
Seriously though, one of the first things I did when I was hired as the sysadmin for a small company was to eliminate the need for memorizing/bookmarking ip-port combos. I moved everything to standard ports and DNS names.
Any services running on the same machine that needed the same ports were put behind a reverse proxy with virtual hosts to route to the right service. Each IP address was assigned an easy-to-remember DNS name. And each service was setup with TLS/SSL instead of the bare HTTP they had previously.
nhance 5 days ago [-]
As a reminder, lacolhost.com and all subdomains will forever resolve to localhost (well for as long as I'm around at least)
jeroenhd 5 days ago [-]
> (well for as long as I'm around at least)
Rather big caveat IMO. As a side note, your domain doesn't seem to have an AAAA record (which [.]localhost binds to by default on most of my machines, at least).
koolba 5 days ago [-]
> As a reminder, lacolhost.com …
I’m assuming that typo is intentional?
gijoeyguerra 5 days ago [-]
excellent job.
mjevans 5 days ago [-]
Once again, .local should _never_ have been assigned to any organization. Just like .lan should also be reserved like the private IP blocks.
WorldMaker 5 days ago [-]
.local wasn't assigned to an organization, it was assigned to mDNS: multicast DNS. mDNS is the "ask everyone on the local network if they answer to that name" protocol, which used to be better known under Apple's brand/trademark Bonjour, but is now a true standard.
mjevans 5 days ago [-]
Yes, but why couldn't they have assigned .mdns for that instead? Or, even better, given it its own .arpa domain, e.g. .mdns(.arpa), rather than the .local TLD? ( https://en.wikipedia.org/wiki/.arpa )
WorldMaker 5 days ago [-]
Because .local looks nice and is a better name/explainer for what mDNS does than the standard name or the old brand name? Because the old brand was already using .local even if Apple Devices were somewhat a minority at the time?
At this point a lot of TLD changes are going to step on someone's project or home/business/private network. I think .local is a good name for mDNS. I appreciate why you maybe aren't happy with it, but don't share your concern.
mjevans 5 days ago [-]
Those are both reasons that .local should be static DNS on the _network_ like localhost is a standard name for the loopback address(es).
There's no reason .mdns or .mdns.arpa couldn't have just been added to the default domain search list (the list of suffixes tried for non-FQDN lookups). Given it ISN'T a nice, human-obvious word to append, it wouldn't have conflicted with anyone who already had a .local at the time, or with anyone in the future who assumes an obvious phrase like .local is not in use by some other resolver system.
XorNot 5 days ago [-]
Don't we have ".internal" for that?
jeroenhd 5 days ago [-]
The TLD hasn't been registered, but it has been added to the list of reserved names so effectively that's the domain you should use if you don't want to use real names.
.local also works fine, of course, if you enable mDNS and don't try to use normal DNS.
5 days ago [-]
5 days ago [-]
Rendered at 22:13:28 GMT+0000 (Coordinated Universal Time) with Vercel.
[1]: https://en.wikipedia.org/wiki/.internal
The OpenWrt wiki on Homenet suggests the project might be dead: https://openwrt.org/docs/guide-user/network/zeroconfig/hncp_...
Anyone familiar with HNCP? Are there any concerns of conflicts if HNCP becomes "a thing"? I have to say, .home.arpa doesn't exactly roll of the tongue like .internal. Some macOS users seem to have issues with .home.arpa too: https://www.reddit.com/r/MacOS/comments/1bu62do/homearpa_is_...
In my native language (Finnish) it's even worse, or better, depending on personal preference - it translates directly to .mildew.lottery-ticket.
home.arpa is for HNCP.
Use .internal.
> A network-wide zone is appended to all single labels or unqualified zones in order to qualify them. ".home" is the default; [...]
If I were going to do a bunch of extra work messing with configs I'd be far more inclined to switch all my personal stuff over to GNS for security and privacy reasons.
This can be addressed by hijacking an existing TLD for private use, e.g. mything.bb :^)
That's hardly the only example of annoying MONOBAR behavior.
This problem could have been avoided if we had different widgets for doing different things. Someone should have thought of that.
If you want to be sure, use mything./ : the . at the end makes sure no further domains are appended during DNS lookup, and the / makes the browser try to access to resource without Googling it.
https://github.com/nodesocket/godns
Ref: https://www.icann.org/en/board-activities-and-meetings/mater...
> As of March 7, 2025, the domain has not been standardized by the Internet Engineering Task Force (IETF), though an Internet-Draft describing the TLD has been submitted.
https://www.icann.org/en/board-activities-and-meetings/mater...
> Resolved (2024.07.29.06), the Board reserves .INTERNAL from delegation in the DNS root zone permanently to provide for its use in private-use applications.
https://developer.mozilla.org/en-US/docs/Web/Security/Secure...
So you don't need self-signed certs for HTTPS locally if you want to, for example, have a backend API and a frontend SPA running at the same time and talking to each other on your machine (OAuth2 authentication, for example, requires a secure context).
Won't `localhost:3000` and `localhost:3001` also both be secure contexts? Starting a random Vite project, which opens `localhost:3000`, `window.isSecureContext` returns true.
Usually you'd have a reverse proxy running on port 80 that forwards traffic to the appropriate service, and an entry in /etc/hosts for each domain, or a catch-all in dnsmasq.
Example: a docker compose setup using Traefik as a reverse proxy can have all internal services running on the same port (e.g. 3000) but each with a different domain. The reverse proxy then forwards traffic based on the Host header. As long as the host is set up properly, you could have any number of backends and frontends started like this, via docker compose scaling, or by starting the services of another project. Ports won't conflict with each other as they're only exposed internally (see the sketch below).
Now, whether you have a use for such a setup or not is up to you.
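A minimal sketch of that kind of compose file, with invented service and image names but Traefik's real Docker-provider label syntax:

    # docker-compose.yml (sketch; "myapp" is a placeholder image)
    services:
      traefik:
        image: traefik:v3.0
        command: --providers.docker=true
        ports:
          - "80:80"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
      app:
        image: myapp:latest
        labels:
          # route requests with Host: app.localhost to this container...
          - "traefik.http.routers.app.rule=Host(`app.localhost`)"
          # ...and tell Traefik which internal port the app listens on
          - "traefik.http.services.app.loadbalancer.server.port=3000"

Scaling `app` or adding more services never touches host ports; only Traefik's port 80 is published.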
1. not all browsers are the same
2. there is no official standard
3. even if there was, standards are often ignored
4. what is true today can be false tomorrow
5. this is mitigation, not security
> 1. not all browsers are the same
they are all aiming to implement the same HTML spec
> 2. there is no official standard
there literally is
> A context is considered secure when it meets certain minimum standards of authentication and confidentiality defined in the Secure Contexts specification
https://w3c.github.io/webappsec-secure-contexts/
> 3. even if there was, standards are often ignored
major browsers wouldn't be major browsers if this was the case
> 4. what is true today can be false tomorrow
standards take a long time to become standard and an even longer time to be phased out. this wouldn't sneak up on anyone
> 5. this is mitigation, not security
this is a spec that provides a feature called "secure context". this is a security feature. it's in the name. it's in the spec.
> 5.1. Incomplete Isolation
>
> The secure context definition in this document does not completely isolate a "secure" view on an origin from a "non-secure" view on the same origin. Exfiltration will still be possible via increasingly esoteric mechanisms such as the contents of localStorage/sessionStorage, storage events, BroadcastChannel, and others.
> 5.2. localhost
>
> Section 6.3 of [RFC6761] lays out the resolution of localhost. and names falling within .localhost. as special, and suggests that local resolvers SHOULD/MAY treat them specially. For better or worse, resolvers often ignore these suggestions, and will send localhost to the network for resolution in a number of circumstances.
>
> Given that uncertainty, user agents MAY treat localhost names as having potentially trustworthy origins if and only if they also adhere to the localhost name resolution rules spelled out in [let-localhost-be-localhost] (which boil down to ensuring that localhost never resolves to a non-loopback address).
> 6. Privacy Considerations
>
> The secure context definition in this document does not in itself have any privacy impact. It does, however, enable other features which do have interesting privacy implications to lock themselves into contexts which ensures that specific guarantees can be made regarding integrity, authenticity, and confidentiality.
>
> From a privacy perspective, specification authors are encouraged to consider requiring secure contexts for the features they define.
This does not qualify as the "this" in my original comment.
Not a single command, and you still have to add hostname resolution. But you don't have to programmatically edit config files or restart the proxy to stand up a new dev server!
Check it out and let me know what you think! (Free, MIT-licensed, single-binary install)
Basically, it wraps up the instructions in this blog post and makes everything easy for you and your team.
https://github.com/peterldowns/localias
The certs and keys live in the localias application state directory on your machine:
The whole nicety of localias is that you can create domain aliases for any domain you can think of, not just ".localhost". For instance, on my machine right now, the aliases are:
I really wish there was a safer way to do this, i.e. a way to tag a trusted CA as "valid for localhost use only". The article mentions this in passing
> The sudo version of the above command with the -d flag also works but it adds the certificate to the System keychain for all users. I like to limit privileges wherever possible.
But this is a clear case of https://xkcd.com/1200/.
Maybe this could be done using the name constraint extension marked as critical?
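For what it's worth, recent OpenSSL (1.1.1+) can mint such a root in one command; a sketch with invented file names, using the real nameConstraints extension syntax:

    # self-signed root that clients should only trust for .localhost names
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout local-ca.key -out local-ca.crt \
      -subj "/CN=Localhost-only dev CA" \
      -addext "basicConstraints=critical,CA:TRUE" \
      -addext "keyUsage=critical,keyCertSign,cRLSign" \
      -addext "nameConstraints=critical,permitted;DNS:.localhost"

Whether a given TLS client actually enforces name constraints on a trust anchor is another matter; support has historically been uneven.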
Of note, it doesn't work on macOS. I recall having delivered a coding assignment for a job interview long ago, and the reviewer said it didn't work for them, although the code all seemed correct to them.
It turned out that on macOS, you need to explicitly add any subdomains of .localhost to /etc/hosts.
I'm still surprised by this; I always thought that localhost was a highly standard thing covered in the RFC long long ago… apparently it isn't, and macOS still doesn't handle this TLD.
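Concretely (hostname invented for illustration), the macOS workaround is just:

    # /etc/hosts
    127.0.0.1   myapp.localhost
    ::1         myapp.localhost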
No, not here.
That said, I do recommend the use of the internal. zone for any such setup, as others have commented. This article provides some good reasons why (at least for .local) you should aim to use a standards-compliant internal zone: https://community.veeam.com/blogs-and-podcasts-57/why-using-...
Not so different from you, but without even registering the vanity domain. Why is this such a bad idea?
It's better to use a domain you control.
I'm a fan of buying whatever is cheapest to extend (.ovh is great value) and using real Let's Encrypt certificates (via the DNS challenge) for any subdomain or wildcard, so that any device gets the "green padlock" for a totally local service.
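As a sketch, assuming the certbot-dns-ovh plugin and a made-up domain:

    # wildcard cert via the DNS-01 challenge; no inbound exposure needed
    certbot certonly --dns-ovh \
      --dns-ovh-credentials ~/.secrets/ovh.ini \
      -d "*.home.example.ovh"

The resulting cert can then be handed to whatever reverse proxy fronts the local services.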
https://krebsonsecurity.com/2020/02/dangerous-domain-corp-co...
Honestly for USD5/year why don't you just buy yourself a domain and never have to deal with the problem?
I guess the lesson is to deploy a self-signed root CA in your infra early.
[0] https://smallstep.com/docs/step-ca/
I suggested using a domain given they already have Caddy set up and it's inexpensive to acquire a cheap domain. It's also less of a headache in my experience.
Actually, now that I've linked the docs, it seems they use smallstep internally as well haha
[0] https://caddyserver.com/docs/automatic-https#local-https
Though you could register a name like ch.ch, get a wildcard certificate for *.ch.ch, insert local.ch.ch in the hosts file, and use the certificate in the proxy; that would even work on the go.
Is that a new thing? I heard previously that if you wanted to do DNS/domains for a local network you had to expose the list externally.
Yes, this also works under macOS, but I remember there used to be a need to explicitly add these addresses to the loopback interface. Under Linux and (IIRC) Windows these work out of the box.
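If you hit that on macOS, the classic workaround is a loopback alias (the address here is arbitrary):

    # make 127.0.0.2 usable on macOS's loopback interface
    sudo ifconfig lo0 alias 127.0.0.2 up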
I previously used differing 127.0.0.0/8 addresses for each local service I ran on my machine. It worked fine for quite a while but this was in pre-Docker days.
Later on I started using Docker containers. Things got more complicated if I wanted to access an HTTP service both from my host machine and from other Docker containers. Instead of having your services exposed differently inside a Docker network and outside of it, you can consistently use the IPs and ports you expose/map.
If you're using 127.0.0.0/8 addresses, then this won't work. The local loopback addresses aren't routed to the host computer when sent from a Docker container; they're routed to the container. In other words, 127.0.0.1 inside Docker means "this container", not "this machine".
For that reason I picked some other unused IP block [0] and assigned that block to the local loopback interface. Now I use those IPs for assigning to my docker containers.
I wouldn't recommend using the RFC 1918 IP blocks since those are frequently used in LANs and within Docker itself. You can use something like the link-local IP block (169.254.0.0/16), which I've never seen used outside of the AWS EC2 metadata service. Or you can use the carrier-grade NAT IP block (100.64.0.0/10). Or even some IP block that's assigned for public use but never used, although that can be risky.
I use Debian Bookworm. I can bind 100.64.0.0/16 to my local loopback interface by creating a file under /etc/network/interfaces.d/ with the following
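A stanza along these lines does the trick (the filename and exact address are illustrative):

    # /etc/network/interfaces.d/loopback-cgnat
    auto lo:0
    iface lo:0 inet static
        address 100.64.0.1
        netmask 255.255.0.0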
Once that's set up I can expose the port of one Docker container at 100.64.0.2:80, another at 100.64.0.3:80, etc.
[0] https://en.wikipedia.org/wiki/Reserved_IP_addresses
https://www.man7.org/linux/man-pages/man8/libnss_myhostname....
(There are lots of other useful NSS modules, too. I like the libvirt ones. Not sure if there's any good way to use these alongside systemd-resolved.)
The ability to add host entries via an environment variable turned out to be more useful than I'd expected, though mostly for MITM(proxy) and troubleshooting.
1: https://github.com/aib/nss-userhosts
That is of course unless you really intend to send an email to someone at test@gmail.com.
Each service is then exposed via '<service>.local.<domain>'.
This has been working flawlessly for me for some time.
I use it extensively on my LAN with great success, but I have Macs and Linux machines with Avahi. People who don't shouldn't mess with it...
https://en.wikipedia.org/wiki/.local
https://en.wikipedia.org/wiki/.localhost
Practically speaking, HTTPS on a LAN is essentially useless, so I don't see the benefits. If anything, the current situation allows the user to apply TOFU to local devices by adding their self-signed certs to the trust store.
The existing exception mechanisms already work for this, all you need to do is click the "continue anyway" button.
Public wifi isn't a thing? Nobody wants to admin the router on a wifi network where there might be untrusted machines running around?
In practice, you probably want an authorized network for management, and an open network with the management interface locked out, just in case there's a vulnerability in the management interface allowing auth bypass (which has happened more often than anyone would like).
I agree on the latter, but that means your IoT devices are accessible through both networks and must be able to discriminate which requests come from the insecure interface and which from the secure admin one, which isn't practical for lay users to configure either. I mean, a router admin screen can handle that, but what about other devices?
I know it seems pedantic, but this UI problem is one of many reasons why everything goes through the Cloud instead of our own devices living on our own networks, and I don't like that controlling most IoT devices (except router admin screens) involves going out to the Internet and then back to my own network. It's insecure and stupid and violates basic privacy sensibilities.
Ideally I want end users to be able to buy a consumer device, plug it into their router, assign it a name and admin-user credentials (or notify it about their credential server if they've got one), and it's ready and secure without having to do elaborate network topology stuff or having to install a cert onto literally every LAN client who wants to access its public interface.
* It's reserved so it's not going to be used on the public internet.
* It is shorter than .local or .localhost.
* On QWERTY keyboards "test" is easy to type with one hand.
That said, I do use mDNS/Bonjour to resolve .local addresses (which is probably what breaks .local if you're using it as a placeholder for a real domain). Using .local as an imaginary LAN domain is a terrible idea. These days, .internal is reserved for that.
I have a more in depth write up here: https://www.silvanocerza.com/posts/my-home-network-setup/
Though I wanted a short URL; that's why I used .it anyway.
Yes, it does require a cert for TLS, and that cert will not be trusted by default. I have found that with OpenSSL and a proper script you can spin up a cert chain on the fly, and you can make these certs trusted in both Windows and Linux with an additional script. A script cannot make certs trusted in Safari on macOS, though.
I figured all this out in a prior personal app. In my current web server app I just don’t bother with trust. I create the certs and just let the browser display its page about high risk with the accept-the-consequences button. It's a one-time choice.
I used the .z domain because it's quick to type and it looks "unusual" on purpose. The dream was to set up a web UI so you wouldn't need to configure it in the terminal and could see which apps are up and running.
Then I stopped working the job where I had to remember 4 different port numbers for local dev and stopped needing it lol.
Ironically, for once it's easier to set this kind of thing up on macOS than on Linux, because configuring a local DNS resolver on Linux is a mess (cf. the Tailscale blog post "The Sisyphean Task Of DNS Client Config on Linux": https://tailscale.com/blog/sisyphean-dns-client-linux), whereas on a Mac it's a couple of commands.
I think Tailscale should just add this to their product, they already do all the complicated DNS setup with their Magic DNS, they could sprinkle in port forwarding and be done. It'd be a real treat.
If you are using other systems, then you can set this up fairly easily in your network DNS resolver. If you use dnsmasq (used by Pi-hole), then the following config works:
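A typical dnsmasq wildcard entry of that sort, with placeholder zone and IP:

    # resolve every name under .lan.example.com to the docker host
    address=/.lan.example.com/192.168.1.10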
There are similar configs for unbound or whatever you use.
I have a ready-to-go docker-compose setup using Traefik here: https://github.com/georgek/traefik-local
Rather than do all this manually each time and worry about port numbers you just add labels to docker containers. No ports, just names (at least for http stuff).
A possible disadvantage is that specifying a single IP to listen on means the HTTP server won't listen on your LAN IP address, which you might want.
That's not redirection per se, a word that's needlessly overloaded to the point of confusion. It's a smart use of a reverse proxy.
It would be nice if you all reserved the word "redirect" for something like HTTP 3xx behavior.
[0] https://datatracker.ietf.org/doc/rfc2606/
I've had nothing but trouble with .local and .localhost. Specifically, .local is intended for other purposes (multicast DNS) and .localhost has a nasty habit of turning into just "localhost" and in some cases, resolvers don't like to allow that to point to anything other than 127.0.0.1.
More recently, I've stuck to following the advice of RFC 6762 and use an actual registered domain for internal use, and then sub-domain from there. I don't use my "production" domain, but some other, unrelated one. For example, if my company is named FOO and our corporate homepage is at foo.com, I'll have a separate domain like bar.com that I'll use for app development (sub-domained as dev.bar.com, qa.bar.com, and maybe otherqa.bar.com as needed for different environments). That helps avoid any weirdness around .localhost, works well in larger dev/qa environments that aren't running on 127.0.0.1, and allows me to do things like have an internal CA for TLS instead of using self-signed certs with all of their UX warts.
For "local network" stuff, I stick to ".internal" because that's now what IANA recommends. But I would distinguish between how I use ".internal" to mean "things on my local network" from "this is where my team deploys our QA environment", because we likely aren't on the same network as where that QA environment is located and my ".internal" might overlap with your ".internal", but the QA environment is shared.
No daemons, and the only piece of configuration is adding a file to /etc/resolver: https://github.com/djanowski/hostel
I've been using it for myself so it's lacking documentation and features. For example, it expects to run each project using `npm run dev`, but I want to add Procfile support.
Hopefully other people find it useful. Contributions very much welcome!
myapp.localhost {
    tls internal
}
You're welcome.
BTW, there seems to be some confusion about *.localhost. It's been defined to mean localhost since at least 2013: https://datatracker.ietf.org/doc/html/rfc6761#section-6.3
This is cool, but it only seems to work on the host that has the /etc/hosts loopback redirect to *.localhost. I run my app on a home server and access it from multiple PCs on the LAN. I have several apps, each associated with a different port number. Right now, I rely on a start page (https://github.com/dh1011/start-ichi) to keep track of all those ports. I’m wondering if there’s an easy way to resolve custom domains to each of those apps?
That’s right: I invented a fictitious subdomain under one my ISP controlled and I never registered it or deployed public DNS for it. It worked great, for my dumb local purposes.
Example:
Thus it was easy to remember, easy to add new entries, and was guaranteed to stay out of the way of any future deployments, as long as my ISP never chose to also use the unique subdomain tag I invented...
https://weblogs.asp.net/owscott/introducing-testing-domain-l...
https://github.com/DeluxeOwl/localhttps
Never needed the entry
Whenever a host requests a DHCP lease, it receives its assigned IP, which matches the unbound record, so I can always access it by hostname.
I'm not entirely sure how I feel about it, but at least it's on a completely separate domain.
When I add a new site to my local setup, I just define a CNAME in Cloudflare and add an entry in Nginx proxy manager. It handles SSL via wildcard cert.
It's a neat trick, but it comes with some caveats. For instance, `localhost` often resolves to both 127.0.0.1 and ::1, but `.localhost` is described in RFC 2606 as having "traditionally been statically defined in host DNS implementations as having an A record pointing to the loop back IP address and is reserved for such use". In other words, your server may be binding to ::1 but your browser may be resolving 127.0.0.1. I'm sure later RFCs rectify the lack of IPv6 addressing, but I wouldn't assume everyone has updated to support those.
Another neat trick to combine with .localhost is using 127.0.0.0/8. There's nothing preventing you from binding server/containers to 127.0.0.2, 127.1.2.3, or 127.254.254.1. Quite useful if you want to run multiple different web servers together.
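For instance, with Python's built-in server (ports and addresses arbitrary):

    # two servers, same port, different loopback addresses
    python3 -m http.server 8080 --bind 127.0.0.2 &
    python3 -m http.server 8080 --bind 127.0.0.3 &

On Linux these work out of the box; macOS needs a loopback alias like the one mentioned upthread.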
Of course, in the early internet, the difference between a TLD and a host name wasn't quite as clear as it is right now.
I cannot ping xyz.localhost because it doesn't resolve.
Would be good to have the config all somewhere in my user's dir too.
Per-user subdomains for their apps on localhost.
EDIT: I'm on Linux and don't use launchd, so I'd still need the port number
Self sign a certificate and add it to your trusted certificate list.
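A minimal sketch (the hostname is a placeholder; -addext needs OpenSSL 1.1.1+):

    # one-year self-signed cert with a SAN, which modern browsers require
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout myapp.key -out myapp.crt \
      -subj "/CN=myapp.localhost" \
      -addext "subjectAltName=DNS:myapp.localhost"

Then import myapp.crt into your OS or browser trust store.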
Or - use https://pinggy.io
They also show having the webserver do the TLS; that might be helpful.
https://service.project.orb/
Forgot to add .local I see
What it does is it has a private network for the containers and itself (all containers get their own unique IP, so there's no port mapping needed).
http://orb.local simply lists all running containers.
The hostnames are automatically derived from the container name / directory name / compose project, but you can add other hostnames as well by adding Docker labels.
It works really well.
It works really well and means no setup on our developers' machines
Then we just have an entry for local.example.com in our vhosts and bam everything works as expected. No need to mess with /etc/hosts
Seriously though, one of the first things I did when I was hired as the sysadmin for a small company was to eliminate the need for memorizing/bookmarking ip-port combos. I moved everything to standard ports and DNS names.
Any services running on the same machine that needed the same ports were put behind a reverse proxy with virtual hosts to route to the right service. Each IP address was assigned an easy-to-remember DNS name. And each service was setup with TLS/SSL instead of the bare HTTP they had previously.
Rather big caveat IMO. As a side note, your domain doesn't seem to have an AAAA record (which [.]localhost binds to by default on most of my machines, at least).
I’m assuming that typo is intentional?