Hacker News | aragonite's comments

In the preprint they write:

> ... we observe extreme inequality in attention distribution. The Gini coefficient of 0.89 places HN among the most unequal attention economies documented in the literature. For comparison, Zhu & Lerman (2016) reported Gini coefficients of 0.68–0.86 across Twitter metrics. ... The bottom 80% of posts [on HN] receive less than 10% of total upvotes. (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5910263)

This could probably be explained by HN's unique exposure mechanism. Every post starts on /newest, and unless it gets picked up by the smaller group of users who browse /newest, it never reaches the front page where the main audience is. In most forums/subreddits, by contrast, a new post (unless it gets flagged as spam) usually gets some baseline exposure to the main audience before it sinks. On HN the main audience is downstream of an early gate, and missing that gate means being effectively invisible. IMO this fact alone could probably explain why "attention inequality" seems more extreme on HN.


This also explains how early performance can be predictive despite the lack of preferential attachment.

Some time ago I noticed that in Chrome, every time you click "Never translate $language", $language quietly gets added to the Accept-Language header that Chrome sends to every website!

My header ended up looking like a permuted version of this:

  en-US,en;q=0.9,zh-CN;q=0.8,de;q=0.7,ja;q=0.6
I never manually configured any of those extra languages in the browser settings. All I had done was tell Chrome not to translate a few pages on some foreign news sites. Chrome then turned those one-off choices into persistent signals attached to every request.

I'd be surprised if anyone in my vicinity shares my exact combination of languages in that exact order, so this seems like a pretty strong fingerprinting vector.

There was even a proposal to reduce this surface area, but it wasn't adopted:

https://github.com/explainers-by-googlers/reduce-accept-lang...


This is a general problem: the software tries to guess what you mean by things like this (it is not specific to this feature; other features of computer programs do it too; this is one specific case of that). Just because you do not want it to translate a language automatically does not necessarily mean that you can read it or that you want to request documents written in that language. Fingerprinting is not the only issue with this.


Is Chrome assuming that, since you don't want it to translate those pages/languages, you can read them and want them in your header? Interesting


I'd read it more generously than that. I think Chrome is trying to stop the server from choosing the language for you. By sending an Accept-Language header (which your browser does regardless of which browser you use; it's not a Chrome thing), the server can return the page in a language you've said you'll accept. By adding the languages you've told Chrome not to translate, it's attempting to show you pages in languages you want.

I imagine Chrome is really adding the language to your browser preferences when you choose not to translate a page, and the HTTP client in the browser is generating request headers based on your preferred languages. A small (and largely unimportant) semantic point, but it's possible that the Google translate team weren't aware of how adding a preferred language might impact user privacy. That isn't to excuse the behaviour; they should have checked.


PSA: Don't use Chrome.


Translating pages is literally the only thing I use Chrome for. The built-in translation works way better than other browsers, even though they also use Google Translate.


Firefox does not use Google Translate and performs the translation locally, which works great for the most common languages out there. For the less common ones you still have to go to Google Translate, but IME it's definitely not worth changing the browser to Chrome over.


Yeah I really like the Firefox translate. A rare win for recent Firefox.


I don't really like Firefox translate, despite having made the switch many years ago. For a long time it didn't have the (European) language of the country I live in. Now it does have it. Every time I want it to translate I have to manually find both languages in the insanely long dropdowns. It will not save it the way I want, but impressively seems to manage to always save it in the other direction...


> works great for the most common languages out there

Most of the time when I tried it the Firefox translations were obviously wrong or nonsense.


Ditching Chrome is something we need to teach everyone.

The DOJ is totally spineless and refuses to squash Google's absurd monopoly on the internet. We are literally the last line of defense, even though we really don't amount to much.

Perhaps we could start a grassroots movement.


You don’t need a grassroots movement when other movements doing this exact thing already exist. In fact it is likely counterproductive. Mozilla Foundation is the organization you want to support, or EFF.


> Mozilla Foundation is the organization you want to support

Mozilla Foundation is rudderless. I'm convinced the leadership are all Google plants who are keeping the "antitrust litigation sponge" from doing anything damaging to Chrome.


The new built-in translation in Firefox works pretty well! I never need to fallback to others, although forcing it to translate has weird UX.


Sorry but you're using a Google browser and Google translation service, when excellent alternatives to both exist. What did you expect regarding privacy?

A clueless person might not know any better, but you clearly do, and also you seemingly care. So why do you use Google all the same?


Safari does not use Google Translate and it works well. It even translates text on images BTW!


I don’t think safari uses google translate


There is an extension called twp or something like that for firefox. IME it is pretty good


PSA: only use Mullvad Browser or Tails, which are set up to be as bland and uniform as possible.


As uniform as possible is exactly the wrong way to go. It only takes one overlooked or newly discovered data point to make every person trying to look identical distinct. New fingerprinting techniques are being implemented all the time, so what's the point in taking chances when it's far easier to randomly change a browser's fingerprint for each site/connection, making it much harder to track any one browser over time.


Except I don't want to be flagged as a bot when I'm just visiting some website in my browser. (I also don't want to be flagged as a bot when I'm scraping some website with a bot).


Definitely a good first step, but it's not like Firefox and Safari are fingerprinting-proof.


Firefox does pretty damn well though, especially with privacy.resistFingerprinting set to true


Every time I manually touched the "fingerprinting" about:config settings, my entropy went up. I used the EFF site to test: https://coveryourtracks.eff.org/

AFAIK some of these options are there to be used by the Tor browser, which comes with strict configuration assumptions, and it doesn't translate well to normal Firefox usage. Especially if you change the window size on a non-standardized device. Mind you, the goal is not to block fingerprinting, but to not stand out. Safari on a macbook is probably harder to fingerprint than Firefox on your soldering iron.

However, judging by the fact that every data hungry website seemingly has a huge problem with VPN usage, I'd presume they are pretty effective and fingerprinting is not.


I've had good success with tracking tool tests and resistFingerprinting. Granted, I usually use it with uMatrix/NoScript most of the time, which cuts down on the available data a lot and maybe makes it an unfair test. One issue, I expect, is simply that not enough people use resistFingerprinting to add variation to the mix. Since it's off by default, and only a small % of users use Firefox and an even tinier percentage use resistFingerprinting (unlike your example of Tor, where probably most people on the Tor network use the Tor Browser), it's likely that simply blocking things is a fingerprint all on its own. The solution there would be to get more people using it :)

I will say one downside to using it is far more bot detection websites freaking out over generic information being returned to them, causing some sites to break (some of its settings break WebGL games too, due to low values). Using a different profile avoids this, or explicitly whitelisting certain sites in privacy.resistFingerprinting.exemptedDomains. Obviously if a site is using a generic tracking service for bot detection, that kills a fair amount of the benefit of the flag, so a separate profile might be best. I wish Firefox had a container option for this.

... and, not too sure what you mean by changing window size on a non-standardised device. They do try to ensure the window sizes are at standard intervals, as if they were fullscreened at typical widths, to reduce fingerprinting, but surely that applies to using Tor too? I mean, people don't use Tor on dedicated monitors at standard sizes.


Oh, and a bit of followup. I tried the EFF cover your tracks on a Firefox profile with resist fingerprinting, and almost all the bits of identifying information came from the window size (which EFF considers "brittle") and the UA (I was testing in Firefox Nightly).

Apparently you need to add the hidden pref: privacy.resistFingerprinting.letterboxing

Enabling letterboxing knocked off 5 bits of identifying information. Apparently my 1800px wide letterbox was still pretty identifiable, but, an improvement.

Setting a Chrome user agent string using a user agent string manager dropped that one from 12-ish bits to <4 bits. 'Course, that has the disadvantage of reducing Firefox visibility online further, and probably being more recognisable in combination with the other values (like Mozilla in the WebGL info). Using the UA of Firefox stable for Windows was <5 bits, so probably best to use that if on Linux. Although, it might conflict with the font list unless a Windows font list was pulled in.


privacy.resistFingerprinting has potentially-unwanted side-effects, like wiping out most of your browser history (instead of the more sensible approach of just disabling purple links). I also recall something about it getting removed or nerfed, though I'm not sure whether that was a mere proposal.


It does not wipe your browser history. I can definitely attest to that since my generic JS active + resistFingerprinting profile has a history going back years. It does set your timezone to UTC in JS on websites. I've mostly encountered that when playing Wordle ;)


It also does (or at least used to) mess with dates, due to it attempting to hide what time zone you're in.


The browser should reasonably know what time zone you're in and what time zone you're reporting to the website and translate between them automatically.


Yeah, "should". Too bad it's unfeasible. As soon as you e.g. print the current date as part of a paragraph somewhere, the browser loses track of it, and the website can just read the element's content and parse it back.


What about DuckDuckGo? We need a simple chart: (1) which browsers are good at resisting fingerprinting; (2) for each browser, does it work on Android, iOS, macOS, Windows, and Linux; (3) what settings are needed to achieve this.

For bonus points, is there no way to strip all headers on Chrome, or control them better?


This is my question also. I tend not to use apps, and use the DuckDuckGo browser.

I sometimes do use Safari, which is a more convenient browser. It would be ironic if the DDG browser were less private than Safari.


Modern Safari is pretty damned good at randomizing fingerprints with Intelligent Tracking Prevention. With iOS 26 and macOS 26, it's enabled in both private and non-private browser windows (it used to be private mode only).

All "fingerprint" tests I've run have returned good results.


Unfortunately, it's closed source and only available on Apple devices.


I haven’t tried 26, but I remember it didn’t used to be so great.


Tor Browser (based on Firefox) is.


That will just make you stand out more.


You can change the reported UA header independently of the UA you use.


If I was a fingerprinting company, I'd be cross-referencing signals between browsers for sure.

If the browser header says windows but the fonts available says linux, that's a very distinctive signal.

And if the UA says Chrome but some other signal says not-chrome, that's very distinctive as well.


Surely this is true, but if you’re a fingerprinting company aren’t you making so much money violating the privacy of the masses that it’s not worth your time going after the tiny set of Freedom Nerds trying to evade you?


They aren't specifically going after you... they just try to create a unique hash from everything they can, and by doing weird things to your system you are making a truly unique hash easier to produce.


Yeah, and my passwords are so obvious and stupid, nobody's gonna guess them!

I think you are falling for a technical fallacy. It's not costing them any more time.


You said it better than I did.


You can change the header, but browser developers are not that dumb: they added properties like "navigator.platform" which do not change and immediately give you away. Consider also writing a browser extension to patch these properties. Also, I think the DRM module (Widevine) that is bundled with browsers can report the actual software version. Sadly it is undocumented, so I don't know what information it can provide, but I notice warnings from Firefox about attempts to use DRM on various sites like Yandex Market.


The article also mentions this, and suggests the UA is not a silver bullet. That said, they didn't go into specifics. I'm assuming there are other details that correlate to particular browsers that will betray a false UA. Plus, having a UA that says Chrome while including an extension that's exclusive to Safari (for example) will not only contradict the UA, but will also be a highly distinctive datapoint for fingerprinting in and of itself.


don't use the same browser regardless - the key is to compartmentalise.


I only use it when I want to be tracked.


Using Chrome and caring about privacy? I thought, after Google killed uBlock Origin, it had become beyond clear these two things were incompatible, https://news.ycombinator.com/item?id=41905368


Most people using chrome are also using Google's DNS servers too which hands them a list of every single domain you visit.


uBlock Origin just got replaced with uBlock Origin Lite for most people


Which, by design, doesn't protect you from actual spying, https://github.com/uBlockOrigin/uBOL-home/wiki/Frequently-as...


There's a way to enforce loading UBo in Chromium but you need to download the extension by hand (git clone it from GitHub) and load it in "developer mode" in the extension settings. Also, you need to enable some legacy options related to extensions in about:flags.


Which really puts a massive spotlight on you.


How does it determine the order?

Clearly it thinks you prefer Chinese to German. Was that correlated with the frequency of your requests on Google Translate? With your browsing history? With your shopping history?


$lang_header = $lang_header + $the_lang_choice_that_was_just_made
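In other words, it's likely just an append. A speculative sketch of the observed behaviour (this is a guess at the mechanism, not Chrome's actual code; `buildAcceptLanguage` is a made-up helper), with q-values stepping down by 0.1 in insertion order, matching the header shown upthread:

```javascript
// Hypothetical: rebuild the Accept-Language header from an ordered
// preference list, assigning decreasing q-values (floored at 0.1).
function buildAcceptLanguage(langs) {
  return langs
    .map((tag, i) => {
      const q = Math.max(0.1, 1 - i * 0.1);
      return i === 0 ? tag : `${tag};q=${q.toFixed(1)}`;
    })
    .join(",");
}

let prefs = ["en-US", "en"];
prefs.push("zh-CN"); // user clicks "Never translate Chinese"
prefs.push("de");    // user clicks "Never translate German"
buildAcceptLanguage(prefs); // → "en-US,en;q=0.9,zh-CN;q=0.8,de;q=0.7"
```

If that's roughly right, the order is simply the order in which you clicked "Never translate", not anything inferred from history.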


Hmmm... YouTube has been getting confused about the language and displaying random languages for the closed captions on videos. This was happening to me across smart TVs, but I access YouTube from various devices and browsers... mostly Chrome when using a browser.


> There was even a proposal to reduce this surface area, but it wasn't adopted:

>> Instead of sending a full list of the users' preferred languages from browsers and letting sites figure out which language to use, we propose a language negotiation process in the browser, which means in addition to the Content-Language header, the site also needs to respond with a header indicating all languages it supports

Who thought that made sense? Show me the website that (1) is available in multiple languages, and also (2) can't display a list of languages to the user for manual selection.


What language do you put that list in? Would you still want to show it to every visitor when you know most of them speak a particular language?

I used to do some work in this area. The first question is difficult, and the answer to the second is no. We had the best results when we used various methods to detect the preferred language and then put up a language selector with a welcome message in that language. After they made a selection, it would stick on return visits.


> What language do you put that list in? Would you still want to show it to every visitor when you know most of them speak a particular language?

Judging by... a large number of websites, you make the list available in a topbar, and each language is named in itself. You don't apply one language to the entire list.

Here's the first page that popped into my head as one that would probably offer multiple languages (and it does!):

https://www.dyson.com/en

They've got the list in a page footer instead of a header, but otherwise it's an absolutely standard language selector. It does technically identify countries rather than languages. The options range from Azərbaycan to Україна. They are -- of course -- displayed to every visitor.

Why would you want to force someone to consume your website in the wrong language?

And why would the list be in a single language, again?


You’re looking at it with the perspective of someone who understands the language the site defaults to. Most non-native speakers have a hard time finding the link and they leave.


No, I'm looking at it from the perspective of someone who has needed to use that language selector in the past. Understanding the language the site defaults to wouldn't help, because the selector doesn't use that language anyway.

> Most non-native speakers have a hard time finding the link

You might notice the colorful flag right next to it.


Flags are a terrible way to indicate language. At best, they are unclear. At worst, they can be offensive.

Assuming you are a US company catering to non-English speakers in the US, which flag would you use for Spanish? Which flags would you use to differentiate between Mandarin and Cantonese? What would you do in Canada where they speak English and French? Show a French flag?


Except they're recognizable across languages. Faced with a UI in a language I don't know, going to settings -> languages -> my preferred language is a total guessing game. Meanwhile, if I'm confronted by a UI that has a tiny flag icon at the top, I know I can click on that and get to something familiar. Yes, someone looking to get offended can nitpick your flag choice, but a Spanish flag vs a Mexican flag for Spanish will at least let the user get to something closer to what they know, even though there's quite a bit of difference on the ground between Spanish in Spain and Spanish in Mexico. If your internationalization team is well funded enough to offer both, then show both flags. Same for UK English and American English; Simplified Chinese, Traditional Chinese, and Cantonese; and yes, Québécois French and French in France. Offer as many flags as you actually have translations for. If you can have a Chinese flag and a Hong Kong flag, users will appreciate it. Having a two-level menu is also an option: click on the Canada flag, which then offers Français and English.


Well, one of us has done research and work in this area. I don’t know what you’ve been doing. All of your suggestions perform poorly in the real world.


You can determine a user's language from IP address location. Of course, there are users with VPNs, but they are probably used to seeing foreign content. For example, YouTube shows me advertisements in a language I don't understand despite my language header saying I only understand "en-US" and "en". So this header is unnecessary; even YouTube ignores it.

Also, when using VPN, Google typically uses a language based on IP address, not my language header. I assume the header is only useful for fingerprinting today.


> You can determine user's language from IP address location.

There are reasons why it might not work (VPN is only one of them; there are others such as places with multiple languages, people traveling to foreign countries, and others), although it is also a bad idea for other reasons as well.

If the user specifies the language then you should use that one. I think it would probably be better to use the following order of figuring out which language you should want:

1. If the URL specifies the language to use, then use the language specified by the URL.

2. If the language is not specified by the URL, use the language specified by any cookies that are set for the purpose of selecting the language.

3. If the language is not specified by URL or cookies, but the user is logged in and the user account has a language setting, use the language specified by the user account. (If TLS client authentication is being used, then you might consider adding an extension into the client's X.509 certificate to select the language.)

4. If the language is not specified by URL or cookies or the user's account, or the user is not logged in, use the Accept-Language header.

5. If the language is not specified by URL or cookies or the user's account, or the user is not logged in, or the Accept-Language header is not present or cannot be parsed or does not specify any language that the request file is available in, then use the default, such as the language that it was originally written in.
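The order above could be sketched roughly like this (all names here are illustrative, not from any real framework; the certificate-extension case from step 3 is omitted for brevity):

```javascript
// Pick a language by trying each source in priority order:
// URL > cookie > logged-in account setting > Accept-Language > default.
// `acceptLang` is the already-parsed, quality-ordered header list.
function pickLanguage({ urlLang, cookieLang, accountLang, acceptLang, available, fallback }) {
  const candidates = [urlLang, cookieLang, accountLang, ...(acceptLang || [])];
  for (const lang of candidates) {
    // Skip sources that are unset, and languages the document
    // is not actually available in (step 5's "cannot be used" case).
    if (lang && available.includes(lang)) return lang;
  }
  return fallback; // the language the document was originally written in
}

pickLanguage({
  urlLang: null,
  cookieLang: null,
  accountLang: "de",     // account prefers German, but no German version exists
  acceptLang: ["fr", "en"],
  available: ["en", "fr"],
  fallback: "en",
}); // → "fr"
```

A URL-specified language, when present and available, always wins, which keeps shared links stable.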


> You can determine user's language from IP address location.

I live in Hyderabad, Telangana, India. I do not yet speak enough Telugu or Hindi or Urdu to be useful, and cannot read Hindi or Urdu at all; but I’m a foreigner who grew up on English only, rather rare around here, so let’s consider native Indians instead. Many can speak these languages but not read them in their native scripts, only romanised (in which case they can probably speak English tolerably). And many (many) come from other parts of India (or even Nepal) and can’t speak Telugu. Or are Muslim and at least prefer to deal in Hindi, often not having very good Telugu. And so on. It’s messy.

Some IP geolocation doesn’t even get the city right—I’ve seen Noida suggested, which is up north in Hindi territory.


More and more websites with international audiences literally do this themselves, putting a language (sometimes even currency) select box at the top when they detect that your settings don't match the page you are on.

Why not have this negotiation implemented at the browser level?


Because that prevents all of your users from selecting the language they want. It's a terrible idea with no upside and not-high-but-still-not-no downside.


It doesn't, because that's an optional negotiation. Try Apple.com in a different country/locale than yours, you'll see how it behaves.


It has a remarkably inconspicuous language selector, also using the names of countries rather than languages, located in the page footer. Compared to Dyson, Apple's list of country names is much more willing to use English in preference to whatever someone from that country would call it. This isn't consistent; many countries are rendered in their own language (日本 / Ελλάδα) and many aren't (Georgia / Kazakhstan).

The page defaults to the locale that you request in the URL. https://www.apple.com/ shows up in English, regardless of your country;† https://www.apple.com/bg/ shows up in Bulgarian. Switching your preferred location simply takes you to the page for that location. (Dyson does the same thing.) Some locations support more than one language; there's https://www.apple.com/lae/ for Latin America (English) and https://www.apple.com/la/ for Latin America (Spanish). If you're on the page for a location like this, a language selector (with language names) displays next to the location selector. In the case of Latin America, only two languages are supported, and the language selector automatically displays "Español" if you're on the English site and "English" if you're on the Spanish site, which makes sense but won't generalize.

Apple's selector is inconspicuous because it refuses to display flags, which I would guess is due to much higher political exposure than Dyson. So it's lower-quality in two ways, but fundamentally the same approach. The user asks for a language, and the site honors that.

Given that I presented Dyson as an example of doing language selection correctly, I'm confused about what you wanted me to see on apple.com. They're trying to do the right thing, but less effectively.

† I tested this by accessing the site(s) from Mongolia, Vietnam, and Morocco using ExpressVPN.


That was my point. Not comparing Apple/Dyson/whatever, but showing that website do have this need.

If this was designed and implemented as a standard at the browser level, we would get something better in the end, rather than re-implementations on each and every website.


No, you wouldn't. Having it done by the browser means it sucks. That is a very, very, very bad idea. You need to do it on the website.


Sure, if that suits you...


>In other words, I believe the reason this code is hard to read for many who are used to more "normal" C styles is because of its density; in just a few dozen lines, it creates many abstractions and uses them immediately, something which would otherwise be many many pages long in a more normal style.

I also spent some time with the Incunabulum and came away with a slightly different conclusion. I only really grokked it after going through and renaming the variables to colorful emojis (https://imgur.com/F27ZNfk). That made me think that, in addition to informational density, a big part of the initial difficulty is orthographic. IMO two features of our current programming culture make this coding style hard to read: (1) Most modern languages discourage or forbid symbol/emoji characters in identifiers, even though their highly distinctive shapes would make this kind of code much more readable, just as they do in mathematical notation (there's a reason APL looked the way it did!). (2) When it comes to color, most editors default to "syntax highlighting" (each different syntactic category gets a different color), whereas what's often most helpful (esp. here) is token-based highlighting, where each distinct identifier (generally) gets its own color (This was pioneered afaik by Sublime Text which calls it "hashed syntax highlighting" and is sometimes called "semantic highlighting" though that term was later co-opted by VSCode to mean something quite different.) Once I renamed the identifiers so it becomes easier to recognize them at a glance by shape and/or color the whole thing became much easier to follow.
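The core idea behind token-based ("hashed") highlighting fits in a few lines; the hash and hue mapping below are arbitrary choices of mine, not Sublime Text's actual algorithm:

```javascript
// Hash each identifier to a stable hue, so the same name always renders
// in the same colour wherever it appears, unlike syntax highlighting,
// which colours by category and paints every variable identically.
function identifierHue(name) {
  let h = 0;
  for (const ch of name) h = (h * 31 + ch.codePointAt(0)) >>> 0;
  return h % 360; // degrees on the HSL colour wheel
}

// e.g. style a token as `hsl(${identifierHue(token)}, 70%, 45%)`
identifierHue("wd") === identifierHue("wd"); // always true: colour is a pure function of the name
```

Because the colour depends only on the spelling of the name, you can visually track a terse one-letter variable across dense code the same way the emoji renaming does, without actually renaming anything.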


I've experimented a few times with coloring my variables explicitly (using a prefix like R for red, hiding the letters, etc) after playing with colorforth. I agree getting color helps with small shapes, but I think the colors shouldn't be arbitrary: every character Arthur types is a choice about how the code should look, what he is going to need, and what he needs to see at the same time, and it seems like a missed opportunity to turn an important decision about what something is named (or colored) over to a random number generator.


> (1) Most modern languages discourage or forbid symbol/emoji characters in identifiers

> (2) When it comes to color,

Call me boomer if you wish, but if you can't grasp the value of having your code readable on a 24 rows by 80 columns, black and white screen, you are not a software developer. You are not even a programmer: at most, you are a prompt typist for ChatGPT.


While I agree that, if the function at hand can’t fit in a 25x80 window it most likely should be broken in smaller functions, there are kinder ways to say that.

I also joke God made the VT100 with 80 columns for a reason.


... For the reason that IBM made their 1928 card with 80 columns, in an attempt to increase the storage efficiency of Hollerith’s 45-column card without increasing its size?

That said, ~60 characters per printed line has been the typographer’s recommendation for much longer. Which is why typographers dislike Times and derivatives when used on normal-sized single-column pages, as that typeface was made to squeeze more characters into narrow newspaper columns (it’s in the name).


The fact that the claim is wrong on multiple levels (IBM punchcards, VT100 did 132 columns as well) is part of the fun.


23x75 to allow for a status bar and the possibility that the code may be quoted in an email. Also, it’s green on black. Or possibly amber.

And yet I still have a utility named "~/bin/\uE43E"


\uExxx is in the private use area. What is it?


That’s private, obviously.


Fun fact: both HN and (no doubt not coincidentally) paulgraham.com ship no DOCTYPE and are rendered in Quirks Mode. You can see this in devtools by evaluating `document.compatMode`.

I ran into this because I have a little userscript I inject everywhere that helps me copy text in hovered elements (not just links). It does:

[...document.querySelectorAll(":hover")].at(-1)

to grab the innermost hovered element. It works fine on standards-mode pages, but it's flaky on quirks-mode pages.

Question: is there any straightforward & clean way as a user to force a quirks-mode page to render in standards mode? I know you can do something like:

document.write("<!DOCTYPE html>" + document.documentElement.innerHTML);

but that blows away the entire document & introduces a ton of problems. Is there a cleaner trick?


I wish `dang` would take some time to go through the website and make some usability updates. HN still uses a font-size value that usually renders to 12px by default as well, making it look insanely small on most modern devices, etc.

At quick glance, it looks like they're still using the same CSS that was made public ~13 years ago:

https://github.com/wting/hackernews/blob/5a3296417d23d1ecc90...


I trust dang a lot; but in general I am scared of websites making "usability updates."

Modern design trends are going backwards. Tons of spacing around everything, super low information density, designed for touch first (i.e. giant hit-targets), and tons of other things that were considered bad practice just ten years ago.

So HN has its quirks, but I'd take what it is over what most 20-something designers would turn it into. See old.reddit Vs. new.reddit or even their app.


There's nothing trendy about making sure HN renders like a page from 15 years ago should. Relative font sizes are just so basic they should count as a bug fix and not "usability update".


Overall I would agree but I also agree with the above commenter. It’s ok for mobile but on a desktop view it’s very small when viewed at anything larger than 1080p. Zoom works but doesn’t stick. A simple change to the font size in css will make it legible for mobile, desktop, terminal, or space… font-size:2vw or something that scales.


It’s not ok for mobile. Misclicks all around if you don’t first pinch zoom to what you are trying to click.


Indeed, the vast majority of things I've flagged or hidden have been the accidental result of skipping that extra step of zooming.


> Zoom works but doesn’t stick.

perhaps try using a user agent that remembers your settings? e.g. firefox


Perhaps don't recommend workarounds for a site's failure to use standards.


Setting aside the relative merits of 12pt vs 16pt font, websites ought to respect the user's browser settings by using "rem", but HN (mostly[1]) ignores this.

To test, try setting your browser's font size larger or smaller and note which websites update and which do not. And besides helping to support different user preferences, it's very useful for accessibility.

[1] After testing, it looks like the "Reply" and "Help" links respect large browser font sizes.
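For illustration, a rem-based variant of such rules might look like the sketch below. This is purely hypothetical; the selector names and sizes are illustrative, not taken from the actual HN stylesheet:

```css
/* 1rem tracks the user's browser font-size preference
   (16px by default), instead of hardcoding pixels. */
body     { font-size: 1rem; }     /* instead of e.g. font-size: 10pt */
.comment { font-size: 0.875rem; } /* 14px at the default setting,
                                     scales if the user changes it */
```

With rules like these, changing the browser's base font size rescales the whole page proportionally.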


Side note: pt != px. 16px == 12pt.


You are correct, it should have been "px".


Please don’t. HN has just the right information density with its small default font size. In most browsers it is adjustable. And you can pinch-zoom if you’re having trouble hitting the right link.

None of the ”content needs white space and large fonts to breathe“ stuff or having to click to see a reply like on other sites. That just complicates interactions.

And I am posting this on an iPhone SE while my sight has started to degrade from age.


Yeah, I'm really asking for tons of whitespace and everything to breathe sooooo much by asking for the default font size to be a browser default (16px) and updated to match most modern display resolutions in 2025, not 2006 when it was created.

HN is the only site I have to increase the zoom level, and others below are doing the same thing as me. But it must be us with the issues. Obviously PG knew best in 2006 for decades to come.


On the flipside, HN is the only site I don't have to zoom out of to keep it comfortable. Most sit at 90% with a rare few at 80%.

16px is just massive.


Sounds like your display scaling is a little out of whack?


Yeah, this is like keeping a sound system equalized for one album and asserting that modern mastering is always badly equalized. Tune the system to the standard, and adjust for the oddball until it's remastered.


Except we all know what happened to the "standard" with the Loudness War.


I'm not a fan of extreme compression and limiting, but doing so in a multiband fashion (as occurs due to the loudness war) actually does result in more consistent EQ from album to album, label to label, genre to genre, etc., which virtually eliminates the need to adjust EQ at playback time between each post-war selection.


You're obviously being sarcastic, but I don't think that it's a given that "those are old font-size defaults" means "those are bad font-size defaults." I like the default HN size. There's no reason that my preference should override yours, but neither is there any reason that yours should override mine, and I think "that's how the other sites are" intentionally doesn't describe the HN culture, so it need not describe the HN HTML.


On mobile at least, I find that I can frequently zoom in, but can almost never zoom out, so smaller text allows for more accessibility than bigger text.


Browser (and OS) zoom settings are for accessibility; use that to zoom out if you've got the eyes for it. Pinching is more about exploring something not expected to be readily seen (and undersized touch targets).


Don't do this.


I agree, don't set the default font size to ~12px equiv in 2025.


[flagged]


Do you think that "Don't do this" as a reply comment is following the spirit of the guidelines? It doesn't seem very thoughtful or substantive to me.


Content does need white space.

HN has a good amount of white space. Much more would be too much, much less would be not enough.


No kidding. I've set the zoom level so long ago that I never noticed, but if I reset it on HN the text letters use about 2mm of width in my standard HD, 21" display.


> but if I reset it on HN the text letters use about 2mm of width in my standard HD, 21" display.

1920x1080 24" screen here, .274mm pitch which is just about 100dpi. Standard text size in HN is also about 2mm across, measured by the simple method of holding a ruler up to the screen and guessing.

If you can't read this, you maybe need to get your eyes checked. It's likely you need reading glasses. The need for reading glasses kind of crept up on me because I either work on kind of Landrover-engine-scale components, or grain-of-sugar-scale components, the latter viewed down a binocular microscope on my SMD rework bench and the former big enough to see quite easily ;-)


Shameless plug: I made this userstyle to make HN comfortable to handle both on desktop and mobile. Minimal changes (font size, triangles, tiny bits of color), makes a huge difference, especially on a mobile screen.

https://userstyles.world/style/9931/


Thanks for that, it works well, and I like the font choice! Though personally I found the font-weight a bit light and changed it to 400.


> HN still uses a font-size value that usually renders to 12px by default as well, making it look insanely small on most modern devices, etc.

On what devices (or browsers?) it renders "insanely small" for you? CSS pixels are not physical pixels, they're scaled to 1/96th of an inch on desktop computers, for smartphones etc. scaling takes into account the shorter typical distance between your eyes and the screen (to make the angular size roughly the same), so one CSS pixel can span multiple physical pixels on a high-PPI display. Font size specified in px should look the same on various devices. HN font size feels the same for me on my 32" 4k display (137 PPI), my 24" display with 94 PPI, and on my smartphone (416 PPI).


On my MacBook it's not "insanely small", but I zoom to 120% for a much better experience. I can read it just fine at the default.


On my standard 1080p screen I gotta set it to 200% zoom to be comfortable. Still LOTS of content on the screen and no space wasted.


> At quick glance, it looks like they're still using the same CSS that was made public ~13 years ago:

It has been changed since then for sure though. A couple of years ago the mobile experience was way worse than what it is today, so something has clearly changed. I think also some infamous "non-wrapping inline code" bug in the CSS was fixed, but can't remember if that was months, years or decades ago.

On another note, they're very receptive to emails, and if you have specific things you want fixed, and maybe even ideas on how to do in a good and proper way, you can email them (hn@ycombinator.com) and they'll respond relatively fast, either with a "thanks, good idea" or "probably not, here's why". That has been my experience at least.


I hesitate to want any changes, but I could maybe get behind dynamic font sizing. Maybe.

On mobile it’s fine, on Mac with a Retina display it’s fine; the only one where it isn’t is a 4K display rendering at native resolution - for that, I have my browser set to 110% zoom, which is perfect for me.

So I have a workaround that’s trivial, but I can see the benefit of not needing to do that.


The font size is perfect for me, and I hope it doesn’t get a “usability update”.


“I don’t see any reason to accommodate the needs of others because I’m just fine”


I bet 99.9% of mobile users' hidden posts are accidentally hidden


12 px (13.333 px when in the adapted layout) is a little small - and that's a perfectly valid argument without trying to argue we should abandon absolute sized fonts in favor of feels.

There is no such thing as a reasonable default size if we stop calibrating to physical dimensions. If you choose to use your phone at a scaling where what is supposed to be 1" is 0.75" then that's on you, not on the website to up the font size for everyone.


I find it exactly the right size on both PC and phone.

There's a trend to make fonts bigger but I never understood why. Do people really have trouble reading it?

I prefer seeing more information at the same time, when I used Discord (on PC), I even switched to IRC mode and made the font smaller so that more text would fit.


I'm assuming you have a rather small resolution display? On a 27" 4k display, scaled to 150%, the font is quite tiny, to the point where the textarea I currently type this in (which uses the browsers default font size) is about 3 times the perceivable size in comparison to the HN comments themselves.


Agreed. I'm on an Apple Thunderbolt Display (2560x1440) and I'm also scaled up to 150%.

I'm not asking for some major, crazy redesign. 16px is the browser default and most websites aren't using tiny, small font sizes like 12px any longer.

The only reason HN is using it is because `pg` made it that in 2006, at a time when it was normal and made sense.


Yup, and these days we have relative units in CSS such that we no longer need to hardcode pixels, so everyone wins (em, rem). That way people can get usability according to the browsers defaults, which make the whole thing user configurable.


1920x1080 and 24 inches

Maybe the issue is not scaling according to DPI?

OTOH, people with 30+ inch screens probably sit a bit further away to be able to see everything without moving their head so it makes sense that even sites which take DPI into account use larger fonts because it's not really about how large something is physically on the screen but about the angular size relative to the eye.


Yeah, one of the other cousin comments mentions 36 inches away. I don't think they realize just how far outliers they are. Of course you have to make everything huge when your screen is so much further away than normal.


I have HN zoomed to 150% on my screens that are between 32 and 36 inches from my eyeballs when sitting upright at my desk.

I don't really have to do the same elsewhere, so I think the 12px font might be just a bit too small for modern 4k devices.


I'm low vision and I have to zoom to 175% on HN to read comfortably, this is basically the only site I do to this extreme.


I have mild vision issues and have to blow up the default font size quite a bit to read comfortably. Everyone has different eyes, and vision can change a lot with age.


Even better: it scales nicely with the browser’s zoom setting.


Text size is easily fixed in your browser with the zoom setting. Chrome will remember the level you use on a per site basis if you let it.


I'm sure they accept PRs, although it can be tricky to evaluate the effect a CSS change will have on a broad range of devices.


The text looks perfectly normal-sized on my laptop.


Really? I find the font very nice on my Pixel XL. It doesn't take too much space unlike all other modern websites.


A uBlock filter can do it: `||news.ycombinator.com/*$replace=/<html/<!DOCTYPE html><html/`


You could also use Tampermonkey to do that, and it can perform the same function as the OP's approach.


There is a better option, but generally the answer is "no"; the best solution would be for WHATWG to define document.compatMode as a writable property instead of a readonly one.

The better option is to create and hold a reference to the old nodes (as easy as `var old = document.documentElement`) and then after blowing everything away with document.write (with an empty* html element; don't serialize the whole tree), re-insert them under the new document.documentElement.

* Note that your approach doesn't preserve the attributes on the html element; you can fix this by either pro-actively removing the child nodes before the document.write call and rely on document.documentElement.outerHTML to serialize the attributes just as in the original, or you can iterate through the old element's attributes and re-set them one-by-one.


On that subject, I would be fine if the browser always rendered in standards mode, or offered a user configuration option to do so.

No need to have the default be compatible with a dead browser.

Further thoughts: I just read the MDN quirks page, and perhaps I will start shipping Content-Type: application/xhtml+xml, as I don't really like putting the doctype in. It is the one screwball tag and requires special-casing in my otherwise elegant HTML output engine.


Still possible in VSCode through somewhat hackish methods (esp. arbitrary CSS injection via createTextEditorDecorationType). Here are some quick screenshots of random JS/Rust examples in my installation: https://imgur.com/a/LUZN5bl


This is one of those things where both extremes of madness and genius wrap around to infinity and meet again.


Honestly, it looks like a ransom request letter! :D


Saw this comment and couldn't figure out what it implied so I clicked on the link.

Now I see it definitely made sense.


Has anyone had success getting a coding agent to use an IDE's built-in refactoring tools via MCP especially for things like project-wide rename? Last time I looked into this the agents I tried just did regex find/replace across the repo, which feels both error-prone and wasteful of tokens. I haven't revisited recently so I'm curious what's possible now.


That's interesting, and I haven't, but as long as the IDE has an API for the refactoring action, giving an agent access to it as a tool should be pretty straightforward. Great idea.


Serena MCP does this approach IIRC


Please consider making the UI respect the user's custom text scaling settings for accessibility. I'm not referring to DPI scaling but the TextScaleFactor value at HKCU\Software\Microsoft\Accessibility (see [1][2]) that users can set in Ease of Access > Display > Make text bigger.

(Failing that, adding basic support for scaling text or UI via ctrl+plus/minus would be a huge improvement!)

With the exception of Chromium/Chrome [3], this has been a persistent issue with Windows desktop apps from Google (most of these also use hard-coded control sizes, making the problem worse).

[1] https://learn.microsoft.com/en-us/windows/apps/design/input/...

[2] https://learn.microsoft.com/en-us/uwp/api/windows.ui.viewman...

[3] https://issues.chromium.org/issues/40586200


I'm split with this. If it helps other people, then I'm all for it. But speaking as someone who is legally blind and makes extensive use of these settings, Windows 10 accessibility drives me mad. I'm waiting for fractional scaling to improve for Linux so I can make the switch.

The problem with Make text bigger and Make everything bigger is they apply to every single application that supports them. Let's say I have two applications: A is comfortable enough to see and B isn't. If I change either of these settings to help me use B, A could now be a problem because it can take up too much screen real estate, which makes it unusable for a different reason.

This doesn't sound like much of a problem until you have 5 or more applications you're trying to balance via these two settings. In reality, it's more complex than I'm describing because I may need to change both settings to help with a new application, which then means I have to continuously test every other application I use to make sure they're all somewhat comfortable enough to use.

If an application I use updates to include support for these settings, I have to go through all this unplanned work again to try and make everything usable once more. It's frustrating.

I know people make fun of Electron, but one major plus point for me is I have per application scaling when using it, and so it gives me better accessibility than Windows does by far.

> (Failing that, adding basic support for scaling text or UI via ctrl+plus/minus would be a huge improvement!)

I'd consider this to be a better option.


Try Fedora with KDE. It has fractional scaling, per display.

I set my laptop (1920x1080) to 120%, effectively making it 1600x900 but with a very good physical size of things. I set my external panel (2560x1440) to 160%, effectively making it 1600x900 also. KDE even visualizes the two panels as the same size. On top of these basic DPI settings, you can then tweak font/text even further. It's quite amazing. Windows cannot do custom DPI per monitor, only a single custom DPI that gets applied to all monitors.

If you do go down the fractional scaling rabbit hole, make sure that for whatever values you pick, both the height and width end up without any fractions after applying your custom DPI... that eliminates all blur. In my example above, 2560/1.6 and 1440/1.6 give nice round numbers, even though operating systems typically only offer 100/125/150/175/200 etc.

I built a small console app for myself that takes the resolution and tests all increments of 1% to see which combinations give effective resolutions that don't end with fractions. So it tells me which effective resolutions I will get at which % settings. It's awesome and made it easy to give my laptop and external display the same amount of space (or lines of code) on screen, even though they are different physical sizes.
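That calculator is only a few lines in any language. Here's a rough JavaScript sketch of the same idea (the function name is mine, not from the original app):

```javascript
// List the integer scale percentages (default 100%..300%) that divide a
// panel resolution into whole-pixel effective dimensions, i.e. no blur.
function cleanScales(width, height, min = 100, max = 300) {
  const scales = [];
  for (let pct = min; pct <= max; pct++) {
    // width / (pct/100) is a whole number iff pct divides width*100 evenly
    if ((width * 100) % pct === 0 && (height * 100) % pct === 0) {
      scales.push(pct);
    }
  }
  return scales;
}

// Example: a 1920x1080 panel
for (const pct of cleanScales(1920, 1080)) {
  console.log(`${(1920 * 100) / pct}x${(1080 * 100) / pct} at ${pct}%`);
}
```

For 1920x1080 this reproduces the list in the comment below (100, 120, 125, 150, 160, 200, 240, 250, 300).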


> Windows cannot do custom dpi per monitor, only a single custom dpi that gets applied to all monitors.

This is wrong. Windows supports per monitor DPI since Windows 8 and have an improved API since Windows 10. I find it the only good implementation among desktop OSes. It is the only one that guarantees that font renders align with the pixel grid.

Many old apps do not support this API though. It is opt-in, and while there is a hybrid mode to let Windows scale fonts and Win32 components via API hooks, without implementing the DPI change callback most apps turn into a blurry mess.

Usually browsers have the gold-standard implementation of those callbacks, which is why Electron is used everywhere.


Brother I'm looking right at it. I cannot set one monitor to 120% and another to 160% (both are custom values), like on KDE. If I use a custom setting it gets applied to both monitors, in fact it gets grayed out for some reason - the values don't even show properly. Only a reset button available that logs you out to reset it to 100%.

If I want to set them to different scaling factors, I have to use one of the values from the drop downs (100/125/150/175/200%), which is not what I want.


You have literally said this:

> Windows cannot do custom dpi per monitor, only a single custom dpi that gets applied to all monitors.

Here are all of my monitors at different DPIs: https://imgur.com/a/q3z2P1E . They don't have a "single DPI" that gets applied to all of them. The custom DPI setting is for changing all base system DPI.

> I cannot set one monitor to 120% and another to 160% (both are custom values), like on KDE.

Okay you're unhappy with the granularity. Yes Windows uses 25% granularity.

I don't know if this will work, but you can probably combine a custom base DPI with the 25% steps. For example, set the base DPI to something like 120 (which is 125%) and then set the other monitor to 125%, which gives about 156%:

I think the base DPI is stored in this registry key:

HKEY_CURRENT_USER\Control Panel\Desktop\WindowMetrics\AppliedDPI

It is a DWORD value


Thanks for detailed response. Do you happen to know if this is a recent change in Fedora/KDE? I tried somewhat recently, although I can't remember quite when that was. Gnome had experimental support for fractional scaling at the time but it wasn't good enough to switch to.

> Windows cannot do custom dpi per monitor, only a single custom dpi that gets applied to all monitors.

Yeah, support for custom DPI in general isn't great. I've been using https://www.majorgeeks.com/files/details/windows_10_dpi_fix.... for years to at least partially help.

Edit: I think I answered my own question about how recent the change might have been: https://blogs.kde.org/2024/12/14/this-week-in-plasma-better-...

This seems to be just after I last tried. I'll give it another go, thanks BatteryMountain!


As a word of warning, it is still not 100% perfect. I've noticed that on my laptop, when the Zed editor is maximized, there is a tiny gap between it and the panel at the bottom. I think this happens when an app, even if it supports fractional scaling in general, can't handle a logical window size that is not a whole number. To be fair, this is one of the only apps I've really had any scaling issues with lately, and it is just a minor visual annoyance. The Linux DPI scaling story is finally pretty solid.

Also, many apps (including Electron/Chromium apps) will still run under XWayland when using a Wayland session by default, because there are still a handful of small issues and omissions in their Wayland drivers. (It's pretty minor for Electron/Chromium so you can opt to force it to use native Wayland if you want.) In case of XWayland apps, you'll have to choose between allowing X11 apps to scale themselves (like the old days) or having the compositor scale them (the scaling will be right, even across displays, but it will appear blurry at scales higher than 1x.) I still recommend the Wayland session overall; it gives a much more solid scaling experience, especially with multiple monitors.


> In case of XWayland apps, you'll have to choose between allowing X11 apps to scale themselves (like the old days) or having the compositor scale them (the scaling will be right, even across displays, but it will appear blurry at scales higher than 1x.) I still recommend the Wayland session overall; it gives a much more solid scaling experience, especially with multiple monitors.

I'm wondering if this was the problem I was running into before – it sounds eerily familiar. I never got far enough to explore individual apps outside of preinstalled ones because I couldn't get comfortable enough. I appreciate your response as I wasn't aware of the different session types.


Yeah, it probably has something to do with this. In X11 sessions, the display server does not typically handle scaling. Instead, the desktop environments provide scaling preferences to UI toolkits that then do the scaling themselves. In Wayland, the display server does handle scaling.

In both X11 and Wayland, you should usually see most applications following your scaling preferences nowadays. In Wayland sessions, you can ensure that applications always appear at the correct size, though at the cost of "legacy" applications appearing blurry. This behavior is configured in the Display Settings in KDE Plasma.

Also possibly useful: if you like the KDE Plasma session, it has a built-in magnifier; just hold Ctrl+Meta and use the scroll wheel.


> Yeah, it probably has something to do with this. In X11 sessions, the display server does not typically handle scaling. Instead, the desktop environments provide scaling preferences to UI toolkits that then do the scaling themselves. In Wayland, the display server does handle scaling.

Presumably this leads to a more unified scaling experience. This was one thing I was concerned about before, as it didn't seem that way. That's a solid improvement on its own.

> Also possibly useful: if you like the KDE Plasma session, it has a built-in magnifier; just hold Ctrl+Meta and use the scroll wheel.

This is useful yes, along with the rest of your comments. Thanks for your help.


That's why you need to calculate which scaling factors divide your resolution down to "effective resolutions" without fractions.

So if you take a 2560x1440 panel, 160%/1.6 scaling factor will give you 1600x900, hence there won't be any artifacts. Between 100% and 200% there are maybe 5 combinations that will give you clean resolutions.

As an example:

Enter monitor Width (1920):

Enter monitor Height (1080):

1920x1080 at 100%

1600x900 at 120%

1536x864 at 125%

1280x720 at 150%

1200x675 at 160%

960x540 at 200%

800x450 at 240%

768x432 at 250%

640x360 at 300%

Anything besides these values WILL give you artifacts at some level.


Curious about the GNOME fractional scaling issues you experienced.

I currently have the experimental feature enabled at 150% scale for a laptop screen at 2560x1600 resolution. I have not had any issues with it by itself, nor with an external 3440x1440 display at 100% scale with GNOME 48.


I wish I could give you a better answer here, but I honestly don't remember. I only remember that something I needed was missing from it for me to make the switch.


So Gnome does support it, but it is terrible. Last I tried it, it also applied the custom scaling value to all the displays like Windows. KDE does it perfectly.


Ideally, applications should use the Windows settings by default, but allow configuring a different scaling. Even more ideally, Windows should allow per-application settings, but until it does it’s the applications’ job.


Part of me wonders if this is what Microsoft hoped would happen when they implemented the settings in the manner they did. But it hasn't played out that way.


It is.

Any app implementation of the windows setting could expose a multiplier of it somewhere. They already did the hard part of building a dynamic UI...


What are your thoughts on screen magnifiers? Personally I tend to increase scaling a bit and use Magnifier for anything that's too small (or increase the font size in the application if possible)


I try to avoid using them. If I can, I prefer to configure my environment to not need them, but that does take a fair amount of work. I get by because of my technical knowledge. I don't know how other people cope.


Incidentally I once ran into a mature package that had lived in the 0.0.x lane forever and treated every release as a patch, racking up a huge version number, and I had to remind the maintainer that users depending with caret ranges won't get those updates automatically. (In semver caret ranges never change the leftmost non-zero digit; in 0.0.x that digit is the patch version, so ^0.0.123 is just a hard pin to 0.0.123). There may occasionally be valid reasons to stay on 0.0.x though (e.g. @types/web).
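The caret rule ("never change the leftmost non-zero component") can be sketched in a few lines of JavaScript. This is a hand-rolled toy for illustration, not the real node-semver implementation:

```javascript
// Compute the exclusive upper bound of a caret range: bump the leftmost
// non-zero component and zero out everything after it.
// ^1.2.3 -> <2.0.0, ^0.1.2 -> <0.2.0, ^0.0.123 -> <0.0.124
function caretUpperBound(version) {
  const parts = version.split('.').map(Number);
  const i = parts.findIndex(n => n !== 0);      // leftmost non-zero (or -1)
  const bump = i === -1 ? parts.length - 1 : i; // 0.0.0 bumps the patch
  const upper = parts.slice(0, bump + 1);
  upper[bump] += 1;
  while (upper.length < parts.length) upper.push(0);
  return upper.join('.');
}

console.log(caretUpperBound('1.2.3'));   // 2.0.0
console.log(caretUpperBound('0.1.2'));   // 0.2.0
console.log(caretUpperBound('0.0.123')); // 0.0.124
```

Since ^0.0.123 allows >=0.0.123 and <0.0.124, the only version that satisfies it is 0.0.123 itself: an effective hard pin.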


Presumably they’re following https://0ver.org/


Isn’t vim or bash kinda like that? One of them publishes something like a few hundred patches on top of the released tarball…


Maybe that is intentional? Which package is it?


It's the type definitions for developing chrome extensions. They'd been incrementing in the 0.0.x lane for almost a decade and bumped it to 0.1.0 after I raised the issue, so I doubt it was intentional:

https://www.npmjs.com/package/@types/chrome?activeTab=versio...


This is part of the DefinitelyTyped project. DT tends to get a lot of one-off contributions just for fixing the one error a dev is experiencing. So maybe they all just copied the version incrementing that previous commits had done, and no one in particular ever took the responsibility to say "this is ready now".


threejs ?



When trying to understand a complex C codebase, I've often found it helpful to rename existing variables to emojis. This makes it much easier to track which variables are used where & to take in the pure structure of the code at one glance. An example I posted previously: https://imgur.com/F27ZNfk

Unfortunately most modern languages like Rust and JS follow the XID_Start/XID_Continue recommendation (not very well-motivated imo) which excludes all emoji characters from identifiers.


wouldn't writing a parser of sorts that would replace emojis with a valid alphabetical string identifier be trivial?


You're right that writing a preprocessor would be straightforward. But while you're actively editing the code, your dev experience will still be bad: the editor will flag emoji identifiers as syntax errors, so mass renaming & autocompletion won't work properly. Last time I looked into this in VSCode, I got TypeScript to stop complaining about syntax errors by patching the identifier validation with something like `if (code > 127) return true` (if non-ASCII, consider valid) in isUnicodeIdentifierStart/isUnicodeIdentifierPart [1]. But then you'd also need to patch the transpiler to JS, formatters like Prettier, and any other tool in your workflow that embeds its own copy of TypeScript...

[1] https://github.com/microsoft/TypeScript/blob/81c951894e93bdc...
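For what it's worth, the preprocessor direction (emoji source → plain-ASCII JS before it reaches tooling) really is a few lines, using Unicode property escapes in regexes. A toy sketch; the `_e` naming scheme is my own, and multi-code-point emoji (ZWJ sequences, skin-tone modifiers) would need extra care:

```javascript
// Replace every pictographic character with an ASCII identifier
// derived from its code point, so downstream tools see plain JS.
function asciifyEmoji(source) {
  return source.replace(/\p{Extended_Pictographic}/gu,
    ch => '_e' + ch.codePointAt(0).toString(16));
}

console.log(asciifyEmoji('const 🍕 = order.total;'));
// const _e1f355 = order.total;
```

The mapping is reversible (code point → emoji), so a round-trip between the "viewing" and "tooling" forms is possible as long as identifiers stay one code point each.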

