
As of 2023 municipalities and counties can no longer mandate bicycle registration. (See https://law.justia.com/codes/california/code-veh/division-16... as amended by sec. 7 at https://leginfo.legislature.ca.gov/faces/billTextClient.xhtm...) Though universities, like UC Davis, might still be able to require it for bikes on campus.

I hadn't heard of the requirement before. Mandatory registration originally seems to have been intended to address bike theft. All bicycles sold in California must have a serial number. A significant number of cities (most?) had ordinances requiring registration. But few people knew about it and even fewer registered their bikes.


Automatically changing behavior by testing whether the output sink is a TTY is traditionally considered an anti-pattern by those with enough time and hair loss spent at the terminal. There are definitely occasions where it's useful, but it's overused and can end up frustrating people more than it helps, like when they're attempting to replicate a workflow in a script.[1] A classic example of "just because you can do something doesn't mean you should do it".

I don't know how it works today, but IIRC colorization by GNU ls(1) used to require an explicit option, --color, typically added through an alias in default interactive shell configs, rather than ls automatically enabling it by default when detecting a TTY.

Explicit is generally better than implicit unless you're reasonably sure you're the last layer in the software stack interacting with the user. For shell utilities this is almost never the case, even when 99% of usage is from interactive shells. For example, `git` automatically invokes a pager when it detects output is to a TTY; this is endlessly frustrating to me because most of the time I'd prefer it dumped everything to the screen so I could more easily scroll using my GUI terminal window, as well as retain the output in the scroll buffer for later reference. Git does have the -P option to disable this behavior, but IMHO it has the defaults reversed; usually I just end up piping to cat because that's easier to remember than bespoke option arguments for frilly anti-features.

[1] Oftentimes it forces people to use a framework like expect(1) to run child programs under another pseudo TTY just to replicate the behavior.


> I don't know how it works today, but IIRC colorization by GNU ls(1) used to require an explicit option, --color, typically added through an alias in default interactive shell configs, rather than ls automatically enabling it by default when detecting a TTY.

It works exactly like this today. Plus, lots of software has since added support for NO_COLOR.

> For example, `git` automatically invokes a pager when it detects output is to a TTY; this is endlessly frustrating to me because most of the time I'd prefer it dumped everything to the screen so I could more easily scroll using my GUI terminal window.

Set your pager to cat? That's what I personally do; I never really liked this built-in convention either.


ls is usually aliased to `ls --color=auto` in the bashrc that comes with your distribution.

Auto means color is enabled only if the output is a terminal. I think this is reasonable. The default is no color, ever.
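For concreteness, here's a minimal sketch (not GNU ls's actual code) of what that "auto" check typically amounts to: enable color only when stdout is a terminal, and honor the NO_COLOR convention mentioned upthread.

    /* Hypothetical "auto" color detection: color only when stdout is a
       terminal, and skip it when NO_COLOR is set to a non-empty string. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static int want_color(void)
    {
        const char *no_color = getenv("NO_COLOR");
        if (no_color != NULL && no_color[0] != '\0')
            return 0;
        return isatty(STDOUT_FILENO);
    }

    int main(void)
    {
        if (want_color())
            printf("\033[34mblue\033[0m\n");
        else
            printf("plain\n");
        return 0;
    }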


All true, but note that BSD introduced, and both Linux/glibc and Linux/musl support, a syscall(2) wrapper routine that takes a syscall number and a variadic list of arguments (usually passed as longs) and performs the syscall magic. The syscall numbers are defined as macros beginning with SYS_. The Linux kernel headers export syscall numbers with macros using the prefix __NR_, but to match the BSD interface Linux libc headers usually translate or otherwise define them using a SYS_ prefix. Using the macros is much better because the numbers often vary by architecture for the same syscall.

See https://man7.org/linux/man-pages/man2/syscall.2.html
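A minimal sketch of using the wrapper with a SYS_ macro (SYS_gettid is Linux-specific; on a BSD you'd pick a syscall that exists there):

    /* Invoke a raw system call via the libc syscall(2) wrapper.
       _GNU_SOURCE exposes the syscall() prototype in glibc's <unistd.h>. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/syscall.h>   /* SYS_* macros (wrapping __NR_* on Linux) */
    #include <unistd.h>        /* syscall() */

    int main(void)
    {
        long tid = syscall(SYS_gettid);  /* returns -1 on error, with errno set */
        printf("thread id: %ld\n", tid);
        return 0;
    }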


Except with BSDs you are on your own if you go down that route, because there are no stability guarantees.

It is more of an implementation detail for the rest of the C APIs than anything else.


At least FreeBSD's syscall ABI is guaranteed to be stable; one can run ancient binaries on a modern kernel. I believe the same is not true of OpenBSD, and maybe NetBSD, however.

Do you need a guarantee, or is it enough that removing syscalls is painful enough for the BSD maintainers that they rarely do it? Renumbering is even worse, so that really doesn't happen outside of syscalls that were only briefly available in a development branch.

Varies a bit by flavor: OpenBSD values security more than stability, so they are willing to break old binaries more often; FreeBSD does require compat modules/etc for some things, but those are available for a long time and sometimes something slips through.

If they break old syscalls, it breaks your code that skips libc, but it also breaks running an old userland with a new kernel and that needs to work for upgrade scenarios. It also breaks binaries that were statically linked with an older libc. When a new kernel breaks old binaries, people stop upgrading the kernel and that's not what maintainers want.


Indeed. Another reason to use the system's macros rather than hardcoding integer literals--the numbers can change between releases. Though that doesn't guarantee the syscall works the same way between releases wrt parameters and return value semantics, if it still exists at all. And I believe OpenBSD removed the syscall wrapper altogether after implementing the pinsyscalls feature.

The core primitives written in assembly operate on fixed-size blocks of data; no allocations, no indexing arrays based on raw user-controlled inputs, etc. Moreover, the nature of the algorithms--at least the parts written in assembly, e.g. block transforms--means any bugs tend to result in complete garbage and are caught early during development.

The steelman argument for Fahrenheit as it is, not necessarily the motivations behind it, has been fleshed out here: http://lethalletham.com/posts/fahrenheit.html

TL;DR: "The remarkable result here is that 0℉ is nearly exactly the 1st percentile of daily lows, and 100℉ is nearly exactly the 99th percentile of daily highs." NB: The context is the continental US.


It's a pretty neat analysis, but it looks like the "nearly exactly" part must be a coincidence for the particular methodology and data they used (most significantly that it's based on 2018 weather).

Fahrenheit was created in northern Europe, using the temperature of a salt water and ice mixture as the zero calibration point. It was later adjusted to define the difference between water's freezing and boiling points to be exactly 180°, since 180 is a highly composite number with many divisors. So off the bat, it's a bit odd that 0°F and 100°F would match the 1st and 99th percentiles of population-adjusted daily highs and lows in the US with that much precision. It's a coincidence already in the sense that the creator was not aiming for this.

But it's also a coincidence because they used 2018 data, which was a particularly warm year on average. (2012 was warmer, but I don't see any warmer years before 2012 in the National Weather Service's table, which goes all the way back to 1875.) Average temperature across the US can vary by 3° or 4°F year to year. The population-adjusted temperature should vary even more because it depends a lot on which weather systems hit the major population centers that year. I'm not sure how much the 1st and 99th percentiles would change if they redid the analysis for a different year, but it would probably vary by several degrees.

It's also kind of interesting that you would never have gotten this result before around 2012 or so, due to global warming.


It doesn't matter if it's a coincidence or not. The fact that it works out that way still plays to its convenience and "good feel" in the US.

Arguing that it's a coincidence isn't really relevant.

I agree with the poster further up: I'm more or less good with all metric units except temperature. While I still "feel" all the US customary units better than metric, I can intuitively "see" meters, liters, and kilograms. But Celsius continues to elude me, even after dating and being married to someone for 8+ years who grew up in a metric country.


I'm not sure you fully read my comment. It only works for 2018. If you did their analysis for any other year, you'd find the 1st percentile is -4°F or something similar.

I only called out the "nearly exactly" part of the claim. US weather is approximately in the range of 0-100°F, give or take 20 degrees. But the analysis found 0°F to be nearly exactly the first percentile of daily highs and lows, to within a twentieth of a percentile point.

It's true that US temperature is around 0°F-100°F, but it's usually false that those temperatures are the 1st and 99th percentiles.


There's some pretty compelling evidence that Ray may have gotten the idea from hearing about a bounty on MLK's head. See https://slate.com/news-and-politics/2025/12/martin-luther-ki...

If true I wouldn't consider Ray to have been acting on behalf of a conspiracy (even if the bounty itself was a conspiracy), but it's not quite acting alone, either. It's sort of like if someone got the idea for doing something from gossip on 4Chan. They may have already been primed to do something horrendous, but there's an element of but-for causation regarding the particulars and follow-through.


PipeNet is also the name of the scheme independently invented by Wei Dai contemporaneously with USNRL's Onion Routing: http://www.weidai.com/pipenet.txt (Onion Routing is what Tor is based on.) I'm not sure if the original Tor author(s) knew about PipeNet, but I wouldn't be surprised if they were familiar.

PipeNet was conceived in 1996 (https://cryptome.org/jya/pipenet.htm), before the USNRL work was made public in 1997 (IIRC), so it was definitely independent, inasmuch as these things are ever truly independent. Both are derivative of Chaum Mixes (1979), which had become popularized as anonymous e-mail remailers in the 1990s.

P.S. Not a comment about project name clashing; I just thought it would be interesting to point out. Wei Dai's PipeNet is all but forgotten these days, but I had come across it (on sci.crypt?) before stumbling on the Onion Routing web page.


Sherman, set the wayback machine....

Definitely a blast from the past. One of the things that made PipeNet very interesting compared to its contemporary peers (e.g. onion routing) was that it used fixed-size pipes with constant traffic. An observer would be unable to know when traffic was being sent down the pipe, so correlation attacks become significantly more difficult. Pair it with some probabilistic encryption like Blum-Blum-Shub and you can party like a late-90s cypherpunk.


You mean, rewrite the prompt: "Please summarize the article again, but this time identify and explain any references to Geocities".

P.S. I don't mean to assume the previous commenter used ML to summarize, but it just occurred to me that some people probably do, and that missing details like that is probably common (more common than missing a reference the classic way; otherwise it wouldn't be a summary). At the same time, they may consider themselves to have read the article.


read -s in pdksh does nearly the opposite, saving the string to your history file! See https://man.openbsd.org/ksh#read. pdksh is the system shell on OpenBSD, among others, and I just confirmed this is indeed what it does in OpenBSD.

EDIT: FWIW, ksh93 also behaves like pdksh (inherited ksh88 feature?), while zsh behaves like bash. read -s was added to bash 2.04 (2000) and zsh 4.1.1 (2003, committed 2002), both long after the flag was used in ksh--at least as early as the initial pdksh commit to OpenBSD in 1996.


As pdksh has aged into memory, OpenBSD's version is now known as oksh.

Android selected another fork, mksh, as their system shell. This is also included in Red Hat, along with ksh93.

I had read that zsh has strict emulation modes for ksh and bash. Is it possible that zsh behavior changes when those are triggered?


> Is it possible that zsh behavior changes when those are triggered?

It doesn't look that way, at least looking at the option handling code in the read builtin (bin_read): https://github.com/zsh-users/zsh/blob/8a3ee5a/Src/builtin.c#...


Yeah, I came to Linux from BSD and still have some ksh and csh muscle memory from The Before Time.

See also https://en.wikipedia.org/wiki/S-Lang (https://www.jedsoft.org/slang/index.html), a (stack-based) scripting language implementing a terminal UI toolkit. Mutt can use S-Lang instead of ncurses.
