skobes's comments | Hacker News

Omitting <body> can lead to weird surprises. I once had some JavaScript mysteriously breaking because document.body was null while an inline script was executing.

Since then I always write <body> explicitly even though it is optional.
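
For the curious, here's a minimal sketch of the failure mode (not my original code, but the same mechanism): without an explicit <body> start tag, the HTML parser attaches a top-level <script> to <head> and executes it before any body element exists.

    <!DOCTYPE html>
    <html>
    <head><title>demo</title></head>
    <!-- no explicit <body> start tag here -->
    <script>
      // The parser is still in its "after head" state, so this script
      // is attached to <head> and runs before <body> is created.
      console.log(document.body);         // logs: null
      // document.body.style.margin = "0" // would throw a TypeError
    </script>
    <p>The implicit body is only created when the parser reaches content like this.</p>

Writing <body> before the script forces the body element into existence first, so document.body is already set by the time the script runs.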


Shoutout to fava, the beancount GUI frontend:

https://beancount.github.io/fava/

I really like its big picture view of the accounts, the search / query interface, and live editing of transactions.
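
If you haven't tried the query interface: it speaks beancount's BQL. A query along these lines (illustrative; the column and function names are from the beancount query docs, and the year is made up) gives a quick expenses-by-account breakdown:

    SELECT account, sum(position)
    WHERE account ~ "^Expenses:" AND year = 2024
    GROUP BY account
    ORDER BY account

Fava shows the result as a table, and the same query should work in the bean-query CLI.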


This illustrates the difficulty of maintaining a separation between bugs and discussions:

> To be clear, I 100% believe that there is some kind of leak affecting some specific configuration of users

In this case it seems you believe a bug exists, but it isn't sufficiently well-understood and actionable to graduate to the bug tracker.

But the threshold of well-understood and actionable is fuzzy and subjective. Most bugs, in my experience, start with some amount of investigative work, and are actionable in the sense that some concrete steps would further the investigation, but full understanding is not achieved until very late in the game, around the time I am prototyping a fix.

Similarly the line between bug and feature request is often unclear. If the product breaks in specific configuration X, is it a bug, or a request to add support for configuration X?

I find it easier to have a single place for issue discussion at all stages of understanding or actionability, so that we don't have to worry about distinctions like this that feel a bit arbitrary.


Is the distinction arbitrary? It sounded like issues are used for clear, completable jobs for the maintainers. A mysterious bug is not that. The other work you describe is clearly happening, so I'm not seeing a problem with this approach other than its novelty for users. But to me it looks both clearer than the usual "issue soup" on a popular open source project and more effective at using maintainer time, so next time I open-source something I'd be inclined to try it.


Some people see "bug tracker" and think "a vetted report of a problem that needs fixing"; others see "bug tracker" and think "a task/todo list of stuff ready for an engineer to work on".

Both are valid, and it makes sense to be clear about what the team's view is.


Agreed. Honestly, I think of those as two very different needs that should have very different systems. To me a bug tracker is about collecting user reports of problems and finding commonalities. But most work should be driven by other information.

I think the confusion of bug tracking with work tracking comes out of the bad old days when we didn't write tests and we shipped large globs of changes all at once. In that world, people spent months putting bugs in, so it makes sense they'd need a database to track them all after the release. Bugs were the majority of the work.

But I think a team with good practices that ships early and often can spend a lot more time on adding value. In which case, jamming everything into a jumped-up bug tracker is the wrong approach.


I think these are valid concerns for a project maintainer to think through when managing a chosen solution, but I don't think there is a single correct solution. The "correct", or likely least bad, solution depends on the specific project and the tools available.

For bug reports, using issues for everything also requires you to decide how long an issue should stay open if it can't be reproduced (assuming you're trying to keep a clean issue list). Closing it can fragment the discussion: new reports of the same problem keep coming in, but since not just anyone can manage issue states, a new issue gets created instead of the old one being reopened.

From a practical standpoint, they have 40 pages of open discussions in the project and 6 pages of open issues, so I get where they're coming from. The GH issue tracker is less than stellar.


This is not a standalone article but a section from Butterick's book, "Typography for Lawyers", which is hosted in full on the website. The book is an opinionated style manual, and many alternatives are described in nearby sections.


I'm fond of STIX Two, which is very close to Times New Roman but just a little bit nicer, especially the italic.


I agree! More praise: it's well-hinted, has good support for Unicode and math, and comes packaged with macOS.

It should be mentioned that its x-height is much larger than Times New Roman's, which is usually a good thing imo, but it does look different.


Once I realized that some people expect and are happy for you to jump in with unprompted thoughts or stories, it became easier for me to be intentional about doing so.

I think I'm a lot better now than when I was younger at adapting to a wide range of conversational styles, mostly just from paying more attention to that dynamic.

Do you feel like your conversational toolbox has evolved over time? :)


Ha, yes a bit! Not interrupting or talking over someone was drilled into me in childhood, but exposure to different family dynamics helped me learn that it's not a universal value, and that I can adapt and adjust my communication styles for different groups and situations.

It's still a bit of a struggle to push myself to "speak out of turn" and ensure my voice is included in a discussion.


I hate these too, but I'm worried that a ban just incentivizes being more sneaky about it.


I would consider that an https://xkcd.com/810/ situation.

My objection to AI comments is not that they are AI per se, but that they are noise. If people are sneaky enough to start making valuable AI comments, well, that is great.


I think people are just presuming that others are regurgitating AI pablum regardless.

People are seeing AI / LLMs everywhere — swinging at ghosts — and declaring that everyone is a bot recycling LLM output. While the "this is what AI says..." posts are obnoxious (and a parallel to the equally boorish lmgtfy nonsense), not far behind is the endless "this sounds like AI" style of cynical jeering. People need to display how world-weary and jaded they are, expressing their discontent with the rise of AI.

And yes, I used an em dash above. I've always been a heavy user of that punctuation (being scatter-brained, with lots of parenthetical asides and little ability to self-edit), but suddenly now it makes my comments bot-like and AI-suspect.

I've been downvoted before for making this obvious, painfully true observation, but HNers, and people in general, are much worse at sniffing out AI content than they think they are. Everyone has confirmation-biased themselves into thinking they've got a unique gift, when really they are no better than rolling dice.


Thing is, the comments that sound "AI generated" but aren't have about as much value as the ones that really are.

Tbh, comments like these shouldn't be completely banned. As someone else said, they have a place, for example when comparing LLM outputs or showing how different prompts produce different hallucinations.

But most of them are just reputation chasing, posting a summary of something that is usually below the level of HN discussion.


>the comments that sound "AI generated" but aren't have about as much value as the ones that really are

When "sounds AI generated" is in the eye of the beholder, this is an utterly worthless differentiation. I mean, it's actually a rather ironic comment given that I just pointed out that people are hilariously bad at determining if something is AI generated, and at this point people making such declarations are usually announcing their own ignorance, or alternately they're pathetically trying to prejudice other readers.

People now simply declare opinions they disagree with as "AI", in the same way that people think people with contrary positions can't possibly be real and must be bots, NPCs, shills, and so on. It's all incredibly boring.


I mean verbose for no good reason, not contributing meaningfully to the discussion in any way.

Just like those StackOverflow answers - before "AI" - that appeared within 30 seconds on any question and regurgitated, in a "helpful"-sounding way, whatever tutorial the poster could find first that looked even remotely related to the question.

"Content" where the goal is to trick someone into an upvote instead of actually caring about the discussion.


Developers have been anthropomorphizing computers for as long as they've been around though.

"The compiler thinks my variable isn't declared" "That function wants a null-terminated string" "Teach this code to use a cache"

Even the word computer once referred to a human.


If LLMs produce fake citations, why would we trust LLMs to check them?


Because the risk is lower. The checker flags suspicious citations, and you manually verify just the flagged ones, discarding false positives. If it flags, say, 15 of 200 citations, that's 15 manual checks instead of 200. Even if a few fabricated citations slip through, it's still a net gain.


Because my boss said if I don't, I'm fired.


Wouldn't this fall under Auer deference (agency's interpretation of its own regulation)?

There is some uncertainty about whether Auer deference survives after Loper Bright.


But this isn't an ambiguous area of law. The statute is pretty clear in the text here: the EB-1A criteria are necessary but not sufficient. That's what step 1 (necessary) and step 2 (sufficient) boil down to. You can litigate over what qualifies as necessary if the agency is doing something weird, but ultimately it is a subjective evaluation. The court isn't going to adjudicate the merits; USCIS is.

