
You shouldn't expect a database system to release memory back to the OS just because it's no longer needed for some specific purpose.

DuckDB has a memory_limit setting with a default of 80% of RAM. If you want to set a lower limit you can do something like

   SET memory_limit = '1GB';

https://duckdb.org/2024/07/09/memory-management
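
If you want to confirm the limit took effect, something like this should work (a hedged sketch; current_setting() is DuckDB's mechanism for reading configuration values, as far as I know):

   SET memory_limit = '1GB';
   SELECT current_setting('memory_limit');  -- confirm the new limit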


Alternatively, China could make progress fabricating and exporting its own chips and designing its own GPUs. The entire chip sector could go the way of solar panels and EVs with prices dropping and margins collapsing to near zero.

Yup, and they're also only like 5-10 years out from their own lithography machines. China wanted Taiwan before TSMC was a thing; by the time they take Taiwan back, they won't need TSMC.

I remember this argument being used against Postgres and for Oracle, against Linux and for Windows or AS/400, etc. And I think it makes sense for a certain type of organisation that has no ambition or need to build its own technology competence.

But for everyone else I think it's important to find the right balance in the right areas. A car mechanic is never in the business of building tools. But software engineers always are to some degree, because our tools are software as well.


But Postgres is a professional tool. I don't argue for "use enterprise bullshit"; I steer clear of that garbage anyway. SWEs always forget the moat built by people who focus their whole work day on a problem and have wider access to information than you do. SWEs forget that time also costs money, and oftentimes it's better and cheaper just to pay someone. How much does it cost to ship an internal agent solution that runs automated E2E tests, for example (independent of quality)? And how much does a normal SaaS for that cost? Devs have cost and risk attached to their work that is not properly taken into account most of the time.

There is a size of tooling that's fine: a small script, a simple automation, a CLI UI, whatever. But if we're talking anything more complex, 95% of the time it's a stupid idea.

PS: of course car mechanics build their own tools. I work on my car and have had to build tools. A hex nut didn't fit in the engine bay, so I had to grind it down. Normal. Cut and weld an existing tool to get into a tight spot. Normal. That's the simple-CLI-tool size of tool. But no one would think about building a car lift or a welder or something.


> A car mechanic is never in the business of building tools.

Oh, you don't say. A welder, an angle grinder and some scrap metal help a lot.

Unless you're a "dealer" car mechanic, where you're not allowed to think at all, only to replace parts.


>Therefore if parallelising code reduces the runtime of that code, it is almost always more energy efficient to do so

Only if it leads to better utilisation. But in the scenario that the parent comment suggests, it does not lead to better utilisation as all cores are constantly busy processing requests.

Throughput, as well as CPU time across cores, remains largely the same regardless of whether or not you parallelise individual programs/requests.


That's true, which is why I added the caveat that this is only true if parallelising reduces the overall runtime, i.e. if you can push more requests per second through parallelisation. And the flip side of that is that if you're able to perfectly utilise all cores, then you're already running everything in parallel.

That said, I suspect it's a rare case where you really do have perfect core utilisation.


There's a bunch of hyperactive people in those Apple "support" forums who don't actually help anyone. They respond to almost every discussion thread aggressively deflecting any criticism directed at Apple.

They pretend to offer "solutions" so their posts don't come across as unconstructive, but their solutions are always essentially the same, often culminating in a factory reset. There is never any attempt to get to the bottom of anything or diagnose what the actual issue is.

They are volunteering their time to make people shut up, bow their head in shame and go away. I don't think this is what you want in an open source project.


Indeed. Apple should close those forums. It damages their brand to have such antagonistic people pretending to be support agents. A company of Apple's wealth could afford to have a small army of people in the Philippines do the same job with much less aggression.

>small army

Instead we get:

https://developer.apple.com/forums/thread/669252

>”… Now, the few Apple engineers that get back to me for some of these issues and the Apple support as well often tell me that Apple really cares about customer feedback. I really want to believe this ... but it's so hard to believe it, if less than 1% of my submitted reports (yes, less than 1%, and it's probably much less) ever gets a response. A response, if it ever comes, can come after 3 months, or after 1 year, or after 3 years; only rarely does it come within 1 month. To some of the feedbacks, after getting a response from the Apple engineers, I responded, among other things, by asking if I'm doing something wrong with the way I submit the feedback reports. Because if I do something wrong, then that could be the reason why only so few of them are considered by the Apple engineers. But I never got any answer to that. I told them that it's frustrating sending so much feedback without ever knowing if it's helpful or not, and never got an answer. …”

Why is this Apple’s path?


I pretty much expect zero support from any major company unless I am covered by a juicy enterprise support contract. They are too big to care.

Same! Though:

In my experience their _support_ is fantastic, which is another reason it's odd that they will simply leave countless _feedback_ submissions open nearly indefinitely. They ignore their free laborers!


Wholeheartedly agree. The few times in my life that I’ve bothered to post there with a problem, it’s been all the more upsetting that the patronizing generic advice and scolding of the frustrated users comes from random volunteer fanbois on the Internet, not even paid Apple staff who are contractually obliged to be positive about Apple. A company with such rabidly loyal supporters shouldn’t deploy them like this. And if it was wise back in 2010, when Apple software was for the most part quite good… it sure isn’t wise now, when they’re reaching what I hope is a temporary nadir in quality.

I don't think anything Apple does at this point can damage their brand. It's indestructible.

Tim Cook runs a well-oiled machine. At some point, leadership will change. And I don’t think it is as simple as, “Just keep doing what Tim was doing.” There are so many moving parts that it is nigh certain Apple will go through a period of brand damage when things begin to fall through the cracks. Will that fall be dramatic? Probably not. But I think you underestimate just how much a shift in leadership can tip the scales.

Google’s “support forums” are exactly the same.

At least you know it’s not working as a place to submit issue reports. That is better than the other way: Figma, 1Password and many others run a support forum with an army of yes-men “support specialists”. They answer your query with basic troubleshooting and then say that it will be passed to the development team, or will be considered, etc. A perfectly designed system to pacify users and dismiss their reports.


Yes, fanbois, lecturing people that they're using it wrong.

It's not just on Apple's forums, Microsoft has the same kind of guys. They tend to look really popular too because all the other fanbois upvote their comments.

And not only there, many open-source software forums have the same problem.


Are those the people who recommend "fixes" like resetting the PRAM / NVRAM to solve application level issues, or who recommend removing all files from the Desktop to somehow speed up general responsiveness? The Apple pages are awash with them.

I wonder if other cult brands attract the same kind of personalities, or if Apple has somehow done something special to encourage it. When a Harley Davidson owner says he has a problem with his bike, do Harley zealots jump out of the woodwork to attack the dissenter and defend the brand from which they derive their personality?

> When a Harley Davidson owner says he has a problem with his bike, do Harley zealots jump out of the woodwork to attack the dissenter and defend the brand from which they derive their personality?

I’m no Harley owner but you and I both know the answer to that.


Fanatical supporters of brands with a high defect rate are a thing. Norton motorcycles. A broad range of English cars. Amiga computers.

Honestly I'm not sure. Motorcycle interest might select against the relevant personality traits/disorders. Maybe they bond by commiserating over Harley's decline (a narrative I've heard).

This shows up in a lot of other areas, like small game companies that have a devoted following. It can get pretty nasty, because these types of people are able to be condescending while staying just short of a ToS violation, all while baiting other people into crossing the line. A common thread is weak or biased moderation.

As a developer, it's easy to be blind to this because they're on "your side", but it's bad for the health of your support forums.


Given that it appears in Windows, I presume if there was a Harley Davidson support forum, there'd be fanatics defending them there.

(They do defend them IRL, it's "commonly known" that HDs have issues that the install base "overlooks".)


That may be true to some extent, but I am still often able to find some kind of answer, granted it's often just "NO, Apple doesn't do this".

Is it really activism though, i.e. a concerted effort to put pressure on project leaders and make actual "demands"? Or is it just the occasional young Rust enthusiast asking questions or making the case for Rust?

You haven't been getting the checks? Bring that up at our next secret cabal meeting.

They're borrow checks

Probably something in-between: a self-organizing cult with too much support from industry.

I agree that null pointers have never been that huge a problem compared to many others.

But I think it is rather convenient to formally specify which pointers can be null and which ones cannot.

How do I tell the users of my library whether they can pass null into my API? How do I stop people from putting "just in case" null checks into code that doesn't need them, obscuring the logic of the program and burdening callers with yet more error handling?

In my opinion, the "modern" approach of using some sort of Maybe type is just far more productive.
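
To illustrate, here is a hedged Java sketch with made-up names (not from any particular library): the two signatures below carry very different contracts, and only the first one is enforced by the compiler.

   import java.util.Optional;

   record User(String id) {}

   interface UserRepo {
       // Absence is part of the return type; callers must handle empty:
       Optional<User> findUser(String id);

       // Classic signature; whether null can come back lives only in the docs:
       User findUserOrNull(String id);
   }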


It's a really common pattern that you want to write

   Object value = a.b.c.d;
for something like a JSON value you got over the web, where any of it might be null, but you have to write something like

   Object value = nonNull(a) ? (nonNull(a.b) ? (nonNull(a.b.c) ? a.b.c.d : null) : null) : null;
or

   Object value = null;
   try {
      value = a.b.c.d;
   } catch (NullPointerException x) {}
Just being able to write something like

   const value = a?.b?.c?.d;
is a great relief. If it is just value lookup, letting a.b return null when a is null would be fine with me, but there is something a little creepy about a.doSomething() doing nothing and returning null in that case.

Personally I am not a fan of Optional<T> and (worse) Either<A,B> in Java for various reasons. I think the official functional stuff in Java (like the streams API) is awful, but I like working with functions like

   Object value = nullSafe(a, x -> x.b, x -> x.c, x -> x.d);
The author has some affinity towards a collection-first or collection-only style of programming, and certainly I have written programs or subprograms working with dynamic structures that leaned heavily into List<T>. That is, a value which might be present or absent can be treated as a List which just happens to have 0..1 members, and x.map(y -> ...) and other functional operators do the right thing for both the 0..1 and 0..N cases. If you use the same set of operators for both of those cases, they just roll off your fingers, you are less likely to forget to put in null checks, and when you compose more complex operators out of simpler ones it tends to "just work".
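
For what it's worth, a sketch of what a three-step nullSafe could look like (hypothetical, since the comment doesn't define it; b, c and d are assumed to be public fields):

   import java.util.function.Function;

   // Each step runs only if the previous result was non-null;
   // otherwise the whole chain short-circuits to null.
   static <A, B, C, D> D nullSafe(A a, Function<A, B> f,
                                  Function<B, C> g, Function<C, D> h) {
       if (a == null) return null;
       B b = f.apply(a);
       if (b == null) return null;
       C c = g.apply(b);
       return c == null ? null : h.apply(c);
   }

   // Usage: Object value = nullSafe(a, x -> x.b, x -> x.c, x -> x.d);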


My personal preference would be to have separate types `A` and `A?` and overload field projections to work on `A?` as you'd expect. Subtyping optional.

Putting on my theory hat: while we can overload field projection for any monad (a data structure representing a result plus an effect: think a result for exceptions, or a thunk for a promise), it's not the best idea.

But for what is at least morally a commutative (order doesn't matter: FAIL ; pure = pure ; FAIL = FAIL) and idempotent (FAIL ; FAIL = FAIL) monad, it works...

Which justifies fun things, like lazy projections on promises!


... or maybe in the definition of type A you can specify what

   ((A) null).someValue
or

  ((A) null).doSomething()
does at the field or method level. I guess there is the issue of how you figure out what the type is, which of course depends on the language. It could be a better fit for C++ than all-virtual Java, but I don't want to give the C++ crowd any ideas for making that language more complex!


You could, but the issue is that now, when I'm defining a type, I need to think about its nullable behaviour.

Whereas if I instead view nullability as a way of transforming types (aka a functor) that works by reflection, this gives me some parametricity: the behaviour of my function will be the "same" regardless of which generic I slot in (though fields don't play that well with generics... something something row polymorphism something something).

Ill-formed thoughts, really. What I'm handwaving at is that I slightly agree with the anti-complexity crowd that an operator should usually do the same thing, and yet I think it harms terseness, and hence often readability and understanding, to have non-overloaded boilerplate everywhere (addf64, addu32, addi16...).

Parametricity is a nice compromise for non-primitive ops, imo.


Yeah, there are two ideals I think of in terms of programming languages:

(1) As a simple set of primitive operations that I can use to build up increasingly complex operations and on that level I value "simple". Like the all-virtual method dispatch in Java as compared to whatever it is in C# or C++. In that case I value predictability and users knowing what they're going to get.

(2) As a platform for domain-specific languages such as the tactics described in On Lisp although those tactics apply well to other languages. In that case my desires as an "author of libraries" and "users of libraries" start to diverge. With that hat on I start to think (a) application programs can consist of "islands" with radically different coding styles (say in Java part of the system is comfortable with POJOs but other parts work with dynamically structured data with no schema or a schema defined at runtime) and (b) I'd like to be able to bend the language really far.

From the viewpoint of a programmer who wants to get work and have an impact on the industry I'm usually working in commercially mature languages and finding as much of those values in them as I can.


Minor correction:

   Object value = null;
   try {
      value = a.b.c.d;
   } catch (NullPointerException x) {}
and

   const value = a?.b?.c?.d;
are not the same. The ?. operator returns `undefined`, even if the operand is `null`.

Also, Optional<T> in Java is kinda weird. It's a mechanism to enforce null checks.

I much prefer pattern matching like in F# and Rust, where you get an Option<T>:

https://fsharpforfunandprofit.com/posts/the-option-type/

The ergonomics of matching, and only having the variable in scope when it's actually Some(_), make it very natural; so much so that switching languages feels painful, as I'm constantly doubting myself. I don't need to read documentation about whether the output CAN be null/None. If it can be, it is encoded in the type system.

This is unlike Java, where many methods can still just return null.
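
A minimal Rust sketch of the point (standard library Option, nothing project-specific):

   // The inner value is only in scope inside the Some arm,
   // so forgetting the None case is a compile error.
   fn describe(opt: Option<i32>) -> String {
       match opt {
           Some(n) => format!("got {}", n),
           None => String::from("nothing"),
       }
   }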


While the "modern" (actually, 1970s, from ML) approach with option types is vastly superior, modern C compilers do let you declare that function parameters should not be NULL. In GCC or Clang, add __attribute__((nonnull)). There are still issues[1] with this, but it's sufficient for most code.

[1] One big issue is that GCC takes this hint and uses it in the optimizer, so you cannot have both a compile-time and a run-time check for a non-NULL parameter.
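
For reference, a minimal sketch of the attribute in use (my_strlen is a made-up name; the attribute itself is real GCC/Clang syntax):

   #include <stddef.h>

   /* Argument 1 must not be NULL. Passing a literal NULL triggers
      -Wnonnull; per [1], the optimizer may also delete runtime null
      checks inside the function because of this promise. */
   __attribute__((nonnull(1)))
   size_t my_strlen(const char *s);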


A function argument might be a pointer for a few reasons. Is it optional? Is it to avoid copying a huge value? Is it so it can be mutated? Or, my favorite reason, maybe the function was written to be called by another function that happens to already have that value as a pointer.

So yes, I wish there was a commonly used way to express which of these properties of a pointer will be exploited by the function.


C is such a Blub language lol

Optional is `Option<T>`

Zero-copy is `&T`

Mutation is `&mut T`

Diehard C programmers have Stockholm Syndrome for the language because they like to show off how they can be productive in a bad language. If they took a few months to learn C++ / Rust / C# / any language that has solved this, they'd have to admit that they staked a lot of their ego on a language that constantly makes them jump through hoops. Because they love showing that they're good hoop-jumpers.

But any noobie who's a year into programming will say "Oh cool, in those languages I don't have null pointer exceptions" and never learn C. Good!


If only I had a dollar for every time I’ve heard “you just need a different mindset” or “you just need to be better at C” as an excuse for why “we don’t need Rust”. It has always come from the same people who have CVEs against their own C code for doing exactly what they say they would never do in real C code.

Didn’t C++ solve this problem with references vs pointers? References shouldn’t be null; pointers can be.


It didn't make pointers safer to use, though. In Swift and some other modern languages, you can't dereference an optional (nullable) value without unwrapping it first.
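
A small Swift sketch of that (standard language behaviour, no libraries):

   let name: String? = nil
   // print(name.count)                 // compile error: value is optional
   if let n = name { print(n.count) }   // safe unwrap
   print(name?.count ?? 0)              // optional chaining with a default
   // print(name!.count)                // force-unwrap: traps at runtime if nil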


You can still fabricate a null reference IIRC.


It always involves undefined behaviour at the creation of the 'null reference'; there is no legal way to create one.


True. In any case, references solve only half of the problem, because they let you state "this function will not take a null pointer". You still cannot say "this function may take a null pointer", unless you adopt the very unusual convention that any pointer argument may be null.


I don't find that convention unusual. That's how I (and everyone at my company) writes code every day. If an argument is a pointer, that means it may be null. If it may not be null, it should be a reference.
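
Concretely, the convention looks something like this (a C++ sketch; the function names are made up for illustration):

   #include <cstdio>
   #include <string>

   // A reference parameter cannot be absent:
   void log_line(const std::string& line) {
       std::puts(line.c_str());
   }

   // A pointer parameter advertises "may be null":
   void maybe_log_line(const std::string* line) {
       if (line) log_line(*line);
   }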

And the libraries that you use?

E.g.,

   std::size_t std::strlen(const char* str);


After using TypeScript I see userland Maybe types as a workaround for a language design flaw. When the builtin type system allows you to declare nullable and non-nullable reference types and produces compile errors when you don’t check a nullable value before dereferencing it, the problems with null go away
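
For example (assuming strictNullChecks is enabled, which is what makes this work):

   function len(s: string | null): number {
       // return s.length;        // error: 's' is possibly 'null'
       return s === null ? 0 : s.length;
   }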

Option<Option<T>> works fine, T | undefined | undefined doesn't. Typescript does weird things because they wanted to make it backwards compatible with Javascript.

It's not blocked. It's just not working in Safari (which includes Chrome and Firefox on iOS).


>I don't think a crate should abdicate from modeling the error domain any more than they should abdicate from modeling the other types.

Yes, it's just harder. We usually have a pretty good idea what callers want from the happy path, but the range of things that callers may or may not want to do in case of an error is very broad.


This is an interesting perspective. I usually don't try to imagine how someone should handle the error. Instead I try to indicate the different types of errors that could occur and what type of information needs to be included to be useful to the caller. I can leave the question of what to do with that information to the caller since it's highly situational and the context necessary to make that decision lives outside of the code where I am modeling the error.


That can be tricky, because there may be a trade-off between error message quality and something else: perhaps the size of an error, code size, or even runtime performance. Another trade-off with too-detailed errors, especially when those details are part of the library API, is that they become an extensibility hazard. Unless you're extremely certain about the line that divides a specific implementation from the logical domain of errors for a particular operation, you might get that wrong. And changing the implementation in the future might require a change in the error details.

This is very hand-wavy, but I think we're talking at a very high level of abstraction here. My main point is to suggest that there is more of a balancing act here than perhaps your words suggest.


I agree that it's a balancing act. I just don't think you get to abdicate from doing that balancing act, and getting the balance wrong has consequences, just like getting the balancing act wrong in your non-error data model.


>I usually don't try to imagine how someone should handle the error.

But that's exactly what you do when you...

>... try to indicate the different types of errors that could occur and what type of information needs to be included to be useful to the caller.

Modelling the error domain means to decide which possible distinctions matter and which don't. This is not self evident. It's you imagining what users may want to do with your errors.

Is it enough for the caller to know that some string you're parsing is invalid or do they need to know what it is exactly that makes it invalid?

If you decide to just pass on everything you happen to know based on your current implementation then you are putting restrictions on future changes to that implementation, potentially including the use of third party libraries.

What if you find a shortcut or a library that makes your parser 10 times faster but doesn't provide this detailed information?

What I see a lot in Rust is that massive amounts of implementation details are leaked via errors. thiserror actually encourages this sort of implementation leak:

  #[derive(Error, Debug)]
  pub enum MyError {
      #[error(transparent)]
      Io(#[from] io::Error),
      #[error(transparent)]
      Glob(#[from] globset::Error),
  }
https://docs.rs/thiserror/latest/thiserror/


Sure, if you don't model the error domain correctly you will leak stuff that maybe you shouldn't leak. But I'm not sure that this is worse than just not exposing the types of errors that you are handling.

Your example is interesting, actually, because there are real differences between those types of errors. IO errors are different from the globset errors, and it is reasonable to assume that you would want to handle them differently. If your function can actually produce both errors, then as a consumer I would want to know that, so I can make the correct decisions in context. If you don't have a way to signal to my code that both can happen, you've deprived me of the tools to do my own proper error modeling and handling.


>Sure, if you don't model the error domain correctly you will leak stuff that maybe you shouldn't leak

You are implying that there is one correct and self-evident set of distinctions. I disagree with that. In library design, you're always making assumptions about what users may want. In my opinion, this is even harder when modelling errors, because there are so many possible ways in which callers might want to respond.

>Your example is interesting actually because there are real differences in those types of errors. IO errors are different from the globset errors.

Of course. I'm not complaining about the distinction between io errors and globbing errors here but about the fact that the globset library and its specific error type is leaked.

What if someone (say fastburningsushi) comes along and creates a glob library that absolutely smokes this one? :P
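
For what it's worth, one hedged sketch of the alternative being argued for here: an error type that names the logical failure but hides the concrete cause, so swapping glob libraries doesn't break the API (all names hypothetical):

   use std::error::Error as StdError;
   use std::fmt;

   #[derive(Debug)]
   pub enum MyErrorKind { Io, InvalidPattern }

   #[derive(Debug)]
   pub struct MyError {
       pub kind: MyErrorKind,
       // The concrete cause (io::Error, globset::Error, ...) stays private;
       // callers can walk source() but cannot match on the foreign type.
       source: Option<Box<dyn StdError + 'static>>,
   }

   impl fmt::Display for MyError {
       fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
           match self.kind {
               MyErrorKind::Io => write!(f, "I/O error"),
               MyErrorKind::InvalidPattern => write!(f, "invalid glob pattern"),
           }
       }
   }

   impl StdError for MyError {
       fn source(&self) -> Option<&(dyn StdError + 'static)> {
           self.source.as_deref()
       }
   }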


AVP may be dead but VisionOS is not. I'm pretty sure Apple smart glasses are coming.


If someone cracks “smart glasses”, that’s the next smartphone-size market and revolution, guaranteed, no question about it.

VR headsets ain’t it, but I’m convinced the reason every company is working on them, and developing AR stuff for their traditional devices (which are terrible to use for AR), is that they don’t want to still be at the starting line if someone figures out smart glasses.


This is the “answer” in plain sight, and I agree. The iPhone is the beating heart of the modern Apple empire. Tim Cook has been a vocal proponent of AR since the summer of Pokemon Go. That, combined with Meta getting traction with their Ray-Ban line, is almost certainly at the center of an overarching internal strategy at Apple to ensure they are positioned to maintain or even grow their position as end-user mobile computing form factors shift beyond the traditional smartphone. Getting the UX and app ecosystem visually ready is what ‘caused’ Liquid Glass.

