
How does static allocation avoid wasting memory?

Static memory allocation requires hardcoding an upper limit on the size of everything. For example, if you limit each string to at most 256 bytes, then a string with only 10 bytes wastes 246 bytes of memory.

If you limit string length to 32 bytes it wastes less memory, but then a string longer than 32 bytes cannot be handled.
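The waste described above can be sketched concretely. This is a hypothetical fixed-capacity string type (FixedString and wasted() are illustrative names, not from any real library):

```cpp
#include <cstddef>

// Sketch of the waste described above: a hypothetical fixed-capacity
// string type that reserves 256 bytes no matter how short the content is.
struct FixedString {
    char data[256];   // hardcoded upper limit
    std::size_t len;  // actual length, e.g. 10
};

// Bytes reserved but unused for a given string.
constexpr std::size_t wasted(const FixedString& s) {
    return sizeof(s.data) - s.len;
}
```

With len = 10, wasted() returns 246, exactly the figure in the comment.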


Your C++ compiler already implements a solution to that, called the short string optimization. Strings start out as small byte buffers that can easily be passed around. When they grow beyond that, the fixed buffer is swapped out for a pointer to another allocation on the heap. There's no (immediate) reason that allocation has to come from a direct call to the system allocator, though, and it usually doesn't. It can just as easily come from an allocation pool that was initialized at startup.
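A minimal sketch of that idea (a hypothetical SmallString type, not libstdc++'s or libc++'s actual layout):

```cpp
#include <cstddef>
#include <cstring>

// Small-string sketch: short contents live in an inline buffer; longer
// contents spill to a separate heap allocation, which could just as
// well come from a pool set up at startup.
class SmallString {
    static constexpr std::size_t kInline = 15;
    char inline_buf_[kInline + 1] = {};
    char* heap_ = nullptr;  // non-null once the inline buffer is outgrown
    std::size_t len_ = 0;

public:
    explicit SmallString(const char* s) : len_(std::strlen(s)) {
        if (len_ <= kInline) {
            std::memcpy(inline_buf_, s, len_ + 1);
        } else {
            heap_ = new char[len_ + 1];  // could be a pool allocation instead
            std::memcpy(heap_, s, len_ + 1);
        }
    }
    SmallString(const SmallString&) = delete;
    SmallString& operator=(const SmallString&) = delete;
    ~SmallString() { delete[] heap_; }

    const char* c_str() const { return heap_ ? heap_ : inline_buf_; }
    bool on_heap() const { return heap_ != nullptr; }
};
```

A short string like "hi" stays inline; anything longer than 15 bytes goes to the heap.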

Even if you needed to hardcode upper size limits, which your compiler already does to some extent (the C/C++ standards anticipate this by setting minimum limits for certain things like string length), you wouldn't actually pay the full price on most systems because of overcommit. There are other downsides to this depending on implementation details like how you reclaim memory and spawn compiler processes, so I'm not suggesting it as a good idea. It's just possible.


> if you limit each string to at most 256 bytes, then a string with only 10 bytes wastes 246 bytes of memory.

No? Unless you limit each string to be exactly 256 bytes, but that's silly.

> If you limit string length to 32 bytes it wastes less memory, but then a string longer than 32 bytes cannot be handled.

Not necessarily. The early compilers/linkers routinely did the "only the first 6/8 letters of an identifier are meaningful" schtick: the rest was simply discarded.
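That behavior is easy to sketch (significant() is a hypothetical helper, not any real linker's API):

```cpp
#include <cstddef>
#include <string>

// Old-toolchain behavior: only the first n characters of an identifier
// are significant; the rest is simply discarded.
std::string significant(const std::string& identifier, std::size_t n = 6) {
    return identifier.substr(0, n);
}
```

The consequence: two long names that agree on the first 6 characters, like "counter_a" and "counter_b", both collapse to "counte" and collide, while short names pass through unchanged.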


Unfortunately the web API doesn't yet allow drawing multi-line text on a canvas. To draw multi-line text on a canvas you need a layout library.
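The core of what such a layout library does is greedy word wrapping, then drawing each resulting line separately. A language-agnostic sketch, measuring width in characters (an assumption; on a real canvas you would measure with ctx.measureText):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Greedy word wrap: pack words onto a line until the next word would
// exceed max_width, then start a new line.
std::vector<std::string> wrap(const std::string& text, std::size_t max_width) {
    std::istringstream in(text);
    std::vector<std::string> lines;
    std::string word, line;
    while (in >> word) {
        if (line.empty()) {
            line = word;
        } else if (line.size() + 1 + word.size() <= max_width) {
            line += " " + word;
        } else {
            lines.push_back(line);
            line = word;
        }
    }
    if (!line.empty()) lines.push_back(line);
    return lines;
}
```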


Many problems in the article are specific to old versions of iOS, which only run on old iPhones. Most old-iPhone users are not potential paying customers. iOS needs to be supported, but old versions of iOS don't.


I heard that Jane Street uses OCaml and said something similar: although there are few OCaml developers, they are on average better, so hiring is easier.


Key sentence:

> Because you weren’t suffering from too much work, you were suffering from too little truly important work.


Not being able to store a mutable ref in another type reduces expressiveness. The doc already mentions that it cannot allow an Iterator that doesn't consume the container.

https://github.com/rue-language/rue/blob/trunk/docs/designs/...

No silver bullet again


Just to be clear, these proposals are basically scratch notes I have barely even validated; I just wanted to be able to iterate on some text.

But yes, there is inherently going to be some expressiveness loss. There is no silver bullet, that's right. The idea is that some users may be okay with that loss in order to gain other things.


For future readers, please use this link: https://github.com/rue-language/rue/blob/b0867ccff77ee9957d6...

I am going to be cleaning these up, as they don't necessarily represent things I actually want to do in exactly this way. My idea was to dump some text and iterate on it, but I think that's actually not great given some other process changes I'm making, so I want to start fresh.


Modern large productivity software (including IDEs) is often "fragile".

Sometimes some configuration is wrong and the software misbehaves, but you don't know which configuration.

Sometimes it relies on other software installed on the system, and if you install an incompatible version it malfunctions without telling you about the incompatibility.

Sometimes the IDE itself has random bugs.

A lot of time is spent working around IDE issues.


Building for an FPGA shouldn’t be any harder than building for Cortex MCUs, and there are lots of free/OSS toolchains and configurations for those.


Compiling RTL to run on an FPGA is way more complicated than compiling code to run on a CPU. Typically it has to meet timing, which requires detailed knowledge of logic placement. I'm not saying that's impossible, just that it's more complicated.


> shouldn’t

Is doing so much heavy lifting here that I need to ask: how much FPGA configuration have you done before?


Very little, just student projects in undergrad.

So yes, in that sense I'm talking out of my ass. But perhaps you can enlighten me as to what makes building FPGA firmware different from building MCU firmware.


There is a nuanced distinction between "fundamentally working faster" and "being pushed to work faster".

The first is what to optimize for. The second, "being pushed to work faster", often produces bad results.

https://x.com/jamonholmgren/status/1994816282781519888

> I’ll add that there’s also a massive habits and culture problem with shipping slop that is very hard to eradicate later.

> It’s like a sports team, losing on purpose so they can get better draft picks for the next year. It rarely works well because that loser mentality infects the whole organization.

> You have to slow down, do it right, and then you can speed up with that foundation in place.


This reminds me of Makimoto’s Wave:

https://semiengineering.com/knowledge_centers/standards-laws...

There is a constant cycle between domain-specific designs with algorithms hardcoded in hardware, and programmable, flexible designs.


It's also known as Sutherland's Wheel of Reincarnation:

http://www.cap-lore.com/Hardware/Wheel.html


I put some of the article's content into Pangram. Pangram says it's AI.

https://www.pangram.com/history/282d7e59-ab4b-417c-9862-10d6...

The author's writing style is really similar to AI's. AI has, in some sense, already passed the Turing test. AI detectors are not that trustworthy (but still useful).

