Scene_Cast2's comments | Hacker News

This post assumes C/C++ style business logic code.

Anything HPC will benefit from thinking about how things map onto hardware (or, in the case of SQL, onto data structures).

I think way too few people use profilers. If your code is slow, profiling is the first tool you should reach for. Unfortunately, the state of profiling tools outside of Nsight and Visual Studio (non-Code) is pretty disappointing.
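
As a rough illustration (a toy hotspot I made up, not from any real codebase), this is the kind of thing a sampling profiler such as Linux perf surfaces within seconds:

    // build:   g++ -O2 -g hotspot.cpp -o hotspot
    // profile: perf record -g ./hotspot && perf report
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Deliberately quadratic "business logic" -- the profiler will pin
    // nearly all samples on this one function.
    std::uint64_t count_duplicates(const std::vector<int>& v) {
        std::uint64_t dups = 0;
        for (std::size_t i = 0; i < v.size(); ++i)
            for (std::size_t j = i + 1; j < v.size(); ++j)
                if (v[i] == v[j]) ++dups;
        return dups;
    }

    int main() {
        std::vector<int> data(20000);
        for (std::size_t i = 0; i < data.size(); ++i) data[i] = int(i % 1000);
        std::printf("%llu\n", (unsigned long long)count_duplicates(data));
    }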


I don’t disagree, but profiling also won’t help you with death by a thousand indirections.


Sure, but that's mostly a myth.


So how do you see in a profiler that everything is 1.2x slower than it could be?


No-one's getting out of bed for 1.2x.


Depends. If you have a real-time system, that might very well be what you chase after. Also, why not make your program a bit faster when it costs no extra work, by structuring it the right way upfront? I mean, I wouldn't rewrite a program for this, but when I program some new part and I can avoid an indirection, why not do it? Less complexity, less (failure) state, better performance.


> If you have a real-time system, that might very well be what you chase after.

If you have a real-time system you care about the difference between real-time and not. But any given 1.2x factor is extremely unlikely to be the difference-maker.

> When I program some new part and I can avoid an indirection, why not do it? Less complexity, less (failure) state, better performance.

Well if it makes the code simpler then you should do it for that reason. But thinking about this kind of low-level performance detail at all will usually do more harm than good.


In languages that don't hide much (the way Java hides heap allocations, for example), less complexity corresponds to better performance (up to a point). Better performance is fundamentally about letting the computer do less work. When pointer indirection is explicit, the programmer is nudged towards thinking about whether they actually want it. Same with dynamic dispatch, polymorphism, and every other runtime complexity.
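
A small C++ sketch of what I mean by the indirection being explicit (illustrative types, not a benchmark): the second version does the same work with no per-element heap allocation and no virtual dispatch, and the simpler code is also the faster one.

    #include <memory>
    #include <vector>

    // Indirect: each element is a separate heap allocation, each call a vtable lookup.
    struct Shape { virtual ~Shape() = default; virtual double area() const = 0; };
    struct CirclePoly : Shape {
        double r;
        explicit CirclePoly(double r) : r(r) {}
        double area() const override { return 3.14159 * r * r; }
    };
    double total_area(const std::vector<std::unique_ptr<Shape>>& shapes) {
        double sum = 0;
        for (const auto& s : shapes) sum += s->area();
        return sum;
    }

    // Direct: contiguous values, static dispatch, trivially vectorizable.
    struct Circle { double r; };
    double total_area(const std::vector<Circle>& circles) {
        double sum = 0;
        for (const auto& c : circles) sum += 3.14159 * c.r * c.r;
        return sum;
    }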


I know games are constantly brought up as an example, but it's for good reason. Your frametime being 16.6ms instead of 20ms is the difference between a shipped feature and a cut one in a console title. And all the data and instruction cache thrashing caused by pointer chasing can (and does) absolutely make that difference.
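
For the budget math: 1000 ms / 60 fps ~ 16.6 ms and 1000 ms / 50 fps = 20 ms, so that gap is exactly 60 vs 50 fps. A sketch of the kind of per-frame loop that has to fit inside it (the particle struct is illustrative):

    #include <vector>

    // Stored by value, contiguously: a linear walk keeps the prefetcher fed.
    // A std::vector<Particle*> pointing at scattered heap blocks turns each
    // step into a potential cache miss, and that's where the milliseconds go.
    struct Particle { float x, y, vx, vy; };

    void integrate(std::vector<Particle>& ps, float dt) {
        for (auto& p : ps) {
            p.x += p.vx * dt;
            p.y += p.vy * dt;
        }
    }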


I found that taking a specific brand of Vitamin D (the Genestra D-mulsion in particular) right before bed was guaranteed to give me vivid dreams. I've had half a dozen friends try it, with every single one reporting similar results.


I checked the ingredients. That's because it contains glycerin, which is a great and safe supplement for anyone with sleeping issues, but it will cause very vivid dreams at first. D3 by itself won't have a huge effect on dreams.


> Glycerin

Do you mean glycine? I've not heard of glycerine having a positive effect on sleep, although glycine is often recommended.

Glycine is an amino acid, glycerin is a triol.

When I tried to search specifically for "glycerin" and sleep I just get a couple of reddit threads, but no real sources.


This is such a weird fact that I googled it and sure enough it is widely noted!


my bottle of Nature Made D3 also contains glycerin


Probably common since there is pretty much all upside to supplementing it.


I've heard not to take vitamin D right before bed because it will kinda keep you up. Maybe vitamin D acting as a stimulant is what gives you the extra dream awareness.


That's interesting. I know vitamin D can improve sleep quality in people who are deficient, and sleep quality helps with dream recall -- I wonder if that's the mechanism or it's something else.

A cursory search shows lots of redditors taking Vitamin D (some of them way, way too much btw) and having wild dreams too.

I take 800 IU a day and haven't noticed anything on that little.


What is way too much? I take around 4000 IU per day, which just about brings my blood levels into the “green” area in blood testing.


The Reddit poster was taking 50,000 IU a day, which is usually the amount prescribed for someone to take once a week.

Your 4000IU isn't too much. Lots of the brands you see in stores are 5k for daily supplementation.


> Your 4000IU isn't too much.

By what metric? Jeez. People, you need to get your blood checked. There is no one-size-fits-all dosage. In winter, 4000 IU/d was enough to raise my blood levels well into the excessive range.


My 4000 units were after blood testing and also after genetic testing which showed some VDR mutations that might benefit from supplementation. As mentioned in another comment, that dose brings me slightly over 30 ng/ml, so basically borderline ok.

I fully agree that supplementation should always be combined with both blood testing and also a general medical evaluation.


Current recommendations are 800 IU per day if you’re not significantly deficient. Always keep testing at least once a year or so. I took 5000 IU per day for a while, which ended up pushing me over 60 ng/ml. That’s considered too high a level and may have negative health effects.


I tested after taking 4000 IU daily for quite a while and ended up at 30.9 ng/ml, so I guess I have some buffer left. But I fully agree, regular testing is prudent when supplementing anything above common established levels.


FWIW: my functional provider recently noted low levels in my labs and I was already taking 2K IU daily. She bumped me up to 6K IU daily.


4,000 is perfect.


Given 10 minutes of sunlight the body can naturally produce the equivalent of 15,000 IU, so I think GP is likely astroturfing for that brand.


That is actually a low dose.


Yeah, I showed a really mild deficiency in my blood work, so they just suggested adding a low daily dose for me. I wouldn't expect to have had any side effects.


I was very deficient and they gave me a 50k IU per day prescription of vitamin D3 for 60 days. Sure enough I was high-normal on my next test. 800 IU is likely not enough to have any effect unless you consistently take it for years.


That's wild. I've never heard of such a high dose being prescribed daily.

Yes, I wouldn't expect to notice anything on my dose.


It was for 60 days. If they continued to take that much indefinitely it would surely cause trouble, but 60 days when starting from deep deficiency is reasonable.


Even that sounds extreme. The "Vitamin D Hammer" for people extremely deficient is 50k IU just once, not even for a temporary period.


It is high, but it's not extreme. 50k IU just once is an equivalent of about 7000 IU daily for a week, which won't really move the needle much if you're seriously deficient (in fact, it's still within what's considered a safe daily dose for healthy people - you can produce more than that from sunlight alone). You can feel free to take your "hammer" weekly, no deficiency required.

When I took >5000 IU daily for three months, I only raised the 25(OH)D level in my blood from 9 to 30 ng/ml, and there's no evidence of toxicity below 150 ng/ml.

Of course, when dealing with high doses you need to keep your levels in check, as absorption can differ between individuals.


I was very, very low; when I googled my level, 'risk of rickets' came up!


Supplementing any "large" amount of either vitamin D or B vitamins really messes with my sleep. It makes it harder to fall asleep and I get crazy dreams (and sometimes hallucinations in bed too).


Spicy food and dehydration do wonders for me!


I bought some and have not noticed any impact on my dreams.


Yep. MoE, FlashAttention, or sparse retrieval architectures for example.


Still way pricier (>2x) than Gemini 3 and Grok 4. I've noticed that the latter two also perform better than Opus 4, so I've stopped using Opus.


Don't be so sure - while I haven't tested Opus 4.5 yet, Gemini 3 tends to use way more tokens than Sonnet 4.5. Like 5-10X more. So Gemini might end up being more expensive in practice.


Yeah, comparing only tokens per dollar isn't very useful.
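
Back-of-the-envelope, with completely made-up prices and token counts (not real figures for any of these models): what matters is the per-task cost, i.e. price per token times tokens actually emitted.

    #include <cstdio>

    // Effective cost of one task = $/Mtok * tokens emitted / 1e6.
    double task_cost_usd(double usd_per_mtok, double tokens_emitted) {
        return usd_per_mtok * tokens_emitted / 1e6;
    }

    int main() {
        // Placeholder numbers: model A is cheaper per token but far chattier.
        double a = task_cost_usd(2.0,  50000);  // $0.10 per task
        double b = task_cost_usd(10.0,  8000);  // $0.08 per task
        std::printf("A: $%.2f  B: $%.2f\n", a, b);  // the pricier-per-token model wins here
    }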


For anyone looking for some IDEs to tinker around with shaders:

* shadertoy - in-browser, the most popular and easiest to get started with

* Shadron - my personal preference due to ease of use and high capability, but a bit niche

* SHADERed - the UX can take a bit of getting used to, but it gets the job done

* KodeLife - heard of it, never tried it


Cables[0] is pretty cool too. Kirell Benzi has released some impressive work using it [1].

[0]: https://cables.gl/

[1]: https://youtu.be/CltYdTVH7_A


Had a look in Mint's software manager and found this (flatpak/aur/macports/windows): https://github.com/fralonra/wgshadertoy


There's also Bonzomatic, which the demoscene uses for shader live-coding competitions:

https://github.com/Gargaj/Bonzomatic


Also on macOS (and iPadOS) it's super easy to get started with Metal shaders in Playgrounds.


For swiftUI+metal specifically: https://metal.graphics


Godot has a shader editor and the effects are updated in real time; another option.


Matmuls (and GEMM) are a hardware-friendly way to stuff a lot of FLOPS into an operation. They also happen to be really useful as a constant-step discrete version of applying a mapping to a 1D scalar field.

I've mentioned it before, but I'd love for sparse operations to be more widespread in HPC hardware and software.
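
To make the "FLOPS-dense" point concrete: a naive GEMM over N x N matrices does roughly 2*N^3 flops while touching only 3*N^2 values, so arithmetic intensity grows with N -- exactly what wide FPUs and tensor cores want, and exactly what sparse ops lack (few flops per byte moved). Illustrative code, not a tuned kernel:

    #include <cstddef>
    #include <vector>

    // C += A * B for row-major N x N matrices; C is assumed zero-initialized,
    // e.g. std::vector<double> C(N * N, 0.0).
    void gemm_naive(const std::vector<double>& A, const std::vector<double>& B,
                    std::vector<double>& C, std::size_t N) {
        for (std::size_t i = 0; i < N; ++i)
            for (std::size_t k = 0; k < N; ++k) {   // i-k-j order streams B rows from cache
                const double a = A[i * N + k];
                for (std::size_t j = 0; j < N; ++j)
                    C[i * N + j] += a * B[k * N + j];
            }
    }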


My biggest worry is that it's harder and harder to find a phone with an unlockable bootloader.


LineageOS maintains a list and you can filter for devices with an official bootloader unlock: https://wiki.lineageos.org/devices/. Buy only these devices to signal to these companies that this matters.

Notably, the OnePlus 13 and Pixel 9a, both 2025 phones, can be unlocked.


If someone wants something also quite recent and cheaper from this supported list, there is also the Motorola Edge+ (2023) with good specs. I got mine refurbished in perfect condition for just 240 USD.


Super cool. Also, this is an example of why having an open OS is awesome.


I've noticed that image models are particularly bad at modifying popular concepts in novel ways (way worse "generalization" than what I observe in language models).


Maybe LLMs always fail to generalize outside their data set, and it’s just less noticeable with written language.


This is it. They’re language models which predict next tokens probabilistically and a sampler picks one according to the desired ”temperature”. Any generalization outside their data set is an artifact of random sampling: happenstance and circumstance, not genuine substance.
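
To be concrete about the sampler bit, it's roughly this (a toy sketch with made-up logits; real models have vocabularies of ~100k tokens):

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <random>
    #include <vector>

    // Softmax-with-temperature over the model's logits, then sample one token.
    // Lower temperature sharpens the distribution; higher temperature flattens it.
    std::size_t sample_token(const std::vector<double>& logits, double temperature,
                             std::mt19937& rng) {
        const double max_logit = *std::max_element(logits.begin(), logits.end());
        std::vector<double> weights;
        weights.reserve(logits.size());
        for (double l : logits)
            weights.push_back(std::exp((l - max_logit) / temperature));  // numerically stable
        std::discrete_distribution<std::size_t> pick(weights.begin(), weights.end());
        return pick(rng);  // index of the chosen token
    }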


However: do humans have that genuine substance? Is human invention and ingenuity more than trial and error, more than adaptation and application of existing knowledge? Can humans generalize outside their data set?

A yes-answer here implies belief in some sort of gnostic method of knowledge acquisition. Certainly that comes with a high burden of proof!


Yes. Humans can perform abduction, extrapolating given information to new information. LLMs cannot, they can only interpolate new data based on existing data.


Yes


Can you elaborate on what you mean by that, and prove it?

https://journals.sagepub.com/doi/10.1177/09637214251336212


The proof is that humans do it all the time and that you do it inside your head as well. People need to stop with this absurd level of rampant skepticism that makes them doubt their own basic functions.


The concept is too nebulous to "prove", but the fact that I'm operating a machine (relatively) skillfully to write to you shows we are in fact able to generalise. This wasn't planned, we came up with this. Same with cars etc. We're quite good at the whole "tool use" thing.


Most image models are diffusion models, not LLMs, and have a bunch of other idiosyncrasies.

So I suspect it's more that lessons from diffusion image models don't carry over to text LLMs.

And the image models which are based on multimodal LLMs (like Nano Banana) seem to do a lot better at novel concepts.


But the clocks in this demo aren't images.


Yes, but they are reasoning within their dataset, which will contain multiple examples of html+css clocks.

They just struggle to produce good results because they are language models, and language models don't have great spatial reasoning skills.

Their output normally has all the elements, just not in the right place/shape/orientation.


They definitely don't completely fail to generalise. You can easily prove that by asking them something completely novel.

Do you mean that LLMs might display a similar tendency to modify popular concepts? If so that definitely might be the case and would be fairly easy to test.

Something like "tell me the lord's prayer but it's our mother instead of our father", or maybe "write a haiku but with 5 syllables on every line"?

Let me try those ... nah ChatGPT nailed them both. Feels like it's particular to image generation.


They used to do poorly with modified riddles, but I assume those have been added to their training data now (https://huggingface.co/datasets/marcodsn/altered-riddles ?)

Like, the response to "... The surgeon (who is male and is the boy's father) says: I can't operate on this boy! He's my son! How is this possible?" used to be "The surgeon is the boy's mother"

The response to "... At each door is a guard, each of which always lies. What question should I ask to decide which door to choose?" would be an explanation of how asking the guard what the other guard would say would tell you the opposite of which door you should go through.


Also, they're fundamentally bad at math. They can draw a clock because they've seen clocks, but going further requires some calculations they can't do.

For example, try asking Nano Banana to do something simpler, like "draw a picture of 13 circles." It likely will not work.


I would love to hear your (and others') opinions.

I don't have a good idea of what happened inside or what they could have done differently, but I do remember them going from a world-leading LLM AI lab to selling embeddings to enterprise.

