This post assumes C/C++ style business logic code.
Anything HPC will benefit from thinking about how things map onto hardware (or, in the case of SQL, onto data structures).
I think way too few people use profilers. If your code is slow, profiling is the first tool you should reach for. Unfortunately, the state of profiling tools outside of NSight and Visual Studio (non-Code) is pretty disappointing.
Depends. If you have a real-time system, that might very well be what you chase after. Also, why not make your program a bit faster when it costs you no extra work, by starting it the right way upfront? I mean, I wouldn't rewrite a program for this, but when I program some new part and I can avoid an indirection, why not do it? Less complexity, less (failure) state, better performance.
> If you have a real-time system, that might very well be what you chase after.
If you have a real-time system you care about the difference between real-time and not. But any given 1.2x factor is extremely unlikely to be the difference-maker.
> When I program some new part and I can avoid an indirection, why not do it? Less complexity, less (failure) state, better performance.
Well if it makes the code simpler then you should do it for that reason. But thinking about this kind of low-level performance detail at all will usually do more harm than good.
In languages that don't hide much (unlike, say, heap allocations in Java), less complexity corresponds to better performance (up to a point), because better performance is fundamentally about letting the computer do less work. When pointer indirection is explicit, the programmer is nudged towards thinking about whether they actually want it. Same with dynamic dispatch, polymorphism, and every other runtime complexity.
I know games are constantly brought up as an example, but it's for good reason. Your frametime being 16.6ms instead of 20ms is the difference between a shipped feature and a cut one in a console title. And all the data and instruction cache thrashing caused by pointer chasing can (and does) absolutely make that difference.
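As a concrete (made-up) illustration of the indirection being discussed: the two loops below do the same work, but the second one adds a pointer hop per element, and that is exactly where the data-cache thrashing comes from.

    #include <memory>
    #include <vector>

    // Made-up example: the same update loop with and without an extra
    // level of indirection.
    struct Particle {
        float x, y, z;
        float vx, vy, vz;
    };

    // Contiguous storage: the loop walks memory linearly, so the
    // prefetcher and the data cache work in your favor.
    void update_flat(std::vector<Particle>& particles, float dt) {
        for (Particle& p : particles) {
            p.x += p.vx * dt;
            p.y += p.vy * dt;
            p.z += p.vz * dt;
        }
    }

    // One pointer hop per element: each access can land on a different
    // cache line, which is where the "pointer chasing" cost shows up.
    void update_indirect(std::vector<std::unique_ptr<Particle>>& particles, float dt) {
        for (auto& p : particles) {
            p->x += p->vx * dt;
            p->y += p->vy * dt;
            p->z += p->vz * dt;
        }
    }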
I found that taking a specific brand of Vitamin D (the Genestra D-mulsion in particular) right before bed was guaranteed to give me vivid dreams. I've had half a dozen friends try it, with every single one reporting similar results.
I checked the ingredients, and that's because it contains glycerin, which is a great and safe supplement for anyone with sleeping issues, but it will cause very vivid dreams at the start. D3 by itself will not have a huge effect on dreams.
I've heard not to take vitamin D right before bed because it will kinda keep you up. Maybe the vitamin D acting as a stimulant is what gives you the extra dream awareness.
That's interesting. I know vitamin D can improve sleep quality in people who are deficient, and sleep quality helps with dream recall -- I wonder if that's the mechanism or it's something else.
A cursory search shows lots of redditors taking Vitamin D (some of them way, way too much btw) and having wild dreams too.
I take 800IU a day and haven't noticed anything on that little.
By what metric? Jeez. People, you need to get your blood checked. There is no one-size-fits-all dosage. In winter, 4000 IU/d was enough to raise my blood levels well into the excessive range.
My 4000 units were after blood testing and also after genetic testing which showed some VDR mutations that might benefit from supplementation. As mentioned in another comment, that dose brings me slightly over 30 ng/ml, so basically borderline ok.
I fully agree that supplementation should always be combined with both blood testing and also a general medical evaluation.
Current recommendations are 800 IU per day if you’re not significantly deficient. Always keep testing at least once a year or so. I took 5000 IU per day for a while, which ended up pushing me over 60 ng/ml. That’s considered too high a level and may have negative health effects.
I tested after taking 4000 IU daily for quite a while and ended up at 30.9 ng/ml, so I guess I have some buffer left. But I fully agree, regular testing is prudent when supplementing anything above common established levels.
Yeah, I showed a really mild deficiency in my blood work, so they just suggested adding a low daily dose for me. I wouldn't expect to have had any side effects.
I was very deficient and they gave me 50k IU per day of prescription vitamin D3 for 60 days. Sure enough, I was high-normal on my next test. 800 IU is likely not enough to have any effect unless you consistently take it for years.
It was for 60 days. If they continued to take this much indefinitely it would surely cause troubles, but 60 days when starting from deep deficiency is reasonable.
It is high, but it's not extreme. 50k IU just once is the equivalent of about 7000 IU daily for a week, which won't really move the needle much if you're seriously deficient (in fact, it's still within what's considered a safe daily dose for healthy people - you can produce more than that from sunlight alone). You can feel free to take your "hammer" weekly, no deficiency required.
When I took >5000 IU daily for three months, I only raised my 25(OH)D blood level from 9 to 30 ng/ml, and there's no evidence of toxicity below 150 ng/ml.
Of course, when dealing with high doses you need to keep your levels in check, as absorption can differ between individuals.
Supplementing any "large" amount of either Vitamin D or B vitamins really messes with my sleep. It makes it harder to fall asleep, and I get crazy dreams (and sometimes hallucinations in bed too).
Don't be so sure - while I haven't tested Opus 4.5 yet, Gemini 3 tends to use way more tokens than Sonnet 4.5. Like 5-10X more. So Gemini might end up being more expensive in practice.
Matmuls (and GEMM) are a hardware-friendly way to stuff a lot of FLOPS into an operation. They also happen to be really useful as a constant-step discrete version of applying a mapping to a 1D scalar field.
I've mentioned it before, but I'd love for sparse operations to be more widespread in HPC hardware and software.
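To make the FLOPS-density point above concrete, here's a deliberately naive GEMM sketch (no blocking, no SIMD, names are just illustrative): it performs roughly 2*n^3 floating-point operations over only 3*n^2 values, and that high arithmetic intensity is exactly what keeps the hardware's FLOP units busy.

    #include <cstddef>
    #include <vector>

    // Naive row-major GEMM sketch: C = A * B for n x n matrices.
    // Roughly 2*n^3 FLOPs over 3*n^2 values, so the arithmetic units can
    // stay busy instead of waiting on memory.
    void naive_gemm(const std::vector<float>& A,
                    const std::vector<float>& B,
                    std::vector<float>& C,
                    std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) {
            for (std::size_t j = 0; j < n; ++j) {
                float acc = 0.0f;
                for (std::size_t k = 0; k < n; ++k) {
                    acc += A[i * n + k] * B[k * n + j];
                }
                C[i * n + j] = acc;
            }
        }
    }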
LineageOS maintains a list, and you can filter for devices with official bootloader unlock: https://wiki.lineageos.org/devices/. Buy only these devices to signal to these companies that this matters.
Notably, the OnePlus 13 and Pixel 9a, both 2025 phones, can be unlocked.
If someone wants something still fairly recent and cheaper from this supported list, there is also the Motorola Edge+ (2023) with good specs. I got mine refurbished in perfect condition for just 240 USD.
I've noticed that image models are particularly bad at modifying popular concepts in novel ways (way worse "generalization" than what I observe in language models).
This is it. They’re language models which predict next tokens probabilistically and a sampler picks one according to the desired “temperature”. Any generalization outside their data set is an artifact of random sampling: happenstance and circumstance, not genuine substance.
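For anyone unfamiliar with that step, here's a rough sketch of what "a sampler picks one according to temperature" means (illustrative only, not any particular model's decoder):

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <random>
    #include <vector>

    // Rough sketch of temperature sampling over next-token logits.
    // Assumes temperature > 0; real decoders add top-k/top-p and more.
    int sample_next_token(const std::vector<float>& logits,
                          float temperature, std::mt19937& rng) {
        // Lower temperature sharpens the distribution (more deterministic),
        // higher temperature flattens it (more random).
        float max_logit = *std::max_element(logits.begin(), logits.end());
        std::vector<float> probs(logits.size());
        float sum = 0.0f;
        for (std::size_t i = 0; i < logits.size(); ++i) {
            probs[i] = std::exp((logits[i] - max_logit) / temperature);
            sum += probs[i];
        }
        for (float& p : probs) p /= sum;

        // Draw one token index according to the resulting distribution.
        std::discrete_distribution<int> dist(probs.begin(), probs.end());
        return dist(rng);
    }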
However: do humans have that genuine substance? Is human invention and ingenuity more than trial and error, more than adaptation and application of existing knowledge? Can humans generalize outside their data set?
A yes-answer here implies belief in some sort of gnostic method of knowledge acquisition. Certainly that comes with a high burden of proof!
Yes. Humans can perform abduction, extrapolating given information to new information. LLMs cannot; they can only interpolate new data based on existing data.
The proof is that humans do it all the time and that you do it inside your head as well. People need to stop with this absurd level of rampant skepticism that makes them doubt their own basic functions.
The concept is too nebulous to "prove", but the fact that I'm operating a machine (relatively) skillfully to write to you shows we are in fact able to generalise. This wasn't planned; we came up with it. Same with cars, etc. We're quite good at the whole "tool use" thing.
Yes, but they are reasoning within their dataset, which will contain multiple examples of HTML+CSS clocks.
They are just struggling to produce good results because, being language models, they don't have great spatial reasoning skills.
Their output normally has all the elements, just not in the right place/shape/orientation.
They definitely don't completely fail to generalise. You can easily prove that by asking them something completely novel.
Do you mean that LLMs might display a similar tendency to modify popular concepts? If so that definitely might be the case and would be fairly easy to test.
Something like "tell me the lord's prayer but it's our mother instead of our father", or maybe "write a haiku but with 5 syllables on every line"?
Let me try those ... nah ChatGPT nailed them both. Feels like it's particular to image generation.
Like, the response to "... The surgeon (who is male and is the boy's father) says: I can't operate on this boy! He's my son! How is this possible?" used to be "The surgeon is the boy's mother"
The response to "... At each door is a guard, each of which always lies. What question should I ask to decide which door to choose?" would be an explanation of how asking the guard what the other guard would say would tell you the opposite of which door you should go through.
Also, they're fundamentally bad at math. They can draw a clock because they've seen clocks, but going further requires some calculations they can't do.
For example, try asking Nano Banana to do something simpler, like "draw a picture of 13 circles." It likely will not work.
I don't have a good idea of what happened inside or what they could have done differently, but I do remember them going from a world-leading LLM AI lab to selling embeddings to enterprise.