orbital-decay's comments


Alignment is indeed a red herring, but the article conflates alignment training of the model itself with prompting a bot built on that model. Musk's manipulations of Grok are definitely the latter.

Get COB strips; good ones are essentially dotless.

You'll still have the contrast between the strip and the background, though. You have to hide the strip in a clever way; there are all sorts of hacks and decorative elements out there for that. However, that's also why anything done entirely with LED strips looks kind of cheesy: there's always a visible gradient. Designing without gradients is the tricky part, and it usually means you need spotlights and other types of sources for the main lighting.


They definitely didn't. They demonstrated their stuff long before OAI did, and the models were nothing like each other.

Taking control away from you is a corporate decision, not an inherent property of compartmentalization.

Yeah, I think it sometimes even repeats Gemini's injected platform instructions. It's pretty curious, because a) Gemini uses something closer to "chain of draft" and never naturally repeats them in full, only the relevant part, and b) these instructions don't seem to have any effect on GLM: it repeats them in the CoT but never follows them. That's a real problem with any CoT trained through RL (the meaning diverges from natural language due to reward hacking). Is it possible they used it in the initial SFT pass to improve CoT readability?

What's the reason for the seller to know who I am?

Any normal pre-total-surveillance store would've had zero issues selling me something for cash if I walked in wearing a ski mask.


That is not remotely true, dude. Some stores probably would've been OK with it, but for the past 40 years or more, wearing a ski mask around has carried the connotation of "this person is up to no good". A lot of stores would've had a problem with your hypothetical purchase for quite some time now.


Let's never mind the ski mask. For thousands of years, a stranger could walk into a store and buy something for cash. The store didn't know their name, had no surveillance cameras or computers (those didn't exist), and generally wouldn't even remember the purchase had happened if asked about it six months later.

The baked-in-assumptions observation is basically the opposite of the impression I get from watching Gemini 3's CoT. With maximum reasoning effort it's able to break out of the wrong route by rethinking the strategy. For example, I gave it an onion address without the .onion part and told it to figure out what the string means. All reasoning models, including Gemini 2.5 and 3, assume it's a puzzle or a cipher (because that's what they're trained on) and start endlessly applying different algorithms to no avail. Gemini 3 Pro is the only model that breaks the initial assumption after running out of ideas ("Wait, the user said it's just a string, what if it's NOT obfuscated") and correctly identifies the string as an onion address. My guess is they trained it on simulations to enforce the anti-jailbreaking commands injected by Model Armor, as its CoT is incredibly paranoid at times. I could be wrong, of course.
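
To make "identify the string as an onion address" concrete (a sketch of mine, not anything the model produced): a v3 onion address with the .onion suffix stripped is just 56 base32 characters encoding pubkey(32) || checksum(2) || version(1), so the string is mechanically recognizable once you drop the puzzle assumption. Roughly:

    import base64
    import re

    def looks_like_onion_v3(s: str) -> bool:
        # v3 addresses are 56 lowercase base32 chars; the decoded
        # 35 bytes are pubkey(32) || checksum(2) || version(1).
        if not re.fullmatch(r"[a-z2-7]{56}", s):
            return False
        raw = base64.b32decode(s.upper())  # 35 bytes
        return raw[34] == 3  # version byte must be 3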


I've had some weird "thinking outside the box" behavior like this. I once asked 3 Pro what Ozzy Osbourne was up to. The CoT was a journey, I can tell you! His death isn't in its training data, though it did know he was planning a tour. It had a real struggle trying to reconcile the "suspicious search results", even questioning whether it was fake news, or whether it was running against a simulation (!), and determining it wasn't going to fall for my "test".

It did ultimately decide Ozzy was alive. I pushed back on that, and it instantly corrected itself, partially blaming my query ("what is he up to") for being formulated as if he were alive.


Odd, mine didn't do anything interesting.

It's pretty slow to converge, though, as it needs enough data points to cross some certainty threshold. That's especially true in the context of VPN exit points, where the traffic comes from all over the world.
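
To put the "certainty threshold" in concrete terms (illustrative numbers of my own, not real measurements): treat each data point as a small log-likelihood nudge and count how many consistent observations it takes before the posterior odds cross a threshold:

    import math

    def observations_needed(p_match: float, p_background: float,
                            confidence: float = 0.99) -> int:
        # Evidence contributed per observation, as a log-likelihood ratio.
        llr = math.log(p_match / p_background)
        # Log posterior odds to reach, starting from even prior odds.
        target = math.log(confidence / (1 - confidence))
        return math.ceil(target / llr)

    print(observations_needed(0.02, 0.01))   # strong signal: 7 points
    print(observations_needed(0.011, 0.01))  # weak signal: 49 points

With VPN exits mixing traffic from everywhere, p_match barely exceeds p_background, which is exactly why convergence is slow.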


I'm surprised that sampling bias is not in the list. Is it possible that these fossils simply haven't been found yet?


That was my first conclusion, too - the absence of something in the fossil record does not mean that it was not there, just that it did not fossilise.

For one, predators in general often have a more gracile build and a high power-to-weight ratio, and don't fossilise well. They're also much rarer than herbivores, of course. This means the signal in the fossil record is much weaker, and any deviation looks much greater, since you have to turn up the gain to get meaningful data.

Perhaps cats during that period were predominantly dry desert hunters - it is a common niche for Felidae - and that environment produces *checks wristwatch* few fossils.

Perhaps there was another critter extant during that period that just found the crunch of cat bones irresistible, and they all got scavenged.

Perhaps they developed culture and cremated their dead.

Dunno. All that said, the E-O was a big transition, and it likely did result in gigadeaths; predators would have been harder hit, ultimately and proportionally.


Similar thoughts crossed my mind as well. But then there's the repopulation by a species that can be traced back to Asia. The pre-gap felines just aren't part of the post-gap set. If some had been descendants of an endemic low-fossilization branch, chances are they'd be connected across the gap through similarities.


I think the postulation is that the cats would have been so abundant that it shouldn't be hard to find their fossils.


have you tried turning the computer off and on?

