Hacker News | baq's comments

Real time is defined as ‘no slower than some critical speed’; in the case of conversation with humans that's around 10 tok/s, including speech synthesis.
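A minimal sketch of that definition, assuming the 10 tok/s floor from the comment and modeling speech synthesis as a per-token overhead (the function name and the 0.05 s/tok figure are illustrative, not from any real system):

```python
# "Real time" = no slower than some critical speed; for conversation,
# assume a floor of 10 tok/s after speech-synthesis overhead is included.
REAL_TIME_TOK_S = 10

def is_real_time(decode_tok_s: float, tts_overhead_s_per_tok: float = 0.0) -> bool:
    # Effective rate once per-token synthesis cost is amortized in.
    effective = 1.0 / (1.0 / decode_tok_s + tts_overhead_s_per_tok)
    return effective >= REAL_TIME_TOK_S

print(is_real_time(30))        # True: 30 tok/s is comfortably real time
print(is_real_time(12, 0.05))  # False: 1/12 + 0.05 s/tok ≈ 7.5 tok/s effective
```

The point of the overhead term is that raw decode speed isn't enough: a model decoding at 12 tok/s already misses the threshold once synthesis cost is counted.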

At this point of the timeline compute is cheap; it's RAM that's basically unavailable.

you can have them review each other's work, too.

dark mode is easier on battery for OLEDs but not on LCDs, where the backlight is always on and black needs the pixel to actively block light (in a typical normally-white panel, white is the off state and black is the driven one).

black on white is easier to read than white on black full stop, no astigmatism necessary.

https://esa.org/communication-engagement/2018/08/03/resource...

ambient lighting is highly recommended to avoid straining your vision.


The link you shared literally says neither is better and it depends on the person.

like "Also, in every color combination surveyed, the darker text on a lighter background was rated more readable than its inverse (e.g. blue text on white background ranked higher than white text on blue background)"?

yes, it's all preference and vision is subjective, but being surprised in this context that dark mode isn't best is... weird.


The $20 plan for CC is good enough for 10-20 minutes of opus every 5h and you’ll be out of your weekly limit after 4-5 days if you sleep during the night. I wouldn’t be surprised if Anthropic actually makes a profit here. (Yeah probably not, but they aren’t burning cash.)
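A hedged back-of-envelope for the numbers above (all figures are the comment's own estimates, not Anthropic's published limits): ~15 minutes of opus per 5-hour window, awake ~16 hours a day, limit hit after ~4.5 days:

```python
# Back-of-envelope: how much opus time the comment's figures imply per week.
# All constants are assumptions taken from the comment, not official limits.
opus_min_per_window = 15        # midpoint of "10-20 minutes of opus every 5h"
window_h = 5
awake_h_per_day = 16            # "if you sleep during the night"

windows_per_day = awake_h_per_day // window_h            # usable windows/day
daily_opus_min = windows_per_day * opus_min_per_window   # minutes of opus/day
weekly_budget_min = daily_opus_min * 4.5                 # "out ... after 4-5 days"

print(windows_per_day, daily_opus_min, weekly_budget_min)  # 3 45 202.5
```

So the $20 plan works out to roughly three usable windows a day and on the order of three hours of opus a week before the weekly cap bites.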

If the trend line holds you’ll be very, very surprised.

Open source models hosted by independent providers (or even yourself, which if the bubble pops will be affordable if you manage to pick up hardware on fire sales) are already good enough to explain most code.

There’s little reason to use sonnet anymore. Haiku for summaries, opus for anything else. Sonnet isn’t a good model by today’s standards.

That was the whole point for humans, too.

Except it doesn't work for humans, and in the same way it won't work for LLMs.

If you use too many microservices, you get global state, race conditions, and much more complex failure modes again, and no human/LLM can effectively reason about those. We somewhat have tools to do that in the case of monoliths, but if one gets to this point with microservices, it's game over.
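A minimal sketch of the kind of race that reappears once state is spread across services: two "services" each do a read-modify-write against a shared store with no coordination, and one update is lost. The in-process dict standing in for a shared datastore is an illustrative assumption, not any particular system:

```python
# Two "services" each read shared state, compute, then write back.
# Without coordination (a lock, a transaction, compare-and-set), the
# interleaving below silently loses one of the two increments.
store = {"count": 0}

def read() -> int:
    return store["count"]

def write(v: int) -> None:
    store["count"] = v

a = read()      # service A reads 0
b = read()      # service B reads 0 -- already stale
write(a + 1)    # A writes 1
write(b + 1)    # B writes 1, clobbering A's update

print(store["count"])  # 1, not 2: a lost update
```

Inside a monolith the same bug exists, but a lock or a transaction fixes it locally; across services you need distributed coordination, which is exactly the harder failure model the comment is pointing at.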


In a sense humans are fancy autocomplete, too.

I actually don't disagree with this sentiment. The difference is we've optimised for autocompleting our way out of situations we currently don't have enough information to solve, and LLMs have gone the opposite direction of over-indexing on too much "autocomplete the thing based on current knowledge".

At this point I don't doubt that whatever human intelligence is, it's a computable function.


You know that language had to emerge at some point? LLMs can only do anything because they have been fed on human data. Humans actually had to collectively come up with languages /without/ anything to copy, since there was a time before language.
