Hacker News | throwawaymaths's comments

too many features. tryhard vibe


We're implementing "editions" in order to remove some features that don't deliver enough value.


how is the community doing these days? stable? growing?


this time in lua. cloudflare can't catch a break


Or they're not thoroughly testing changes before pushing them out. As I've seen some others say, CloudFlare at this point should be considered critical infrastructure. Maybe not like power but dang close.


My power goes out every Wednesday around noon, and usually when the weather is bad. In a major US metro.

I hope cloudflare is far more resilient than local power.


The 'rewrite it in lua' crowd are oddly silent now.


How do you know?


[flagged]


Did you really go through the trouble of creating an account just to spit trash? Damn!


Anyone know why Lua? Or is it perhaps a Redis script in Lua?


Figured it out, it's probably an nginx Lua module


Time to use boring languages such as Java and Go.


have you ever deployed an erlang system?

the biggest bugbear for concurrent systems is mutable shared data. because erlang is inherently distributable you basically give up on shared mutation, so in concurrent erlang systems you mostly don't even try.

if for no other reason than that erlang is saner than go for concurrency

like goroutines aren't inherently cancellable, so you see go programmers build out kludgey context plumbing to handle those situations, and debugging can get very tricky


ironically with zig most of the things that violate expectations are keywords. so you run head first into a whole ton of them when you first start (but at least it doesn't compile) and then you have a very solid mental model of what's going on.


yeah, it's a better c, but wouldn't it be nice if c had standardized fat pointers, so that when you move from project to project you don't have to triple-check the semantics? that, for example, and say 50+ "learnings" from 40 years of c that are canonized and first class in the language + stdlib


What to expect from WG14, when even one of C's authors could not make it happen?

Notice how none of them stayed involved with WG14; they just did their own thing with C in Plan 9. With Inferno, C was only used for the kernel, with everything else done in Limbo, culminating in minor contributions to Go's first design.

People who worship UNIX and C should spend some time learning that the authors moved on, trying to fix the flaws they felt their original work suffered from.


and you can have good races too (where the order doesn't matter)


Yeah, it's worth emphasizing: if I spawn two threads, and both of them print a message when they finish (and don't interact with each other in any other way), that's technically a race condition. The output of my program depends on the order in which these threads complete. The question is whether it's a race that I care about.


'const expected = [_]u32{ 123, 67, 89, 99 };'

constant array of u32, and let the compiler figure out how many of 'em there are (i reserve the right to change it in the future)


'const expected: []const u32 = &.{ 123, 67, 89, 99 };' also works.


why would you need batteries included? the ai can code most integrations (from scratch, if you want), so if you need something slightly off the beaten path it's easy


I think the logic can be applied to humans as well as AI:

Sure, the AI _can_ code integrations, but it now has to maintain them, and might be tempted to modify them when it doesn't need to (leaky abstractions), adding cognitive load (in LLM parlance: "context pollution") and leading to worse results.

Batteries-included = AI and humans write less code, get more "headspace"/"free context" to focus on what "really matters".

As a very very heavy LLM user, I also notice that projects tend to be much easier for LLMs (and humans alike) to work on when they use opinionated well-established frameworks.

Nonetheless, I'm positive in a couple of years we'll have found a way for LLMs to be equally good, if not better, with other frameworks. I think we'll find mechanisms to have LLMs learn libraries and projects on the fly much better. I can imagine crazy scenarios where LLMs train smaller LLMs on project parts or libraries so they don't get context pollution but also don't need a full-retraining (or incredibly pricey inference). I can also think of a system in line with Anthropic's view of skills, where LLMs very intelligently switch their knowledge on or off. The technology isn't there yet, but we're moving FAST!

Love this era!!


> As a very very heavy LLM user, I also notice that projects tend to be much easier for LLMs (and humans alike) to work on when they use opinionated well-established frameworks.

i have the exact opposite experience. it's far better to have llms start from scratch than use batteries that are just slightly the wrong shape... the llm will go in circles and hallucinate nonexistent solutions.

that said, i have had a lot of success having llms write opinionated (my opinions) packages that are shaped in the way that llms like (very little indirection, breadcrumbs to follow for code paths etc), and then have the llm write its own documentation.


Maybe if they could learn how to switch their intelligence on, that would help more?


What’s more likely to have a major security problem – Django’s authentication system or something custom an LLM rolled?


I don't even particularly care for Django, but darned if I'd want to reimplement on my own any of the great many problems they've thoroughly solved. It's so widely used that any weird little corner case you can think of has already been addressed. No way I'd start over on that.


It's literally the opposite.

Why would you generate a sloppy version of core systems that should be included by default in every project?

It makes absolutely zero sense to generate auth/email sending/bg task integration/etc.


Because then every app is a special snowflake.

At some point you'll need to understand things to fix it, and if it's laid out in a standard way you'll get further, quicker.


> suspend/resume

special @asyncSuspend and @asyncResume builtins; they will be the low-level detail you can build an evented io with.

new Io is an abstraction over the higher-level details that are common between sync, threaded, and evented, so you shouldn't expect the suspension mechanism to be in it.


Oh really? That's perfect.


IDK, not being able to produce a good product in a corpo environment sure sounds like a competency issue.

> how hard it is for top performers to make change

then you're not a top performer anymore?

seems pretty straightforward

> they must be stupid

one can be not stupid and still not competent

