
Sounds like this should live in Wikipedia somewhere on https://en.wikipedia.org/wiki/Ellipse , or maybe on a related, more CS-focused page.

It also could happen because tech companies have optimized their products to maximize the amount of time that people spend on them, often in ways that directly result in a worse user experience (by showing ads instead of the most relevant search results, for example).

What you're saying makes no sense. If the experience with A were really worse than with B, people would stay with B.

The original poster said “more useful”, not “better”, so you’re already arguing something different than what was said. I might spend more time with something precisely because it’s less time-efficient, and that inefficiency is part of what makes it less useful.

Regarding your argument of “better” you seem to be arguing by definition.

Edit: I now realize you are the original poster who said “more useful”, so why did you change it?


More useful is one of many ways of being better. What are you talking about?

You vote with your feet. If all you could do was follow, the world would be exactly as simple as you make it out to be.

If you wrote things for your own website, you would make more of an effort, and it would ideally find an audience that enjoys your worldview or your insights into your topics.

It would be great to lure you into that experience. HN is a terrible dating agency. Gathering downvotes here is the opposite of making friends. It is, however, great for discovering authors like Henry.

He could have spent his time complaining on X about how bad it is.


If you’re arguing that there are different ways of being better, then your argument falls apart even further, since you might choose a worse option because it is better in some way…

No, this is not at all a given. There could be switching costs that cause people to stay on a product that is actually worse. Users also simply might be unaware of alternatives or that they are better. It's not hard to imagine any number of other reasons why in our imperfect world there is not perfectly elastic competition.

The term "functional programming" is so ill-defined as to be effectively useless in any kind of serious conversation. I'm not aware of any broadly accepted consensus definition. Sometimes people want to use this category to talk about purity and control of side effects and use the term "functional programming" to refer to that. I would advocate the more targeted term "pure functional programming" for that definition. But in general I try to avoid the term altogether, and instead talk about specific language features / capabilities.


> The term "functional programming" is so ill-defined as to be effectively useless in any kind of serious conversation.

This is important. I threw my hands up and gave up during the height of the Haskell craze. You'd see people here saying things like LISP wasn't real FP because it didn't match their Haskell-colored expectations. Meanwhile for decades LISP was *the* canonical example of FP.

Similar to you, I now talk about specific patterns and concepts instead of calling a language functional. And since so many of these patterns and concepts have found their way into mainstream languages, that approach becomes even more useful.


To your point, Lispers like the author of Let Over Lambda have specifically called Lisp non-functional.


To add a grain of salt: some of the Lisp world is not functional; a lot of code is straight-up imperative/destructive. But a lot of Lisp culture did tend toward applicative idioms and a function-oriented style, even without Haskell's static, explicit generic type system.


Sure, but that's part of my point in agreeing that definitions of "functional programming" are muddy at best. If one were to go back to say 1990 and poll people to name the first "functional programming" language that comes to mind, I'd wager nearly all of them would say something like LISP or Scheme. It really wasn't until the late aughts/early teens when that started to shift.


Yeah, sorry, I wasn't adding much by commenting this above. And yeah, Lisp was the historical soil for FP (Schemers took the lead on this).


No, I think your point is good; it just wasn't contradictory, which I think was your intent. Defining FP is a dark art :)


Maybe FP should be explained as `rules, not values`. In Scheme it's common to negate the function to be applied, or to curry some expression, or to partially compose / thread rules and logic to get a potential future value that hasn't done anything yet.
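In Haskell terms, a minimal sketch of that `rules, not values` framing (my illustration, not anything from the parent):

    -- compose "rules" (functions) without ever mentioning a value
    keepLargeEvens :: [Int] -> [Int]
    keepLargeEvens = filter even . filter (> 10)

    -- nothing has happened yet; the rule only fires when applied:
    -- keepLargeEvens [4, 12, 15, 20]  ==>  [12, 20]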


I like it. I think I said this in a separate post in here but I've taken to breaking it down to different archetypes and discussing them separately.


I usually define functional programming as "how far away a language is from untyped lambda calculus". By that definition, different languages would fall in different parts of that spectrum.


Was just talking with someone the other day who used to write Haskell professionally but is now using Python. He said that in his experience when there are bugs the "blast radius" is much larger in a dynamic language like Python than in a static language like Haskell. That has been my experience as well.

Something I haven't seen talked about, though, is how powerful the type system is for constraining LLMs when using them to generate code. I was recently trying to get LLMs to generate code for a pretty vague and complex task in Haskell. I wasn't having much luck until I defined a very clear set of types and organized them into a very clear and constrained interface that I asked the LLM to code to. Then the results were much better!
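Purely as an illustration (these types are hypothetical, not from that project), the setup looked roughly like this: define the domain vocabulary as types, then ask the LLM to implement exactly one signature.

    -- Hypothetical constrained interface handed to an LLM.
    newtype DocId = DocId Int    deriving (Eq, Ord, Show)
    newtype Query = Query String deriving Show
    newtype Score = Score Double deriving (Eq, Ord, Show)

    data SearchHit = SearchHit
      { hitDoc   :: DocId
      , hitScore :: Score
      } deriving Show

    -- The LLM is asked to implement exactly this signature; anything
    -- that does IO or returns the wrong shape is rejected by GHC.
    rankDocs :: Query -> [(DocId, String)] -> [SearchHit]
    rankDocs = undefined  -- left for the LLM to fill in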

Sure, you can use these same techniques in less strongly typed languages like Rust, and you can probably also use a similar approach in dynamically typed languages, but Haskell's pure functions allow you to create much stronger guard rails constraining what kinds of code the LLM can write.


Amen. I've been coding a big hobby project in Rust since July, after having spent years using Haskell for such things. I chose Rust because the primary DB I wanted to use (TypeDB) only had drivers for Rust and Python at the time. Rust is popular relative to Haskell, so I thought others might be more likely to sign on, and the type system seemed almost as expressive.

But since purity is not encoded in Rust's type system, any function might do any kind of IO -- in particular, read from or write to disk or one of the DBs. That makes the logic much harder to reason about.
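To make the contrast concrete, a minimal sketch of what the signatures buy you in Haskell:

    -- The signature itself rules out effects: this function provably
    -- cannot touch the disk or a database.
    applyDiscount :: Double -> Double -> Double
    applyDiscount rate price = price * (1 - rate)

    -- Anything effectful is forced to say so in its type:
    loadPrice :: FilePath -> IO Double
    loadPrice path = read <$> readFile path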

(Also, Rust's syntax is so noisy and verbose that it's harder to see what's going on, and less context fits in my head at one time. I'm getting better at paying that cost, but I wish it weren't there.)

I can't say I made the wrong decision, but I often fantasize about moving most of the logic into Haskell and just calling Rust from Haskell when I need to call TypeDB from Rust.


DB access in Rust typically needs some sort of handle and a mutex. Limiting access to the handle makes the rest of the code pure with respect to the DB. The handle plays a similar role to the IO type.

Actor-like patterns make this nice: message objects can be passed to and from a module with DB access or other IO.


How can you prevent code from creating a handle in a new place?


You can limit access to your db credentials. But other code can still launch missiles etc.


This makes me want to move to a Haskell (or any hard FP language) shop in 2026..


I've been using Haskell professionally for the last 5 years, I definitely hope I can continue!


Genuinely curious on the types of projects you use Haskell for! I’ve been thinking of learning it beyond the lightweight treatment I got during my CS degree.


Mostly “boring” stuff where the type system pays rent fast:

- Domain/state machines (payments/fulfillment-style workflows): modeling states + transitions so “impossible” states literally can’t be represented (see the sketch after this list).
- Parsers/DSLs & config tooling: log parsers, small interpreters, schema validation, migration planners.
- Internal CLIs / automation: batch jobs, release helpers, data shapers, anything you want to be correct and easy to refactor later.
- Small backend services when the domain is gnarly (Servant / Yesod style) rather than huge monoliths.
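A minimal sketch of that first bullet, with a made-up payments example (mine, not from any real codebase):

    -- An order can only carry a tracking number once it has shipped;
    -- "shipped but never paid" simply cannot be constructed.
    data Order
      = Draft   { items :: [String] }
      | Paid    { items :: [String], paymentId :: String }
      | Shipped { items :: [String], paymentId :: String, trackingNo :: String }

    -- Transitions can only produce legal next states:
    ship :: String -> Order -> Maybe Order
    ship tn (Paid is pid) = Just (Shipped is pid tn)
    ship _  _             = Nothing  -- can't ship a Draft, can't re-ship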

If you’re learning it beyond CS exposure, I’d start with a CLI + a parser (JSON/CSV/logs), then add property-based tests (QuickCheck). That combo teaches types, purity, effects, and testing in one project without needing to “go full web stack” on day 1.
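If it helps, here’s roughly what step one of that plan can look like (my sketch; `splitFields` is a deliberately naive stand-in for a real CSV parser):

    import Data.List (intercalate)
    import Test.QuickCheck

    -- naive CSV field splitter (no quoting), purely illustrative
    splitFields :: String -> [String]
    splitFields s = case break (== ',') s of
      (f, [])     -> [f]
      (f, _:rest) -> f : splitFields rest

    joinFields :: [String] -> String
    joinFields = intercalate ","

    -- round-trip property: joining then splitting recovers the fields,
    -- as long as the list is non-empty and no field contains a comma
    prop_roundTrip :: [String] -> Property
    prop_roundTrip fs =
      not (null fs) && all (notElem ',') fs ==>
        splitFields (joinFields fs) == fs

    main :: IO ()
    main = quickCheck prop_roundTrip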


Happy to hear that. Do you happen to know good places to connect with Haskell teams?


Interesting experience and perhaps not entirely surprising given that type-hole driven development was already a thing prior to LLMs. Like how many ways are there to implement "[a] -> [b] -> [(a,b)]", let alone when you provide a vague description of what that is supposed to do.
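For anyone who hasn't seen this before, a sketch of why that signature is so constraining: parametricity means the implementation can't inspect the `a`s or `b`s at all, so pairing them up positionally is about all it can do.

    -- the natural inhabitant of the signature above (i.e. zip); other
    -- inhabitants exist, but they must drop or duplicate elements
    pairUp :: [a] -> [b] -> [(a, b)]
    pairUp (x:xs) (y:ys) = (x, y) : pairUp xs ys
    pairUp _      _      = []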


The miniKanren folks were already experimenting with program synthesis given a test suite that needs to be fully satisfied.


I've found it useful in limited cases for writing optics which can be incredibly obtuse, sometimes boilerplatey, and yet ultimately accomplish what in other languages might be considered utterly trivial use cases... consider the following prompt and output:

    reply with one-line haskell source code ONLY: implement the function projectPair :: (Field1 s s a a, Field2 s s b b) => Lens s s (a, b) (a, b)

    lens (\s -> (s^._1, s^._2)) (\s (a,b) -> s & _1 .~ a & _2 .~ b)
... which can be shown to satisfy the three lens laws. If you can understand the types it is generally true that the implementation falls out much more easily, in a similar vein as "show me your tables, and I won't usually need your flowchart; it'll be obvious."

I suppose LLMs are good for this and other extremely constrained forms of boilerplate production. I consider it an incremental improvement over go codegen. Everything else I still tend to hand-write, because I don't consider source code production the bottleneck of software development.


He's also won the International Obfuscated C Code Contest 3 times.

https://www.ioccc.org/authors.html#Fabrice_Bellard


One of my favorite pieces on this topic is this talk "Stop Treading Water: Learning to Learn":

https://www.youtube.com/watch?v=j0XmixCsWjs


It definitely has things in common with meetup.com. But it looks meaningfully distinct to me because they appear to specifically have some kind of strong preference against connected devices. Honestly, I've been wishing for things in this vein recently because of the feeling that our world is growing too superficial, with our faces buried in phones, being fed by addictive algorithms.

That being said, I think you're right about some of the challenges that an effort like this will encounter.


As a professional haskeller, I feel it necessary to point out for people in this thread who are less exposed to Haskell and who may be Haskell-curious...this is not what real-world commercial Haskell code looks like. To use a C analogy, I'd say it's closer to IOCCC entries than Linux kernel code.


Thanks for that. Having read the article, I was left with the overwhelming impression that I'd have solved it in a totally different way if I were trying it in OCaml.

Briefly, I'd have started with an array which for each colour had an array containing the coordinate pairs for that colour. I'd probably then have sorted by length of each array. The state also has an empty array for the coordinates of each placed queen.

To solve, I'd take the head array as my candidates, and the remaining array of arrays as the next search space. For each candidate, I'd remove that coordinate and anything that was a queen move from it from the remaining arrays, and recursively solve that. If filtering out a candidate coordinate results in an empty list for any of the remaining arrays, you know that you've generated an invalid solution and can backtrack.

At no point would I actually have a representation of the board. That feels very imperative rather than functional to me.

To me, this solution immediately jumps out from the example: one of the queens is on a colour with only 1 square, so it HAS to be there. Placing it there immediately rules out one of the choices in both colours with 2 squares, so their positions are known immediately. From that point, the other 2 large regions have also been reduced to a single candidate each.
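For the curious, a rough Haskell sketch of that search (my transliteration, not the parent's OCaml; the `attacks` rule assumes the puzzle variant where queens clash on rows, columns, and diagonally adjacent squares):

    import Data.List (sortOn)

    type Pos = (Int, Int)

    attacks :: Pos -> Pos -> Bool
    attacks (r1, c1) (r2, c2) =
      r1 == r2 || c1 == c2 || (abs (r1 - r2) <= 1 && abs (c1 - c2) <= 1)

    -- one list of candidate squares per colour; returns one queen per colour
    solve :: [[Pos]] -> Maybe [Pos]
    solve [] = Just []
    solve regions = tryEach cands rest
      where
        (cands:rest) = sortOn length regions
        tryEach []     _ = Nothing
        tryEach (p:ps) r
          | any null r' = tryEach ps r       -- dead end: backtrack early
          | otherwise   = case solve r' of
              Just qs -> Just (p : qs)
              Nothing -> tryEach ps r
          where r' = map (filter (not . attacks p)) r

Sorting the regions by size first means the forced single-square colours get placed immediately, exactly as in the example above.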


Yeah, comparing to how you'd solve this in any other mainstream language is really an apples-to-oranges comparison here because this is explicitly tackling the contrived problem of solving it at the type level rather than at the much more common value level. Very few languages in existence have the ability to do this kind of type-level computation. I'd say Haskell is really the only language that could conceivably be called "viable for mainstream use" that currently supports it, and even in Haskell's case the support is new, largely experimental, in a state of active research, and not well integrated with the ergonomics of the rest of the language.


As someone who has never touched Haskell and knows nearly nothing about it, I can confirm that Haskell is not, in fact, a "dynamically typed, interpreted language" which "has no currying".


At the risk of explaining away a perfectly good joke: that person is writing programs at the type level. The joke is that the type system is Turing-complete if you enable the right extensions.
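For the uninitiated, a tiny taste of what "programs at the type level" means (the standard Peano-arithmetic example, not code from the linked post):

    {-# LANGUAGE DataKinds, TypeFamilies #-}

    -- Peano naturals, promoted to the type level by DataKinds
    data Nat = Z | S Nat

    -- addition evaluated entirely by the type checker
    type family Add (a :: Nat) (b :: Nat) :: Nat where
      Add 'Z     b = b
      Add ('S a) b = 'S (Add a b)

    -- in GHCi:  :kind! Add ('S 'Z) ('S ('S 'Z))
    -- reduces to 'S ('S ('S 'Z)), i.e. 1 + 2 = 3 at compile time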


In this same vein, Hot Ones minus Sean might be pretty entertaining as well.


Hot Ones Minus Sauce. Sean driving a guest to madness through sheer conversation.


I would argue that the title is misleading and overly alarmist here. This particular bug may have involved recursion and a stack overflow, but that's like saying "malloc kills" in the title of an article about a heap overflow bug. The existence of stack overflow bugs does not imply that recursion is bad any more than the existence of heap overflow bugs implies that malloc is bad. Recursion and malloc are tools that both have pretty well understood resource limitations, and one must take those limitations into account when employing those tools.


Did you see the article's references [1][2], from 2006 and 2017, which already argue that recursion is a security problem? It's not new, just not well-known.

[1] https://www.researchgate.net/publication/220477862_The_Power...

[2] https://www.qualys.com/2017/06/19/stack-clash/stack-clash.tx...


You might be agreeing without realising it.

>> I would argue that the title is misleading and overly alarmist here. This particular bug may have involved recursion and a stack overflow, but that's like saying "malloc kills" in the title of an article about a heap overflow bug.

Let's see what the article[1] you cited says:

  Rule 3: Do not use dynamic memory allocation after initialization.
  Rationale: This rule appears in most coding guidelines for safety-critical software. The reason is simple: Memory allocators, such as malloc, and garbage collectors often have unpredictable behavior that can significantly impact performance.
If you think recursion is a known security problem, do you also think using the heap is a known security problem?


Arguably, Stack Clash is just a compiler bug: recursive code shouldn't be able to jump the guard pages. This was fixed in Clang in 2021 [1], in GCC even earlier, and in MSVC earlier than that.

[1]: https://blog.llvm.org/posts/2021-01-05-stack-clash-protectio...


Recursion per se isn't an issue; unbounded stack use is. If you either know your input size is bounded (e.g. it's not user-generated) or use tail-recursion (which should get compiled to a loop), it's fine.

If your algorithm does unbounded heap allocations instead, you're still going to get oomkilled. The actual vulnerability is not enforcing request resource limits. Things like xml bombs can then exacerbate this by expanding a highly compressed request (so a small amount of attacker work can generate a large amount of receiver work).
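To illustrate the tail-recursion point, a minimal sketch (in Haskell you also want a strict accumulator, or laziness just moves the problem to the heap):

    {-# LANGUAGE BangPatterns #-}

    -- naive recursion: one stack frame per element
    sumNaive :: [Int] -> Int
    sumNaive []     = 0
    sumNaive (x:xs) = x + sumNaive xs

    -- tail recursion with a strict accumulator: compiles to a loop,
    -- constant stack regardless of input size
    sumTail :: [Int] -> Int
    sumTail = go 0
      where
        go !acc []     = acc
        go !acc (x:xs) = go (acc + x) xs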


Exactly. The article would have been much more informative if it had detailed why the usual approaches to limiting resource usage wouldn't work to prevent DoS here.


The problem, in practice, is that the limit for malloc on most systems is a few GB, while the default stack size on Windows is 1 MB, a stupidly small size.

I love recursion, so I will spawn a thread with a decent-sized stack to do it in, but it's very easy to break if you use the defaults, and the defaults are configured differently in every OS.


Using recursive techniques to parse potentially hostile inputs kills.


Parsing anything from a potential adversary needs to account for failure. Unbounded recursion is secure (i.e. it fails safely) if the compiler is working properly.

As to DoS, without looking at the code I'm unclear why various approaches to bounding resource consumption wouldn't have worked. I assume something specific to this library and how it is used must have prevented the obvious approaches. Still, not an issue in the general case.


Guarding against unbounded recursion requires both compiler support and runtime environment support: you have to use enough resources to handle legitimate queries, but keep memory constraints small enough that a "query of death" doesn't kill nodes that are expensive to reactivate. Even then, by their very nature queries-of-death are usually hard to detect, and a much simpler solution is something you can do statically, such as putting an arbitrary hard bound on recursion depth far below your resource constraints, so you can fail the query without losing the whole processing node.

Google's protobufs have, buried deep within at least their C++ parser, an arbitrary hard limit on nesting depth (I think it may be 32). It's annoying when you hit it, but it's there for a reason.
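A minimal sketch of that hard-bound approach (my illustration, with a toy nested-parens grammar and an arbitrary limit):

    -- each '(' opens a child node; depth is capped before recursing
    data Tree = Node [Tree] deriving Show

    maxDepth :: Int
    maxDepth = 32  -- arbitrary hard bound, well below real stack limits

    parseForest :: Int -> String -> Either String ([Tree], String)
    parseForest d _ | d > maxDepth = Left "nesting too deep"
    parseForest d ('(':rest) = do
      (kids, rest1) <- parseForest (d + 1) rest
      case rest1 of
        ')':rest2 -> do
          (sibs, rest3) <- parseForest d rest2
          pure (Node kids : sibs, rest3)
        _ -> Left "expected ')'"
    parseForest _ s = pure ([], s)

The point is that the failure is an ordinary `Left`, handled like any other parse error, rather than a blown stack that takes the whole process down.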


> Guarding against unbounded recursion requires both compiler support and runtime environment support

I feel like this is similar in spirit to saying "guarding against infinite loops requires both ...".

Where resource consumption is concerned, as you pointed out you can track that manually. Presumably you have to do that anyway, since the iterative case will also need to give up and fail the task at some point.

I really don't see where recursion itself introduces an issue here. I guess if you expect to pass through a great many nodes that don't otherwise trigger resource allocation except for the stack frame, and the compiler can't optimize the activation records away, it could be problematic. That's pretty specific though. Is that really the case for a typical parser?


> can't optimize the activation records away

The stack frame also holds local variables. It's not just a return address. If your function requires 3 local variables then each call requires 3 stack slots.


It was for the protobuf C++ parser; I couldn't say for a typical one.


Nobody should use recursion in production code, period.

And no, it's not like malloc. If you don't understand why, then you definitely shouldn't be putting recursive calls in your codebase.

