mgaunard's comments | Hacker News

The main issue is that Markdown remains a pretty primitive language to write documents in, with dozens of incompatible extensions all over the place.

I don't know if it's the best format to focus on.


Fair point about fragmentation! Ferrite uses Comrak which implements CommonMark + GitHub Flavored Markdown (GFM) — arguably the closest thing to a "standard" we have.

We chose Markdown because:

- It's what most developers already use (README files, documentation, wikis)
- Plain text files are portable, grep-able, git-friendly, and won't lock you in
- GFM covers tables, task lists, strikethrough, and autolinks, which handles 90% of use cases

We also support JSON, YAML, and TOML with native tree viewers. Wikilinks ([[links]]) and backlinks are planned for v0.3.0 for folks wanting Obsidian-style knowledge bases.

That said, I'd love to hear what format you'd prefer — always interested in expanding support!


AsciiDoc or RST/Sphinx are tools which are much better suited to building software documentation with cross-references etc.

AsciiDoc and RST/Sphinx are definitely more powerful for structured documentation with cross-references, includes, and admonitions.

For now Ferrite is focused on Markdown since that's the most common format for notes and quick docs. But the architecture could support other formats — the parser layer is modular.

If there's demand, AsciiDoc would be the easier addition (cleaner syntax than RST). Would be curious how many folks would use it as their primary format vs. Markdown.


This is one reason why I created TapirMD, which offers better specificity.

That's quite inaccurate.

It needs to remain destructible, and if the type satisfies things like (move-)assignable/copyable, those still need to work as well.

For boxed types, a move is likely to set them into some null state, in which case dereferencing them would be undefined behaviour, but that null state is still a valid state for those types anyway.
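
For example, a minimal sketch with std::unique_ptr standing in for the boxed type (the exact guarantees depend on the type in question):

    #include <cassert>
    #include <memory>
    #include <utility>

    int main() {
        auto a = std::make_unique<int>(42);
        auto b = std::move(a);        // a is left in a valid null state

        assert(a == nullptr);         // observing the moved-from state is fine
        a = std::make_unique<int>(7); // assignment still works after the move
        assert(*a == 7 && *b == 42);  // only dereference once it is non-null again
    }                                 // both destructors run normally here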


Well, it’s unspecified what empty/size return for collections after a move. Not a dereference, not UB, but unspecified, as I said. UB pops up in hand-written code - I’ve seen it, and the language doesn’t provide any protection here.

Thankfully clippy lints do exist here to help if you integrate that tooling


It's more useful to practice programming through projects. Then once you feel you're missing the knowledge for a particular problem you're trying to solve, read up about that one.

Projects are essential, but I've found there is a huge problem with your advice: you have no clue about the possible solution surface.

My advice to learners has been "try to learn as much about a topic as someone who has taken the subject in college and forgotten about it".

For example, consider calculus: someone who took calc 20 years ago and hasn't used it since will probably have forgotten exactly how to compute most derivatives and integrals. But if someone mentions an optimization problem ("we need to know when this curve peaks") or asks something involving finding the area under a curve, alarm bells should start ringing. They'll know this can be done, and will likely go grab a calc book to refresh.

Another example I run across all the time, which is the opposite scenario: survival analysis. I have been on countless teams where somebody needs to understand something like churn, or the impact of offering a discount that hasn't expired yet, etc. These are all classic survival analysis problems, yet most people are ignorant that this field of study even exists! Because of this, I've seen so many cases where people complain that "we'll have to wait months or years to see if these changes impact customer lifetime!" (Note: if anyone out there is doing churn or LTV analysis and isn't familiar with survival analysis, you are most certainly approaching it incorrectly.)

I've seen a lot of people get frustrated with self-study because they try to learn the material too well. If you aren't going to be using survival analysis soon, it's not really worth remembering all the details of how to implement a Kaplan-Meier curve. But if you even have a vague sense of what problem it solves, when you encounter that problem in a project you know where to go back to. Then you typically walk away with a much stronger sense of the subject than if you had studied it harder in the first place.


Computer science is to programming what physics is to engineering. They are not the same thing. You can do some programming without knowing computer science, but a general knowledge of computer science will give you a much more solid foundation, and for some aspects is indispensable.

That's a little like saying that if you want to learn mechanical engineering, you should fix things around your home and then do research when you get stumped.

Building a bunch of software projects probably isn’t a very efficient way of learning computer science. You might figure out things like big-O or A* on your own, but a more academic approach is going to take you further, faster.


It's well established that practical project work is what works best at producing tangible results, and most institutions that aim to produce the best programmers focus on that.

I can understand this is not the approach preferred by academic types, who are a strong community on Hacker News.

Most people are more motivated to understand the theory because it helps them solve a practical problem, rather than theory for the sake of theory.


I thought this thread was about computer science. Working on a programming project is related to computer science in the same way that welding together a shelf is related to mechanical engineering.

Being "handy" around the house (or even more advanced tinkering) and a mechanical engineering degree--maybe especially from a good school--are absolutely not the same thing.

Totally agree! And being able to whip together a webapp for your church is absolutely not the same thing as computer science.

Computer scientists often program but not all programmers are computer scientists.


An elitist view disconnected from reality.

Even something like game theory was only developed and earned Nobel Prizes because of its applications to making money in finance.


That seems more like a necessary precondition than a path to learning computer science. Like how you will probably need to learn penmanship and multiplication tables before you get into real mathematics, but that isn’t really mathematics.

Academic types are often not interested in practical things and getting their hands dirty.

As a pragmatic type, I find it endlessly disappointing how many other pragmatic types have absolutely zero familiarity or grounding in even surface level theoretic stuff that academic types are doing.

See also: golang

AI for coding itself is a fad

That's probably why a boundary (like MCP) is useful. Imagine maintaining the critical application logic the "old-fashioned way" and exposing an MCP-like interface to the users so that they can have their LLM generate whatever UI they like, even in real time, on the fly, as they are engaging with the application. It's a win-win in my mind.

The applications of MCP and tool-calling are vastly wider than just for coding, with tremendous diversity. Constraining it to the single application of coding doesn't make any sense.

At this point, when people say this I just assume they’ve not used the latest models or haven’t invested time in learning how to use these tools properly.

There’s slop out there, yes, but in the hands of an engineer who cares to use tools well, LLMs allow you to move much more quickly and increase the quality of your output dramatically.


Good software isn't about quantity but quality of the code.

AI cannot produce better quality code than someone who is actually qualified in the problem domain.

What I've seen AI be very good at is creating a lot of legacy code very quickly, which itself needs extensive use of AI just to maintain it.

A decent approach to move quickly for PoC or prototypes, or to enable product managers to build things without a team. But obviously not something you can build a real company on.


Have you been in the same industry as the rest of us? 90% of all developers out there in the wild create "legacy code very quickly" anyway; they too created "slop" before we coined the term "AI slop". This mythical "someone who is actually qualified in the problem domain" you mention is maybe 5% of the entire software development ecosystem. If you work with only those developers, you're extremely privileged and lucky, but also in a very isolated bubble.

If you work on meaningful tech projects, where tech is a real driver to the businesses and there are genuine challenges to overcome, then you can't afford slop.

I say that, but then it's true that I have seen businesses be successful despite low-quality software. It turned out that for those businesses, the value wasn't that much driven by tech after all.


Absolutely. But not for the reason you are getting at.

Because when AI gets good enough, there is no longer going to be code at all.

Which makes me sad, as a luddite.


Have you even tried it? I don't know anyone who has seriously used the latest models and stuff like CC and still said that with a straight face.

s/Code/Thinking/ and it might make better sense. For some people coding == thinking..

strongly agree! can't see any use of LLMs beyond tabcomplete and navigating an unknown codebase.

Assuming that wasn't trolling, what's the last thing you tried and when? Latest Claude Code can do a lot over lots of domains. I recommend giving their top plan a fair chance for a month.

Also, most people who I see succeed from the start are technical managers. That is, people who can code, who are used to delegating (good at phrasing), and who are more likely to accept something that works even though it is not character by character what they expected.


As a technical manager, the reason you accept things that are not quite what you had in mind is that you delegate the responsibility for correctness to the employee who is going to be owning that area, maintaining it, and ensuring it works.

There is no responsibility with AI. It doesn't care about getting it right or making sure the code delivers on what the company needs. It has nothing to prove and no incentive to support anything.


Why would we use something that makes us dumber and wastes our planet more than anything else?

Nah, I'll skip.


> Assuming that wasn't trolling, what's the last thing you tried and when?

vibe-coding the GTK-rs app with Codex, delegating ready Figma designs to Junie to implement in Jetpack Compose, improving a11y and perf on my personal website built in Svelte with Claude. (a friend threw $100 into that slot machine and shared access to Claude Code with me.)

all in December 2025. all failed *horrendously*. Claude Sonnet 4.5 actually worsened the Lighthouse performance score on mobile from 95 down to 40!! though, it did a decent job on a11y.

Codex CLI constantly used older versions of all sorts of crates, spat out deprecated code, and outright refused to use Blueprints and Context7, but after some time I could get a mockup of a Superhuman-style email client.

and Junie... well, it did best. it had Gemini 3 Flash picked as a model. despite Material 3 Expressive being beyond Gemini's knowledge cutoff, it actually used Context7 to fill the gap. though, it fumbled the bag with details, performance, and over-abstraction.

> I recommend giving their top plan a fair chance for a month.

as I said, my friend already did. we both find it the worst spend of the decade. seriously, in yesteryear's Call of Duty we would at least have fun. being a purchase worse than Call of Duty is a floor so low, you have to break it deliberately.

before you say "try again," do your research on food prices and salaries in Ukraine. (or delegate it to the Claude Deep Research feature, you may love it.) to feed yourself well here, you have to spend a measly (by Western standards) $300-600/mo. (that's without housing and utilities, presuming you live alone.) though, salaries outside dev jobs are just as measly, and then you have the overall job crisis in Ukraine overlaid on top of the worldwide IT job crisis.

4-8 months of Claude Max amount to the rumored price of the Steam Machine. I would rather spend $800 to see the joy on my younger sister's face and familiarize her with true games, beyond what's on Google Play.

> That is, people who can code, who are used to delegating (good at phrasing), and who are more likely to accept something that works even though it is not character by character what they expected.

I would rather spend that time nursing a real open-source contributor or a junior colleague. I would have a real person with real willpower and real ideas in my network, and that would spread the joy of code.


Computers themselves are a fad.

Malaysia to South Africa is an interesting one, why is this route so prevalent?

I don't think 70% of bugs are memory safety issues.

In my experience it's closer to 5%.


I believe this is where that fact comes from [1]

Basically, 70% of high-severity bugs are memory safety issues.

[1] https://www.chromium.org/Home/chromium-security/memory-safet...


High severity security issues.

Right, which is a measure that is heavily biased towards memory safety bugs.

70% of security vulnerabilities are due to memory safety. Not all bugs.

Using the data provided, memory safety issues (use-after-free, memory leak, buffer overflow, null deref) account for 67% of their bugs. If we include refcount, it is just over 80%.

That's the figure that Microsoft and Google found in their code bases.

Probably quite a bit less than 5%; however, they tend to be quite serious when they happen.

Only serious if you care about protecting from malicious actors running code on the same host.

You don't? I would imagine people that run, for example, a browser would have quite an interest in that.

Browsers are sandboxed, and working on the web browsers themselves is a very small niche, as is working on kernels.

Software increasingly runs on either dedicated or virtualized infrastructure; in those cases there isn't really a scenario where you need to worry about software running on the same host trying to access the data.

Sure, it's useful to have some restrictions in place to track what needs access to what resource, but in practice they can always be circumvented for debugging or convenience of development.


Browsers are sandboxed by the kernel, and we're talking about bugs in the kernel here...

Even if modern browsers lean more on kernel features now, initially the sandboxing in browsers was implemented through a managed runtime.

Quite easy to outperform a parsing library when you're not actually doing any parsing work and just memory-mapping pre-parsed data...

That being said, storing trees as serializable flat buffers is definitely useful, if only because you can release them very cheaply.
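
As a rough sketch of the flat-buffer idea (an illustrative layout only, not the format of any particular library): nodes live contiguously and refer to each other by index, so the whole tree is one relocatable block of bytes that can be written out or memory-mapped back as-is, and releasing it is a single deallocation instead of walking the tree node by node.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct FlatNode {
        std::uint32_t value;
        std::uint32_t first_child;   // index into the node array, 0 means "none"
        std::uint32_t next_sibling;  // index into the node array, 0 means "none"
    };

    struct FlatTree {
        // nodes[0] is reserved as a sentinel so that index 0 can mean
        // "no child / no sibling"; real nodes start at index 1.
        std::vector<FlatNode> nodes{FlatNode{0, 0, 0}};

        std::uint32_t add(std::uint32_t value) {
            nodes.push_back({value, 0, 0});
            return static_cast<std::uint32_t>(nodes.size() - 1);
        }

        // The serialized form is just the raw bytes of the node array; a reader
        // can point a FlatNode const* at a memory-mapped copy of these bytes
        // without any per-node parsing or allocation, and unmapping/freeing that
        // one block releases the whole tree at once.
        const char* bytes() const     { return reinterpret_cast<const char*>(nodes.data()); }
        std::size_t byte_size() const { return nodes.size() * sizeof(FlatNode); }
    };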


Imagine if you measured the speed of beer delivery by the rate at which beer cans can be packed/unpacked from truck pallets. But then somebody shows up with a tanker truck and starts pumping beer directly in and out. You might argue this is 'unfair' because the tanker is not doing any packing or unpacking. But then you realize it was never about packing speed in the first place. It was about delivering beer.

This is actually a good analogy; the beer cans are self-contained and ready for the customer with zero extra work. The beer delivered by the tanker still needs to be poured into individual glasses by the bartender, which is slow and tedious.

He probably meant beer kegs. Memory-mappable data is closer to sending beer cans, since both are in a ready-to-use format.

All Apple software just randomly changes UI with every iteration.

It's the software equivalent of fast fashion.

Just avoid it and stay with tried-and-true staples instead.


members of a const struct are also const.

Now you obviously can still have escape hatches and cast the const away whenever you want.


> members of a const struct are also const.

Yes, but if your struct contains references, the constness doesn't apply to what those references point to. In Rust it does.
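
A small illustration of the difference, assuming a struct with a raw pointer member (the equivalent write through a shared reference to the struct would not compile in Rust):

    struct Widget {
        int owned;
        int* borrowed; // non-owning pointer member
    };

    void example(const Widget& w) {
        // w.owned = 1;          // error: owned is const through a const Widget
        // w.borrowed = nullptr; // error: the pointer itself is const too...
        *w.borrowed = 1;         // ...but the pointee is not: this compiles fine
    }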


For pointers, const only affects whether you can re-set it to point to something else, not the pointee.

Nothing prevents you from building a smart pointer with those semantics, though; std::indirect is an example of this (arguably closer to Rust's Box).
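
A minimal sketch of what such a const-propagating pointer could look like, in the spirit of std::experimental::propagate_const or the deep-const behaviour of std::indirect (not a drop-in replacement for either):

    #include <memory>
    #include <utility>

    template <class T>
    class deep_const_ptr {
        std::unique_ptr<T> p_;
    public:
        explicit deep_const_ptr(T value)
            : p_(std::make_unique<T>(std::move(value))) {}

        T& operator*() { return *p_; }             // mutable access via a mutable wrapper
        const T& operator*() const { return *p_; } // const access via a const wrapper
    };

    void example() {
        deep_const_ptr<int> m(1);
        *m = 2;                         // fine: non-const wrapper gives mutable access

        const deep_const_ptr<int> c(3);
        // *c = 4;                      // error: constness propagates to the pointee
        int read = *c;                  // reading through const is still fine
        (void)read;
    }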


Sure, but my point is that the semantics between C++ and Rust are different, and are therefore not an exact match as the article stated.

In C++, you define the semantics yourself.

No, const semantics are defined by the language definition.

It's defined by whatever you put in your const overloads.

const is primarily a type annotation that affects overload resolution.
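
For instance, with a toy Buffer type (not tied to any particular class), which overload gets picked depends only on the constness of the object, and what the const overload returns is entirely up to its author:

    struct Buffer {
        char* data;

        char& at(int i) { return data[i]; }             // chosen for a non-const Buffer
        const char& at(int i) const { return data[i]; } // chosen for a const Buffer
    };

    void example(Buffer& b, const Buffer& cb) {
        b.at(0) = 'x';     // non-const overload: mutation allowed
        char c = cb.at(0); // const overload: read-only access
        // cb.at(0) = 'y'; // error: the const overload returns const char&
        (void)c;
    }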

You must be confused because Rust has no overloading to begin with.

