Etheryte's comments | Hacker News

What do you mean, Russia has been doing the same thing for most of the war? The success relies on you controlling the territory, or at least territory close enough, so the results vary.

In a war zone, any large high-power jammer will be like a supernova in the darkness, visible to detectors tens of kilometers away, so it's going to be destroyed immediately.

Iranian protesters can't find or destroy jammers though.


Isn't Iran doing this from the air? That would be far more effective. In a contested space with AA everywhere that wouldn't be feasible (e.g. large parts of Ukraine).

If you think cocaine and marijuana are comparable/interchangeable with heroin, you might want to educate yourself on the topic a bit more before trying to make a quip.

This is, in a way, why it's nice that we have companies like Red Hat, SUSE and so on. Even if you might not like their specific distros for one reason or another, they've found a way to make money in a way where they contribute back for everything they've received. Most companies don't do that.

Contribute back how and where? Definitely not to Gentoo if we look at the meagre numbers here.

Red Hat contributes to a broad spectrum of Linux packages, drivers, and of course the kernel itself [1].

One example is virtualization: the virtio stack is maintained by Red Hat (afaik). This is a huge driver behind the “democratization” of virtualization in general, allowing users and small companies to access performant virt without selling a kidney to VMware.

Also, Red Hat contributes to or maintains all of the components involved in OpenShift and OpenStack (one of which is virtio!).

[1] https://lwn.net/Articles/915435/


Why should Red Hat be expected to contribute to Gentoo? A distro is funded by its own users. What distro directly contributes to another distro if it’s not a derivative or something?

Red Hat primarily contributes code to the kernel and various OSS projects, paid for by the clients on enterprise contracts. A paying client needs something and it gets done. Then the rest of us get to benefit by receiving the code for free. It’s a beautiful model.

If you look at lists of top contributors, Red Hat (along with the usual suspects in enterprise) are consistently at the top.


Presumably, contribute to the entire ecosystem in terms of package maintenance and other non-monetary forms.

As others mentioned, Red Hat (and SUSE) has been amazing for the overall Linux community. They give back far more than what the GPL requires them to. Nearly every one of their paid "enterprise" products has a completely free and open source version.

For example:

  - Red Hat Identity Management -> FreeIPA  (i.e. Active Directory for Linux)
  - Red Hat Satellite -> The Foreman + Katello
  - Ansible ... Ansible.
  - Red Hat OpenShift -> OKD
  - And more I'm not going to list.

Red Hat employs a significant number of GCC core devs.

Red Hat contributes a huge amount to the open source ecosystem. They're one of the biggest contributors to the Linux kernel (maybe the biggest).

https://insights.linuxfoundation.org/project/korg/contributo...

It looks like they're second to Intel, at least by LF's metric. That said, driver code tends to take up a lot of space compared to other areas. Just look at the mass of AMD template garbage here: https://github.com/torvalds/linux/tree/master/drivers/gpu/dr...


Intel has long been a big contributor--mostly driver stuff as I understand it. (Intel does a lot more software work than most people realize.) Samsung was pretty high on the list at one point as well. My grad school roommate (now mostly retired though he keeps his hand in) was in the top 10 individual list at one point--mostly for networking-related stuff.

Yes, that would be nice, but when I look at their GRUB src.rpm for instance, some of those patches look original but actually came from Debian.

Back in the day when the boxes were on display in brick-and-mortar stores, SuSE was a great way to get up and running with Linux.


The OpenSUSE Tumbleweed installation on my desktop PC is nearing 2 years now and still rolling. It is a great and somewhat underrated distribution.

SuSE/openSUSE innovates plenty of things that other distros find worth imitating; e.g. CachyOS and Omarchy, as Arch derivatives, felt that openSUSE-style btrfs snapshots were pretty cool.

It's a rock-solid distro, and if I had a use for enterprise support, I'd probably look into SLES as a pretty serious contender.

The breadth of what they're doing seems unparalleled: they have a rolling release (Tumbleweed), a delayed rolling release (Slowroll), which is pretty unique in and of itself, and a point release (Leap); both Tumbleweed and Leap are available in immutable form as well (MicroOS and Leap Micro, respectively); and all of the above come with a broad choice of desktops, or as server-focused minimal environments with an impressively small footprint, without unreasonable tradeoffs. If you multiply out all of those choices, it turns into quite a hairy ball of combinatorics, but they're doing a decent job supporting it all.

As far as graphical tools for system administration go, YaST is one of the most powerful and they are currently investing in properly replacing it, now that its 20-year history makes for an out-of-date appearance. I tried their new Agama installer just today, and was very pleased with the direction they're taking.

...so, not quite sure what you're getting at with your "Back in the day..." I, too, remember the days of going to a brick-and-mortar store to buy Linux as a box set, when the choice was between Red Hat and SuSE. Since then, I think they've lost mindshare because other options became numerous and turned up the loudness, but I think they've been quietly doing a pretty decent job all this time and are still beloved by those who care to pay attention.


SUSE has a lot of ex-Red Hatters at high levels these days. Their CEO ran Asia-Pacific for a long time and North America commercial sales for a shorter period.

SUSE has always been pretty big in Europe but never was that prominent in North America except for IBM mainframes, which Red Hat chipped away at over time. (For a period, SUSE supported some mainframe features that Red Hat didn't--probably in part because some Red Hat engineering leadership was at least privately dismissive of the whole idea of running Linux on mainframes.)


I've found openSUSE MicroOS to be a great homelab server OS.

SUSE Slowroll is news to me, thanks.

Red Hat certainly burns a lot of money in service of horrifyingly bad people. It's nice we get good software out of it, but this is not a funding model to glorify. And of course, American businesses not producing open source is the single most malignant force on the planet.

> Red hat certainly burns a lot of money in service of horrifyingly bad people.

Red Hat also has a nasty habit of pushing their decisions onto the other distributions; e.g.

- systemd

- pulseaudio (this one was more Fedora IIRC)

- Wayland

- Pipewire (which, to be fair, wasn't terrible by the time I tried it)


Pushing their decisions? This is comical.

I guess Debian, SUSE, Canonical, etc. get that email from Red Hat and just go along with it. We'd better make the switch; we don't want our ::checks notes:: competitor mad at us.


systemd and friends go around absorbing other projects by (poorly) implementing a replacement and then convincing the official project to give up.

I don’t know where they come from, but I try to avoid all in that list. To be fair, audio is a train wreck anyway.

Eh, pulseaudio got a lot better, and pipewire "just works" at this point (at least for me). Even Bluetooth audio works OOTB most of the time.

PipeWire rocks. Wayland is half-baked and a disaster on legacy systems. systemd... OpenRC is good enough, and it never fails at shutdown.

It's difficult to infer what kind of nuttiness is going on here.

If we're going to socialize production, let's do it properly.

Red Hat pushing for the disaster that is Wayland has set the Linux Desktop back decades.

It is the Microsoft of the Linux world.


Why is Wayland a disaster? Most of the Linux community is strongly in favor of it.

I'm sorry but this is just completely disconnected from reality. Wayland is being successfully used every single day. Just because you don't like something doesn't mean it's inherently bad.

I don't know that Red Hat is a positive force. They seem to be on a crusade to make the Linux desktop incomprehensible to the casual user, which I suppose makes sense when their bread and butter depends on people paying them to fix stuff, instead of fixing it themselves.

You don’t know they are a positive force?

This, despite the fact that Rocky, Alma, Oracle Enterprise Linux, etc exist because of the hard work and money spent by Red Hat.

And what are those companies doing to fix this issue you claim Red Hat causes? Nothing. Because they like money, especially when all you have to do is rebuild and put your name on other people’s hard work.

And what exactly is incomprehensible? What exactly is it that they're doing to the Linux desktop that makes it so people can't fix their own problems? Isn't the whole selling point of Rocky and Alma, according to most integrators, that they're so easy you don't need Red Hat to support them?


Just a note: Rocky and Alma came out of CentOS

I think it's fair to say that Red Hat simply doesn't care about the desktop--at least beyond internal systems. You could argue the Fedora folks do to some degree but it's just not a priority and really isn't something that matters from a business perspective at all.

Can you name a company which does care about the Linux desktop? Over the years, I'm pretty sure Red Hat has contributed a great deal to various desktop projects; I can't think of anyone who has contributed more.

Well Red Hat did make a go at a supported enterprise desktop distro for a time and, as I wrote, Fedora--which Red Hat supports in a variety of ways for various purposes--is pretty much my default Linux distro.

So I'm not being critical. Yes, Red Hat employees do contribute to projects that are most relevant to the desktop even if doing so is not generally really the focus of their day jobs. And, no, other companies almost certainly haven't done more.


Off the top of my head System76 jumps to mind with their hardware and Pop!_OS.

> Can you name a company which does care about the linux desktop?

To some extent Valve. They have to, since the Steam Deck's desktop experience depends on the "Linux desktop" being a good experience.


Fedora is probably the best out-of-the-box desktop experience. Red Hat does great things, even if the IBM acquisition has screwed things up.

I find systemd pleasant for scheduling and running services but enraging in how much it has taken over every other thing in an IMO subpar way.

It's not just systemd, though. You have to look at the whole picture, like the design of GNOME, or how GTK is now basically a GNOME-only toolkit (and if you dare point this out on reddit, ebassi may go ballistic). They take more and more control over the ecosystem and singularize it for their own benefit. This is also why I see "Wayland is the future", in part, as a means to leverage even more control; the situation is not the same, as xorg-server is indeed mostly just maintenance work by a few heroes such as Alanc, but Wayland is primarily, IMO, an IBM Red Hat project. Lo and behold, GNOME was the first to mandate Wayland and abandon Xorg, just as it was the first to slap systemd into the ecosystem too.

The usual semi-conspiratorial nonsense. GNOME is only unusable to clickers who are uncomfortable with any UI other than what was perfected by Windows 95. And Wayland? Really? Still yelling at that cloud?

I expect people will stop yelling about Wayland when it works as reliably as X, which is probably a decade away. I await your "works for me!" response.

So it's fair for you to say "X works for me," but everyone saying otherwise is in the wrong?

I don't get your point. People regularly complain that Wayland has lots of remaining issues and there are always tedious "you're wrong because it works perfectly for me!" replies, as if the fact that it works perfectly for some people means that it works perfectly for everyone.

These days Wayland is MUCH smoother than X11, even with an Nvidia graphics card. With X11, I occasionally had tearing issues or other weird behavior. Wayland fixed all of that on my gaming PC.

It’s even more pleasant when you use a distro that natively uses systemd and provides light abstractions on top. One such example is NixOS.

NixOS is anything but a light abstraction (I say this as a NixOS user).

Tbh it feels like NixOS is convenient in large part because of systemd and all the other crap you have to wire together for a usable (read: compatible) Linux desktop. Better to have a fat programming language, runtime, and collection of packages which exposes one declarative interface.

Much of this issue is caused by the integrate-this-grab-bag-of-tools-someone-made approach to system design, which of course also has upsides. Red Hat seems to be really amplifying the downsides by providing the money to make a few mediocre tools absurdly big tho.


How is it not a light abstraction? If you're familiar with systemd, you can easily decipher what the snippet below is doing even if you know nothing about Nix.

    systemd.services.rclone-photos-sync = {
      serviceConfig.Type = "oneshot";
      path = [ pkgs.rclone ];
      script = ''
        rclone \
          --config ${config.sops.secrets."rclone.conf".path} \
          --bwlimit 20M --transfers 16 \
          sync /mnt/photos/originals/ photos:
      '';
      unitConfig = {
        RequiresMountsFor = "/mnt/photos";
      };
    };
    systemd.timers.rclone-photos-sync = {
      timerConfig = {
        # Every 2 hours.
        OnCalendar = "00/2:00:00";
        # 5 minute jitter.
        RandomizedDelaySec = "5m";
        # Last run is persisted across reboots.
        Persistent = true;
        Unit = "rclone-photos-sync.service";
      };
      partOf = [ "rclone-photos-sync.service" ];
      wantedBy = [ "timers.target" ];
    };
In my view, using Nix to define your systemd services beats copying and symlinking files all over the place :)

Granted I have not used this library myself, so this is not coming from experience, but this type of copy does not instill confidence:

  let count = track(0);
  <button onClick={() => @count++}>{@count}</button>
  
  No useState, ref(), .value, $:, or signals.
You could replace `track` with `useState`, or `@` with `$` and it's pretty much the same thing. Whether you use syntax that's explicit or magic symbols you have to look up to understand is a matter of preference, but this does not really set it apart from any other library.

Not to mention that this is not even valid TypeScript.
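For what it's worth, the `track`/`@` pair in the quote looks like sugar over a plain getter/setter. Here's a minimal sketch of one possible desugaring (this is my assumption for illustration, not the library's documented compile output):

```typescript
// Hypothetical desugaring of `track`/`@` into a getter/setter pair.
// `Tracked` and this shape of `track` are invented for illustration.
type Tracked<T> = { get(): T; set(v: T): void };

function track<T>(initial: T): Tracked<T> {
  let value = initial;
  return {
    get: () => value,
    set: (v: T) => { value = v; },
  };
}

// `@count++` from the quoted snippet would then amount to:
const count = track(0);
count.set(count.get() + 1);
console.log(count.get()); // 1
```

Seen this way, the comparison to `useState` holds: the syntax differs, but the underlying mechanism doesn't.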

That doesn't matter if you run out of money before the end of the case.

true

Use Hammerspoon [0][1], it comes with a lot of macOS integrations out of the box and you write Lua, which takes zero effort to pick up and use. For me a big benefit is that you don't need to touch Xcode at all.

[0] https://www.hammerspoon.org

[1] https://www.hammerspoon.org/docs/hs.menubar.html


For me, the most important takeaway from the article was that the Passwords app supports 2FA codes! I was not aware of this, that's nice and getting rid of Authenticator is one less Google thing to worry about.

Having both your password and 2nd factor in the same place isn't the best idea however.

They're both on my phone anyway, so I don't really see the difference? Whether it's in one app or split across two apps makes no difference if there's no extra layer between them. Having it all in Passwords is better in that regard, since it requires you to use Face ID to open the app.

I don’t think it matters much. In 1Password you still need your secret code to login in addition to your password. It essentially acts like a second factor.

I could give you my 1Password username and password right now and you wouldn’t be able to access it.

Being logged in to my 1Password essentially verifies that you have a physical device of mine.


The problem with hardware is that it's always viable soon, but not quite yet. Hardware is multiple orders of magnitude more difficult than anticipated, the fact that the richest companies in the world can't quite make it work is a testament to that. In my mind, hardware moonshots are kind of like trying to embed Doom in a SharePoint Framework Extension while high on psilocybin — impressive if someone manages to pull it off, but not for the sane of mind.

The big time suck I see with robotic anything is that simulation for training will only take it so far... eventually it needs to be in the real world, making mistakes, and this comes with far more red tape and much higher risk, slowing down the process. I don't see hardware as the bottleneck; it's software and hardware working together in an environment where the stakes are much higher than in the lab.

I feel like this point is refuted by what we're seeing out of robotics in China? And to a pretty good extent the U.S. Of course there is a curve which plots dexterity (hardware) against resulting capability, but we don't need /that/ much dexterity for some jobs. We have tele-operated humanoids 'doing things' where it is self-evident that the bottleneck is the ability of the robot to act autonomously, not its hardware.

>I feel like this point is refuted by what we're seeing out of robotics in China?

Humanoid robot Olympic Games in China:

https://www.youtube.com/watch?v=5Y-tElcmJVE

Also, a reminder about Russian "localization" (typical Russian hi-tech today, especially when built on government investment, is a simple rebadge of Chinese tech): even good Chinese robots start to fall over like drunks:

https://youtu.be/WVKxw72vlmo?t=15

and for the GGP comment:

>self driving delivery vans with a humanoid delivery robot

Why humanoid? Glorified Roomba-like robots would do such a job just fine. Every time I see the Amazon driver park his van and run around our complex placing packages in front of doors and photographing them, I wonder why Amazon wouldn't use 5-10 such Roombas per van instead. (And every time I think that, I consider making such a startup myself, then immediately realize that Amazon would beat me by developing it 100x faster - in a week where I'd spend 2 years - so I don't do it. And Amazon apparently doesn't do it either.)


The simple reason why they don’t do it: It has to be 100% reliable. Robots get stuck, need charge, software has bugs, … So your fleet of robots would need supervision. Probably for years to come. And a human driver only costs like 70-80k a year.

The point of machine learning based systems (imo) is that they aren't 100% reliable.

Idk where people are getting the idea that systems designed to mimic biological brains will have machinelike precision whilst also being flexible to adapt to new situations.


A human supervisor monitoring 10 or more vehicles and unsticking them when needed would also not cost more per year.

Yeah, but this requires good teleoperator infrastructure. You can't unstick a robot that loses its connection. There are just a lot of things that can go wrong. And an entire car being stuck and waiting for someone to come is also not cheap. I'm pretty sure Amazon (one of the biggest robotics companies on the planet, btw) has done the math.

Many houses don't have flat approaches; ours definitely doesn't. Whatever delivers a package to our townhome at least needs to navigate paver stones. Dog bots could probably do it though.

Deliveries are probably one of the few legitimate applications for humanoid robots, but even then 99% of the work is done on wheels and the robot is just there to ring a bell, open a door and climb stairs.

This is probably something people don't understand about humanoid robots. Nobody is dumb enough to replace their CNC machines with humanoid robots holding power tools and yet that is what you're being sold on when Elon Musk is teasing a trillion dollar valuation.

Instead, the vast majority of humanoids will be used for pretty boring FedEx or door dash style logistics work, not much different from wheeled robots.


> why humanoid? Glorified Roomba-like robots would do such job just fine.

Steps. Garden gates. Uneven surfaces. Communal entrances.

The real world is messy and certainly not flat.

Some sort of wheeled-legged-centaur type robot might work though.


One of the core ideas behind LLMs is that language is not a discrete space, but rather a continuous, high-dimensional vector space where you can easily interpolate as needed. It's one of the reasons LLMs readily make up words that don't exist when translating text, for example.
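To make the interpolation idea concrete, here's a toy sketch blending two invented 3-dimensional "embeddings" (real models use hundreds or thousands of dimensions; the vectors and word labels below are made up purely for illustration):

```typescript
// Linear interpolation between two toy embedding vectors.
function lerp(a: number[], b: number[], t: number): number[] {
  return a.map((ai, i) => ai + t * (b[i] - ai));
}

// Invented 3-d "embeddings" for two words (values chosen to be
// exactly representable in binary floating point).
const cat = [0.25, 0.75, 0.25];
const dog = [0.75, 0.25, 0.5];

// Halfway between the two: a perfectly valid vector with no exact
// word attached, which is roughly how blended outputs can arise.
console.log(lerp(cat, dog, 0.5)); // [ 0.5, 0.5, 0.375 ]
```

The midpoint decodes to whatever token is nearest, not necessarily a real word in either language.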

Not the input and output though, which is the important part for flow matching modeling. Unless you're proposing flow matching over the latent space?

Important to note here that the $200/week figure from the '80s is the same as $830/week today due to inflation [0]. Rent and degrees specifically have gone up a ridiculous amount, yes, but as far as the rest of it goes, most students today would jump at the opportunity to have that much disposable income.

[0] https://www.bls.gov/data/inflation_calculator.htm
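The conversion is just a ratio. A quick sketch using only the figures already in the comment (the ~4.15 factor is implied by the $200-to-$830 numbers above, not an independently checked CPI value):

```typescript
// Inflation factor implied by the comment's own figures
// ($200/week in the '80s vs. $830/week today).
const factor = 830 / 200; // 4.15

// Applying the same factor to a $200/month 1980s rent gives its
// rough present-day equivalent.
const rentThenMonthly = 200;
const rentNowMonthly = Math.round(rentThenMonthly * factor);
console.log(rentNowMonthly); // 830
```

So anything that costs more than ~4x its 1980s price has outpaced inflation; anything below that has gotten relatively cheaper.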


Is there a job you can work as a student in Iowa that pays $830/week net in 2026? (I'm from the UK and have no idea about such salaries)

Not to be that guy who always turns up and points out things aren't that bad, but you can easily rent a 5 bedroom home in Iowa for quite a bit less than 830 / week ($3300 / month) today.

5 bedroom home currently goes for about $2400 / month on Zillow.

I have to think part of the issue is that people no longer want to live in Iowa / LCOL and now prefer NYC / HCOL.


You're wrong, you've mixed up the numbers. Their general living expenses were $200/week. Their rent was $200/month, that's $200 for the whole month, not per week.
