Hacker News | LtdJorge's comments

Doesn’t ZIP have all the metadata at the end of the file, requiring some seeking still?
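Yes: ZIP's central directory, and the End of Central Directory (EOCD) record that points to it, sit at the end of the archive, so a reader must seek backwards before it can list entries. A minimal sketch of locating the EOCD in a buffer (field offsets follow the ZIP appnote; the helper name is mine):

```javascript
// Locate the ZIP End of Central Directory (EOCD) record in a Buffer.
// The EOCD lives at the END of the archive, which is why readers seek.
const EOCD_SIG = 0x06054b50; // "PK\x05\x06", stored little-endian

function findEOCD(buf) {
  // EOCD is at least 22 bytes; a trailing archive comment (up to 64 KiB)
  // can push it away from the very end, so scan backwards for the signature.
  for (let i = buf.length - 22; i >= 0; i--) {
    if (buf.readUInt32LE(i) === EOCD_SIG) {
      return {
        offset: i,
        entryCount: buf.readUInt16LE(i + 10),      // total central-dir entries
        centralDirOffset: buf.readUInt32LE(i + 16) // where the metadata starts
      };
    }
  }
  return null; // not a ZIP (or comment longer than we scanned)
}
```

A real reader would read only the file's tail with `fs`, then seek to `centralDirOffset` to parse the entry list.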

Teams inside a VM it is, then.

Or: Put all of Windows inside of a VM, within a host that uses disk encryption -- and let it run amok inside of its sandbox.

I did this myself for about 8 years, from 2016-2024. During that time my desktop system at home was running Linux with ZFS and libvirt, with Windows in a VM. That Windows VM was my usual day-to-day interface for the entire system. It was rocky at first, but things did get substantially better as time moved on. I'll do it again if I have a compelling reason to.
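For reference, the host-side encryption layer in a ZFS setup like this can be a single encrypted dataset holding the VM images (dataset name is illustrative; native encryption requires ZFS 0.8+):

```shell
# Create an encrypted dataset for libvirt disk images (passphrase is prompted)
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/vm-images

# After a reboot, the key must be loaded before the VMs can start
zfs load-key tank/vm-images && zfs mount tank/vm-images
```

On older pools, LUKS under the vdevs achieves the same effect.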


If you’re doing your work inside the windows machine, what protection does Linux as a host get you?

The topic is BitLocker, and Microsoft, and keys.

With a VM running on an encrypted file system, whatever a warrant for a BitLocker key might normally provide will be hidden behind an additional layer that Microsoft does not hold the keys to.

(Determining whether that is useful or not is an exercise for the person who believes that they have something to hide.)


Isn’t it a pretty well-established fallacy that privacy only benefits those with something to hide?

Wouldn't it be easier to just use BitLocker and not back up your keys with Microsoft?

Sure, the plan you outline does sound very simple. And in an ideal world, that'd be perfectly fine.

Except we don't live in an ideal world.

See, for example, the fuckery alluded to above.

Therein: Linking a Microsoft account to a Windows login is something that appears to happen automatically under some circumstances, and then BitLocker keys are also automatically leaked to the mothership...

The machine is quite clearly designed with the intent that it behaves as a trap. Do you trust it?


If you distrust Windows that much, isn't the only real option to just not use it?

That's yet another brilliantly simple plan that you've outlined!

Would you like for me to demonstrate how it, too, is short-sighted?


I don't think so.

If you believe Windows to be so actively malicious that it would go behind your back and enable key backups after you've explicitly disabled them, you should probably assume that it will steal your encrypted information in other ways too.


This continued use of the word "you," as if directed specifically at me: At first, I thought it was a mistake, but now I'm pretty sure that it is a very deliberate word choice on your part.

Therefore, based on that...

Since this is about me, then: I'd like to ask that you please stop fucking with me.

We can discuss whatever concepts you'd like to discuss, in generalities, but I, myself, am not on the menu for discussion.

Thank you kindly!


Don't be silly, the indefinite "you" was simply the most natural construct to use there.

In no way should my use of the indefinite "you" be construed as a reference to ssl-3 specifically, it is an indefinite reference to literally anyone.


It's not just Teams. You need to be constantly vigilant not to make any change that would let them link your MS account to Windows. And they make it more and more difficult not only to install but also to use Windows without a Microsoft account. I think they'll also enforce it on everybody eventually.

You need to just stop using Windows, and that's it.

The only Windows machine I am using is the one my company makes me use, but I don't do anything personal on it. I have my personal computer next to it in my office, running Linux.


Just use Teams in a browser tab instead. Does it actively require running as a full app to do anything?

Use Enterprise

Enterprise Adware? Sounds hilarious to people who already paid $190 USD/seat to get spammed.

In general, Windows has always belonged on a VM snapshot backing image. =3


If everything is static, they'll cache it in a DC close to you. That's better than what we had before.

If compared to a smartphone, maybe.

No, it is a poor pixel density when compared with a printed book, which should be the standard for judging any kind of display used for text.

At sizes of 27" or 32", which are comfortable for working with a computer, 5K is the minimum resolution that is not too bad when compared with a book or with the acuity of typical human vision.

For a bigger monitor, a 4K resolution is perfectly fine for watching movies or playing games, but it is not acceptable for working with text.
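The densities being compared here fall out of simple diagonal arithmetic:

```javascript
// Pixels per inch from a panel's resolution and diagonal size
function ppi(width, height, diagonalInches) {
  return Math.hypot(width, height) / diagonalInches;
}

console.log(Math.round(ppi(5120, 2880, 27))); // 5K at 27": ~218 ppi
console.log(Math.round(ppi(3840, 2160, 32))); // 4K at 32": ~138 ppi
console.log(Math.round(ppi(3840, 2160, 42))); // 4K at 42": ~105 ppi
```

So a 27" 5K panel has roughly 1.6x the linear density of a 32" 4K one, which is the gap the comments below are arguing over.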


Compared to a smartphone it's not just poor, it's complete dreck. Smartphones are in the 400s.

Do you hold your 32" monitor the same distance from your face as you hold your smartphone?

I fail to see how that is relevant as I neither introduced nor advocated for smartphone pixel density?

Then what was the intent of your comment? There's no point to making a 400dpi 32" display (even if that were remotely physically possible).

> Then what was the intent of your comment?

Pointing out to the other guy that their reply made no sense?

> There's no point to making a 400dpi 32" display

Thank you, Captain Obvious.

You do know that there are densities between 130 and 400+ though, right?


Exactly, that’s the point

That’s not a point; it’s nonsense thought termination.

There’s a gulf between 130 dpi and 460 dpi, and in that gulf there are densities which stop being poor at monitor viewing distances.

That smartphone densities are excessive for that purpose does not make middling standard densities good.


I have an LG OLED C3 as a monitor, 42". I may be able to distinguish separate pixels if looking at a '.' or something like that (a stuck pixel happened for a few weeks, which I could notice on a white background).

But the density is definitely enough for text for the distance required for such a screen size. At least when using grayscale AA, because OLED subpixel...


AMD doesn't have it. I just confirmed by grepping through dmesg and journalctl -b, the only time it appears is due to UPS driver notifications (unrelated).

I’ve recently compared WebP and AVIF with the reference encoders (and rav1e for lossy AVIF), and for similar quality, WebP is almost instant while AVIF takes more than 20 seconds (1MP image).

JXL is not yet widely supported, so I cannot really use it (videogame maps), but I hope its performance is similar to WebP with better quality, for the future.


You have to adjust the CPU used parameter, not just quality, for AVIF. Though it can indeed be slow it should not be that slow, especially for a 1mp image. The defaults usually use a higher CPU setting for some reason. I have modest infrastructure that generates 2MP AVIF in a hundred ms or so.
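For illustration, the speed/effort knobs on the reference CLIs look roughly like this (flag names per current libwebp/libavif tools; the quality values are arbitrary examples):

```shell
# WebP: -m selects the compression method/effort (0 = fastest, 6 = slowest/best)
cwebp -q 80 -m 6 input.png -o out.webp

# AVIF: --speed is the "CPU used" knob (0 = slowest/best, 10 = fastest)
avifenc --speed 6 input.png out.avif
```

Lowering `--speed` from the default buys a little compression for a large increase in encode time, which is why maxed-out settings are rarely worth it for batch jobs.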

I tested both WebP and AVIF with maximum CPU usage/effort. I have not tried the faster settings because I wanted the highest quality for small size, but for similar quality WebP blew AVIF out of the water.

I also have both compiled with -O3 and -march=znver2 in GCC (same for rav1e's RUSTFLAGS) through my Gentoo profile.


Maximum CPU between those two libs is not really comparable though. But quality is subjective and it sounds like webp worked best for you! Just saying though, there is little benefit in using the max CPU settings for avif. That's like comparing max CPU settings on zip vs xz!

rav1e has not had an actual update in performance or quality in years since funding got dropped. Use an encoder like aom, or svt-av1.

I tried both AOM and rav1e: same quality, with rav1e producing a 20% larger image, and more or less the same (too long) encode time.

Other comments here are good, but one thing that's worth pointing out:

Encoding time isn't as important as decoding time since encoding is generally a once-off operation.

Yeah, we all want faster encodes, but the decodes are the most important part (especially in the web domain).


I know, that's why I used max CPU settings. But when processing map tiles with a final total compressed size of half a terabyte, where each tile is 200kB, taking 20s per tile is prohibitively expensive.
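For scale, the arithmetic on a job like that (decimal units assumed):

```javascript
const totalBytes = 0.5e12;                 // half a terabyte of output
const tileBytes  = 200e3;                  // ~200 kB per tile
const tiles = totalBytes / tileBytes;      // 2.5 million tiles
const days  = tiles * 20 / 86400;          // at 20 s per tile, one encode stream
console.log(tiles, Math.round(days));      // 2500000 tiles, ~579 days
```

Even spread over many cores, that is why a 20 s encode per tile is a non-starter.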

Do you mean Blackjack?

The Date API is horrible

Yeah, that's understood to be the common opinion; blandly repeating it adds little to the discussion.

It's simple. In its simplicity it left many features on the floor. I just can't connect with the idea that someone would need to constantly be on MDN in order to work with it. It's not so horrible that it defies logic.


It’s not simple, though. Simple would be something like an object wrapping YYYY-MM-DD, like a COBOL programmer in the 1950s would’ve used. Instead, people have made thousands of variations of bugs around the complexity that even basic usage forces you to internalize, like the month number being zero-based while the year is 1900-based but the day of the month is 1-based, following conventional usage.

It's not simple, it has hidden pitfalls and footguns. Using it is the fastest way to blood on the floor (to quote a senior developer I worked with).

Yep, I’ve been using this one, which is lighter (20 kB): https://github.com/fullcalendar/temporal-polyfill/
