Hacker News | rabf's comments

When people ask what computer they should buy, I always tell them to get any old office computer from eBay and use the rest of the money to buy a really nice monitor and a really nice keyboard and mouse, as these are the bits you use! For most tasks undertaken on a computer, any processor from the last 10 years coupled with 16GB of RAM is more than sufficient.

If you buy a really nice monitor, which for me starts at 6K 32”, the eBay computer will no longer drive it. What I find insane is how long companies issued 19” 1080p screens to their employees. I don’t think that was a well-calculated choice, given that a couple of hundred more over 5 years would surely have improved productivity by at least a little bit for their $50k/year employees. It felt almost like it was done out of spite, to keep people in their place.

I wouldn't call a 6K monitor "nice", that's way above that. I would love one of those, since I can't stand blurry text, but even for me that's way too expensive to justify. So as the sibling says, if people are looking at old used PCs on eBay, they're unlikely to drop more than a grand on the screen.

A 32" 4k screen is nice enough and a reasonable one [0] can be had for a third of that. My I don't-know-how-old desktop I saved from the bin at work sporting an i5-6500 could drive that with no issues.

---

[0] Around 2020 I bought an LG something-or-other for 350 euros for work: 32", 4K, some form of VA panel. It had pretty good colors and better contrast than the IPS monitor I use as a Sunday photographer.


> which for me starts at 6K 32”

that's a weird start. for me the start is 4k with proper blacks & proper color calibration

> given that a couple of hundred more over 5 years would have surely improved productivity by a little bit

no company wants the bulk of its people to improve their productivity by even a little bit. you should be productive enough, that's it.

> It felt almost done out of spite to keep people in their place

otherwise amazon and the likes would have competitors in every country. but I don't think it's out of spite.

it's the 'established' interpersonal culture between employers and employees, like in packs without natural alphas: if one beta-beta steals the show from the beta-alpha a few times too many, he's a goner. in packs with alphas the performer gets commended and a chance to compete for the top, because you want your team to be led by the currently best. hasn't been the case in our species for a long while now.

companies don't treat their employees badly out of spite, it's so they can stick to low, moderate(d) standards and cultures, ... and have an easy work life


A 6K monitor seems like complete overkill unless you’re editing 4K video or sitting way too close to it.

Many employees are doing text work and, until recently, operating systems and apps did a really bad job of working with HiDPI displays. Your best bet was to target around 115 DPI on a monitor for decent text rendering without having to deal with font scaling. 19" 1080p is perfect for that. You just gave them multiple monitors if they wanted more real estate.
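Back-of-the-envelope, that ~115 DPI target is just geometry; rough Python sketch below (panel sizes are nominal, and 6016x3384 is the usual 32" 6K resolution):

    # Rough sketch: pixel density from resolution and nominal diagonal size.
    import math

    def dpi(width_px, height_px, diagonal_in):
        return math.hypot(width_px, height_px) / diagonal_in

    print(round(dpi(1920, 1080, 19)))   # ~116 -> right at the ~115 DPI sweet spot
    print(round(dpi(3840, 2160, 32)))   # ~138 -> already wants a HiDPI theme or scaling
    print(round(dpi(6016, 3384, 32)))   # ~216 -> 32" 6K, roughly a 2x "retina" density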

People asking for computer buying advice obviously don't need a 6K screen.

I have a ThinkPad T420 that is sufficient for most tasks that don't involve HEVC acceleration. It's got a mobile Sandy Bridge i7, booting off of a SATA SSD.

The only thing that really needed an upgrade was the display. I ditched the crappy 1366x768 TN for a 1440p IPS and an LVDS-eDP conversion board. Looks fantastic. Runs great.


`glow` is a pretty handy terminal markdown viewer.

Codex is included with the $20-a-month ChatGPT subscription, with very generous limits.

Positive reinforcement works better than negative reinforcement. If you read the prompt guidance from the companies themselves in their developer documentation, it often makes this point. It is more effective to tell them what to do rather than what not to do.
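To make it concrete, the rewrite usually looks something like this (illustrative only, the utils/ path is made up, not something from any vendor's docs):

    # Illustrative only: the same constraint phrased negatively vs. positively.
    negative = "Don't duplicate existing functionality and don't add new dependencies."
    positive = (
        "Reuse the existing helpers in utils/ where possible, "  # utils/ is a made-up path
        "and implement anything new with the standard library only."
    )
    # The positive version names the behaviour you want the model to imitate,
    # instead of putting "duplicate" and "dependencies" into the context as
    # things to avoid.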

This matches my experience. You mostly want to not even mention negative things because if you write something like "don't duplicate existing functionality" you now have "duplicate" in the context...

What works for me is having a second agent or session to review the changes with the reversed constraint, i.e. "check if any of these changes duplicate existing functionality". Not ideal because now everything needs multiple steps or subagents, but I have a hunch that this is one of the deeper technical limitations of current LLM architecture.
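For the curious, the shape of that second pass is roughly this (a sketch; run_agent is a stand-in for whichever agent CLI or API you actually drive):

    # Sketch of the "reversed constraint" review pass. run_agent() is a
    # placeholder for whatever agent you actually call (codex, claude, ...).
    def run_agent(prompt: str) -> str:
        raise NotImplementedError("wire up your agent of choice here")

    def review_for_duplication(diff: str) -> str:
        # The build session never sees the word "duplicate"; only this fresh
        # reviewer session gets the constraint, phrased as a positive question.
        return run_agent(
            "Review this diff and list any changes that duplicate "
            "functionality already present in the codebase:\n" + diff
        )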


Probably not related, but it reminds me of a book I read where wizards had Additive and Subtractive magic, but not always both. The author eventually gave up on trying to come up with creative ways to always add something as the solution once the gimmick wore off, and it never comes up again in the book.

Perhaps there is a lesson here.


Could you describe what this looks like in practice? Say I don't want it to use a certain concept or function. What would "positive reinforcement" look like to exclude something?

Instead of saying "don't use libxyz", say "use only native functions". Instead of "don't use recursion", say "only use loops for iteration".
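i.e. you're steering it towards the second shape below rather than the first (toy example, not tied to any particular codebase):

    # Toy example of what the two phrasings tend to steer the model towards.
    def factorial_recursive(n: int) -> int:      # what plain "recursion" produces
        return 1 if n <= 1 else n * factorial_recursive(n - 1)

    def factorial_iterative(n: int) -> int:      # what "only use loops for iteration" produces
        result = 1
        for i in range(2, n + 1):
            result *= i
        return result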

This doesn't really answer my question, which is more about specific exclusions.

Both of the answers show the same problem: if you limit your prompts to positive reinforcement, you're only allowed to "include" regions of a "solution space", which can only constrain the LLM to those small regions. With negative reinforcement, you just cut out a bit of the solution space, leaving the rest available. If you don't already know the exact answer, then leaving the LLM free to use solutions that you may not even be aware of seems like it would always be better.

Specifically:

"use only native functions" for "don't use libxyz" isn't really different than "rewrite libxyz since you aren't allowed to use any alternative library". I think this may be a bad example since it massively constrains the llm, preventing it from using alternative library that you're not aware of.

"only use loops for iteration" for "done use recursion" is reasonable, but I think this falls into the category of "you already know the answer". For example, say you just wanted to avoid a single function for whatever reason (maybe it has a known bug or something), the only way to this "positively" would be to already know the function to use, "use function x"!

Maybe I misunderstand.


I 100% stopped telling them what not to do. I think even if “AGI” is reached, telling them “don’t” won’t work.

I have the most success when I provide good context, as in what I'm trying to achieve, in the most high level way possible, then guide things from there. In other words, avoid XY problems [1].

[1] https://xyproblem.info



> non-default different tools

The “default” recommendation is clearly Android Studio plus Kotlin/Java.

Other tools are smaller.


His application `boomer` is the best desktop zoom app for X11! Bound to a keyboard shortcut, it's very useful for debugging graphics layout errors during development.

I've always walked to the shops by pulling the earth around beneath my feet!

Once, an angry guy tried to explain that the world does not revolve around me.

I had to walk him away.


Though in the real-world case, there's an important difference that breaks the symmetry: You experience acceleration, whereas everybody else standing around you doesn't.

AMD used to be terrible on Linux, perhaps before your time. Nvidia was always the choice if you needed functional hardware acceleration that worked on par with Windows. The Nvidia driver was (still is?) the same driver across platforms, with a compatibility shim for each OS. This is how Nvidia managed to have best-in-class 3D acceleration across Windows, FreeBSD, and Linux for decades now. OpenGL support on AMD was historically really bad, and AMD support was through a proprietary driver back in the day as well. Part of the reason the AMD/ATI open-source driver gained so much traction and support was that the proprietary driver was so bad! Then you get onto other support for things like CUDA for professional work, where Nvidia has always been light years ahead of any other card manufacturer.

Source: Was burned many times by ATI's promises to deliver functioning software over the years. Been using Nvidia on Linux and FreeBSD for as long as I can recall now.


Nvidia and Intel on Linux for near on 20 years now, and I also agree - generally the ATI/AMD experience was markedly worse.

Currently dual 3090s in this box, and Nvidia is still as simple as just installing the distro package.

There was a period in the mid-2010s where trying to get better battery life on laptops by optionally using the discrete GPU vs the integrated one was a real pain (bumblebee/optirun were not so solid), but generally speaking, for desktop machines that need GPU support... Nvidia was the route.

Don't love their company politics so much, although I think they're finally getting on board now that so many companies want to run GPU-accelerated workloads on Linux hosts for LLMs.

But ATI sucked. They seem to have finally gotten there, but they were absolutely not the best choice for a long time.

Hell - I still have a machine in my basement running a GTX 970 from 2015, and it also works fine on modern Linux. It currently does the GPU accel for Whisper speech-to-text for HA.


Try just setting the correct DPI for your monitor and use a HiDPI theme. No scaling required. Pixel-perfect graphics.

Fractional scaling is a really bad solution. The correct way to fix this is to have DPI-aware applications and toolkits. This does in fact work, and I have run Xfce under Xorg for years now on HiDPI screens just by setting a custom DPI and using a HiDPI-aware theme. When the goal is perfect output, why do people suddenly want to jump to stretching images?
