Hacker News | panki27's comments

I wondered the same thing, but characters usually don't reach the edges, so I guess circles fit the average character better?

I have exactly the same Sony Vegas keygen experience as the parent poster, but with the song from your fifth link!


Nothing stops you from creating a PR :-)))


I would, if I used GIMP often enough to have the motivation - I use GIMP maybe 2-3 times a year.

And that's the irony covered in my post: even though the source is available, that hasn't motivated anyone so far to create a better version of the build.


Nothing stops you from posting these useless comments.


What does this do that OpenWebUI (or one of the many of other solutions) does not?


As someone building another competitor in the field, I'll relay some reasons why some of our customers ruled out OpenWebUI in their decision-making process:

- Instability when self-hosting

- Hard to get in touch with sales when looking for SLA-based contracts

- Cluttered product; Multiple concepts seemingly serving the same purpose (e.g. function calling vs. MCP); Most pre-MCP tools suffer from this

- Trouble integrating it with OIDC

- Bad docs that are mostly LLM generated


Broadly, I think other open source solutions are lacking in (1) integration of external knowledge into the chat, (2) simple UX, and (3) complex "agent" flows.

Both internal RAG and web search are hard to do well, and since we started as an enterprise search project, we've spent a lot of time making them good.

Most (all?) of these projects have UXs that are quite complicated (e.g. exposing front-and-center every model param like Top P without any explanation, no clear distinction between admin/regular user features, etc.). For broader deployments this can overwhelm people who are new to AI tools.

Finally, trying to do anything beyond a simple back-and-forth with a single tool call isn't great with a lot of these projects. So something like "find me all the open source chat options, understand their strengths/weaknesses, and compile that into a spreadsheet" will work well with Onyx, but not so well with other options (again partially due to our enterprise search roots).


Open WebUI isn't Open Source anymore. Open WebUI has an egregious CLA if I want to contribute back to it (which I wouldn't do anyway, because it isn't Open Source...)

Onyx Devs: This looks awesome, I will definitely add it to my list of things to try out... close to the top! Thanks, and please keep it cool!


What are compile times like right now, with modern hardware?


Phoronix includes a "Timed Linux Kernel Compilation" test as part of their reviews using the default build config.

Here is one comparing some modern high end server CPUs: https://www.phoronix.com/benchmark/result/amd-5th-gen-epyc-9... (2P = dual socket)

Here is one comparing some modern consumer CPUs: https://www.phoronix.com/benchmark/result/amd-ryzen-9-9900x-...

Searching "Phoronix ${cpuModel}" will take you to the full review for that model, along with the rest of the build specs.

With the default build in a standard build environment the clock speed tends to matter more. With tuning one could probably squeeze more out of the higher core count systems.


Note that those two links are using different configs. Here's the link for Threadripper 9995WX:

https://www.phoronix.com/review/amd-threadripper-9995wx-trx5...

That's using the same config as the server systems (allmodconfig), but it has the 9950X listed there, and on that config it takes 547.23 seconds instead of 47.27. That makes all of the consumer CPUs slower than any of the server systems on the list. You can also see the five-year-old 2.9 GHz Zen 2 Threadripper 3990X ahead of the brand-new top-of-the-range 4.3 GHz Zen 5 9950X3D, because it has more cores.

You can get a pretty good idea of how kernel compiles scale with threads by comparing the results for the 1P and 2P EPYC systems that use the same CPU model. It's generally getting ~75% faster by doubling the number of cores, and that's including the cost of introducing cross-socket latency when you go from 1P to 2P systems.
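A rough back-of-the-envelope model of that scaling (the ~75%-per-doubling figure is taken from the comparison above; the 600 s baseline is just an illustrative number):

```python
# Illustrative scaling model: each doubling of cores gives ~75% more
# throughput, so build time divides by 1.75 per doubling.
def scaled_time(base_seconds: float, doublings: int, gain: float = 0.75) -> float:
    """Estimated build time after `doublings` core-count doublings."""
    return base_seconds / ((1 + gain) ** doublings)

# A hypothetical 600 s build on N cores:
print(round(scaled_time(600, 1)))  # ~343 s on 2N cores
print(round(scaled_time(600, 2)))  # ~196 s on 4N cores
```

If scaling were perfect, each doubling would halve the time; the gap between 1.75x and 2x per doubling is roughly the cross-socket/synchronization cost mentioned above.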


Oh good catches! I must have grabbed the wrong chart from the consumer CPU benchmark, thanks for pointing out the subsequent errors. The resulting relations do make more sense (clock speed certainly helps, but there is wayyyy less of a threading wall than I had incorrectly surmised).

Here is the corrected link for the 9950X review with allmod instead of def for equal comparison (I couldn't find the def chart in the server review) https://www.phoronix.com/benchmark/result/amd-ryzen-9-9900x-...



It varies a lot depending on how much you have enabled. The distro kernels that are designed to support as much hardware as possible take a long time to build. If you make a custom kernel where you winnow down the config to only support the hardware that's actually in your computer, there's much less code to compile so it's much faster.
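A common way to do that winnowing is the kernel's `localmodconfig` target, which disables every module not currently loaded on the machine. A hedged sketch (assumes a kernel source tree and that you're running the distro kernel whose config you start from):

```shell
# Start from the running distro kernel's config, then trim it down.
cp /boot/config-"$(uname -r)" .config
make localmodconfig        # drop modules not loaded right now (uses lsmod)
make -j"$(nproc)"          # build with all available cores
```

The trimmed config only covers hardware that was active when you ran it, so plug in any devices (USB, etc.) you care about first.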

I recently built a 6.17 kernel using a full Debian config, and it took about an hour on a fast machine. (Sorry, I didn't save the exact time, but the exact time would only be relevant if you had the exact same hardware and config.) I was surprised how slow it still was. It appears the benefits of faster hardware have been canceled by the amount of new code added.


I believe you are referring to GNU/Linux, or as I've recently taken to calling it, GNU plus Linux.


The link appears to be broken, it redirects me to the main page.



For tmux users: you can use the lock-command option with something like cmatrix for a quick and dirty screensaver.
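A minimal sketch of that setup for `~/.tmux.conf` (assumes cmatrix is installed; `-s` makes it exit on the first keypress, which unlocks the session):

```conf
set -g lock-command "cmatrix -s"   # run when the session locks
set -g lock-after-time 300         # auto-lock after 300 s of inactivity
```

You can also trigger it on demand with `lock-session` from the tmux command prompt.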


My most used function is probably the one I use to find the most recent files:

    lt () { ls --color=always -lt "${1:-.}" | head; }


It's hidden in the "Copy" drop down at the top right.

https://http3-explained.haxx.se/~gitbook/pdf?limit=100

