
> we have decided that journals should not be the arbiters of quality.

At that point why even have a journal? Let's just post everything on Reddit and be done with it. We'd get commenting for free.

Maintaining quality standards is a valuable service. The journal system isn't perfect, but it's the only real check we have left.


> At that point why even have a journal

Great question.

> the journal system isn't perfect, but it's the only real check we have left.

I wish I could agree, but Nature et al. continually publish bad, attention-grabbing science while holding back good science, because it threatens the research programmes that gave the editorial board their successful careers.

"Isn't perfect" is a massive understatement.


My favorite form is when someone shouts "concurrency" in the middle of a sentence.


Isn't data entry a really good use case for LLM technology? It depends on the exact use case, of course. But most "data entry" jobs are really data transformation jobs, and those get automated with ML techniques all the time. Current LLMs are quite good at data transformation too.
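
For a concrete picture, here's a minimal sketch in Python. call_llm is a hypothetical stand-in for whatever LLM API you'd use (not a real library function); the point is the validation step, which rejects malformed output instead of storing it.

    import json

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for an LLM API call; wire up your provider here.
        raise NotImplementedError

    def transform_record(raw: str) -> dict:
        # Ask the model to restructure one messy record into fixed fields.
        prompt = (
            "Extract name, date (ISO 8601) and amount from this record. "
            "Reply with JSON only.\n" + raw
        )
        out = json.loads(call_llm(prompt))
        # Validate before storing anything, so a hallucinated or incomplete
        # row fails loudly instead of landing in the database.
        if set(out) != {"name", "date", "amount"}:
            raise ValueError(f"unexpected fields: {out}")
        return out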


No, because they aren't reliable. You don't want to be storing hallucinated data. They can help write the scripts that do the actual work, though.


We can't even use AI language translation because of compliance and liability concerns: we translate food ingredients.

"It says 'no shellfish', go ahead - eat it"

Even with lots of context, the various services we tried would get something wrong.

e.g. "huile" is "oil" in French, and sometimes it would get translated as "motor oil".


No, data replication or transformation is not a good use case for text generators.


If your core feature is data entry, you probably want to get as close to 100% accuracy as possible.

"AI" (LLM-based automation) is only useful if you don't really care about the accuracy of the output. It usually gets most of the data transformations mostly right, enough for people to blindly copy/paste its output, but sometimes it goes off the rails. But hey, when it does, at least it'll apologise for its own failings.


Ah yes, because hallucinations will definitely improve our data entry!


I grew up on the Borland Turbo series. Learned C, then C++, on it. Such nostalgia.

I was wondering, is there a way to get VS Code to look like this? Maybe Neovim?


What difference does it make how many people use it? Complex software exists all over the world for a handful of users. I personally work in an industry where anything we create will be used by at most 100 people worldwide. Does that diminish the complexity of the code? I think not.


> #3: Limit what they test, as most LLM models tend to write overeager tests, including testing if "the field you set as null is null", wasting tokens.

Heh, I write tests like this for some production code too (Python). I guess because Python isn't statically typed, I'm really testing whether my pydantic models work.
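
Roughly the kind of test I mean (model and field names made up for illustration). It looks like testing "null is null", but it really pins down that validation passes and the declared default applies:

    from typing import Optional

    from pydantic import BaseModel

    class Invoice(BaseModel):  # illustrative model, not from real code
        customer: str
        discount: Optional[float] = None

    def test_unset_discount_is_none():
        # Superficially "check that null is null", but what it actually
        # verifies is that the payload validates and the default is
        # applied when the field is omitted from the input.
        inv = Invoice(customer="ACME")
        assert inv.discount is None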


The same way you have discounted Wozniak's talent, one could discount Jobs's ambition. There are thousands of people as ambitious as Jobs, or more so. Being ambitious doesn't guarantee anything, and neither does being as good as Wozniak; each only works when combined with the other.


Steve Jobs brought a hell of a lot more to the table than just raw ambition.

I just meant to contrast the level of ambition between the two. Wozniak was extremely unambitious, and Jobs pushed him into starting a company only with great difficulty.

Nothing is guaranteed in life, but I've certainly met more than a few people who I predicted would be successful, and then they were. There are character traits that incline people toward success.


My guess would be local AI. Apple Silicon is uniquely suited to it, with its unified memory.


Yeah, exactly. The MacBook Pro is by far the most capable consumer device for running local LLMs.

A beefed up NPU could provide a big edge here.

More speculatively, Apple is also one of the few companies positioned to market an ASIC for a specific transformer architecture which they could use for their Siri replacement.

(Google has on-device inference too, but their business model depends on not being privacy-focused, and their GTM with Android precludes the tight coordination between OS and hardware that would be required to push SOTA models into hardware.)


I see. It'll be interesting to see how much on-device models take off for consumers when off-device models are so much more capable. In the past, the average consumer has typically been happy to trade privacy for better products, but maybe it will be different for LLMs.


Agreed. I think the latency wins could be meaningful for many of the obvious phone-centric use cases (Siri, app/browser enrichment, etc.), but the capability gap is definitely a hindrance.

Privacy is already a key differentiating feature for iPhone which is why I think they will continue to try to make this option viable. (They already do ChatGPT fallback which is a pragmatic concession to the reality that you highlight here.)


They are well positioned, but they have a history of screwing up their AI plays. I hope they can get it right.


This is only true if you consider AI to mean LLMs and chatbots. The non-LLM AI built into just the iPhone camera is almost certainly the largest-scale consumer deployment of any AI, but it largely goes unnoticed because it works so well.


Maybe this will help the public realize they don't hate AI, they hate the current form of capitalism. It took something that reduces work and made it a bad thing.


> It took something that reduces work and made it a bad thing.

It doesn't reduce work, it improves productivity, and virtually none of the productivity gains of the last 50 years have benefited the worker. So you end up working the same hours, producing more, and not being paid for the difference.

https://economicsfromthetopdown.com/wp-content/uploads/2020/...


Improved productivity is reduced work. We don't have to work the same hours. Labor doesn't always have to relent.


idk man, I'm still working the same hours as 10 years ago, and my retirement age has gone up since then. If anything I'm working way more, certainly more than my parents and grandparents did.


Yes, precisely what I am trying to say. This is not an outcome of the technology; it's an outcome of how our socio-economic system is set up. The company owners could easily have given you the benefit of the technological improvement: made a 3-day work week or a 4-hour work day, hired more people, or reduced their own ambitions. Instead they chose to squeeze everything out of you.


We agree then. Politicians in my country were saying automation would bring the 3-day work week back in the early 1980s, haha.


This has been my thinking as well. It's quite anti-human that a technology that improves automation and productivity works against the common interest rather than for it.


/s/ is kind of a skeuomorph for me. I have never used sed, but I understand this syntax.
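
For anyone who hasn't seen it, the syntax in question is sed's s/pattern/replacement/flags substitution form; a rough Python equivalent:

    import re

    # sed's substitution syntax: s/pattern/replacement/flags
    # e.g.  sed 's/colour/color/g' notes.txt
    text = "colour scheme, colour depth"
    print(re.sub("colour", "color", text))  # color scheme, color depth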

