Hacker News | dbspin's comments

The 'cut' page in DaVinci specifically exists to replicate the Final Cut editing UX.

It's an optional way of editing separate from the 'edit' tab.


Retraining to what exactly? The middle class is being hollowed out globally - so reduced demand for the service economy. If we get effective humanoid robots (seems inevitable) and reliable AI (powered by armies of low-paid workers filling in the gaps / taking over whenever the model fails), I'm not sure how much of an economy we could have for 'retraining' into. There are only so many onlyfans subscriptions / patronages a billionaire needs.

UBI effectively means welfare, with all the attendant social control (break the law, lose your UBI, with law as an ever-expanding set of nuisances, speech limitations etc), material conditions (nowhere UBI has been implemented is it equivalent to a living wage) and self-esteem issues. It's not any kind of solution.


Health care, elder care, child care are all chronically short of willing, able bodies.

Most people want to do anything but these three things - society is in many ways a competition for who gets to avoid them. AI is a way of inexorably boxing people back into actually doing them.


Totally agree; these are all in need of bodies, plus they are always understaffed (why the hell does a nurse need to oversee 15 patients while people have to rot in the ICU for hours? We accept this because it's cost effective, not because it's a decent or even safe practice). Governments could and should make conditions in those professions more tolerable, and use money from AI to retrain people into them. If a teacher oversaw 10 kids instead of 35, maybe we'd have less burnout and maybe children would get a better education. If we had more police there would be less crime and less burnout. Etc etc. The thing is what happens until (and if) we get to this utopia.

> Governments could and should make conditions in those professions more tolerable, and use money from AI to retrain people into them.

FWIW, my vision was not really this utopian. It was more about AI smashing white-collar work as an alternative to these professions so that people are forced into them despite their preference to do pretty much anything else. Everyone is more bitter and resentful and feels less actualized and struggles to afford luxuries, but at least you don't have to wait that long in the emergency room and it's 10 kids to a classroom.


I don't think it's utopia either (I was being a bit sarcastic), but it's the best-case scenario; the worst case is governments do nothing and let "the market" run its course; this could be borderline Great Depression levels of deprivation, I think.

As for those professions: I think they are objectively hard for certain kinds of people, but I think much of the problem is the working conditions; fewer shifts, less stress, more manpower, and you'll see more satisfaction. There's really no reason why teachers in the U.S. should be this burned out! In Scandinavia, being a teacher is an honorable, high-status profession. Much of this has to do with framing and societal prestige rather than the actual work itself. If you pay elder carers more, they'll be happier. We pretty much treat our elders like a burden in most modern societies; in more traditional societies, I'm assuming that caring for elders is not a low-status gig.


Yeah, the future is either UBI, or employing a very large number of people in the public sector, doing jobs that are useful but not necessarily something free-market capitalism values right now.

Either way, governments need to heavily tax corporations benefiting from AI to make it possible.


> If we get effective humanoid robots

That's still an if and also a when; it could be two decades from now or more until this reliably replaces a nurse.

> Retraining to what exactly?

I wish I had a good solution for all of us, and you raise good points. Even if you retrain to become, say, a therapist or a personal trainer, the economy could become too broken and fragmented for you to make a living. Governments that can will have to step in.


At a certain point people will break, and these sociopathic C-suites will be the first ones on the chopping block. Of course, that's why the biggest degenerates like Zucc are all off building doomsday bunkers, but I don't see a reality in which people put up with these types of conditions for long.

That said, it'll certainly get much, much worse before it starts getting better. I guess the best we can hope for is that the kids find a way out of the hell these psychos paved for us all.


People put up with what they have to put up with. Many millions of people have lived and suffered under totalitarian regimes with basically zero options to do anything about it. I think that's where we're headed, and by the time a sufficient number of people realise how bad their situation is, the moment to do anything about it will have long since passed. There will be no cavalry riding to the rescue this time.

This is exactly what Meredith Whittaker is saying... The 'edge conditions' outside the training data will never go away, and 'AGI' will for the foreseeable future simply mean millions in servitude teleoperating the robots, RLHFing the models or filling in the AI gaps in various ways.

> I understand artists etc. Talking about AI in a negative sense, because they don’t really get it completely, or just it’s against their self interest which means they find bad arguments to support their own interest subconsciously.

Running this paragraph through Gemini returns a list of the fallacies employed, including Attacking the Motive: "Even if the artists are motivated by self-interest, this does not automatically make their arguments about AI's negative impacts factually incorrect or 'bad.'"

Just as a poor person is more aware, through direct observation and experience, of the consequences of corporate capitalism and financialisation, an artist at the coal face of the restructuring of the creative economy by massive 'IP owners' and IP pirates (i.e.: the companies training on their creative work without permission) is likely far more in touch with the consequences of actually existing AI than a tech worker who is financially incentivised to view them benignly.

> The idea that AI is anything less than paradigm shifting, or even revolutionary is weird to me.

This is a strange kind of anti-naturalistic fallacy. A paradigm shift (or indeed a revolution) is not in itself a good thing. One paradigm shift that has occurred, for example, in recent geopolitics is the normalisation of state murder - i.e.: extrajudicial assassination in the drone war, or the current US government's use of missile attacks on alleged drug traffickers. One can generate countless other negative paradigm shifts.

> if I produce something art, product, game, book and if it’s good, and if it’s useful to you, fun to you, beautiful to you and you cannot really determine whether it’s AI. Does it matter?

1) You haven't produced it.

2) Such a thing - a beautiful product of AI that is not identifiably artificial - does not yet exist, and may never.

3) Scare quotes around intellectual property theft aren't an argument. We can abandon IP rights - in which case hurrah, tech companies have none - or we can in law at least, respect them. Anything else is legally and morally incoherent self justification.

4) Do you actually know anything about the history of art, any genre of it whatsoever? Because suggesting that originality is impossible and that 'efficiency' of production is the only form of artistic progress suggests otherwise.


At least in my country they face no competition. For a given location, only one app will work.


This comment is wilfully gloating over the murder of civilians. It's beneath the standards of the Hacker News community or any civil society. What has become of America?


I'd consider hallucinations to be a fundamental flaw that currently sets hard limits on the current utility of LLMs in any context.


I thought this for a while, but I've also been thinking about all the stupid, false stuff that actual humans believe. I'm not sure AI won't get to a point where even if it's not perfect it's no worse than people are about selectively observing policies, having wrong beliefs about things, or just making something up when they don't know.


The level of paid nation-state propaganda is a rounding error next to the amount of corporate and politically partisan propaganda paid for directly, or inspired by content that is paid for directly, by non-state actors. e.g.: Musk, MAGA, the liberal media establishment.


Literally got an email this morning from Google, to say my Google One plan now 'includes AI benefits' - including:

"More access to Gemini 3 Pro, our most capable model
More access to Deep Research in the Gemini app
Video generation with limited access to Veo 3.1 Fast in the Gemini app
More access to image generation with Nano Banana Pro
Additional AI credits for video generation in Flow and Whisk
Access Gemini directly in Google apps like Gmail and Docs"

[Thanks but no thanks]


This always blows my mind about the US - the fact that individual cities and states are large enough markets that people can become enormously wealthy catering to their locality. A staggering difference from Europe.


...I'm in the EU - it's not a US-specific feature.

