mekpro's comments | Hacker News

Except that, on OpenRouter, DeepSeek has always maintained a top-10 ranking. Although I don't use it personally, I believe its main advantage over other models is price/performance.

Fifth in market share in fact!

https://openrouter.ai/rankings

There are a lot of applications where you really just want a cheap and efficient model that's still somewhat competitive, and that's exactly the niche DeepSeek fills best.


I think the opposite. Having NVIDIA invest in TSMC's bleeding-edge process node should benefit Apple rather than disadvantage it.

It means that Apple doesn't have to be the sole investor in leading-edge node development, which is harder to justify, especially in a year when the smartphone upgrade cycle is slowing down. Having NVIDIA (and the AI boom) in the picture should help Apple reduce CAPEX on its semiconductor investment.


They are so beautiful that I don't want any of them to be stolen by AI.


They are no less beautiful nor are they gone after some gradient descent has slithered over them, so worry not.


It’s strange and highly unpleasant to watch people treat something as a zero-sum game that is decidedly not one.


No, I get it.

AI has the effect of making whatever it creates feel worthless. Something AI made says “this wasn’t worth spending any time on; it’s not something important.” Seeing something you care about become part of that sucks.


That idea is incomprehensible to me.

I see value (or lack of it) in what I have in front of me, not in how much a person had to struggle and suffer for it to come into existence.

Either something is good or it’s not. Creating something good can sometimes take a lot of effort, but it’s not the effort that makes it good. Otherwise digging a hole and filling it back up would be a valuable undertaking.


What you are saying is that you either do not understand or do not care for craft (it’s an observation, not a criticism), but craft has definite value beyond the end result. Effort does play a huge part, including in animation.

http://gurneyjourney.blogspot.com/2019/03/painting-backgroun...

The lights in the windows in the backgrounds of Akira, for example, were painstakingly painted one by one. That takes skill. That is impressive. It’s the kind of work that makes someone with an appreciation for art (which goes beyond “pretty picture”) take another look and imagine what the artist was feeling and thinking as they were working. It makes you wonder about exact techniques and how to improve them, how to create something new.

All of that enhances the appreciation for the movie. The craft, the skill, and the sweat put into realizing a hard and grandiose vision all play into how good and influential it has become.

Had those buildings just been spit out by gen AI along with everything else, there would be no value to taking a second look. You’d probably be looking at distorted images anyway, and even if you weren’t it’d just be a bunch of pixels with no intentionality to it. If no one put effort into the details, there’s no reason to look at them. The converse is also true.


I do care for craft, but I don’t view it as an end in itself. The value of craft lies in what it creates, and that value reflects back on the undertaking itself.

But if a machine can replicate mechanically what takes a human effort and ingenuity to do, a human doing the same thing through effort and ingenuity doesn’t magically add further value. And this is understood quite universally; that’s why no human practices the craft of multiplying large numbers anymore.


The part of craft that can be replicated mechanically is the least interesting and valuable part of art.

This is what AI art supporters fail to understand, because few, if any, of them actually practice the craft they emulate. They tend to work only with code and algorithms, for which there is no fundamental human expression involved. Because, apart from rote intellect and memory, the human experience is meaningless to them in regard to coding, where they act merely as a means of inputting instructions into a machine, they assume the human experience is equally meaningless for all creative endeavors.

However, the value lies not in the technical aspects of craft as an end in itself (which, mind you, no AI is actually good at yet) but in craft as a means of expressing the human experience of an artist and their relationship to the viewer. That dialogue isn't something an LLM can replicate, because by definition humanity isn't something an LLM can experience. And even if the craft were perfectly mastered on a technical level, it wouldn't have the same value as human expression, just as a skillful forgery doesn't have the same value as an original.


> The part of craft that can be replicated mechanically is the least interesting and valuable part of art.

Everything humans do can be replicated mechanically. We’re biological machines, and crafts are just behaviors, not some mystical feat that somehow defies replication or analysis. And there can be no reasonable doubt that machines will replicate (and indeed surpass) everything human very soon.


This doesn't actually refute my comment, even given the assumption that your predictions prove correct. Even given a purely physicalist universe and a machine perfectly capable of replicating all human endeavors, most humans will find more value in human expression.

That doesn't require any argument from mysticism, just an understanding that the context of humanity has value for most humans (perhaps not you, but most humans) beyond the pure transactional mechanisms of value creation, stimulus and response.


No, it doesn’t have to take effort, but effort does mean that someone genuinely cares.

Like, I love blog posts. Really do, I’ll read anyone’s about anything. Someone thought of something and cared about it and put it into the world and that’s wonderful.

But someone making an AI post doesn’t care. And worse, it makes anyone who does care feel silly, like: why am I wasting my time on this thing that’s so worthless that whatever the computer spits out first is good enough for them?


AI output often 'looks like something' on first try, which makes it easy to assume no effort went in.

But there's a big difference between prompting and accepting the first output versus someone using search, multiple LLMs, actually READING the underlying papers, and iterating until it's done.

Sometimes that still means getting to 'done' faster than by more traditional means. Sometimes it means more depth than you'd manage otherwise. Sometimes somewhere in between.

Of course, by that point, either way, it doesn't really look like lazy AI output anymore.

Maybe it's not so much about the tools/agents as it is about the intent-to-engage behind them?


Yeah totally. Content aware fill is AI by any definition but I don’t have a problem with that.

It’s the stuff where, if the creator couldn’t be bothered to care about the details, why should I? And most gen AI art is that way.


Digging a hole and filling it up might very well have value as an art piece. The process is important for art, not just the end product.


Okay. Let's say we find out tomorrow that Spirited Away was animated via generative AI. Unbeknownst to everyone, Ghibli has a top-secret AI division which—thanks to some key lucky breakthroughs—is many decades ahead of everyone else and has been for a long time. The animators are a front to hide the truth; Miyazaki's anti-AI declarations were pure jealousy.

Would Spirited Away no longer be a good film?


You miss something critical here. For that to happen, that GenAI would have had to be trained on another "Ghibli".

So your question isn't whether Ghibli had an AI, but whether Ghibli had a whole time traveling machine with it.

Your question feels like asking whether Einstein, Plato, etc. were secretly time travelers and copied someone else's style.

A general problem with all GenAI is that it copies and imitates. And just as generated code can be messy and dumb, you'll find that Stable Diffusion does stupid and dumb things in pieces of art, especially things it wasn't trained on. You can see this most prominently in big, detailed fantasy pictures (as in, not just a photo) when you look at the details. While the overall image "looks cool," you don't get the details that artists produce, and you notice a lot of silly, dumb choices, the kind of thing that for a human author would be a "strange thing to invest time in and still do so badly".

I'd argue that if we had AI in the sense of a system that actually understood things and could actually show creativity, the story might be different, but as of today it is unknown whether that's even possible. It would make sense, just as alien life would make a lot of sense. But for both actually thinking systems and alien life, we have no clue how close we are to seeing one.

Every time someone takes an unbiased look at it (and there are many papers), it is shown that there is no understanding of anything, which, to be fair, is far from surprising given what the "training" actually is ("training" itself being an allegorical term for something that is only sort of simulated, but also not really).

There might very well be hard and pretty obvious limitations, such as: to feel and express like a human, you need to be a human, or provide a way to simulate one. And if you look at biology, anatomy, medicine, etc., you'll soon realize that even if we had the technical means to do so, we simply don't know most things yet; otherwise we could likely cure Alzheimer's, build artificial brains, and so on.

The question then might be, aside from all the ethical parts (when does something have human rights?), whether a superhuman, as all the futurists believe there will be, would even be able to create something of value to a regular human, or whether the experiences are just too different. It can already be hard to get anything out of art you cannot relate to, beyond general analytical interest. However, on that side of things, Spirited Away might already be on the "little value" side.

This isn't to defend human creation per se, but to counter the often completely off-base understanding of what GenAI is and does.

One final comparison: we already have huge numbers of people capable of reproducing Ghibli and other art. Their work might be devalued (even though I'd assume some of them put their own stuff into their work).

People don't buy a Picasso because they can't find a copy or a print; copies even have added benefits, such as not requiring as much care and being cheaper. Einstein isn't unimportant today because you can learn about his work in school or on Wikipedia.

But your question is like asking whether Einstein's work would be without value, if he secretly had Wikipedia.


> You miss something critical here. For that to happen that GenAI would have had to be trained on another "Ghibli".

Eh, maybe it got trained on Nausicaä, and then a lot of prompting and manual touch up work was used to adapt the style to what we now know as Spirited Away. Or maybe that animation department wasn't completely for show and they did draw some reference frames, but the AI figured out everything in between.

I don't really want to get into a discussion about the theoretical limits of AI, because I don't know what they are and I don't think anyone does. But if "the process is important for art," what happens if the creator lies about the process? If you initially experience the art without knowing about the lie, does learning the truth retroactively erase your previous experience? How does that make sense?

It has always seemed more logical to me that the final piece ought to be all that matters when evaluating art, and any details you know about the creator or process should be ignored to the greatest extent possible. That's difficult to do in many cases, but it can be a goal. I'm also aware that lots of people disagree with me on this.



Spirited Away is an intricate expression of Miyazaki's ethics as formed by his unique lived experience and nostalgia for classical Japanese culture, as well as a criticism of Western capitalist excess filtered through Shinto philosophy.

There is literally no universe in which a generative AI creates a work of art of that magnitude. You can get "make this meme in the style of Ghibli" from an AI, and it can imitate the most facile properties of the style, but that still requires the style to imitate. AI is never going to generate the genius of Hayao Miyazaki from first principles; that isn't even possible.


And at the very least, if someone was willing to do back-breaking labor to tell me something, it’s probably something they think is important.


the process is not important for art, although it might have value for people. art is a subjective experience, one that comes to life in the observer.


imo a massive problem with generative AI is the communication skills of its creators.

Look at Google Gemini and how it's accepted. The only two differences between it and the rest are that it's made by Google, and that Google doesn't brag about disrupting society or damaging its workings (Google does disrupt society and damage its workings).

It's one thing to design a shotgun; it's another to give it the commercial name "Street Sweeper". The latter is asking to be treated unfairly. Torrenting a bunch of media content and brandishing the runnable blobs as weapons that kill all $classes_of_good_people just isn't, and never was, the way you communicate anything to anyone.


right, but photography as an art form is pretty much dead, isn't it? and I'm saying this as a heavy AI user


In many ways, it is a zero-sum game. Art is a form of communication, and people have a finite amount of time and attention. Some people enjoy seeing the craftsmanship of artists, and some artists enjoy displaying their mastery of the craft. Beyond that, people use craftsmanship as a proxy for the care/thought put into a work. If you can successfully mimic the appearance of craftsmanship without the effort, a major incentive for artists to create, polish, and publish their work is now gone. If you're someone who enjoys viewing craftsmanship, or who tries to find high-quality work based on the craftsmanship put into it, how long will you be willing to look through a sea of increasingly convincing noise to find some kind of signal?


ever seen a meme being run into the ground by overexposure? ever heard a song on the radio one too many times?

it's like that


If anything, AI will be an amplifier for that style. I had never heard of the "Ghibli" style until people started creating AI avatars based on it.


Studio Ghibli doesn’t want the style “amplified”. That brings them no benefit, it’s only detrimental, and they’ve made that abundantly clear.

They are one of the best, most popular and influential animation studios ever. That you had never heard of them suggests you have little to no interest in animation, which is perfectly fine but also means you’re not their target audience.


How does this improvement translate into real-world agentic coding tasks?


It doesn't. However, having a free-of-charge maths genius available 24/7 has broad potential. It's hard to predict what it will be used for.


It would be helpful in automating the busy work of many verification-aware programming languages. At least the Dafny authors are excited about it.


IMHO, this remains a great space to explore. You type a formal specification, e.g. in Hoare logic, and a mix of SAT/SMT solvers and LLMs autocompletes an implementation. Correct by construction.

It would also facilitate keeping engineers in the loop, who would decompose the problem into an appropriate set of formally specified functions.

They could also chip in when necessary to complete difficult proofs or redefine the functions.
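
To make that concrete, here is a toy sketch (my own illustration, not from any existing project) of what an engineer-written specification could look like in C with ACSL contracts, the notation Frama-C reads. The function body would be left for the SAT/SMT-plus-LLM loop to fill in and prove:

    #include <stddef.h>

    /*@ requires n > 0;
      @ requires \valid_read(a + (0 .. n-1));
      @ ensures \forall integer i; 0 <= i < n ==> \result >= a[i];
      @ ensures \exists integer i; 0 <= i < n && \result == a[i];
      @ assigns \nothing;
      @*/
    int array_max(const int *a, size_t n); /* body to be synthesized and verified */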


Another possibility is to automatically annotate software with assertions, preconditions, postconditions, or other verification annotations based on the language's semantics and programmer intent, and then run a verifier on the result and evolve the program and annotations based on that intent. So for C, it could fill in the data needed by Frama-C.
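
As a hypothetical illustration (a toy example of mine, not anything from the Frama-C project), this is the kind of annotation an LLM could be asked to add to an existing C function so a verifier can check it:

    #include <stddef.h>

    /*@ requires \valid(a + (0 .. n-1));
      @ ensures \forall integer k; 0 <= k < n ==> a[k] == 0;
      @ assigns a[0 .. n-1];
      @*/
    void zero_fill(int *a, size_t n)
    {
        size_t i;
        /*@ loop invariant 0 <= i <= n;
          @ loop invariant \forall integer k; 0 <= k < i ==> a[k] == 0;
          @ loop assigns i, a[0 .. n-1];
          @ loop variant n - i;
          @*/
        for (i = 0; i < n; i++)
            a[i] = 0;
    }

Something like "frama-c -wp file.c" would then attempt to discharge the contract and loop invariants, and the surrounding tool could regenerate the annotations (or the code) whenever a proof fails.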


This already exists: https://www.wolframalpha.com/


Since you're bad at maths, you think being good at maths is being a calculator like WolframAlpha.


i got 70 tokens/s on an m4 max


That M4 Max is really something else; I also get 70 tokens/second on eval on an RTX 4000 SFF Ada server GPU.


try enabling flash attention and offloading all layers to the GPU


Will this limit also count together with Claude Chat?


you can easily reach $50 per day. by force-switching the model to opus (/model opus) it will continue to use opus even though there is a warning about approaching the limit.

i found opus significantly more capable at coding than sonnet, especially for tasks that are poorly defined. thinking mode can fill in a lot of missing detail, and you just need to edit a little before letting it code.


wow. haven't tried Opus but Sonnet 4 is already damn good.


Just refactored 1000 lines of Claude Code-generated code down to 500 lines with Gemini Pro 2.5! Very impressed by the overall agentic experience and model performance.


To professionals in the field, I have a question: what jobs, positions, and companies are in need of CUDA engineers? My current understanding is that while many companies use CUDA's by-products (like PyTorch), direct CUDA development seems less prevalent. I'm therefore seeking to identify more companies and roles that heavily rely on CUDA.


My team uses it for geospatial data. We rasterize slippy map tiles and then do a raster summary on the gpu.

It's a weird case, but the pixels can be processed independently for most of it, so it works pretty well. Then the rows can be summarized in parallel and rolled up at the end. The copy onto the gpu is our current bottleneck however.
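
For anyone curious what that pattern looks like, here is a rough CUDA sketch (entirely a toy example, not our production code; the "summary" here is just counting pixels above a threshold): one thread per pixel, then one thread per row to build row summaries, then a final roll-up on the host, with the host-to-device copy that is the bottleneck mentioned above:

    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>

    // One thread per pixel; every pixel is evaluated independently.
    __global__ void classify_pixels(const float *raster, int *flags,
                                    int width, int height, float threshold)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < width && y < height)
            flags[y * width + x] = raster[y * width + x] > threshold ? 1 : 0;
    }

    // One thread per row; rows are summarized in parallel.
    __global__ void sum_rows(const int *flags, int *row_sums, int width, int height)
    {
        int y = blockIdx.x * blockDim.x + threadIdx.x;
        if (y >= height) return;
        int sum = 0;
        for (int x = 0; x < width; ++x)
            sum += flags[y * width + x];
        row_sums[y] = sum;
    }

    int main()
    {
        const int width = 512, height = 512;
        std::vector<float> h_raster(width * height, 1.0f); // stand-in rasterized tile
        std::vector<int> h_row_sums(height);

        float *d_raster; int *d_flags, *d_row_sums;
        cudaMalloc(&d_raster, width * height * sizeof(float));
        cudaMalloc(&d_flags, width * height * sizeof(int));
        cudaMalloc(&d_row_sums, height * sizeof(int));

        // Host-to-device copy: the step called out above as the bottleneck.
        cudaMemcpy(d_raster, h_raster.data(), width * height * sizeof(float),
                   cudaMemcpyHostToDevice);

        dim3 block(16, 16);
        dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
        classify_pixels<<<grid, block>>>(d_raster, d_flags, width, height, 0.5f);
        sum_rows<<<(height + 255) / 256, 256>>>(d_flags, d_row_sums, width, height);

        cudaMemcpy(h_row_sums.data(), d_row_sums, height * sizeof(int),
                   cudaMemcpyDeviceToHost);

        // Final roll-up of the per-row summaries.
        long long total = 0;
        for (int y = 0; y < height; ++y) total += h_row_sums[y];
        printf("pixels above threshold: %lld\n", total);

        cudaFree(d_raster); cudaFree(d_flags); cudaFree(d_row_sums);
        return 0;
    }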

