Hacker News | new | past | comments | ask | show | jobs | submit | pseidemann's comments

Isn't this just the result of unregulated capitalism?

"United Statesians" just doesn't sound as nice as "Canadians" or "Mexicans".

Imports are needed and important, not "bad". Most countries import goods. Why? Because not everyone produces everything. That is how society started and still works today.

You can always long-press the on/off button to force a power cycle.

Or just pull out the battery.

There is no giving (or taking).

I think std::rvalue would be the least confusing name.

> std::rvalue
I'm convinced naming things is equivalent to choosing the right abstraction, and caching things is creating a correct "view" from given normalized data.
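To sketch what "a cache as a correct view over normalized data" can mean (names and data here are invented for illustration): the cached aggregate is never patched in place, it is always rederivable from the source of truth, so it can only ever be stale, not wrong.

```python
# Illustrative sketch: a cache treated as a derived "view" over
# normalized source-of-truth data, rebuilt rather than patched.

orders = [  # normalized data: one row per order
    {"user": "alice", "total": 30},
    {"user": "bob", "total": 20},
    {"user": "alice", "total": 50},
]

def build_totals_view(rows):
    """Derive the cached per-user totals entirely from the source data."""
    view = {}
    for row in rows:
        view[row["user"]] = view.get(row["user"], 0) + row["total"]
    return view

cache = build_totals_view(orders)   # {'alice': 80, 'bob': 20}

orders.append({"user": "bob", "total": 5})
cache = build_totals_view(orders)   # "invalidation" is just recomputing the view
```

The design point: because the cache is a pure function of the normalized data, correctness reduces to deciding *when* to recompute, not *how* to keep two copies consistent.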

Such "science" should be illegal.

If propaganda were illegal, who would decide what was propaganda and what was simply argumentation made from a position of relative ignorance?

The courts could easily decide whether a message has been paid for or not.

All messages are paid for by someone.

the greatest travesty of modern science is that fraud is not illegal.

in every other industry that i can imagine, purposely committing fraud has been made illegal. this is not the case in modern science, and in my opinion it is the primary driver of things like the replication crisis and the root of all the other problems plaguing academia at the moment.


It's not legal, but intentional misconduct can be tough to prove.

https://www.justice.gov/archives/opa/pr/professor-charged-op...

https://en.wikipedia.org/wiki/Eric_Poehlman

https://en.wikipedia.org/wiki/Scott_Reuben

> in every other industry that i can imagine

Our own industry (tech) is rife with unpunished fraud.


> intentional misconduct can be tough to prove

It's hard to prove when it isn't investigated. How many of the debunked psychology professors took federal funding? How many have been criminally investigated?


> How many of the debunked psychology professors took federal funding?

But being wrong isn't a crime. Intentional fraud is.

> It's hard to prove when it isn't investigated.

And it's hard to investigate without some reasonably solid evidence of a crime.


> it's hard to investigate without some reasonably solid evidence of a crime

I’d say the Ariely affair is reasonably suspicious.


I don't disagree, but it appears Duke did investigate in that case, and was unable to prove intentional wrongdoing.

I am glad it takes more than mere suspicion for the government to go search my private writings and possessions.


my own institution launched an internal investigation into a professor who i know for a fact committed fraud, and it was "unable to prove intentional wrongdoing". academic institutions have taken the "this never happens because we are morally pure" approach, which we all know is a load of baloney; they are perversely incentivized to never admit fraud.

the witness who reported it, who i am friends with, was directly instructed by this professor to falsify data to cast it in a more positive light in order to impress grant funders. multiple people were in attendance at this meeting, but even that was not enough to see any disciplinary action.

duke also has a notorious reputation for being a fraud mill.


> it appears Duke did investigate in that case, and was unable to prove intentional wrongdoing

They also kept the grant money. The university investigating itself isn’t meaningful.


> They also kept the grant money.

Is that not the reasonable response if an investigation didn't turn up wrongdoing?


Note both those guys were found guilty of taking government money under false pretenses (in connection with fake science, not for doing fake science itself), which is more supporting evidence that fake science is legal.

The government funds an enormous proportion of research, and they've got a lot more power to do something about it when you make them mad.

What, specifically?

Industry funded research? Results that disagree with the current consensus? Nutrition science entirely?


Isn't there more indirection as long as LLMs use "human" programming languages?


If you think of the training data (e.g. SO, GitHub, etc.), then you have a human asking about or describing a problem, followed by the code as the solution. So I suspect current-gen LLMs are still following this model, which means for the foreseeable future a human-language prompt will still be the best.

Until such time, of course, as LLMs are eating their own dogfood, in which case they (as has already happened) create their own language, evolve dramatically, and cue Skynet.


More indirection in the sense that there's a layer between you and the code, sure. Less in that the code doesn't really matter as such and you're not having to think hard about the minutiae of programming in order to make something you want. It's very possible that "AI-oriented" programming languages will become the standard eventually (at least for new projects).


One benefit of conventional code is that it expresses logic in an unambiguous way. Much of "the minutiae" is deciding what happens in edge cases. It's even harder to express that in a human language than in computer languages. For some domains it probably doesn't matter.
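A small, hypothetical example of that point: the prose spec "split the amount evenly among n people" is silent about remainders and about n = 0, while code is forced to pick one explicit behavior for each edge case.

```python
# Hedged sketch: a deliberately simple function whose whole value is that
# the edge-case decisions are spelled out, not left to interpretation.

def split_cents(amount_cents: int, n: int) -> list[int]:
    """Split an amount into n shares, giving leftover cents to the first shares."""
    if n <= 0:
        raise ValueError("need at least one person")  # edge case: no recipients
    base, remainder = divmod(amount_cents, n)
    return [base + 1 if i < remainder else base for i in range(n)]

shares = split_cents(1000, 3)  # [334, 333, 333]; the remainder policy is explicit
```

"Who gets the extra cent" is exactly the kind of minutiae that is unambiguous here but easy to leave unstated in a natural-language prompt.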

It’s not clear how the affordances of programming languages really differ between humans and LLMs.

I thought it was there to secretly catch time travelers.

