Hacker News | russfink's comments

zOMG dying laughing here

Regular HN discussion about wind vs nuclear.

> With four treatment sessions spaced fortnightly,

This is a clearer statement of “Every two weeks” than “bimonthly” or “semimonthly.”

Brilliant!


Mathematically speaking, “no child left behind” is equivalent to “no child out in front.”


Except what if you don't really grok those ffmpeg flags and the LLM tells you something wrong - how will you know? Or, more commonly, sends you down a re-encode rabbit hole when you just needed to clip a bit off the end?
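For the record, the "simple clipping" case usually wants stream copy, not a re-encode. A minimal sketch of building that ffmpeg invocation (filenames and the helper name are illustrative, and stream-copy cuts are only keyframe-accurate):

```python
# Build an ffmpeg command that trims a file WITHOUT re-encoding,
# via stream copy (-c copy): fast, lossless, but cuts snap to keyframes.
def trim_cmd(src, dst, start="0", end=None):
    args = ["ffmpeg", "-ss", start, "-i", src]
    if end is not None:
        args += ["-to", end]          # stop writing at this timestamp
    args += ["-c", "copy", dst]       # copy audio/video streams as-is
    return args

# Keep only the first 1:30, dropping the rest - no quality loss:
print(trim_cmd("in.mp4", "out.mp4", end="00:01:30"))
```

If the LLM's answer includes `-c:v libx264` or similar for a plain trim, that's the re-encode rabbit hole the comment is talking about.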


A ten-year-old laptop? Pretty sure it has a TPM 2.0 on it.


I also have a 10-year-old laptop with no TPM 2.0 module. It was pretty high end for its time too (a Dell XPS). I haven't needed it for much in recent years, but it still runs perfectly fine and I'm happy to continue using it if the need arises again. Sounds like I'll have to switch it over to Linux like I have all my other PCs.


Practical question: when getting the AI to teach you something, e.g. how attention can be focused in LLMs, how do you know it's teaching you correct theory? Can I use a metric of internal consistency, repeatedly querying it and other models with a summary of my understanding? What do you all do?


> What do you all do?

Google for non-AI sources. Ask several models to get a wider range of opinions. Apply one’s own reasoning capabilities where applicable. Remain skeptical in the absence of substantive evidence.

Basically, do what you did before LLMs existed, and treat LLM output like you would have a random anonymous blog post you found.
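The "internal consistency" metric from the question can be approximated as majority agreement across several models' answers. A toy sketch, where the answers are already collected (the querying step is elided; no real model API is assumed, and the sample answers are made up):

```python
from collections import Counter

def consensus(answers):
    """Return the most common normalized answer and its agreement ratio."""
    normalized = [a.strip().lower() for a in answers]
    best, count = Counter(normalized).most_common(1)[0]
    return best, count / len(normalized)

# Hypothetical replies from three models to the same yes/no question:
ans, ratio = consensus(["Yes", "yes ", "No"])   # -> ("yes", 2/3)
```

Agreement is weak evidence at best - models trained on similar data can share the same mistake - so it complements, rather than replaces, checking non-AI sources.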


In that case, LLMs must be written off as very knowledgeable crackpots because of their tendency to make things up. That's how we would treat a scientist who's caught making things up.


Conspicuously missing is a direct mention of AI tools. Is MIT, like others, side-stepping the use of AI by students to (help them) complete homework assignments and projects?


A question. If you think AI use by students to "bypass homework" is anything remotely approaching a problem, then I must ask you how you felt/feel about:

- Universities being cost-prohibitive to 90 percent of all humans, as institutions driven by finances, not performance.

- Before AI, 20+ years of Google indexing and searches fueling academia.

- Study groups before that, allowing group completion (or cheating, in your view).

- The textbook that costs 500 dollars, or the Pearson textbook software that costs 500 and has the homework answers in it.

I think it's a silly position that students using AI is anything to even think about. I use it at my Fortune 500 job every day, and have learned more about my field's practical day-to-day from it than from any textbook, homework assignment, practical, etc.


>study groups before that allowing group completion (or, cheating, in your view)

Totally dependent on school/department/professor policy.

Some are very strict. Others allow working together on assignments. (And then there are specific group projects.)


If you click through the lectures they are mentioned in several of them.


Link to the About page that clearly describes the effort and rationale.

https://missing.csail.mit.edu/about/


Back in the day, wustl.edu was seen as a leader in computer applications. It's sad that it now can't just create its own systems to handle its tasks, especially with AIs around to offer coding help. Imagine spending a fraction of this money and directing it to students to develop said systems.


“Just let students vibe code your ERP” is a hell of a take.


What do you think the consultants are doing? They're mostly last year's graduates anyway.


Is it any worse than an army of consultants? It would be one thing if it were some off-the-shelf software, but a huge chunk of this project seems to be a new custom application intended for student and faculty use.

It just sounds like Accenture-ware with a new name.


If you’re going to open with “is having CS students hack their way to a solution via AI actually worse than Workday?”, this isn’t a fair framing for the person you’re replying to. I assume you’re young and well-intentioned; I would have said the same thing when I was young. The problem is that this approach leaves you, in the best case, with overworked students looking for glory instead of pay, failing classes while enjoying the hacking, ~80% of the work done, and no staff for maintenance. This isn’t an opportunity for a hero-hacker story; it’s a real business, and it affects people’s lives at their most vulnerable (higher education, paid education, hospital systems).


Admitting "our students can't design or code" is also pretty wild.


What you describe is possibly the one thing that would be worse than implementing Workday.


One trick I have tried is asking the LLM to output a specification of the thing we are in the middle of building. A commenter above said humans struggle with writing good requirements; LLMs also have trouble following good requirements - all of them - often forgetting important things while scrambling to address your latest concern.

Getting it to output a spec lets me correct the spec, reload the browser tab to speed things up, or move to a different AI.

