One thing I have noticed that drives me up the wall with AI-generated summaries is that they don't provide decent summaries most of the time. They are summaries of an actual summary.
For instance: "This document describes a six-step plan to deploy microservices to any cloud using the same user code, leading to various new trade-offs."
OK, so what are these six steps and what are the trade-offs? That would be the real summary I want, not the blurb.
The point of a summary is to tell me what the most important ideas are, not make me read the damn document. This also happens with AI summaries of meetings: "The team had a discussion on the benefits of adopting a new technology." OK, so what, if any, were the conclusions?
Unfortunately, LLMs have learned to summarize from bad examples, but a human can and ought to be able to provide a better one.
The AI labs use each other's models constantly. It's also pragmatic: there are cases where one model can't do something but a different model can blow right through it.
The Monroe Doctrine is over 200 years old and simple enough for your average dictator to understand. Don't expect the US to turn a blind eye to investments in key strategic assets of your country by its strategic rivals.
I don't think it's a coincidence that a special envoy of Xi met Maduro hours before being captured. It was probably the final straw.
Realpolitik can only ever be an explanation, not a justification. We don't need to accept this from our leaders, especially if we live in any of the more powerful nations of the globe.
The last US president to seriously question their country's foreign policy got their head blown off. It goes without saying that Trump is not a serious person.
Everyone wants to live in the most powerful nation of the globe. Nobody wants to acknowledge what it takes to be the most powerful nation in the world.
Even if a peace deal is reached, and even if that is part of the peace deal, that doesn't mean we are accepting it: we [Europeans] fought (mostly economically) a good 5 years to try to prevent it, and lost. Accepting this would have meant not providing any aid to Ukraine and instead just saying "Russia has a clear doctrine of not allowing NATO control of the Ukraine region" as if this justified their actions.
Russia is one of Europe's major trading partners. The same issue applies to Australia, which goes against its largest trading partner to appease the US Empire.
Per the speech given by the Chairman of the Joint Chiefs of Staff, this operation had already been green-lit for an unspecified period of time, but they waited for ideal weather conditions to launch it.
>On December 2021, The New York Times published the Civilian Casualty Files. These files reveal that the US military, under the Obama and Trump administrations, deliberately killed civilians
Ah yes that well known conspiracy site known as “The New York Times” /s
Technical product manager for ML and data infrastructure (B2C, B2B, and deep tech) with a PhD in theoretical physics. I own problem discovery, architecture trade-offs, and end-to-end delivery with measured outcomes ($110M+ impact).
~15 years building data/ML infrastructure (cloud and air-gapped on-prem).
Built and operated ML platforms and streaming systems in environments exceeding $5B in annual transaction volume.
Former engineer and still hands-on; active open-source contributor (Zoose Quantum, IBM Qiskit ecosystem).
Owned delivery of a large physics-based foundation model trained over multi-month HPC runs from lab to production.
Trained and deployed large-scale production models; built on-device (edge) inference prototypes.
Earlier work includes database/query optimization: oracle.rtfd.io (~100 pages).
> engineers report the most mixed results on quality later in the survey (51% better but 21% worse, the highest “worse” of any role).
Hardly surprising.
PMs, designers, and founders do work that has no immediate feedback on right/wrong, so anything plausible is good enough. Code that doesn't compile, doesn't pass the tests, or doesn't do what it's supposed to fails the quality threshold immediately.
Feels like a discussion on game theory ought to have been included from the employee's perspective:
1. If you don't speak up, no one will ever know you had an idea or solution.
2. If you speak up and someone in power is offended, you may limit your career, end up PIP'ed, or even fired. It's also possible nothing will happen, in which case speaking up has no benefit.
3. If you speak up and you're heard, you might get praise or even a promotion in the long run. Likely, it'll just go unnoticed or with limited impact.
Not speaking up is the safest bet, especially in corporations.
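The intuition above can be sketched as a toy expected-value calculation. All probabilities and payoff values here are made-up assumptions for illustration, not data from the discussion:

```python
# Toy expected-value sketch of the speak-up decision.
# Every probability and payoff below is an illustrative assumption.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

# Staying silent: nothing happens either way.
silent = expected_value([(1.0, 0)])

# Speaking up: small chance of a big win, small chance of a big loss,
# and most likely it goes unnoticed.
speak_up = expected_value([
    (0.05, 10),   # praise or an eventual promotion
    (0.10, -20),  # offend someone in power: PIP or firing
    (0.85, 0),    # ignored / limited impact
])

print(silent)    # 0.0
print(speak_up)  # -1.5
```

Under these (assumed) numbers the downside dominates the upside, which matches the conclusion: silence is the safe default unless the culture reliably rewards speaking up.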
Seriously, why would I give some random service my email address to send a cartoon of a banana? This feels more like a litmus test for basic online intelligence.