Given the issues since November, where one has to add environment variables, block statsig hosts, modify ~/.claude.json, etc., does anyone have experience with managed setups where versions are centrally set and bumped at the company level? Is this worth the hassle?
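For reference, a minimal sketch of what "centrally pinned" could look like; the npm package name is real, but the version number is a placeholder and the env var is just one of the variables people set:

```sh
# Sketch: roll this out via your config-management tool of choice.
# The version below is a placeholder for whatever your company has vetted.
export CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1   # cut telemetry/auto-update traffic
npm install -g @anthropic-ai/claude-code@2.0.14     # pin an approved version
```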
`gh pr diff <number>` is an alternative if you have the repo checked out. You can then pipe the output into your favorite LLM CLI and wrap it in a shell alias with a default review prompt.
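For example, a sketch assuming Simon Willison's `llm` CLI (any tool that reads a prompt from stdin works the same way), written as a function rather than an alias since aliases can't take arguments:

```sh
# Sketch: review a PR diff with a default prompt.
prreview() {
  gh pr diff "$1" | llm -s "You are a careful code reviewer. Point out bugs, risky changes, and missing tests in this diff."
}
# usage: prreview <pr-number>
```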
> My company uses some automatic AI PR review bots, and they annoy me more than they help. Lots of useless comments
One way to make them more useful is to ask them to list the top N problems found in the change set.
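Building on the sketch above (the exact wording and the choice of N=5 are arbitrary, and $PR stands in for the PR number):

```sh
# Forcing a ranked short list tends to cut comment-per-line noise.
gh pr diff "$PR" | llm -s "List only the top 5 most serious problems in this diff, ranked by severity. Skip style nits."
```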
"We also suggest you make use of the minimumReleaseAge setting present both in yarn and pnpm. By setting this to a high enough value (like 3 days), you can make sure you won't be hit by these vulnerabilities before researchers, package managers, and library maintainers have the chance to wipe the malicious packages."
If ChatGPT brings in the traffic, then there is an opportunity to increase conversion. If your business sees traffic from Search dropping and traffic from ChatGPT rising, you need to be able to monetize that traffic. On top of that, if your competition meets customers where they are, you're kind of forced to go there and compete as well.
At some point, they won't even need your personal data; the models will be good enough to predict your consumer-relevant behaviors with 99% accuracy simply by modeling your demographics. They'll be able to do the truly scary cognitohazard things, like modeling voting behavior with super-sophisticated A/B sequences for eliciting behaviors at scale. I think something in this vein will likely cause the first major AI "incident" resulting in legislation, but if the wrong players get ahead with the massive modeling and manipulation sims, they might effectively be immune from regulation. Think about the old subliminal-messaging and brainwashing scares: while ads on TV and subtle media hacks saw only limited success, simulating populations wholesale could let a politician craft hyperpersuasive campaigns against which none of us have any defenses, and it would all appear to be a natural series of messages, debates, and conversations.
I don't know whether infinite feeds are particularly apt for this use case, but it certainly looks that way: having an AI carefully tune the pace, visual appeal, timing, and messaging of events could almost trivially program people to see the world the way the operators want. I don't think OpenAI is particularly susceptible to going down this path, but there are plenty of entities in the world who would make use of this tech. It'll go from authentic human debate and conversation around current events to carefully managed, AI-driven salvos of content, using "authentic" human content the way Bumblebee uses snippets of radio broadcasts to assemble his sentences. You won't even notice the manipulation; you'll agree, or be mad, or be shocked on demand, whatever is most likely to nudge your future behavior in the direction the manipulators want.
We should probably get around to outlawing the levers these systems can use, like feed manipulation, large population psychometrics, and so on.
In short: companies turning to debt issuance, fuelled by an increase in M&A activity and a potential OpenAI IPO, followed by a collapse as the tricks used to boost revenue fail to meet expectations and the companies that mismanaged their debt go bust.
When working with documents, the citations NotebookLM provides increase confidence in its answers. That makes NotebookLM viable for team-level knowledge bases.