From the paper: "sleep insufficiency was significantly associated with lower life expectancy when controlling for traditional predictors of mortality, with only smoking displaying a stronger association."
Both the US and China rely on uranium-based fission, but they seem to be diverging on their next bet. China is exploring thorium-based fission, while the US is leaning toward fusion [1].
Not as much as the US. Everyone from Trump to Altman is betting on fusion. China is being more pragmatic, focusing on making fission resistant to uranium supply chain shocks. Since they are a fast follower, their plan might be to catch up once fusion is viable for practical use.
Their government is chipping in a lot. There's a CNBC video about it with footage from both China and the US: https://youtu.be/nyn0HUqluVM It says China has 10x as many fusion PhDs and more patents. It'll be interesting to see how it pans out. They more or less overtook the US in batteries, solar, and EVs by putting 10x as many engineers on the problem.
Tsinghua scientists use light to do AI math extremely fast (12.5 GHz, on the order of trillionths of a second), enabling real-time decisions in areas like trading or robotics. [1]
Ever since "Attention Is All You Need", I've been reading research papers directly instead of waiting for tech news coverage. My information supply chain has evolved from news sites as explainer to following experts on Twitter to ChatGPT these days. I'm experimenting with one more step: what if the papers themselves were memes?
For example, mapping AlexNet's 50-year journey to the Pirates of the Caribbean sinking ship scene [1]. Or using Sheldon's milking stool argument to explain transformer architecture [2]. The absurdity seems to make the concepts more memorable. Each meme has a quiz to dig deeper into the paper.
What do you think? Is humor a legitimate tool for learning about research papers, or does it undermine the seriousness of the work?
Three years ago, when we started making a profit as a bootstrapped startup, I was stunned by how little money I could reinvest in my own company compared to a funded competitor. We paid ourselves 20% of the profit, paid 40% in taxes, and reinvested the remaining 40% into the business. Meanwhile, a VC-backed competitor could show losses and invest 100% of their revenue, plus the $10 million or $50 million they raised from investors. In this essay, I explain how screwed up the incentives are for bootstrapped companies and propose a solution:
Think Stripe Atlas for bootstrapped companies: a service that incorporates you in countries with favorable tax treatment for reinvestment, where you pay taxes only on founder distributions, not on profits you reinvest.
OP here. In my previous post [1], I argued that code generation is the kingpin behind reasoning models. The bottleneck is that LLMs generate code lossily, both because tokenization fragments identifiers and because they treat code like natural language instead of a structured graph. In this post I propose:
1. Parsing user prompts into input graphs (using controlled English like ACE)
2. Parsing code into output graphs (abstract syntax trees; see the sketch after this list)
3. Using graph transformers to map input graphs → output graphs
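To make step 2 concrete, here's a minimal sketch using Python's built-in ast module (the node/edge encoding is just illustrative, not a fixed design) of turning a snippet into the kind of graph a graph transformer could consume:

    import ast

    def code_to_graph(source: str):
        """Parse source code and return its AST as node labels plus parent->child edges."""
        tree = ast.parse(source)
        nodes, edges = [], []
        index = {}  # AST node identity -> position in `nodes`

        # First pass: collect every AST node as a graph node.
        for node in ast.walk(tree):
            index[id(node)] = len(nodes)
            nodes.append(type(node).__name__)

        # Second pass: record parent -> child edges, preserving the tree structure.
        for parent in ast.walk(tree):
            for child in ast.iter_child_nodes(parent):
                edges.append((index[id(parent)], index[id(child)]))

        return nodes, edges

    nodes, edges = code_to_graph("def lad_len(lad): return len(lad)")
    print(nodes)  # ['Module', 'FunctionDef', 'arguments', 'Return', ...]
    print(edges)  # parent -> child index pairs, e.g. (0, 1), (1, 2), ...

The point is that structure like "Return is a child of FunctionDef" is explicit in the edges, rather than something the model has to infer from a flat token stream.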
Core hypothesis: eliminating tokenization fragmentation ("ladlen" → ["lad", "len"]) and preserving tree structure could improve FrontierMath accuracy from 26% to 35-40%. No benchmarks yet, just a theory and a plan to test it.
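And to show what I mean by fragmentation, here's a quick check with a BPE tokenizer; a sketch assuming the tiktoken library (the exact splits depend on the vocabulary, so treat the "ladlen" example as illustrative):

    import tiktoken

    # cl100k_base is one common BPE vocabulary; others will split identifiers differently.
    enc = tiktoken.get_encoding("cl100k_base")

    for identifier in ["ladlen", "lad_len", "getUserById"]:
        token_ids = enc.encode(identifier)
        pieces = [enc.decode([t]) for t in token_ids]
        print(f"{identifier!r} -> {pieces}")

Whatever the exact splits, the model sees identifier fragments rather than a single symbol it can tie to a definition site the way a compiler's symbol table does.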
I've built compilers, not transformers, so I'd love technical feedback on:
- Is tokenization & linear structure really the bottleneck in code generation, or am I missing bigger issues?
- Is a jump from 26% to 35-40% plausible, or overly optimistic?
- For those working on graph transformers: what approaches look promising?
Personally, I’m subscribing to multiple AI services in the hope of increasing my productivity. I don’t think I’ve ever subscribed to this many SaaS products before. My expectation is that if these tools are taking a share of my spending, it’s coming from work I would otherwise hire contractors or employees for.