But then he ruins everything by using volumetric units (don't even get me started on these units, let's just focus on volume).
Indicating proportions by weight is much simpler. Just put a scale under your mixing bowl and weigh ingredients as you add them. Less to clean, less waste, easier to dose.
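To illustrate why weight ratios are easier to work with, here is a tiny sketch of scaling a recipe expressed as baker's-percentage-style ratios (the ingredient names and ratios are just example values):

```python
# Hypothetical illustration: scaling a recipe stated as weight ratios.
# With proportions by weight, scaling is a single multiplication --
# no cup/tablespoon conversions needed.
ratios = {"flour": 1.00, "water": 0.65, "salt": 0.02}  # relative to flour

def scale(ratios, flour_grams):
    """Return the weight in grams of each ingredient for a given flour weight."""
    return {name: round(r * flour_grams, 1) for name, r in ratios.items()}

print(scale(ratios, 500))  # weights for 500 g of flour
```

Put the bowl on the scale, tare between ingredients, and pour until the display matches each number.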
> Mr. Gates, in turn, praised Mr. Epstein’s charm and intelligence. Emailing colleagues the next day, he said: “A very attractive Swedish woman and her daughter dropped by and I ended up staying there quite late.”
What if I told you that the child sitting on Epstein's lap, the teenager he French-kissed, the girl whose skin he covered with fragments from Nabokov's Lolita, the one who had an entire corridor filled with her pictures in one of his properties, who appeared in every framed photograph on his desk and whose name is on the CD-ROMs, the only woman Epstein said he would ever marry – what if that girl is the daughter Bill Gates mentions? And that she and her mother were Epstein's main romantic interests and most persuasive tools?
For coding it is horrible. I used it exclusively for a day, and switching back to Opus felt like heaven. OK, it is not horrible; it is just significantly worse than its competitors.
Although it sounds counter-intuitive, you may be better off with Gemini 3 Fast (esp. in Thinking mode) rather than Gemini 3 Pro. Fast beats Pro in some benchmarks. This is also the summary conclusion that Gemini itself offers.
I’m not claiming a 100% faithful physical recreation in the strict scientific sense.
If you look at my other comment in this thread, my project is about designing proprioceptive touch sensors (robot skin) using a soft-body simulator largely built with the help of an AI. At this stage, absolute physical accuracy isn’t really the point. By design, the system already includes a neural model in the loop (via EIT), so the notion of "accuracy" is ultimately evaluated through that learned representation rather than against raw physical equations alone.
What I need instead is a model that is faithful to my constraints: very cheap, easily accessible materials, with properties that are usually considered undesirable for sensing: instability, high hysteresis, low gauge factor. My bet is that these constraints can be compensated for by a more circular system design, where the geometry of the sensor is optimized to work with them.
Bridging the gap to reality is intentionally simple: 3D-print whatever geometry the simulator converges to, run the same strain/stress tests on the physical samples, and use that data to fine-tune the sensor model.
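The fine-tuning step described above could be sketched, in its simplest form, as fitting a correction that maps simulated readings onto measured ones (all the numbers and variable names here are hypothetical placeholders, not real data):

```python
import numpy as np

# Hypothetical sketch of the sim-to-real calibration step:
# fit a simple affine correction mapping simulated sensor
# readings to measurements from printed samples, via least squares.
sim_readings  = np.array([1.00, 1.10, 1.25, 1.45])   # from the simulator
real_readings = np.array([1.02, 1.15, 1.33, 1.58])   # from strain/stress tests

# Solve real ~= a * sim + b for (a, b) in the least-squares sense
A = np.vstack([sim_readings, np.ones_like(sim_readings)]).T
(a, b), *_ = np.linalg.lstsq(A, real_readings, rcond=None)

corrected = a * sim_readings + b  # simulator output mapped into "real" units
```

In practice the correction would be richer than affine (and could feed back into the neural model's fine-tuning), but the loop is the same: print, measure, refit.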
Since everything is ultimately interpreted through a neural network, some physical imprecision upstream may actually be acceptable, or even beneficial, if it makes the eventual transfer and fine-tuning on real-world data easier.
Well I'm glad you find new ways to progress on whatever you find interesting.
Honestly, though, this does not help me estimate whether what you built is what you claim it to be. I'm not necessarily the audience for either project, but my point remains:
- when somebody claims to recreate something, regardless of why and how, it helps to understand how close they actually got.
It's not negative criticism, by the way. I'm not implying you didn't recreate the DSP faithfully enough (or that the other person didn't faithfully recreate the NES). I'm only saying that for onlookers, people like me who could potentially be interested but do NOT have a good understanding of the process or of the original object being recreated, it is impossible to evaluate.
Oh, just to be clear first: I'm not the OP. Sorry for the confusion.
I do understand your point, and I think it’s a fair one: when someone claims to "recreate" something, it really helps readers to know how close the result is to the original, especially for people who don’t already understand the domain.
I was mostly reacting to the idea that faithfulness always has to be the primary axis of evaluation. In practice, only a subset of users actually care about 100% fidelity. For example with DSP plugins or NES emulators, many people ultimately judge them by how they sound or feel, especially when the original artifact is aesthetic in nature.
My own case is a bit different, but related. Even though I’m working on a sensor, having a perfectly accurate physical model of the material is secondary to my actual goal. What I’m trying to produce is an end result composed of a printable geometry, a neural model to interpret it, and calibration procedures. The physics simulator is merely a tool, not a claim.
In fact, if I want the design to transfer well from simulation to reality, it probably makes more sense to intentionally train the model across multiple variations of the physics rather than betting everything on a single "accurate" simulator. That way, when confronted with the real world, adaptation becomes easier rather than harder.
So I fully agree that clarity about "how close" matters when that’s the claim. I’m just suggesting that in some projects, closeness to the original isn’t always the most informative metric.
One reason I find my case illuminating is that it makes the "what metric are we optimizing?" question very explicit.
Sure, I can report proxy metrics (e.g. prediction error between simulated vs measured deformation fields, contact localization error, force/pressure estimation error, sensitivity/resolution, robustness across hysteresis/creep and repeated cycles). Those are useful for debugging.
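Two of those proxy metrics are straightforward to state concretely; a sketch (the field shapes and coordinate conventions are assumptions for illustration):

```python
import numpy as np

# Sketch of two proxy metrics mentioned above: RMSE between simulated
# and measured deformation fields, and contact localization error.
def deformation_rmse(sim_field, measured_field):
    """Root-mean-square error between two deformation fields of equal shape."""
    sim, meas = np.asarray(sim_field), np.asarray(measured_field)
    return float(np.sqrt(np.mean((sim - meas) ** 2)))

def localization_error(predicted_xy, true_xy):
    """Euclidean distance between predicted and true contact points."""
    return float(np.linalg.norm(np.asarray(predicted_xy) - np.asarray(true_xy)))
```

Useful for debugging, as noted, but neither says anything about the functional question below.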
But the real metric is functional: can this cheap, printable sensor + model enable dexterous manipulation without vision – tasks where humans rely heavily on touch/proprioception, like closing a zipper or handling thin, finicky objects – without needing $500/sq-inch "microscope-like" tactile sensors (GelSight being the canonical example)?
If it gets anywhere close to that capability with commodity materials, then the project is a success, even if no single simulator configuration is "the" ground truth.
What could OP’s next move be? Designing and building their own circuit. Likewise, someone who built a NES emulator might eventually try designing their own console. It doesn’t feel that far-fetched.
Ah that makes more sense, I couldn't make the connection!
Regarding "So I fully agree that clarity about 'how close' matters when that's the claim. I'm just suggesting that in some projects, closeness to the original isn't always the most informative metric": that reminds me of https://en.wikipedia.org/wiki/Goodhart%27s_law
That being said, since the OP titled it "I used AI to recreate X", I would still argue that the audience now has the expectation that whatever the OP created, regardless of why and how, should be relatively close to X. Experts on X can probably figure out quite quickly whether it is "close enough" for them, but for everyone else it's very hard.
Very nice work. I’m curious: what kinds of projects are you guys currently working on that genuinely push you out of your comfort zone?
I had a small epiphany a couple of weeks ago while thinking about robot skin design: using conductive 3D-printed structures whose electrical properties change under strain, combined with electrical impulses, a handful of electrodes, a machine-learning model to interpret the measurements, and computational design to optimize the printed geometry.
While digging into the literature, I realized that what I was trying to do already has a name: proprioception via electrical impedance tomography. It turns out the field is very active right now.
That realization led me to build a Bergström–Boyce nonlinear viscoelastic parallel rheological simulator using Taichi. This is far outside my comfort zone. I’m just a regular programmer with no formal background in physics (apart from some past exposure to Newton-Raphson).
Interestingly, my main contribution hasn’t been the math. It’s been providing basic, common-sense guidance to my LLM. For example, I had to explicitly tell it which parameters were fixed by experimental data and which ones were meant to be inferred. In another case, the agent assumed that all the red curves in the paper I'm working with referred to the same sample, when they actually correspond to different conducting NinjaFlex specimens under strain.
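That fixed-versus-inferred distinction can be made explicit in the fitting code itself. A hedged sketch using SciPy's nonlinear least squares (the model here is a deliberately simple placeholder, not the actual Bergström-Boyce equations, and all parameter names are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder material model, NOT the real Bergstrom-Boyce equations:
# stress = E * strain + c * strain**n, where E is fixed from experimental
# data and only (c, n) are left free for the optimizer to infer.
E_FIXED = 1.2  # known from experiment; deliberately kept out of the fit

def model(strain, c, n):
    return E_FIXED * strain + c * strain ** n

strain = np.linspace(0.01, 0.5, 20)
stress = model(strain, 0.8, 2.0)          # synthetic "measured" data

(c_fit, n_fit), _ = curve_fit(model, strain, stress, p0=[1.0, 1.5])
```

Baking `E_FIXED` into the model's closure, rather than passing it as a free parameter, is exactly the kind of constraint the LLM had to be told about explicitly.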
Correcting those kinds of assumptions, rather than fixing equations, was what allowed me to reproduce the results I was seeking. I now have an analytical, physics-grounded model that fits the published data. Mullins effect: modeled. Next up: creep.
We’ll see how far this goes. I’ll probably never produce anything publishable, patentable, or industrial-grade. But I might end up building a very cheap (and hopefully not that inaccurate), printable proprioceptive sensor, with a structure optimized so it can be interpreted by much smaller neural networks than those used in the Cambridge paper.
If that works, the effort will have been worth it.
For me it's LSP servers taking 2 GB of RAM. With Antigravity, Google managed to go beyond even that; it is totally unusable for me (though other VS Code clones work fine, apart from the 2 GB LSP servers).