Hacker News | brandonb's comments

This is cool! It'd be interesting to correlate this with Vitamin D too.

Thanks! I'm looking into Vitamin D estimation for a future version. The main issue is that there are a ton of factors to consider, the biggest being simply how much skin you have exposed when you're outdoors. It's a simple question, but it requires a dedicated user to update the app with their wardrobe every day.

This is a little different: the Apps SDK lets developers define specialized tool calls to their own servers and build specialized in-chat UI components. It's an evolution of the same concept as the GPT store, but a very different take on the idea.


HbA1c, or just diabetes as a binary variable, has been one of the main inputs into predicting heart attack risk for a long time.

The main marker of kidney function, eGFR, was added with the AHA/ACC's PREVENT equations in 2023.

I wrote a bit about the science behind heart risk calculators and their various inputs (cholesterol, blood pressure, A1c, eGFR, and so on) here: https://www.empirical.health/blog/heart-attack-risk-calculat...
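To give a sense of the general shape of these calculators, here's a toy Python sketch: a handful of clinical inputs combined linearly and pushed through a logistic function to get a 10-year risk estimate. The weights, intercept, and units below are placeholders invented for illustration; they are not the published PREVENT or pooled-cohort coefficients, and the real equations use more terms and a different functional form.

    import math

    # Placeholder weights for illustration only; NOT the published
    # PREVENT coefficients. Higher eGFR is treated as protective.
    PLACEHOLDER_WEIGHTS = {
        "age": 0.06,           # years
        "sbp": 0.02,           # systolic blood pressure, mmHg
        "non_hdl_chol": 0.30,  # non-HDL cholesterol, mmol/L
        "hba1c": 0.25,         # percent
        "egfr": -0.015,        # mL/min/1.73 m^2
        "smoker": 0.60,        # 0 or 1
    }
    PLACEHOLDER_INTERCEPT = -9.0

    def ten_year_risk(patient):
        # Linear combination of risk factors, squashed to a probability.
        z = PLACEHOLDER_INTERCEPT + sum(
            w * patient[name] for name, w in PLACEHOLDER_WEIGHTS.items()
        )
        return 1.0 / (1.0 + math.exp(-z))

    print(ten_year_risk({
        "age": 55, "sbp": 130, "non_hdl_chol": 3.8,
        "hba1c": 5.6, "egfr": 90, "smoker": 0,
    }))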


I learned speech recognition from the 2nd edition of Jurafsky's book (2008). The field has changed so much it sometimes feels unrecognizable. Instead of hidden Markov models, Gaussian mixture models, tri-phone state trees, finite state transducers, and so on, nearly the whole stack has been eaten from the inside out by neural networks.

But, there's benefit to the fact that deep learning is now the "lingua franca" across machine learning fields. In 2008, I would have struggled to usefully share ideas with, say, a researcher working on computer vision.

Now neural networks act as a shared language across ML, and ideas can much more easily flow across speech recognition, computer vision, AI in medicine, robotics, and so on. People can flow too, e.g., Dario Amodei got his start working on Baidu's DeepSpeech model and now runs Anthropic.

Makes it a very interesting time to work in applied AI.


In addition to all this, I also feel progress down the NN path has come so fast that we haven't really had time to take a breath and understand what's going on.

When you work closely with transformers for a while, you do start to see things reminiscent of old-school NLP pop up: decoder-only LLMs are really just fancy Markov chains with a very powerful/sophisticated state representation, and "attention" looks a lot like learning kernels for various tweaks on kernel smoothing.
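To make the kernel-smoothing analogy concrete, here's a minimal numpy sketch (single query, single head, no learned projections): scaled dot-product attention and a Nadaraya-Watson kernel smoother are both softmax-weighted averages of value vectors; attention just scores similarity with a dot product instead of a fixed distance kernel.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def attention(q, K, V):
        # Scaled dot-product attention for one query: weights are a
        # softmax over similarity scores, output is a weighted average
        # of the value vectors.
        scores = K @ q / np.sqrt(q.shape[0])
        return softmax(scores) @ V

    def nadaraya_watson(x, X, Y, bandwidth=1.0):
        # Classic kernel smoother: weights come from a Gaussian kernel
        # on distance to x, output is a weighted average of the Y's.
        d2 = ((X - x) ** 2).sum(axis=1)
        return softmax(-d2 / (2 * bandwidth ** 2)) @ Y

    rng = np.random.default_rng(0)
    q, K, V = rng.normal(size=8), rng.normal(size=(16, 8)), rng.normal(size=(16, 8))
    print(attention(q, K, V))
    print(nadaraya_watson(q, K, V))

Both functions have the same "normalize some similarity scores, then average the values" structure; transformers additionally learn the Q/K/V projections that define the similarity.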

Oddly, I almost think another AI winter (or hopefully just an AI cool-down) would give researchers and practitioners alike a chance to start exploring these models more closely. I'm a bit surprised how few people really spend their time messing with the internals of these things, and every time they do, something interesting seems to come out of it. But currently nobody I know in this space, from researchers to product folks, seems to have time to catch their breath, let alone really reflect on the state of the field.


> we haven't really had time to take a breath and understand what's going on.

The field of Explainable AI (which also goes by equivalent names: interpretable AI, transparent AI, etc.) is looking for talent, both in academia and industry.


There are sectors where pre-ML approaches still dominate.

Among screen reader users for example, formant-based TTS is still wildly popular, and I don't think that's going to change anytime soon. The speed, predictability and responsiveness are unmatched by any newer technology.


> Gaussian mixture models

In what fields did neural networks replace Gaussian mixtures?


The acoustic model of a speech recognizer used to be a GMM, which mapped a pre-processed acoustic feature vector (generally MFCCs, Mel-frequency cepstral coefficients) to an HMM state.

Now those layers are neural nets, so acoustic pre-processing, GMM, and HMM are all subsumed by the neural network and trained end-to-end.

One early piece of work here was DeepSpeech2 (2015): https://arxiv.org/pdf/1512.02595
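As a toy contrast between the two styles (random stand-in data, so only the structure is meaningful), here's a scikit-learn sketch: one diagonal-covariance GMM per HMM state scoring MFCC-like frames, versus a small discriminative network mapping the same frames to per-state posteriors. A real system would train on force-aligned speech and, in the end-to-end case, fold the HMM/decoder into the network as well.

    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.neural_network import MLPClassifier

    # Random stand-ins for 39-dim MFCC frames and their aligned HMM states.
    rng = np.random.default_rng(0)
    n_states, dim = 3, 39
    frames = rng.normal(size=(600, dim))
    states = rng.integers(0, n_states, size=600)

    # Old style: one GMM per state; the acoustic score of a frame is its
    # log-likelihood under each state's mixture.
    gmms = [GaussianMixture(n_components=4, covariance_type="diag",
                            random_state=0).fit(frames[states == s])
            for s in range(n_states)]
    frame = frames[:1]
    print([g.score_samples(frame)[0] for g in gmms])

    # Newer style: a discriminative network maps the frame directly to
    # per-state probabilities (here a tiny MLP standing in for the DNN).
    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300,
                        random_state=0).fit(frames, states)
    print(net.predict_proba(frame)[0])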


Interesting, thanks!


It's a cool idea! This is going to become easier in the next year since TEFCA will let patients request their own medical records through the health information exchanges that have already been set up for treatment.


(OP here) Happy to answer any questions on this work!


If the fire alarm didn't go off, you didn't sear hard enough. :)


D’oh. You’re right. HN doesn’t let me edit the URL after posting, so will re-submit.


> When I see that it is widely accepted that ApoB is better to measure than LDL-C, but the industry continues to measure LDL-C, but not ApoB, I wonder why. It makes me skeptical.

Part of this is just that insurance coverage lags science. We've known that ApoB is more accurate than LDL-C since the 1990s or 2000s, but for a test to be covered by insurance, several more steps have to happen.

First, the major professional societies (like the American College of Cardiology or the National Lipid Association) have to issue formal guidelines.

Then, the USPSTF (US Preventive Services Task Force) needs to review all of the evidence. They tend to do reviews only every 5 or 10 years. (Countries aside from the US have different organizations that perform a similar role.)

If the USPSTF issues an "A" or "B" rating, then insurance companies are legally obligated to cover ApoB testing. But that also introduces a year or two of lag, since medical policies are revised and then apply to the next plan year.

The net effect is that the entire system is 17 years, on average, behind research.


ApoB blood tests are relatively cheap. You can pay about $70 out of pocket if you really want one and insurance won't cover it.

Most commercial health plans will cover an ApoB test for members with certain cardiac risk factors or medical conditions. But they generally won't cover it as a preventive screening for all members. I don't think we have enough evidence to justify broad screening yet, although that may be coming.


Unfortunately, this is a limitation of nearly all nutritional studies.

Partly, we use mechanistic evidence to separate cause from effect; that's part of why the article goes into detail about, e.g., how soluble fiber binds to bile in the liver. If there's an association between A and B, and a known physiological mechanism in which A causes C, which in turn causes B, that makes it more likely that A ultimately is the cause of B.
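As a toy illustration of that logic (made-up effect sizes, purely hypothetical variables): if you simulate A -> C -> B, you reproduce the observed A-B association, and holding the mechanism variable C roughly fixed makes that association largely vanish, which is the kind of pattern mechanistic knowledge lets you anticipate and check in data.

    import numpy as np

    # Toy model: A raises C, C lowers B, and A has no direct effect on B.
    rng = np.random.default_rng(1)
    n = 100_000
    A = rng.normal(size=n)              # e.g., fiber intake (standardized)
    C = 0.8 * A + rng.normal(size=n)    # mechanism: A -> C
    B = -0.6 * C + rng.normal(size=n)   # mechanism: C -> B

    print(np.corrcoef(A, B)[0, 1])      # clear A-B association (around -0.4)

    # Within a narrow band of C, the A-B association largely disappears,
    # consistent with A acting on B through C.
    band = np.abs(C) < 0.1
    print(np.corrcoef(A[band], B[band])[0, 1])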


It's a limitation of all epidemiological studies. They're a great way to target randomized controlled trials, which are more expensive and, by definition, more invasive. Nutrition science has lots of those and needs more.

In an N=1 intervention study, I too found fiber to be health-supporting, but that wasn't randomized or controlled.

