Re: your last point, that is not true. We can measure arbitrarily quickly (the Nottingham group does some 3D EVI at ~100 ms TRs). You can also reduce volumes and just look at single slices etc.; a lot of the fundamental research did this (Wash U / Minnesota / etc. in the 90s). It's just not all that useful, because the SNR tanks and the underlying neurovascular response is inherently low-pass. There is a much faster 'initial dip' where the BOLD signal swings the other way and crosses zero (from localized accumulation of deoxy-Hb before the inrush of oxy-Hb from the vascular response). It's a lot better correlated with LFP / spiking measures, but very hard to measure on non-research scanners...
Yes, I didn't mention this because you sacrifice so much spatial resolution and/or info doing this that it hardly matters, unless you believe in some very extreme and implausible forms of localization of function. (EDIT: I mean looking at a single slice seems to imply some commitment to localization assumptions; this isn't relevant for reducing spatial resolution.)
For readers who don't know: we can measure at a higher temporal resolution if we use some tricks and massively sacrifice spatial resolution ("reduce volumes") and/or how much of the brain is scanned (look at single slices), but the spatial resolution in most fMRI at, e.g., a 0.5 s TR (two volumes per second) is usually already quite poor (it's generally already difficult to clearly make out gyri and basic brain anatomy: see, for example, Figures 7 and onward here, noting the TRs in the captions: https://www.frontiersin.org/journals/neuroscience/articles/1...).
Still, it's a good point, and you're right, of course, that newer and better scanners and techniques might improve things on both fronts, but my understanding is that the magnetic field strengths needed to actually get the right combination of spatial and temporal resolution are, unfortunately, fatal, so we really are up against a physical/biological limit here.
And as you said, it isn't that useful anyway, because the BOLD response is already so slow, and is obviously just something emerging from the sum of a massive amount of far more rapid electrochemical signaling that fMRI can't measure at all.
Yes, this is ancient news to experts, but, IMO, most fMRI research outside of methodological work is practically useless at the moment because of deep measurement issues like these.
So if awareness of this increases the skepticism of papers claiming to have learned things about the brain/mind from fMRI, then I'd say it is a net plus.
Agreed, especially with the comments saying "just address it". It's a lot of technically complicated interactions between the physics, the imaging parameters, and the processing techniques.
Unfortunately, the end users (typically neuroscience/psych grad students in labs with minimal oversight) usually run studies that just "throw everything at the wall and see what sticks", not realizing that this is the antithesis of the scientific method. No one goes into a resting-state study saying "we're going to test whether the resting-state signal in <region> is <changed somehow> because of <underlying physiology>".
They instead measure a bunch of stuff, find some regions that pass threshold in a group difference, and publish it as "neural correlates of X". It's not science, and it's why it's not reproducible. People have built whole research programs on noise.
The meaningless NHST (null hypothesis significance testing) ritual is so harmful here. Imagine what we might know by now if all those pointless studies had used their resources to do proper science...
It doesn't measure the oxygen level directly either. The BOLD signal is correlated with dephasing induced by the oxy/deoxy-Hb ratio, which isn't even necessarily localized to the voxel (flow, or long-range magnetic susceptibility perturbations from nearby accumulated deoxy-Hb in veins).
I've used it, and am still using it, to generate lots of value in a very large org. Having a language where I can bring Go, Node, etc. developers over and get relatively better performance without having to teach OOP and all the implicit conventions on the C# side is a bit like a cheat code. With modern .NET it's better than Java perf, with a better GC, and you get the ability to write generic Python/JS-looking code while still having type checking (HM inference). There are C# libraries we do use, but with standard templates for those few, and patterns for interfacing with the mostly-F# layers, you can get very far in a style of code more fitting of a higher-level, more dynamic language. On ease of use vs. perf it's kind of in the middle, and it has also benefited from C# features (e.g. spans recently).
It's not one feature with F#, IMO; it's little things that add up, which is generally why it is hard to convince someone to use it. To the point that when the developers (under my directive) had to write two products in C#, they argued with me to switch back.
I used it for many years but ended up switching to C#. The language needs better refactoring tools. And for that it needs something like Roslyn. The existing compiler library is too slow.
No, it is not. Referential transparency <<< tooling.
Plus, F# as a functional language has significant gaps that prevent effective refactoring, such as the lack of support for named arguments to curried functions.
Can you give me an example where lack of support for named arguments to curried functions makes refactoring difficult? I'm having trouble understanding how that would happen.
For one, there's no way to add a curried parameter without doThing4-style naming, and the lack of named arguments means you can't give the new parameter a default value.
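For instance (hypothetical names, just a sketch): optional ("?param") and named parameters only exist on class members in F#, not on let-bound curried functions, so there's no default value to lean on and you end up with a new name:

    // Hypothetical curried function we'd like to extend with one more parameter.
    let doThing (path: string) (count: int) =
        printfn "processing %s x%d" path count

    // There's no `let doThing path count (?retries: int) = ...` in F#, so the
    // usual workaround is a second name, leaving the old one for existing callers:
    let doThing2 (path: string) (count: int) (retries: int) =
        printfn "processing %s x%d (retries=%d)" path count retries

    doThing "data.csv" 3      // old call sites keep the old name
    doThing2 "data.csv" 3 5   // new call sites opt in explicitly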
Another one is if you want to add a curried parameter to the end of the parameter list and you have code like this:
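Something along these lines (hypothetical names, my own sketch), where partial application means the added trailing parameter doesn't break where you'd expect it to:

    // Hypothetical code: callers partially apply and pass the function around.
    let send (endpoint: string) (payload: string) =
        printfn "POST %s: %s" endpoint payload

    let sendToApi : string -> unit = send "https://example.test/api"
    sendToApi "hello"

    // If `send` later gains a trailing parameter, e.g.
    //   let send (endpoint: string) (payload: string) (timeoutMs: int) = ...
    // the partial application `send "https://example.test/api"` still
    // type-checks on its own; the errors surface at the annotation above and
    // at distant call sites rather than at the definition, and with no named
    // or default arguments there's nothing to smooth the migration.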
honestly this sounds like you've never really done it.
FP is much better for ergonomics, developer productivity, correctness. All the important things when writing code.
I like FP, but your claim is just as baseless as the parent’s.
If FP was really better at “all the important things”, why is there such a wide range of opinions, good but also bad? Why is it still a niche paradigm?
It’s niche because the vast, vast majority of programmers just continue to do what they know or go with the crowd. I spend roughly 50% of my time doing FP and 50% doing imperative (mostly OOP) programming. I am dramatically more effective writing functional code.
Like other posters, I am not going to claim that it is better at all things. OOP’s approach to polymorphism and extensibility is brilliant. But I also know that nearly all of the mistakes I make have to do with not thinking carefully enough about mutability or side-effects, features that are (mostly) verboten in FP. It takes some effort to re-learn how to do things (recursion all the things!) but once you’ve done it, you realize how elegant your code can be. Many of my FP programs are also effectively proofs of their own correctness, which is not a property that many other language styles can offer.