The vast majority of the mathematics required to really understand ML is just probability, calculus and basic linear algebra.
If you know these already and still struggle, it's because the notation is often terse and assumes a specific context. The only remedy is to keep reading key papers and working through the math, ideally in code as well.
For most current-gen deep learning there's not even that much math; it's mostly a growing library of what are basically engineering tricks. For example, an LSTM is almost exclusively very basic linear algebra, with some calculus used to optimize it. But by its nature the calculus can't be done by hand, and the actual implementation of all that basic linear algebra is tricky and takes practice.
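To make the "basic linear algebra" claim concrete, here's a minimal sketch of a single LSTM forward step in NumPy. All names and sizes are illustrative, and the gate ordering (input, forget, output, candidate) is one common convention among several:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM forward step: four gates, each just a matrix multiply
    plus a pointwise nonlinearity. W maps the input, U the previous
    hidden state; both are stacked so one affine map covers all gates."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b          # all four gates in one shot
    i = sigmoid(z[0 * n:1 * n])         # input gate
    f = sigmoid(z[1 * n:2 * n])         # forget gate
    o = sigmoid(z[2 * n:3 * n])         # output gate
    g = np.tanh(z[3 * n:4 * n])         # candidate cell state
    c = f * c_prev + i * g              # new cell state
    h = o * np.tanh(c)                  # new hidden state
    return h, c

# Toy sizes; in practice the weights come from training, not random init.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.standard_normal((4 * n_hid, n_in))
U = rng.standard_normal((4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = lstm_step(rng.standard_normal(n_in),
                 np.zeros(n_hid), np.zeros(n_hid), W, U, b)
```

Every operation here is a matrix-vector product or an elementwise function; the difficulty is in the bookkeeping, not the math.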
You'll learn more by implementing things from scratch based on the math than you will by trying to read through all the background material hoping that one day it will all make sense. It only ever makes sense through implementation combined with continuous reading and practice.
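One concrete from-scratch exercise along these lines: derive a gradient by hand, then verify it numerically with finite differences. This is a standard sanity check when implementing backprop yourself; the loss function below is a toy example I've chosen for illustration:

```python
import numpy as np

def loss(w, x, y):
    # Squared error of a linear model: (w.x - y)^2
    return (w @ x - y) ** 2

def analytic_grad(w, x, y):
    # Hand-derived via the chain rule: d/dw (w.x - y)^2 = 2*(w.x - y)*x
    return 2.0 * (w @ x - y) * x

def numeric_grad(f, w, eps=1e-6):
    # Central finite differences: far too slow for training,
    # but a reliable check on hand-derived calculus.
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return g

rng = np.random.default_rng(1)
w, x, y = rng.standard_normal(3), rng.standard_normal(3), 0.5
ga = analytic_grad(w, x, y)
gn = numeric_grad(lambda v: loss(v, x, y), w)
```

If the two disagree, the mistake is almost always in the hand-derived math, which is exactly the kind of feedback reading alone never gives you.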
Amen!! This is the best way to learn anything technical -- put things into practice to understand the theory. It's also important to keep revisiting the theory to understand results, rather than parroting some catchphrase to "explain" results.