Generating Coherent Noise Using Fourier Transforms (farazzshaikh.medium.com)
61 points by achat on June 3, 2021 | 29 comments


> The only benefit this has over its contemporaries is that it is tileable. Although, it does repeat making this benefit useless considering that Perlin and Simplex noise are non-repeating and infinite.

Perlin and Simplex are also easily tileable. Just make your hash function periodic and the resulting noise will tile at the same period.
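A minimal sketch of the periodic-hash idea, using 1D value noise as a stand-in for Perlin (gradient noise tiles the same way; the only change is wrapping the lattice index, which is the "periodic hash"):

```python
import numpy as np

rng = np.random.default_rng(7)
period = 8  # lattice values repeat with this period -> the noise tiles

lattice = rng.standard_normal(period)  # "hash" table, periodic by construction

def value_noise(x):
    """1D value noise with a periodic lattice and smoothstep interpolation."""
    i = np.floor(x).astype(int)
    t = x - i
    t = t * t * (3 - 2 * t)          # smoothstep fade, as in Perlin's construction
    a = lattice[i % period]          # wrapping the index is the "periodic hash"
    b = lattice[(i + 1) % period]
    return a + t * (b - a)

x = np.linspace(0, period, 512, endpoint=False)
tile1 = value_noise(x)
tile2 = value_noise(x + period)  # one full period later: identical values
```

Because every lattice lookup goes through `% period`, `tile1` and `tile2` are the same curve, so the texture tiles seamlessly.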

This is a neat article and a neat technique, but probably not super practical. If you know how synthesizers (like the musical instruments) work, then you can think of Perlin noise as additive synthesis and the article here as subtractive synthesis.

Taking the FFT, modifying frequency amplitudes, and then taking the IFFT is one way to implement a filter. A more direct way is to filter in the time (well, space here) domain using something like a FIR or IIR. In spatial terms, that means applying a convolution filter, which is exactly how most blurring algorithms in programs like Photoshop work.

So, another way to look at this is that you can generate pretty terrains by taking white noise and blurring it with the right convolution kernel.
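A quick sketch of that equivalence, blurring white noise with a Gaussian convolution kernel (the `sigma` and grid size are arbitrary choices for illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
white = rng.standard_normal((256, 256))  # white-noise heightmap

# Blurring with a Gaussian kernel is a low-pass convolution filter, so the
# result keeps mostly low spatial frequencies -> smooth, terrain-like patches.
# mode="wrap" makes the convolution periodic, so the result is tileable.
terrain = gaussian_filter(white, sigma=8, mode="wrap")

# The blur removes high frequencies: sample-to-sample variation drops sharply.
print(np.std(np.diff(white)), np.std(np.diff(terrain)))
```

The kernel choice controls the spectrum: a Gaussian blur gives a Gaussian falloff in frequency, while a kernel whose transform is 1/f would reproduce the article's pink-noise look.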


  >> Taking the FFT, modifying frequency amplitudes
Modifying amplitude and phase. In the time domain you can't modify frequency amplitude without modifying the phase. If you modify just the FFT amplitude, you can end up with non-causal impulse responses.


Quoting the article:

  1. Generate some White Noise.
  2. Perform a Fourier transform on the White Noise.
Are the two separate steps necessary? It should be possible to directly generate the Fourier Transform of the white noise, rather than applying FFT to the waveform, right?


    3. apply filter
    4. apply inverse FT
It is equivalent to replace steps 1, 2 and 3 with direct (but still stochastic) sampling of the 1/f function to get the Fourier amplitudes, plus uniform sampling for the Fourier phases.

This would save processing time by avoiding the calculation of one 2D FFT and the application of the filter (a 2D array multiplication).


Even simpler: create your desired amplitude spectrum to match your desired filtered noise profile. This is trivial for any noise spectrum with a simple linear filter - it's just a linear function with the desired slope. It's only slightly less trivial for more complex spectra.

Randomise the phases. (i)FFT. Done.
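A sketch of that recipe (fixed 1/f amplitude spectrum, randomised phases, one inverse FFT; the grid size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256

# Radial frequency grid, in the FFT's native bin layout.
fx = np.fft.fftfreq(n)[:, None]
fy = np.fft.fftfreq(n)[None, :]
f = np.hypot(fx, fy)
f[0, 0] = 1.0  # avoid division by zero at the DC bin

amplitude = 1.0 / f                         # desired pink/fractal spectrum
phase = rng.uniform(0, 2 * np.pi, (n, n))   # randomised phases

spectrum = amplitude * np.exp(1j * phase)
noise = np.fft.ifft2(spectrum).real  # real part ~ Hermitian-symmetrising the phases
```

As the reply below this comment points out, fixing the amplitudes exactly at 1/f (rather than drawing them from a Rayleigh distribution around that mean) is a shortcut: fine for art, not faithful to a real stochastic process.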


The 1/f (or any) noise spectrum represents only the *mean* amplitude in frequency domain. In reality, the amplitude at each frequency itself has an associated distribution. Since frequency domain is complex the amplitude represents a radius in 2D space and if the underlying random walk is Gaussian then the amplitude is distributed according to Rayleigh.

For generating artistic images, the artist may ignore this fact and randomise only the phases. From image to image, humans probably won't notice the shortcut. But in fact each image will have exactly the same amplitude spectrum, and that's not physical for real stochastic processes. Depending on the usage of the images, the shortcut could be fatal.

Eg, if we train a neural net on these generated images we should not expect that NN to perform well on the real data which our generated images were meant to represent.


Right (and that would be more efficient, I guess), but the Fourier transform of white noise yields complex white noise whose real and imaginary parts have standard deviations a and b obeying var_x = a^2 + b^2 (this follows from the FFT being unitary, or from Parseval's theorem), so each frequency component has sqrt(2)/2 times the (non-transformed) signal's standard deviation. So simply generate two i.i.d. white noise fields with scaled variance (1/2).
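This is easy to check numerically under the unitary FFT convention (numpy's `norm="ortho"`); the variance of unit white noise splits evenly between the real and imaginary parts of its transform:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal((500, 256))  # many realisations of unit-variance white noise

# The unitary FFT preserves total variance (Parseval), and for real input
# that variance splits evenly between real and imaginary parts: each gets
# variance 1/2, i.e. standard deviation sqrt(2)/2 of the original.
X = np.fft.fft(x, norm="ortho")
print(np.var(X.real), np.var(X.imag))  # each close to 0.5
```

So to synthesise "the FFT of white noise" directly, draw two i.i.d. Gaussian fields with variance 1/2 and use them as the real and imaginary parts (the DC and Nyquist bins, which are purely real, deviate slightly from this in finite grids).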

It's important to keep in mind when speaking of noise that we are referring to average values (or statistical values), e.g. although the power spectrum of white noise is on average flat, as we discussed it's really random (the expected amplitude spectrum in fact has average 0 anywhere, since it's also white noise!).


At least when using Gaussian white noise, the DFT of the noise is the same distribution with a smaller variance: https://dsp.stackexchange.com/questions/24170/what-are-the-s...

I don't think the same neat result holds when you use uniform white noise, but I haven't done the math.


The normal distribution is special in being closed under linear transformation.


Yup! That's basically what Perlin noise does.

Generate a bunch of sine waves at various frequencies ("octaves") and add them together.
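A toy version of that additive picture (pure sine octaves with 1/f amplitudes and random phases; real Perlin octaves use band-limited noise rather than single sines, but the spectral shape is the same idea):

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 1024, endpoint=False)

# Each octave doubles the frequency and halves the amplitude (1/f falloff),
# with a random phase per octave: additive synthesis of pink-ish noise.
signal = np.zeros_like(x)
for octave in range(8):
    freq = 2 ** octave
    amp = 1.0 / freq
    phase = rng.uniform(0, 2 * np.pi)
    signal += amp * np.sin(2 * np.pi * freq * x + phase)
```

The low octaves dominate the spectrum, which is what gives the result its large-scale, terrain-like structure.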


Hey guys, author here. Thank you all so much for the reads and suggestions. Just some context, because I think the article comes across as me being ignorant of a lot of stuff:

1. I am a noob at math and the original paper really did surprise me because I find it weird and interesting that you can morph white noise to be coherent. I am sure to the people who know what they’re doing it seems obvious but it wasn’t to me.

2. The article is simply an explanation of what I think is the reason behind the mechanism as explained in the paper. Of course now I know with the help of all of you that there are better ways of doing things and most of the steps are redundant.

I’ll make a revised post with the improvements sometime in the future. Thank you again :)


One thing that gave me pause is calling this "coherent noise".

I'm used to "coherency" being a property between two different "channels" or sources of signal.

Here, I guess it is being used to mean coherency between the X and Y dimensions. Is that right?

If so, I think this is not truly coherent noise but rather mimicry. We see large "patches" in X-Y because the 1/f filter increases the relative power at lower frequencies. It is thus a "natural accident" that low frequencies in X and Y form some patches somewhere. But it's not a coherent effect.

I'd be curious to learn if I'm misunderstanding the use of the term.


The oldest work I'm aware of that uses this approach for producing fractals (clouds in this case) is Gardner's[1], from 1985.

I dunno if Gardner's earlier paper from 1979, "Computer-generated texturing to model real-world features", contains the idea already because I could never find a digital version of that one.

[1] https://www.cs.drexel.edu/~david/Classes/Papers/p297-gardner...


Maybe I've been in computer graphics land for too long, but I'm somewhat surprised by the author's initial surprise. Isn't it obvious that you get a fractal surface if you sum up frequencies with 1/f amplitude?


It's nothing about computer graphics; I think everyone who has worked with signal processing would be surprised by the author's initial surprise (I was). The question is more: what else would one expect?


Indeed.

I suspect perhaps the author is surprised because squinting/defocussing your eyes at the original noise doesn't much look like the final result.

That's because, as well as removing the high frequency components (like squinting), this algorithm is also rescaling the amplitude.


And for people like me, unfamiliar with it but still knowing what a Fourier transform is, just reading the algorithm I really see no reason why it particularly "shouldn't work", as the author said.


Up to phase I think this is equivalent to just integrating the noise, so you should get some kind of Brownian function.


Yes integration is a 1/f Fourier multiplier. But if you want to do (1/f)^alpha then it's not so straightforward in the time domain.


> Yes integration is a 1/f Fourier multiplier.

Can you explain this? I don’t see the connection. I can see how the zero-frequency value would be equal to the integral (well, the average).

Edit: figured it out. The derivative operator multiplies each basis function by its index: D exp(inx) = in·exp(inx). Apply the inverse operation (divide by the index) to get the integral.
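The multiplier is easy to verify on a single Fourier mode, where the spectral derivative is exact (grid size and frequency here are arbitrary):

```python
import numpy as np

n = 256
t = np.arange(n) / n
x = np.sin(2 * np.pi * 3 * t)  # a single Fourier mode at frequency 3

# Differentiation multiplies each Fourier coefficient by i*2*pi*f, so
# integration divides by it (the DC term needs separate handling), and a
# fractional power (i*2*pi*f)**(-alpha) gives the (1/f)^alpha multiplier
# mentioned above.
f = np.fft.fftfreq(n, d=1 / n)  # integer frequencies, in cycles per unit
deriv = np.fft.ifft(np.fft.fft(x) * (2j * np.pi * f)).real
# deriv matches the analytic derivative 6*pi*cos(2*pi*3*t)
```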


In the theoretical PDEs world, non-integer alpha represents a fractional derivative.


Word. I did my time in the Sobolev spaces.


What's a situation where noise like this is useful? In any case it's very pretty and nice and I enjoyed the article.


Game textures often use this kind of noise for terrain heights, smoke, etc.

A similar kind of noise known as blue noise can be generated by taking the Fourier transform and not applying a 1/f filter but a high-pass filter instead. You end up with noise that only has high frequencies in it, and not low frequencies. Thus the noise does not have large-scale features, which is ideal for use in dithering.
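A sketch of that spectral construction (note this gives spectrally "blue-ish" noise; production blue-noise dither masks are usually made with algorithms like void-and-cluster, but the frequency-domain picture is the same):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 128
white = rng.standard_normal((n, n))

# Radial frequency grid in the FFT's bin layout.
fx = np.fft.fftfreq(n)[:, None]
fy = np.fft.fftfreq(n)[None, :]
f = np.hypot(fx, fy)

# High-pass filter: multiply the spectrum by f instead of 1/f, which
# attenuates low frequencies and boosts high ones.
spectrum = np.fft.fft2(white) * f
blue = np.fft.ifft2(spectrum).real
```

The result has no large-scale features, which is why errors from dithering with it are less visually objectionable than with white noise.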

Blue noise (and its DFT) look like this: https://demofox2.files.wordpress.com/2018/08/vc.png

And an example of dithering with white and blue noise: https://demofox2.files.wordpress.com/2019/06/randomvsblue.jp...


Interestingly enough, in the white noise vs blue noise dithering, I appreciate the white noise one (left) much more because the blue-noise one (right) looks blurry.

I guess it depends a lot on the input though, a bit like how nearest-neighbor is a much better algorithm than bi-cubic to scale up pixel art while the result is horrible if you use it on a real-world picture.


I see them as both blurry, but the one on the left is more grainy.


There are scientific applications for this kind of procedure. If an experiment has a noise source with a known frequency distribution, you can simulate the experiment by generating many thousands of realizations of noise superimposed with your (expected) signal. The variance in your measurement introduced by the noise can be used to assess the systematic uncertainty of the experiment.
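A toy Monte Carlo version of that procedure, with an assumed sinusoidal signal and a 1/f noise model (all parameters here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
n, trials = 512, 500

f = np.fft.rfftfreq(n)
f[0] = np.inf  # no DC power in the noise model

signal = np.sin(2 * np.pi * 10 * np.arange(n) / n)  # hypothetical expected signal

# Many realisations of 1/f noise superimposed on the signal.
estimates = np.empty(trials)
for i in range(trials):
    spec = (rng.standard_normal(len(f)) + 1j * rng.standard_normal(len(f))) / f
    noise = np.fft.irfft(spec, n)
    observed = signal + noise
    # Toy "measurement": amplitude estimate at the signal frequency.
    estimates[i] = np.abs(np.fft.rfft(observed)[10]) * 2 / n

# The spread of the estimates quantifies the uncertainty the noise
# introduces into the measurement.
print(estimates.mean(), estimates.std())
```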

For example, in ground-based experiments that measure the cosmic microwave background radiation, there is a substantial foreground noise from the atmosphere that can be modeled as a 1/f distribution. And actually the observations themselves are subject to a random variance (see cosmic variance) due to the fact that we get to observe the early universe from only one point in space. So you can use a similar trick to sample many random realizations of the CMB for given physical constants, and decide if our one-off observation is compatible with the theory.


You’d never do a DFT for generating an fBm, but the same technique using a different noise spectrum is how we’ve been generating ocean waves in the VFX industry since forever: https://people.cs.clemson.edu/~jtessen/reports/papers_files/...


It turns out when you can approximate any function, there is a lot you can do. Who'da thunk?



