Monday, December 12, 2011

Gauss's 1805 mixed-radix FFT algorithm: Decimation in frequency or decimation in time? Which is it?

Does your favorite FFT algorithm have a decimation-in-time (DIT) structure, or could it be a decimation-in-frequency (DIF) algorithm?  This useful-sounding clue to the architecture of FFT algorithms is poorly described in many sources, including perhaps the most widely read (otherwise quite superb) reference work on FFTs, E. Oran Brigham's The Fast Fourier Transform (Prentice-Hall, 1974).  Other useful clues to FFT algorithm architecture are the radix (which may be mixed), and whether it is basically Cooley-Tukey, Sande-Tukey, mixed radix, split radix, prime factor, Winograd Fourier transform algorithm (WFTA), or a hybrid mixture. There are also Winograd's efficient small-N cyclic convolution algorithms.

Why am I interested in sorting this topic out? The very first FFT algorithm, invented by the German math genius and human computer Carl Gauss in 1805, some 160 years before Cooley and Tukey, may have been misidentified by Heideman, Johnson, and Burrus ("Gauss and the history of the FFT," IEEE ASSP Magazine, Oct. 1984, p. 18), all of whom are authorities in the field. They write: "...Gauss's algorithm is as general and powerful as the Cooley-Tukey common-factor algorithm and is, in fact, equivalent to a decimation-in-frequency algorithm adapted to a real data sequence." Depending on how I draw the flow graph, it seems to me to be DIT, or a mixture of DIT and DIF (if this latter case is even possible). If I've overlooked some important fact, please let me know.

Before I show the evidence for why I think Gauss's algorithm is DIT, here are some more details of what Gauss achieved so long ago, approximately 100 years B.C. (before computers). In Gauss's published works,* which fortunately for us contain a transcription of all of his notebooks (written in an obscure dialect called neo-Latin!), after some 50 pages of FFT theory, Gauss spends ten pages on three numerical examples: a length-12 transform done two ways, as a mixed radix 4 x 3 and then, to check his results, factored the other way around as 3 x 4; and lastly a length-36 transform as a radix-6, with factors 6 x 6. Gauss noted that his method "greatly reduces the tediousness of mechanical calculations; success will teach the one who tries it." By the way, Gauss stated that if the factors are themselves composite, his splitting algorithm could be applied recursively. As far as we know he never actually did so, but in seeing the advantage of applying his method recursively he was far ahead of his time! Gauss also did not discuss the (log2 N)/N savings to be had if the transform length is a power of 2.

Brigham (p. 178) says that the term DIT arises from the fact that alternate derivations of the algorithm are structured to appeal to the concept of sample rate reduction, or throwing away samples. Likewise, Brigham says that the term DIF arises for reasons analogous to those for DIT. Crystal clear? Not to me.

The Cooley-Tukey radix-2 algorithm splits the input into interleaved even and odd sets, where each has half the sampling rate. This is clearly DIT!

The Sande-Tukey radix-2 algorithm computes the DFT terms in two interleaved sets, even and odd, each with effectively half of the frequency resolution. This is clearly DIF!
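The contrast between the two radix-2 splits can be made concrete in a short sketch (pure Python with the standard cmath module; the function names are mine, and a naive DFT stands in as ground truth). The DIT recursion decimates the input samples into even/odd sets; the DIF recursion decimates the output bins into even/odd sets:

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT, used as the ground truth."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fft_dit(x):
    """Radix-2 decimation-in-time: split the INPUT into even/odd samples."""
    N = len(x)
    if N == 1:
        return list(x)
    E = fft_dit(x[0::2])                     # even-indexed input samples
    O = fft_dit(x[1::2])                     # odd-indexed input samples
    W = [cmath.exp(-2j * cmath.pi * k / N) for k in range(N // 2)]
    return ([E[k] + W[k] * O[k] for k in range(N // 2)] +
            [E[k] - W[k] * O[k] for k in range(N // 2)])

def fft_dif(x):
    """Radix-2 decimation-in-frequency: split the OUTPUT into even/odd bins."""
    N = len(x)
    if N == 1:
        return list(x)
    W = [cmath.exp(-2j * cmath.pi * n / N) for n in range(N // 2)]
    a = [x[n] + x[n + N // 2] for n in range(N // 2)]           # feeds even bins
    b = [(x[n] - x[n + N // 2]) * W[n] for n in range(N // 2)]  # feeds odd bins
    X = [0j] * N
    X[0::2] = fft_dif(a)                     # interleave the half-resolution sets
    X[1::2] = fft_dif(b)
    return X
```

Note where the twiddle multiplication sits: after the half-size transforms in DIT, before them in DIF. That placement is the structural fingerprint I'll be looking for in Gauss's algorithm below.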

The Gauss mixed radix N = r1 x r2 = 4 x 3, mixed radix 3 x 4, and radix-6 FFT algorithms follow the same general DIT floorplan, as follows (for the 4 x 3 example):

1) The length-12 input sequence is split into three interleaved sequences (i.e., the samples are "decimated"), numbered 0, 3, 6, 9;  1, 4, 7, 10;  2, 5, 8, 11.

2) The DFT is computed for each of these three sequences using a length-4 algorithm (Gauss gives no details; whether or not an efficient algorithm is used here is beside the point).

3) What follows next can be viewed two different ways. (Gauss's calculation stopped at X6, as is appropriate for real input data.)

Either (3a) Corresponding output triples are multiplied by twiddle factors and added together.
E.g., X_1 = X_{0,1} + W^1·X_{1,1} + W^2·X_{2,1}, where the first subscript (0, 1, or 2) of X refers to an output from one of the three length-4 DFTs, and W is the twiddle factor. The outputs land naturally in a 1D array, in natural order.

Or (3b) The corresponding output triples from the length-4 DFTs are fed to four length-3 DFTs, which compute the following 2D array of X values: 0, 4, 8;  1, 5, 9;  2, 6, 10;  3, 7, 11. These outputs are in scrambled order; however, that is not enough to reclassify Gauss's algorithm as a DIF. Step 3a does exactly the same arithmetic as 3b, but 3b's outputs cannot be made to come out in natural order without a 2D-to-1D index mapping.
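The three steps above can be sketched in Python, with both the 3a and 3b views of step 3. This is my own reconstruction, not Gauss's notation: the function names are mine, a naive DFT serves for the short transforms, and W is the usual length-12 twiddle factor e^(-2πi/12):

```python
import cmath

def dft(x):
    """Naive DFT, used for the short length-4 and length-3 transforms."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def W(p):
    """Twiddle factor for N = 12."""
    return cmath.exp(-2j * cmath.pi * p / 12)

def gauss_fft_4x3(x):
    """N = 12 = 4 x 3 split; step 3 in the 3a view (twiddle and sum)."""
    # Step 1: decimate the input into three interleaved length-4 sequences.
    seqs = [x[m::3] for m in range(3)]   # (0,3,6,9), (1,4,7,10), (2,5,8,11)
    # Step 2: length-4 DFT of each sequence.  Y[m][q] is X_{m,q} in the text.
    Y = [dft(s) for s in seqs]
    # Step 3a: combine corresponding output triples with twiddle factors,
    # e.g. X_1 = X_{0,1} + W^1 X_{1,1} + W^2 X_{2,1}.  Natural output order.
    return [sum(W(m * k) * Y[m][k % 4] for m in range(3)) for k in range(12)]

def gauss_fft_4x3_via_length3(x):
    """Same split; step 3 in the 3b view (four length-3 DFTs plus remap)."""
    seqs = [x[m::3] for m in range(3)]
    Y = [dft(s) for s in seqs]
    X = [0j] * 12
    for q in range(4):
        # One length-3 DFT per twiddled output triple...
        col = dft([W(m * q) * Y[m][q] for m in range(3)])
        for s in range(3):
            X[q + 4 * s] = col[s]        # ...then the 2D-to-1D mapping k = q + 4s
    return X
```

Both functions compute identical results; only the bookkeeping of step 3 differs. Note that the input, not the output, is what gets decimated in step 1, and the twiddles are applied after the length-4 sub-transforms, which is the DIT placement in the radix-2 sketch earlier.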

Please refer to the relevant flow graphs, attached below.  Comments?



______________
*Download free from Google Books: Gauss, Carl Friedrich [1866], “Nachlass: Theoria interpolationis methodo nova tractata,” pp. 265–327, in Carl Friedrich Gauss, Werke, Band 3, Königliche Gesellschaft der Wissenschaften, Göttingen.

Welcome to the Lair of the Algorithmage!

The name of this blog is a play on three words: algorithm, age, mage.

al·go·rithm

noun \ˈal-gə-ˌri-thəm\ : a step-by-step procedure for solving a problem or accomplishing some end, especially by a computer

age

noun \ˈāj\ : a period in history or human progress

mage

noun \ˈmāj\ : magician

Mage is Middle English for magician (one skilled in magic), and is an old name for the most advanced technologists of that era. The Hacker's Dictionary: A Guide to the World of Computer Wizards (G. L. Steele Jr. et al., Harper & Row, 1983) defines magic as “something as yet unexplained, or too complicated to explain.” In a free interpretation, magic therefore refers to something algorithmic that is unexplained (unpublished), or that is too complicated (or too advanced) to explain to anyone not initiated into the relevant wizard/magician/technologist circle.

My favorite author, Arthur C. Clarke, once said that any sufficiently advanced technology is indistinguishable from magic. Therefore, since algorithms are a major ingredient of most modern technologies, Clarke would no doubt have agreed that sufficiently advanced algorithms are themselves tantamount to magic. The obvious corollary is that since technologists create algorithms, any sufficiently advanced technologist must be a magician, or at least be indistinguishable from one, as far as a general audience is concerned.