[1/4]
The new and improved version of >>18 and >>25 uses stacked self-generation of the S sequence and second-order S-groups to achieve O(log(log(n))) space consumption and O(n^0.25) runtime.
The S sequence already generated itself in >>18, and to do so it used a portion of itself proportional to the square root of the number of items it was asked to produce. The natural idea is to apply self-generation to the generator sequence as well, which brings the retained portion down to n^(1/4). Adding another generator on top brings it down to n^(1/8). Dynamically adding generators until the topmost one is never asked to produce its next element eliminates the retained portion entirely: all the state that is kept is a single entry at the leading edge of each level of generation. That is one entry for S, one for the generator that ends at n^(1/2), one for the generator that ends at n^(1/4), one for n^(1/8), and so on.

Additionally, a generator only comes into existence when its second item is requested, and is not stored anywhere before that. This means the number of generators increases by 1 when n is squared. The function that increases by 1 when n is doubled is log(n); the function that increases by 1 when n is squared, i.e. when the exponent that yields n is doubled, is log(log(n)). Therefore the space consumption of this algorithm is O(log(log(n))). This is the stacked self-generation of the S sequence.
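Since S itself is defined elsewhere in the thread, here is a minimal Python sketch of the stacking on a toy stand-in with the same square-root consumption: the stream 1, 2, 3, ... generates itself if every value read from the parent level is taken as a group size and a group of size g is the next g consecutive integers (1 | 2,3 | 4,5,6 | ...). Level, stream, depth and the seed constants are my names and choices, not anything from >>18; the real S generator has a different group rule but stacks the same way.

```python
class Level:
    """One level of the stack. Emits the toy self-describing stream
    1, 2, 3, ...: each value read from the parent level is a group
    size, and a group of size g is the next g consecutive integers."""

    def __init__(self):
        # Leading-edge state only. The seed encodes the hardcoded
        # prefix 1, 2 of the toy stream: values 1 and 2 are treated as
        # already emitted and already consumed as group sizes, so the
        # level resumes at value 3, mid-group, with one item left.
        self.pos = 3        # next value this level will emit
        self.remaining = 1  # items left in the current group
        self.parent = None  # created lazily, on the first refill

    def next(self):
        if self.remaining == 0:
            if self.parent is None:
                self.parent = Level()  # comes into existence only now
            # Ask the parent exactly once per group; its answer (one
            # group size) satisfies a whole run of requests from below.
            self.remaining = self.parent.next()
        self.remaining -= 1
        v = self.pos
        self.pos += 1
        return v


def stream(n):
    """First n items of the toy stream, driven by a stack of Levels."""
    out = [1, 2]  # hardcoded prefix, mirroring the seed in Level
    lvl = Level()
    while len(out) < n:
        out.append(lvl.next())
    return out[:n]


def depth(lvl):
    """Number of levels that have come into existence so far."""
    return 1 if lvl.parent is None else 1 + depth(lvl.parent)
```

Driving the bottom level for n items only ever instantiates a doubly-logarithmic number of Levels, each holding two integers: exactly the O(log(log(n))) state described above.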
Further up the thread I mistakenly overstated the space consumption of the algorithm by writing it with a single logarithm. The data table will back up the relationship between the number of generators and n. I am fairly certain that O(log(log(n))) is the theoretical minimum space consumption for generating the R sequence, but a formal proof is left as an exercise for the reader.
Care must be taken with this technique to ensure that the computational process of generation remains linear instead of degenerating into a tree-recursive one. This is achieved by asking any one generator for any particular item exactly once; no generator is ever asked twice for the same item. When generator k needs the next item from generator k+1 to satisfy a request from k-1, it asks k+1 for this item exactly once, remembers it, and uses it to satisfy an entire group of requests from k-1. Only after k-1 has exhausted an entire group of requests to k will k ask k+1 for another item. Without this trick the process would indeed degenerate into tree recursion, but with it each generator walks its own portion linearly. Because the number of items yielded by a generator is the square root of the number of items yielded by the generator below it, the total work is
q + sqrt(q) + sqrt(sqrt(q)) + sqrt(sqrt(sqrt(q))) + ...
where q denotes the number of items yielded by the lowest generator. The series cuts off as soon as a value drops below 2, because a generator is only stored once its second item is requested. With this cutoff the sum is easily seen to be bounded from above by 2*q, and since it is evidently bounded from below by q, it belongs to O(q). In our case q will be n^(1/4).
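The cutoff and the 2*q bound are easy to sanity-check numerically; this snippet just evaluates the series as written (total_work is my name, not anything from the algorithm):

```python
import math

def total_work(q):
    """q + sqrt(q) + sqrt(sqrt(q)) + ..., cut off below 2: the total
    number of items produced across all levels of the stack."""
    total, term = 0.0, float(q)
    while term >= 2:
        total += term
        term = math.sqrt(term)
    return total
```

For q = n^(1/4) this says the whole stack does only a constant factor more work than the bottom generator alone.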
In >>18 the S sequence was partitioned into groups, and one group was folded into the R value in one step. These are the first-order S-groups, and they give >>18 a runtime of O(n^0.5). In this new version the stream of S-groups is itself partitioned into groups, so that one group of groups of S entries is folded into the R value in one step. These are the second-order S-groups, and they reduce the number of items the lowest generator produces to q = n^0.25. Since the process is linear in q, the runtime is O(n^0.25).
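Again with the toy stream (1, 2, 3, ...) standing in for S and a plain running sum standing in for the R fold (the real one-step fold is the one from >>18), the second-order grouping looks like this. Because every level of the toy is the same stream, the second-level sizes can be inlined as a counter instead of a third generator; fold_run and fold_second_order are my names:

```python
def fold_run(first, count):
    # Fold a run of consecutive integers first..first+count-1 into a
    # sum in one step: count*first + (0 + 1 + ... + (count-1)).
    return count * first + count * (count - 1) // 2

def fold_second_order(m):
    """Fold m second-order groups of the toy stream into a running sum.
    One O(1) step per group-of-groups."""
    v = 1  # next value of the base stream (toy S)
    a = 1  # next first-order group size (toy generator stream)
    r = 0
    for g2 in range(1, m + 1):  # toy second-level stream, inlined
        # This second-order group spans the first-order sizes
        # a .. a+g2-1, which cover `count` consecutive base items.
        count = fold_run(a, g2)
        r += fold_run(v, count)  # fold them all in one step
        v += count
        a += g2
    return r, v - 1  # folded value and number of base items covered
```

Here m steps fold on the order of m^4 base items, which is the n^(1/4) step count; with the real S, the closed-form runs are replaced by whatever per-group fold >>18 defines.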