It would be nice to have timings with names and versions of Scheme implementations for R(10^10) for >>93 and >>118.
I can oblige. I was going to use `statprof', but for whatever reason it made jump-sequence.scm from >>98 crash at R(10^12), so instead I used Guile to cache the compilation and then timed the runs with the time program. This probably makes the results more consistent across languages anyway. I ran each program three times per value and took the best time, except for R(10^13), which I ran only once on each. I've also included your python3 implementation from >>18, which should have the same order of growth.
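For anyone who wants to reproduce the setup: run each script once so Guile caches the compiled bytecode, then time later runs and keep the best. A rough sketch of that harness (best_real is my name for it, not anything from the thread; the actual numbers above came from the time program directly):

```shell
# best_real CMD N: run CMD (word-split on spaces) N times and print the
# smallest wall-clock time in seconds -- i.e. "ran each thrice and took
# the best time".  Uses GNU date's %N for sub-second resolution.
best_real() {
    cmd=$1; n=$2; best=; i=0
    while [ "$i" -lt "$n" ]; do
        t0=$(date +%s.%N)
        $cmd >/dev/null 2>&1
        t1=$(date +%s.%N)
        t=$(awk -v a="$t0" -v b="$t1" 'BEGIN { printf "%.3f", b - a }')
        # Keep the smaller of t and best (numeric comparison via awk).
        if [ -z "$best" ] || awk -v t="$t" -v b="$best" 'BEGIN { exit !(t < b) }'; then
            best=$t
        fi
        i=$((i + 1))
    done
    echo "$best"
}

# Usage: warm Guile's auto-compilation cache (~/.cache/guile) once,
# then time the cached runs:
#   guile jump-sequence.scm >/dev/null
#   best_real "guile jump-sequence.scm" 3
```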
jump-sequence.scm
R(10^8) 5000934571821489 0.224 secs
R(10^9) 500029664631299282 0.556 secs
R(10^10) 50000940106333921296 1.397 secs
R(10^11) 5000029765607020319048 4.252 secs
R(10^12) 500000941936492050650505 13.444 secs
R(10^13) 50000029798618763894670256 67.324 secs
first-order.scm
R(10^8) 5000934571821489 0.355 secs
R(10^9) 500029664631299282 0.899 secs
R(10^10) 50000940106333921296 2.591 secs
R(10^11) 5000029765607020319048 7.796 secs
R(10^12) 500000941936492050650505 26.916 secs
R(10^13) 50000029798618763894670256 112.138 secs
flake.hofs
R(10^8) 5000934571821489 0.227 secs
R(10^9) 500029664631299282 0.361 secs
R(10^10) 50000940106333921296 0.811 secs
R(10^11) 5000029765607020319048 2.305 secs
R(10^12) 500000941936492050650505 7.006 secs
R(10^13) 50000029798618763894670256 22.340 secs
>>93 and >>118 seem to grow at roughly the same rate, although the sequence approach from >>98 is faster by a constant factor of roughly two. The python implementation from >>18 grows like my Scheme translation in >>118 offset by one; this is probably because I am throwing away the last value of R that I compute in (cdr r). I'll attempt to resolve this.
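The factor-of-two claim can be checked directly by dividing first-order.scm's times by jump-sequence.scm's (pairs copied from the tables above):

```shell
# Each input line is "first-order time, jump-sequence time" for one
# value of R; print their ratio.  The ratios come out around 1.6-2.0.
awk '{ printf "%.2f\n", $1 / $2 }' <<'EOF'
0.355 0.224
0.899 0.556
2.591 1.397
7.796 4.252
26.916 13.444
112.138 67.324
EOF
```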