prog


What are you working on?

1 2018-12-22 22:05

This is the type of thread that you can write in every time you visit the site! You can do anything from using it as a personal log for your project, to sharing what you're hacking on at the moment, to critiquing something someone else did in the thread. I'm looking forward to hearing what you guys are up to!

Personally I've been working on a radix tree implementation in R7RS. It seems moderately efficient and not too bad to use, although I haven't finished many of the operations, and I'm not sure I will, since I'm aware of a better way to do it if I only had the primitives (or even just a standardised FFI). Anyway, I thought I'd share it just because it's something to talk about. Due to the file limit I'm only posting the introduction, imports, exports, and type here; the full source can be found at http://ix.io/1wB1.
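
To give a rough idea of the layout first: each node is a 32-element vector and each level consumes 5 bits of the position, so a lookup walks down by height roughly like the following (an illustrative sketch, not the exact code from the file):

(define (radix-ref-sketch node height position) ; illustrative only; assumes (srfi 60) for logand and arithmetic-shift
  (if (= height 1)
    (vector-ref node (logand position #b11111))
    (radix-ref-sketch
      (vector-ref node (logand (arithmetic-shift position (- (* 5 (- height 1)))) #b11111))
      (- height 1)
      position)))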

; This is an implementation of a radix tree in the style of Clojure or
; Scala, meaning a not particularly good implementation. Ideally this
; would be an adaptive radix tree as described in the following paper:
; http://www-db.in.tum.de/~leis/papers/ART.pdf
; Additionally, it'd be nice to be able to have a variant for using SIMD
; functions on unboxed values if possible. Regardless, Scheme does not
; have the necessary primitives for implementing an adaptive radix tree
; to its full potential, namely access to or abstractions of SIMD
; functions for the search function.

; Much of this library may seem unidiomatic; this is simply because I'm
; trying quite hard to A) avoid allocations and B) factor out functions
; when they can be used in multiple situations, even if this means adding
; a bit of complexity.

; One of the goals of this library was to be exceedingly standards
; compliant: it's written entirely in R7RS small with SRFI 1 & 60. That
; being said, this doesn't actually make it a particularly portable
; library, due to the lack of up-to-date compliance with the standard
; and of implementations of SRFI 60. To my knowledge the following
; implementations would likely be able to run this library: Foment,
; Gauche, Larceny, & Sagittarius.

(define-library (data radix)
  (export radix-ref radix-set radix->list radix->vector
          vector->radix list->radix radix radix-append
          radix-data radix-length radix-depth)
  (import
    (scheme base)
    (srfi 1)   ; append! in radix->list
    (srfi 60)) ; bitwise operations
  (begin
    (define-record-type radix #t #f data length depth)))
2 2018-12-23 11:34

What you should really do is get rid of this dl/dd nonsense that is cut off by ellipses on narrow displays without being scrollable, and put the code blocks into plain pre > code.

3 2018-12-23 16:01

>>2
While I appreciate the compliment, I'm afraid this isn't my site. The anon who wrote it did visit an IRC channel I hang out in a while back and said he wanted to stick as close to the earliest revision of the HTML standard as possible though, so maybe it has something to do with that? I'll look around at the original standard and what anon is doing to see if there is a better way to do this: https://gitlab.com/naughtybits/schemebbs
Anyway, what are you working on anon?

4 2018-12-23 21:38

>>2

wanted to stick as close to the earliest revision of the HTML standard as possible though, so maybe it has something to do with that?

The code is already inside a dl > dd > pre > code chain.

Anyway, what are you working on anon?

Some graphics stuff with spirals from text stamps using cairo. I'll post a simplified version on a board with images, probably lainchan.

5 2018-12-23 21:39

meant to reply to >>3, obviously

6 2018-12-24 03:49

>>4,5

The code is already inside a dl > dd > pre > code chain.

The original standard didn't have DIV or TABLE so I was thinking that maybe he was using DD/DL for CSS styles. I can't actually see the CSS because I use W3M, but when I opened it in netsurf I didn't see any reason why they would be necessary. I'll try to make a pull request tomorrow.

Some graphics stuff with spirals from text stamps using cairo. I'll post a simplified version on a board with images, probably lainchan.

Neat! Would you happen to be the lain that frequently posts this type of thing such as in the challenge thread? I honestly don't know anything about graphics programming, but I lurk /lambda/ so I'll take a look if I see it in a thread.

I ended up working on my radix library some more; I figured I'm so close to done that I might as well finish it. Some bug fixes and code cleanup, but I also got slice, delete, and insert working: http://ix.io/1wGr

7 2018-12-24 13:04

>>6

The original standard didn't have DIV or TABLE

>>3

wanted to stick as close to the earliest revision of the HTML standard as possible

The "thread list" uses a table element, as does the SchemeBBS main page. Additionally, I find your statement quite puzzling for the following reason. The DOCTYPE on this page contains "ISO/IEC 15445:2000//DTD HyperText Markup Language//EN". The first edition of 15445:2000 from 2000-05-15 can be found at
https://www.scss.tcd.ie/misc/15445/15445.html
In "Annex B Entities, element types and attributes" the block elements are listed as
<!ENTITY % block "BLOCKQUOTE | DIV | DL | FIELDSET | FORM | HR | OL | P | PRE | TABLE | UL" >

Would you happen to be the lain that frequently posts this type of thing such as in the challenge thread?

The "frequently" bit makes me unsure about what you're asking but I did post the trig and zigzag fizzbuzzes there. Maybe the spiral should be a fizzbuzz as well.

8 2018-12-24 13:46

The "thread list" uses a table element, as does the SchemeBBS main page. Additionally, I find your statement quite puzzling for the following reason. The DOCTYPE on this page contains "ISO/IEC 15445:2000//DTD HyperText Markup Language//EN".

How odd; I assumed he was using HTML 2.0 (https://tools.ietf.org/html/rfc1866). My apologies. I'll go work on that pull request now.

The "frequently" bit makes me unsure about what you're asking but I did post the trig and zigzag fizzbuzzes there. Maybe the spiral should be a fizzbuzz as well.

Ah, well, honestly I just remember those two and maybe one or two more from a while back with the same image as the zigzag. Anyway, good luck on your project anon, I look forward to seeing it.

9 2018-12-24 16:51

Apparently DD/DL/DT have some nice properties for formatting text on devices without CSS support; I suspect that's why they're being used (something I should've known). There turned out to be a much simpler fix anyway. I emailed it to the BO because I didn't really want to make a bitbucket account.

diff --git a/static/styles/2ch.css b/static/styles/2ch.css
index 01e5cda..6e408e4 100644
--- a/static/styles/2ch.css
+++ b/static/styles/2ch.css
@@ -181,8 +181,7 @@ dd pre {
   margin: 1.2em 0;
   padding: 2px;
   line-height: 1.45em;
-  overflow: hidden;
-  text-overflow: ellipsis;
+  overflow: scroll;
 }
 samp {
   font-size: .83em;
diff --git a/static/styles/default.css b/static/styles/default.css
index ef558a8..f5774fe 100644
--- a/static/styles/default.css
+++ b/static/styles/default.css
@@ -118,8 +118,7 @@ dd pre {
   padding: 2px;
   background-color: #f7f7f7;
   line-height: 1.4em;
-  overflow: hidden;
-  text-overflow: ellipsis;
+  overflow: scroll;
 }
 blockquote {
   border-left: solid 2px #cccccc;
10 2018-12-24 21:14

>>9
Thanks for looking into this. I really should have noticed the text-overflow: ellipsis; line. Might I suggest a small change from overflow: scroll; to overflow: auto;? The difference in my Firefox is that auto makes the up/down arrows scroll the page when the keyboard focus is in the pre>code, while with scroll they do nothing in the same situation, so auto is more usable for me.

11 2018-12-24 21:48

>>10

scroll: This value indicates that the content is clipped to the padding box, but can be scrolled into view (and therefore the box is a scroll container). Furthermore, if the user agent uses a scrolling mechanism that is visible on the screen (such as a scroll bar or a panner), that mechanism should be displayed whether or not any of its content is clipped. This avoids any problem with scrollbars appearing and disappearing in a dynamic environment. When the target medium is print, overflowing content may be printed; it is not defined where it may be printed.
auto: Like scroll when the box has scrollable overflow; like hidden otherwise. Thus, if the user agent uses a scrolling mechanism that is visible on the screen (such as a scroll bar or a panner), that mechanism will only be displayed if there is overflow.

In addition to what you've mentioned, auto is, according to the standard, apparently better in this case because it hides the scrollbars when there is no overflow, while scroll leaves them always present. In netsurf and presumably other browsers they seem to have the same effect because browsers aren't following the standard. Sorry about that; ```overflow: auto``` is a better choice.

12 2018-12-25 12:41

http://ix.io/1wGr

Some preliminary observations:
- It is unclear to me why 5, 32 and #b11111 are hardcoded throughout.
- There is a mixed use of the term 'depth' both with and without multiplication by 5.
- The abstraction of extract-5 is violated in the operation call in radix-copy-path.
- Using the number of the bit group instead of the shift as parameter to extract-5 would have made your bit packing scheme quicker to grok for me, along with a comment stating that groups are numbered from 1 rather than 0 and that the number is the distance from a leaf rather than the root, effectively making it a reverse depth (see the sketch after this list).
- In radix-copy-path the same new-radix is returned on both branches, so it can go after the 'if'. Then the branches can lose the 'begin's. Since the let body is a lambda and has an implicit 'begin', they can be removed entirely.
- copy-path-iter! is linear recursive rather than linear iterative, so the name is misleading. The same goes for some of the other helpers named *-iter.
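
To illustrate the bit-group suggestion, a minimal sketch of how I read your packing scheme, using the SRFI 60 procedures you already import; extract-group is an illustrative name, not something from your source. Group 1 is the lowest 5 bits (the index inside a leaf vector), group 2 the next 5 bits, and so on, so the group number counts up from the leaves.

(define (extract-group position group) ; assumes (srfi 60) for logand and arithmetic-shift
  (logand (arithmetic-shift position (- (* 5 (- group 1)))) #b11111))

; (extract-group #b1000100010 1) => 2  ; index within the leaf vector
; (extract-group #b1000100010 2) => 17 ; index one level above the leaf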

13 2018-12-26 20:04

>>12
Thanks so much for taking the time to give it a look over! Your comments made me realise some ways I can improve my programming in general (I'm apparently god-awful at naming things especially), and I've started to implement your suggestions, but I haven't gone about the two most major ones yet, undoing the hard-coding and rewriting my extract-5 into layer-position; I'll try to do that over the next few days. Other than starting this and adding some testing, I've been mostly confined to reading and hanging out with my family over the holiday, so not much change on the programming front. I'll post my source again once I finish optimising the delete/insert functions and implement the two improvements.
Are you a different anon? Been up to anything neat this holiday season?

>>10
I saw your latest spiral on /lambda/; I really like the look of this one. I think it's my favourite aesthetically.

I've been reading the Unix-Haters Handbook for fun (I've enjoyed it thus far) and a couple of papers on parsers in preparation for my next project (a fast parser combinator library).

14 2018-12-27 01:07

>>13

Are you a different anon?

It's only been the two of us in this thread so far.

I saw your latest spiral on /lambda/ I really like the look of this one. I think it's my favourite aesthetically.

Thanks. Feel free to critique the code if you have cairo experience or see something that could be improved.

Some further observations:
- (extract-5 position end) is computed twice on the alternate branch of copy-path-iter!.
- The slice-length computation of copy-slice-append-iter contains an inlined version of (min 32 (- read-end read-position)). Making this explicit seems cleaner to me.
- Your code uses both (modulo read-position 32) and (logand position #b11111), which is a bit curious (see the sketch after this list).
- A named let near the top level, like the *-iters, is perfectly fine. But named lets that are buried deep inside other computations, like largest-divisible-exponent-32 or the two versions of make-path, seem a bit excessive. They might be good candidates for splitting out into helper functions, like pow32? is split out.
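
On the slice-length and modulo/logand points, a pair of illustrative helpers (not from your source) that make them explicit; for non-negative positions (modulo position 32) and (logand position #b11111) agree, since 32 is a power of two, so either spelling works as long as it's used consistently.

(define (leaf-slice-length read-end read-position) ; how much of the current leaf is usable
  (min 32 (- read-end read-position)))

(define (leaf-index position) ; assumes (srfi 60) for logand
  (logand position #b11111))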

These are still only superficial observations, mainly because your use of the continuation parameter of naive-append-vector is a nice trick that I still need to have a think on.

15 2018-12-27 08:58

I still have to finish my ncurses chan browser written in Scheme.
I got a new project idea though.
I recently bought a TV and there's this app on it that allows me to view images/videos like they're an album.
But I have to host a local server which would provide the JSON files it needs for the images, titles, layout, etc.
I'll be using Guile, and I'll need to learn the web server module, and also the app's JSON API itself of course.
As for the content to show, though, idk exactly what.
Obviously I can just provide a way to see a folder, or even my filesystem entirely. But I'm also thinking of just providing an interface to certain sites, like maybe 4chan's webm boards.

16 2018-12-27 15:24

>>13
I just noticed something strange about copy-slice-append-iter. Correct me if I'm wrong, but it seems to me that every leaf vector except the first is looked up twice, starting all the way from the root: once by the second vector-copy! in the current round, and again by the first vector-copy! in the next round. These two calls:
(radix-ref-depth read-radix (+ read-position 32) 5)
(radix-ref-depth read-radix read-position 5)
perform the exact same work across a pair of consecutive rounds, where read-position has been incremented by either 32 or the distance to read-end. I would think that avoiding such duplicate work at the algorithm level would rate higher than whether SIMD was available.

17 2018-12-27 17:25

>>14,16
I've now corrected everything you've mentioned except splitting out the functions from naive-append-vector!, removing the hard coding, and the inefficiencies in copy-append! there (you are completely correct, and of course it matters more than SIMD being available). I'm not a particularly experienced programmer, I'm afraid to say, as you've no doubt uncovered from looking at my library here. I tried to look over your spiral code to help you a little, since you've helped me so much, but I've never done anything with graphics, Python, or objects, and I'm afraid I couldn't come up with anything which might be of use. I do really appreciate everything you've done for me, and maybe one day, when I have a broader set of knowledge or when you're working in my wheelhouse, I can give some sort of valuable suggestion. Anyway, don't feel obligated to continue helping me; you've done so much for me already and I completely understand if there is some better use of your time. If you are going to continue, or if you just want to see the impact of your changes thus far, here's an updated version: http://ix.io/1wVh
>>15
Those sound like pretty neat projects anon. I actually started on an image archiver for imageboards in Guile a while back, but it turns out that Guile (& GnuTLS) don't play well with OpenBSD, so I had to abandon it before I could even give it a test compile. It's not much (pretty rough if anything) but it might be of some use to you: http://ix.io/1wVg If you have any questions or you need help, I or someone else here might be able to assist you.

I finished The Unix-Haters Handbook last night. I used to be really hooked on the New Jersey stuff, but these days I'm really fond of "The right thing". It's nice to be able to laugh at how far you've come. Also, I think I will go ahead and try to optimise my radix tree a bit more: now that, with the help of anon, I've abstracted out my extract-5 function in favour of a function that simply returns the correct position in the current vector, I can change the underlying function more easily, making it seem more reasonable to implement the relaxed variant of the radix tree (a sort of halfway house between the adaptive and the general): https://infoscience.epfl.ch/record/169879/files/RMTrees.pdf

18 2018-12-28 11:35

http://ix.io/1wVh

Some minor points:
- Please align the return of new-radix in copy-path-recur! with the 'if' block. Right now it's aligned as if it were a fourth argument to vector-set!. My python sense is tingling.
- There is an extract-5 in radix-delete for some reason.
- Reusing the name list->radix for the helper in (radix . elements) seems a bit inelegant to me.

And a note about radix->list. You are first testing whether the radix will be exhausted on this round and only after that whether the radix has already been exhausted. This is backward and your second case will never trigger. It is impossible that the radix will keep going after the current round, yet be exhausted already. Since you are already relying on the first case gracefully handling exhaustion by copying, listifying and appending an empty piece, you can simply dump the second case entirely rather than switching it to first. However, my suggestion is to construct the modulo piece first, then write a simple loop from below the modulo downward to zero and append the partial result to the new piece. Since the radix is accessed with radix-ref-height the same amount of accessing work is done by iterating backward. However, this way you would be appending to a piece of constant length at each step, which is linear behavior overall, whereas in your current version each step appends to a piece of linearly increasing length, which is quadratic behavior overall. Additionally, there is nothing forcing you to stick with vector->list and append!. You can have a helper deal with each new piece by consing a series of vector-refs backwards onto the result list, eliminating the need for vector->list and ditching append! and its import entirely.

I'm not a particularly experienced programmer

From your ease of use of named lets and the continuation trick of naive-append-vector!, I doubt that.

19 2018-12-28 20:08

>>18

There is an extract-5 in radix-delete for some reason.

I corrected this shortly after I posted that version; I guess I replaced those manually? Very odd.

Reusing the name list->radix for the helper in (radix . elements) seems a bit inelegant to me.

I actually should've moved that into list->radix a while ago; apply also has a bit of overhead that this avoids.

However, my suggestion is to construct the modulo piece first, then write a simple loop from below the modulo downward to zero and append the partial result to the new piece. Since the radix is accessed with radix-ref-height the same amount of accessing work is done by iterating backward. However, this way you would be appending to a piece of constant length at each step, which is linear behavior overall, whereas in your current version each step appends to a piece of linearly increasing length, which is quadratic behavior overall. Additionally, there is nothing forcing you to stick with vector->list and append!. You can have a helper deal with each new piece by consing a series of vector-refs backwards onto the result list, eliminating the need for vector->list and ditching append! and its import entirely.

This is an awesome suggestion!

(define (radix->list radix-struct)
  (define length (radix-length radix-struct))

  (define (vector->partial-list vector list)
    (let vector->partial-list-iter ((list list) (index 31))
      (if (= index 0)
        (cons (vector-ref vector index) list)
        (vector->partial-list-iter
          (cons (vector-ref vector index) list)
          (- index 1)))))

  (let radix->list-iter
    ((list
      (vector->list
        (vector-copy
          (radix-ref-height radix-struct length 1)
          0
          (modulo length 32))))
     (position
       (- length 32 (modulo length 32))))
    (if (< position 0)
      list
      (radix->list-iter
        (vector->partial-list
          (radix-ref-height radix-struct position 1)
          list)
        (- position 32)))))

From your ease of use of named lets and the continuation trick of naive-append-vector!, I doubt that.

Thanks anon.

20 2018-12-30 00:52

http://ix.io/1wVh

I just noticed that naive-append-vector! may only be called with a 32-aligned write-position, because it never increases the height of the write-radix on the else branch of the cond. This is fine since it's an internal function and its callers follow this restriction, but this should be stated explicitly. It should also be stated for copy-append! which inherits this restriction from naive-append-vector!.

In the same way, radix-copy-path may only be called with a position that is not 32-aligned at the end of the radix, and radix-set respects this. This is also fine because radix-copy-path is an internal function, but this restriction should also be stated explicitly. If the position did not respect this, radix-copy-path would malfunction. For example, when called with position 32 on a radix of size 32 and height 1, radix-copy-path will completely ignore the high bit and act exactly as for position 0, including the operation call. When called with position 64 on a radix of size 64 and height 2, radix-copy-path will access the uninitialized third element of the root vector, and pass this to vector-copy.

The problems start in radix-append which does not follow the radix-copy-path condition. It calls radix-copy-path with (radix-length radix-struct-write) even when it is 32-aligned, which can already trigger the problems pointed out above. To make matters worse, the final-vector-remainder computation yields 32 rather than 0 for a 32-aligned length, so the lambda will attempt to use its potentially bogus vector and position arguments for an actual copy. Therefore radix-append looks quite broken from here and needs both these issues fixed.

Can you confirm this chain of reasoning from your end?

21 2018-12-30 15:49

I just noticed that naive-append-vector! may only be called with a 32-aligned write-position, because it never increases the height of the write-radix on the else branch of the cond. This is fine since it's an internal function and its callers follow this restriction, but this should be stated explicitly. It should also be stated for copy-append! which inherits this restriction from naive-append-vector!.

Okay will do!

In the same way, radix-copy-path may only be called with a position that is not 32-aligned at the end of the radix, and radix-set respects this. This is also fine because radix-copy-path is an internal function, but this restriction should also be stated explicitly. If the position did not respect this, radix-copy-path would malfunction. For example, when called with position 32 on a radix of size 32 and height 1, radix-copy-path will completely ignore the high bit and act exactly as for position 0, including the operation call. When called with position 64 on a radix of size 64 and height 2, radix-copy-path will access the uninitialized third element of the root vector, and pass this to vector-copy.

I can't confirm this; I've tried calling radix-copy-path separately and it seems to function as expected. I think I tracked down the actual error to the way I'm calculating the remaining space in the current vector in final-vector-remainder. So what was happening was that when (radix-length radix-struct-write) equaled 32, the function would take the modulo 32 of this, which is 0, and subtract it from 32, resulting in it saying there were 32 spaces left rather than zero. The following change to radix-append seems to have resolved things:

(define final-vector-remainder
    (- 31 (modulo (- (radix-length radix-struct-write) 1) 32)))
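
A couple of quick spot checks on the new formula, just to convince myself:

; length 32: (- 31 (modulo (- 32 1) 32)) => 0  ; the final leaf is full
; length 33: (- 31 (modulo (- 33 1) 32)) => 31 ; one element in the final leaf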

Thanks as always for the tips anon. I think I'll start the relaxed rewrite this afternoon, hopefully it goes well.

22 2018-12-30 22:06

>>21

I think I tracked down the actual error to the way I'm calculating the remaining space in the current vector in final-vector-remainder. So what was happening was that when (radix-length radix-struct-write) equaled 32, the function would take the modulo 32 of this, which is 0, and subtract it from 32, resulting in it saying there were 32 spaces left rather than zero.

From >>20

To make matters worse, the final-vector-remainder computation yields 32 rather than 0 for a 32-aligned length

I can't confirm this; I've tried calling radix-copy-path separately and it seems to function as expected.

We can very easily resolve which one of us is right. Make a new radix of 64 elements. Append any other radix to it with the radix-append you posted. According to >>20 this will call vector-copy on the uninitialized third element of the root vector. In Guile this is immediately an error:

guile> (vector-copy (vector-ref (make-vector 32) 2))

Backtrace:
In standard input:
   3: 0* [vector-copy {#<unspecified>}]

standard input:3:1: In procedure vector-copy in expression (vector-copy (vector-ref # 2)):
standard input:3:1: Wrong type (expecting array): #<unspecified>
ABORT: (wrong-type-arg)
guile>

It would be very surprising if your Scheme variant was able to successfully call vector-copy on the default value that (make-vector 32) uses for the elements.

As stated in >>20 final-vector-remainder was merely one issue in radix-append. The other is the unconditional call to radix-copy-path.

23 2018-12-30 23:36

>>22
You're completely correct; I should really learn to read properly, sorry.

24 2019-01-02 03:54

I've made a little progress on the relaxed rewrite of the radix tree, and I started a new small project to allocate parliament seats proportionally based on a party-list Borda count. The election system is nearly done; I've got a paper I'm trying to decipher so that I can handle partial ballots, after which all I've got to do is clean up and document. I'll probably wait until it's complete before I post anything.
I think I'm winding down a bit, or at least refocusing. I've been thinking about computational things for a while now, and honestly, as much as I find it interesting like any great puzzle, I also find it a bit pointless. I was always a bit of a humanities guy at heart, so I'm going to try to find some ways to explore humanities topics (maybe something a bit deeper than an election system) with programming. I'm considering giving Mercury a look, or even just the logic programming section of SICP, as a humanities DSL. On the other hand maybe I'll just write a bit more and that'll lift my mood a bit, idk. That's about it for me for now.

Do you have any new years resolutions/plans?

25 2019-01-02 12:29

Happy New Year!

http://ix.io/1wVh

I just noticed that radix-slice is also broken, in the following ways. The 'end' parameter is used as an exclusive index, so the range test of (> end (radix-length radix-struct)) is correct, as is the copy-append! call. But this also means that radix-slice may be called with the length as 'end'. Since naive-append-vector! only increases the height of the radix on the (pow32? write-position) branch, a new radix of 32^k elements will be fully packed and have height k, where k>=1. The (= start 0) branch of radix-slice uses (= (vector-position end height) 0) to peel away levels from the radix and decrease the height. Therefore, a radix-slice call on a new radix of 1024 elements with start 0 and end 1024 will see that the peeling test passes and go down one level, because (vector-position 1024 2) is 0. On the next round it will make and return a new radix consisting solely of the first leaf vector of the source radix, with a height of 1 but claiming a length of 1024 elements. Radix-slice is similarly broken for all sources of size 32^k with k>=2, start=0 and end=length.

A second issue is that a radix-slice call on a new radix of 1024 elements with start 0 and end 32 will not pass the peeling test and will return a radix with 32 elements but height 2. This will break a subsequent naive-append-vector! because the (pow32? write-position) branch assumes the radix is fully packed and needs its height raised.

26 2019-01-03 10:35

http://ix.io/1wVh

Radix-delete has a leftover extract-5 as pointed out previously. It also uses 'end' in the level peeling test inside find-head when it obviously meant to use 'start'. Find-head is a near verbatim copy of the one inside radix-slice, so it should obviously be factored out -- the global comments claim:

I'm trying quite hard to [...] B) factor out functions when they can be used in multiple situations

More to the point, the radix-delete fixed with 'start' and 'vector-position' is still broken in ways similar to radix-slice. For example, when called on a new radix with length in [33...1024] and height 2, with start=32 and end=length, it will not pass the level peeling test and it will return a radix with 32 elements but height 2. At that point >>25 applies:

This will break a subsequent naive-append-vector! because the (pow32? write-position) branch assumes the radix is fully packed and needs its height raised.

The other class of errors can be seen when called on a new radix of 1024 elements with both start and end set to length. The first level peeling test will pass and the next round will return a new radix consisting solely of the first leaf vector of the source radix, with a height of 1 but claiming a length of 1024 elements.

And as a matter of elegance, on the (= end (radix-length radix-struct)) branch the (- (radix-length radix-struct) (- end start)) computation in the make-radix call can be entirely replaced with start.

27 2019-01-04 10:29

http://ix.io/1wVh

Radix-insert is implemented in terms of radix-append and radix-slice, both of which have the problems pointed out previously. In addition it uses 'radix-struct' on the second branch which is an unbound name at that point, and was meant to be 'radix-struct-write'. I think this about covers the public methods.

28 2019-01-04 16:02

>>25,26,27
Happy new year to you too anon. Thanks so much as always! Sorry for falling a bit behind; I was working on some other things for a moment and forgot to respond to your posts. It seems the last several functions are in a very sorry state, but I'm unsure if I should focus on resolving the last few issues with this implementation or try to work more on the rewrite. I'm curious what you think of this anon: the advantage of the rewrite is that the library gets O(log n) rather than O(n) append, delete, & insert, but this comes at the cost of consistent read performance, with radix-ref becoming 2x slower if the radix tree becomes unbalanced from a number of append, delete, & insert calls on it. That seems like it would make it both more difficult to reason about the performance of the radix tree, and in certain situations even reduce performance so I don't know if it's worth it or not.
Anyway, I ended up finishing my electoral system for now; the implementation seems solid but I need to rework some of my ideas, I think. When I first started writing it I used only abstractions in the functions: for example, instead of dictating that ballots were vectors I just had wrapper functions around the vector functions to use on ballots, and instead of dictating that candidates were numbers I had variables like last-candidate and first-candidate, and functions like next-candidate or candidate-less-than, etc. I decided this was a bit verbose despite having the nice quality of being able to change out the representation and scrapped it. Was this a bad choice? http://ix.io/1xv1
The only other plans I've got are to finish reading SICP (I got 500 pages in before I decided to do other things), even if I'm not going to do the remaining exercises yet, and some things not /prog/ related.

I hope you're doing well anon.

29 2019-01-04 21:54

>>27

I think this about covers the public methods.

I still can't believe you took this much time to correct my work, you're truly amazing. I think I'll reread your suggestions here to hopefully absorb them a bit more fully.

30 2019-01-05 03:53

>>28

I'm curious what you think of this

For a project that is not an externally imposed assignment you should work on whichever aspect holds your interest, provided that you do not advertise a project with outstanding issues as bug free to others.

it would make it both more difficult to reason about the performance of the radix tree, and in certain situations even reduce performance so I don't know if it's worth it or not.

If one class of operations loses a small constant factor while another class of operations gains an order of growth improvement, the change is probably worth it. Having said that, you have an outstanding double lookup issue in the append operation since >>16 so performance issues seem to receive a rather mercurial weight.

I decided this was a bit verbose despite having the nice quality of being able to change out the representation and scrapped it. Was this a bad choice?

As a matter of principle, replacing an abstraction layer and inlining a particular implementation is indeed a bad choice. You might do that in a C accelerator module only after profiling shows that your abstraction layer specifically constitutes a performance bottleneck.
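
To be concrete, the kind of wrapper layer you described costs only a few definitions and keeps the representation swappable; the names below are illustrative, not taken from your source.

(define (make-ballot rankings) ; rankings: a list with one rank per party
  (list->vector rankings))

(define (ballot-ref ballot party)
  (vector-ref ballot party))

(define (ballot-length ballot)
  (vector-length ballot))

Until profiling shows these wrappers are a bottleneck, callers never need to know that a ballot happens to be a vector.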

>>29

I still can't believe you took this much time to correct my work

I could only allocate a portion of my leisure time to this, which is why it took 13 days. It was still fast enough that there are several explained but still unresolved organization, efficiency and correctness issues.

31 2019-01-05 11:49

http://ix.io/1xv1

While I won't be touching the political stuff on a programming board, here are some programming observations:
- Both the 'down' and 'incomp' functions contain lambdas that repeat the same (vector-ref ballot party) lookup on each round. For the purposes of the lambdas this is a constant computation and should be done once, before the ballot-filter-sum invocations.
- Your *-sum function increments a 0-based accumulator by either 0 or 1 based on the suitability of each item. This is called a *-count.
- Both 'ballot-max' and 'ballot-filter-sum' are vector folds. Writing them in terms of a vector-fold would be cleaner (a sketch follows this list).
- The 'tally' function contains an inlined vector-inc!, which could be made explicit.
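
Roughly what I have in mind for the fold point, with an illustrative name and assuming a vector-fold whose combining procedure takes the state first and receives no index (the SRFI 133 convention): a filter-sum is then just a fold that adds 0 or 1 per element, which also makes the *-count naming natural.

(define (ballot-filter-count pred ballot) ; assumes (srfi 133) for vector-fold
  (vector-fold
    (lambda (count rank) (if (pred rank) (+ count 1) count))
    0
    ballot))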

32 2019-01-05 16:53

>>30

For a project that is not an externally imposed assignment you should work on whichever aspect holds your interest, provided that you do not advertise a project with outstanding issues as bug free to others.

I'm very interested in doing the right thing if I can, and I'm certainly not interested in wasting such a valuable chance to improve myself as the one you've presented with your criticism. I think I'll throw on some music, grab some caffeine, and make a bit of a programming day out of starting through the backlog today.

If one class of operations loses a small constant factor while another class of operations gains an order of growth improvement, the change is probably worth it.
As a matter of principle, replacing an abstraction layer and inlining a particular implementation is indeed a bad choice. You might do that in a C accelerator module only after profiling shows that your abstraction layer specifically constitutes a performance bottleneck.

I think I knew both of these things but I needed someone else to tell them to me so that I stop being lazy (that weird kind of lazy which is more work in the end) and just do them.

Having said that, you have an outstanding double lookup issue in the append operation since >>16 so performance issues seem to receive a rather mercurial weight.

I naively thought this issue, as well as a number of issues with the final few functions, would simply go away with the rewrite. When the tree is insufficiently balanced I'm of course going to be doing all the same operations, though.

I could only allocate a portion of my leisure time to this, which is why it took 13 days. It was still fast enough that there are several explained but still unresolved organisation, efficiency and correctness issues.

I certainly could've done better with keeping up with your suggestions; I'll try to resolve that a bit today. Even with it being spread a little over 13 days, time is so valuable, and you're beautiful anon.
>>31

While I won't be touching the political stuff on a programming board, here are some programming observations

I probably should've simply removed the original blurb for the sake of posting here, sorry about that.

Both the 'down' and 'incomp' functions contain lambdas that repeat the same (vector-ref ballot party) lookup on each round. For the purposes of the lambdas this is a constant computation and should be done once, before the ballot-filter-sum invocations.
Your *-sum function increments a 0-based accumulator by either 0 or 1 based on the suitability of each item. This is called a *-count.
Both 'ballot-max' and 'ballot-filter-sum' are vector folds. Writing them in terms of a vector-fold would be cleaner.
The 'tally' function contains an inlined vector-inc!, which could be made explicit.

I finished all but correcting ballot-max and tally this morning; these I still need to consider. I can't find documentation on vector-inc!, & I'm trying to figure out how to get an index out of vector-fold nicely, or otherwise a different procedure to do it. I've also started adding back the abstractions and thought of a new weight function that I think might give more desirable results; I haven't added it yet, however. Anyway, on to the radix library for the rest of the day: http://ix.io/1xzN Thanks as always anon.

33 2019-01-05 23:54

>>32

I can't find documentation on vector-inc!

There is no standard vector-inc! function that I know of. I simply meant your own function that would do as the name suggests. You are already using your Scheme variant's vector-for-each, which does not take an index argument, in contrast with the SRFI-43 version: https://srfi.schemers.org/srfi-43/srfi-43.html#vector-for-each
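
Concretely, I meant nothing fancier than this (illustrative; plain (scheme base) is enough):

(define (vector-inc! vec i)
  (vector-set! vec i (+ (vector-ref vec i) 1)))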

I'm trying to figure out how to get an index out of vector-fold nicely, or otherwise a different procedure to do it.

The idea is to use sufficient information as the fold accumulator to comfortably perform the computation, then wrap the fold in an extractor that gets that part of the accumulator that you wish to return. When the information to be returned is insufficient on its own to perform the fold, the standard technique is to make the accumulator a superset of the final return.
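
As a sketch of that technique applied to getting an index out of a fold, again assuming a vector-fold that passes the state first and no index (the SRFI 133 convention) and using illustrative names: the accumulator is a (best-value best-index next-index) list, a superset of the index actually wanted, and the wrapper extracts only the index at the end.

(define (vector-argmax vec) ; assumes (scheme cxr) for caddr, (srfi 133) for vector-fold, and a non-empty vec
  (cadr
    (vector-fold
      (lambda (state x)
        (let ((best (car state)) (best-i (cadr state)) (i (caddr state)))
          (if (> x best)
            (list x i (+ i 1))
            (list best best-i (+ i 1)))))
      (list (vector-ref vec 0) 0 0)
      vec)))

; (vector-argmax (vector 3 7 5)) => 1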

http://ix.io/1xzN

It is unclear to me why the new 'ranked-parties-below' and 'incomparable-parties' take both a 'party' and a 'rank' argument. If the rank is computed externally, there does not seem to be a need for 'party'. If 'party' is passed in, the rank can be extracted locally, prior to the vector-count.

34 2019-01-06 04:08

>>33

There is no standard vector-inc! function, that I know of. I simply meant your own function that would do as the name suggests.

Oh that makes sense, sorry. I'll work on this in the morning.

You are already using your Scheme variant's vector-for-each which does not take an index argument, in contrast with the SRFI-43 version: https://srfi.schemers.org/srfi-43/srfi-43.html#vector-for-each

vector-for-each is an R7RS standard function, and SRFI-43 was replaced with SRFI-133 because the standard broke compliance with SRFI-43: https://srfi.schemers.org/srfi-133/srfi-133.html#vector-for-each That being said, it's still basically my Scheme variant's version, because there aren't all that many implementations of R7RS :(

The idea is to use sufficient information as the fold accumulator to comfortably perform the computation, then wrap the fold in an extractor that gets that part of the accumulator that you wish to return. When the information to be returned is insufficient on its own to perform the fold, the standard technique is to make the accumulator a superset of the final return.

Ah, I did try this, it just felt a bit unsatisfying, like there was something really good lurking just below the surface. Oh well, I'll do this in the morning as well then.

It is unclear to me why the new 'ranked-parties-below' and 'incomparable-parties' take both a 'party' and a 'rank' argument. If the rank is computed externally, there does not seem to be a need for 'party'. If 'party' is passed in, the rank can be extracted locally, prior to the vector-count.

This is now fixed; it was just silliness, as you might imagine.

I didn't make quite as much progress as I would've liked today; I know for a fact I introduced at least one new bug while resolving only 3 of your older suggestions and 2 I'd been meaning to do. I'll have to give it another go tomorrow: http://ix.io/1xBR

35 2019-01-07 04:04

>>34

vector-for-each is an R7RS standard function, and SRFI-43 was replaced with SRFI-133 because the standard broke compliance with SRFI-43

Right, you're using R7RS. I'm still living in the R5RS world. Thanks for the correction.

http://ix.io/1xBR

Some preliminary observations:
- The focus system seems designed to accelerate near-sequential lookups or iteration. Is this your trick? How was it chosen over, say, a doubly linked list running through the leaf vectors?
- The rationale behind the new-start computation of the let* of focus-set! is not immediately obvious to me. Could you explain a bit? Why is the multiplication factor 32 rather than node-size?
- I think a note would be appropriate somewhere around focus-set! that you are choosing to compute focus-start and focus-end as the maximum possible range of positions that the focus could cover, but those positions are not all guaranteed to actually exist in the current radix.
- Naive-append-vector! looks much more manageable now. I do have to wonder though about the make-radix call in the proc call of the else branch, when the let* has new-radix.
- As explained in >>20 about radix-copy-path, radix-ref-height must also not be called with a position that is not 32-aligned at the end of the radix. Radix->list-iter does not respect this, and in >>19 this was almost harmless because the radix was stateless and the vector-copy handled the empty slice. But now the radix is stateful so the modulo piece should become the empty list immediately when the modulo is 0, without a call to (radix-ref-height radix-struct length 1).
- The final-vector-remainder of radix-append regressed from >>21.

36 2019-01-07 04:19

Correction:

must also not be called with a position that is not 32-aligned at the end of the radix.

the second 'not' is to be elided:

must also not be called with a position that is 32-aligned at the end of the radix.

37 2019-01-07 16:35

>>35

The focus system seems designed to accelerate near-sequential lookups or iteration. Is this your trick?

Yep, the idea is to optimise for locality of reference by reducing indirection to recently accessed sub-radixes. I think this is what Scala does for its vectors as well.

How was it chosen over, say, a doubly linked list running through the leaf vectors?

It seems like a doubly linked list on the leaves would be better. It's fewer indirections to neighbours when the tree is tall, and you don't have to store the height. Perhaps when doing concurrent unordered operations the focus system is fewer indirections, and that's why Scala did it this way? But why would you care about locality then? I'll probably redo it as a doubly linked list as you suggest.

The rationale behind the new-start computation of the let* of focus-set! is not immediately obvious to me. Could you explain a bit? Why is the multiplication factor 32 rather than node-size?

This was the mistake that was causing radix->list to vomit random data all over the place, nice!

I think a note would be appropriate somewhere around focus-set! that you are choosing to compute focus-start and focus-end as the maximum possible range of positions that the focus could cover, but those positions are not all guaranteed to actually exist in the current radix.

Okay done.

Naive-append-vector! looks much more manageable now. I do have to wonder though about the make-radix call in the proc call of the else branch, when the let* has new-radix.

Thanks, I think there is still a good bit of room for improvement there; the double make-radix is silliness as usual though, and it's now fixed.

As explained in >>20 about radix-copy-path, radix-ref-height must also not be called with a position that is 32-aligned at the end of the radix. Radix->list-iter does not respect this, and in >>19 this was almost harmless because the radix was stateless and the vector-copy handled the empty slice. But now the radix is stateful so the modulo piece should become the empty list immediately when the modulo is 0, without a call to (radix-ref-height radix-struct length 1).

I haven't dealt with this yet but I will soon.

The final-vector-remainder of radix-append regressed from >>21.

Oops, I moved to an old save because I was working on the rewrite on the current branch. I really need some sort of version control.

Yesterday I just ended up working on the election system. I came up with a different weighting function than the one I thought would work two days ago, and I ended up resolving many of my problems by using fancy vector functions; the main thing left is to find an efficient means of resolving ballot-level ties: http://ix.io/1xJM I'll try to focus on the radix library today.

38 2019-01-07 19:01

>>37

Yep, the idea is to optimise for locality of reference by reducing indirection to recently accessed sub-radixes. I think this is what Scala does for its vectors as well.

I forgot to mention that the idea was to find a general optimisation for >>16; that's why I did this now.

39 2019-01-08 00:18

>>37

I'll probably redo it as a doubly linked list as you suggest.

This wasn't a design suggestion. I was merely curious about how this focus trick came to be. It's a nice trick.

http://ix.io/1xJM

- Ballot-max exists solely as a comment.
- I'm not sure about the direction of comparison in ranked-parties-below.
- On the consequent of (vector-every zero? results) you are dynamically computing the sum of 1 and 0, and doing it for each 0.

40 2019-01-08 04:47

>>39

This wasn't a design suggestion. I was merely curious about how this focus trick came to be. It's a nice trick.

Oh neat, I guess this is because of the cost of creating and maintaining the doubly linked list? That makes sense.

Ballot-max exists solely as a comment.
On the consequent of (vector-every zero? results) you are dynamically computing the sum of 1 and 0, and doing it for each 0.

Fixed and fixed.

I'm not sure about the direction of comparison in ranked-parties-below.

It's functioning correctly; I'm just really awful at naming things, which is probably why you said that.

I believe I've now corrected everything you've suggested but the hard coding, which I will likely put off until I want to implement either a set or a map using the same underlying library: http://ix.io/1xMb I've got a few more things I know of to do on this part before I consider either moving on to adding enough functionality to make it usable or continuing on the relaxed rewrite.

41 2019-01-08 12:50

http://ix.io/1xMb

- This is a nice upgrade, but there's really no need to mention me in the comments.
- Find-head now uses 'position' as an exclusive end index. This is fine for its purpose, but it also means it shouldn't be called with position=0.
- Radix-slice disallows find-head calls with end=0, although the second error names radix-delete.
- Radix-delete allows find-head calls with start=0, which is a problem both for find-head and because deleting everything would produce an empty radix. Also, the second error should be for a strict test, since equality means deleting nothing and should short-circuit to an identity operation.
- You have both (make-radix #() 0 0) and (make-radix #() 0 1).
- Radix-append's final-vector-remainder is 32 for a 32-aligned length. For a new radix of 64 elements radix-copy-path will be called with 63. This will copy the path to the second full leaf vector, then call the lambda with 31. This will then call vector-copy! at 32 on a 32-vector with copy count 32.

42 2019-01-09 05:47

>>41

This is a nice upgrade, but there's really no need to mention me in the comments.

If you'd actually like to be removed I can just change it to "anonymous contributor" and remove the context. I certainly don't want to remove the testament to the work you've done, even if only you and I know who it's referring to. It's just a means to give credit where it's due and express my appreciation for all you've done for me. The name spiral-anon reminds me to ask if you've had the time to work on anything for yourself lately?

Find-head now uses 'position' as an exclusive end index. This is fine for its purpose, but it also means it shouldn't be called with position=0.
Radix-slice disallows find-head calls with end=0, although the second error names radix-delete.
Radix-delete allows find-head calls with start=0, which is a problem both for find-head and because deleting everything would produce an empty radix. Also, the second error should be for a strict test, since equality means deleting nothing and should short-circuit to an identity operation.
You have both (make-radix #() 0 0) and (make-radix #() 0 1).
Radix-append's final-vector-remainder is 32 for a 32-aligned length. For a new radix of 64 elements radix-copy-path will be called with 63. This will copy the path to the second full leaf vector, then call the lambda with 31. This will then call vector-copy! at 32 on a 32-vector with copy count 32.

I think I've fixed or documented all of these now, along with a whole host of other changes. I'm kind of exhausted, so I might have a slower day tomorrow: http://ix.io/1xRN
Other than this I watched a video lecture on Backus's "function-level programming", which was pretty interesting.

43 2019-01-09 12:55

>>42

If you'd actually like to be removed I can just change it to "anonymous contributor" and remove the context.

It's your source code so you can mention whomever you want. I simply meant that it's not necessary to give credit for a few observations produced in my leisure time.

The name spiral-anon reminds me to ask if you've had the time to work on anything for yourself lately?

I've been trying to make a flower crown out of composing sine waves in cairo. It's not going well.

http://ix.io/1xRN

- I see you changed your mind about the 'proc' of naive-append-vector!. As one of the benefits, the copy-append-recur! helper is now iterative, as is vector->radix-recur.
- The final-vector-remainder of radix-append is 32 for a 32-aligned length. While the lambda no longer attempts to overflow its vector, the make-radix call receives a length that is incremented regardless of the lambda's decision, so the length does not match the no-copy case. The copy-append! also receives a read-start that does not match the no-copy case, so the first 32 radix-2 elements are skipped when (radix-length radix-1) is 32-aligned.
- In radix-append, the second argument of the make-radix call contains an inlined version of the fifth argument of the vector-copy! call, which should thus be computed once and reused.
- The second branch of radix-slice retains "or equal to", and the third branch test has an extra 0. The full-slice case should also be handled faster than via find-head.
- With the proliferation of (make-radix #() 0 0) calls, they should be factored out.
- The second branch of radix-delete retains "or equal to". A find-head call can still be triggered with start=0. The full-delete case should also be handled.
- With the advent of find-head it seems that a global decision should be made on whether an empty radix is a first-class radix or just an internal convenience targeting the first branch of naive-append-vector!. If it is decided that it is first-class, then all radix consumers should be prepared to receive it, among them radix-append.
- A radix-insert call with position=0 will cause problems for the current radix-append.

44 2019-01-09 23:45

>>42

http://ix.io/1xRN
fold, map, filter

- There seems to be a lot of (radix-length radix) here.
- It seems that the consequent of the quick exit test of radix-fold would also work for length 32.
- If you switch radix-fold-iter's leaf-length to (min 32 (- length-of-radix position)) you can avoid the modulo call.
- When you have an 'if' block with 'begin's on both branches, it's better to use a 'cond' with a test and an 'else', to avoid the 'begin's.
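
For example (illustrative only, not from your source), the same two-branch block written first with if/begin and then with cond, whose clauses carry an implicit begin:

(define (report-parity n)
  (if (even? n)
    (begin (display n) (display " is even") (newline))
    (begin (display n) (display " is odd") (newline))))

(define (report-parity* n) ; same behaviour, no explicit begins
  (cond
    ((even? n) (display n) (display " is even") (newline))
    (else (display n) (display " is odd") (newline))))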

45 2019-01-10 16:15

>>43,44

I've been trying to make a flower crown out of composing sine waves in cairo. It's not going well.

That sounds like it will be really neat; I'm sorry it's not going well. I could do research-assistant-like jobs if you need anything looked into for this, maybe.

I see you changed your mind about the 'proc' of naive-append-vector!.

Yes, now that I'm just mutating the struct and I was able to remove the storage variable, it seems silly to pass a function just to call it on one argument; honestly, the reason I had it that way in the first place is that I have a bit of an aversion to multiple return values when they're not wrapped up in a struct. This was one of the reasons I felt unsatisfied by the fold in 'vector-max' as well.

With the advent of find-head it seems that a global decision should be made on whether an empty radix is a first-class radix or just an internal convenience targeting the first branch of naive-append-vector!. If it is decided that it is first-class, then all radix consumers should be prepared to receive it, among them radix-append.

The plan is for it to be first-class, like vectors and lists in Scheme; I've been slowly adding the feature to functions over the past couple days. radix-append has now been given this functionality as well.

I believe I've resolved all these issues (many this morning), and other than that it was a slow day as promised: http://ix.io/1xY0

46 2019-01-10 20:43

>>45
I've missed a few things actually.

47 2019-01-10 23:56

>>45

I could do research-assistant-like jobs if you need anything looked into for this, maybe.

Thanks, but this is a bit of an office contest type thing over here, so that wouldn't be fair.

I have a bit of an aversion to multiple return values when they're not wrapped up in a struct

Since the struct and an ad hoc representation like a list would be isomorphic as far as their information content is concerned, I take it that this is an aesthetic preference type of aversion.

I've been slowly adding the feature to functions over the past couple days
(many this morning)
I've missed a few things actually

Obviously there is no obligation to fix any particular issue in any particular version, such as the next one to be posted. Observations may even be overruled and entirely discarded since, again obviously, the code writer is the decision maker.

48 2019-01-11 11:13

http://ix.io/1xY0

- Why call internal-radix-constructor with hardcoded -1 for null-radix when make-radix handles this via undef? If you want null-radix to be a singleton rather than a constructor you can move it after make-radix.
- (radix->vector-iter result length length) in radix->vector-iter is simply result.
- There's a lot of (vector-length vector) recomputation in vector->radix.
- The leaf-length computation of vector->radix-iter duplicates work already done in vector-copy-32, which could return the vector and length as a cons.
- radix-copy-path calls 'proc' with the output of vector-position, which has a range of [0...31], so the 'unless' test of radix-append needs another look.

49 2019-01-11 17:51

>>47

Thanks, but this is a bit of an office contest type thing over here, so that wouldn't be fair.

Oh man, that sounds like fun, good luck then anon.

Since the struct and an ad hoc representation like a list would be isomorphic as far as their information content is concerned, I take it that this is an aesthetic preference type of aversion.

Yep, I think record types tend to be implemented as vectors, but it's really just some scars from learning C early on (you've seen many of them).

Obviously there is no obligation to fix any particular issue in any particular version, such as the next one to be posted. Observations may even be overruled and entirely discarded since, again obviously, the code writer is the decision maker.

Of course, it does seem like good practice to prioritise bug fixes when they're first discovered, though, so they don't end up confusing you at a higher level of abstraction. This actually reminded me that at some point I need to get a bit better about writing tests. On overruling your observations, they've all just been completely correct; of the only two or three times I considered it, one or two were due to my misunderstanding and the other was me focusing on micro-optimisations instead of building up an abstraction that would let me get a big-O gain later down the line.
>>48

There's a lot of (vector-length vector) recomputation in vector->radix.

I'm really bad about this, it's a pretty low cost because scheme has enforced vector bounds, but that doesn't mean it's free. It's also just not very aesthetic.

The leaf-length computation of vector->radix-iter duplicates work already done in vector-copy-32, which could return the vector and length as a cons.

ugh... okay done. (it's a good catch honestly)

radix-copy-path calls 'proc' with the output of vector-position, which has a range of [0...31], so the 'unless' test of radix-append needs another look.
Why call internal-radix-constructor with hardcoded -1 for null-radix when make-radix handles this via undef? If you want null-radix to be a singleton rather than a constructor you can move it after make-radix.
(radix->vector-iter result length length) in radix->vector-iter is simply result.

fixed, fixed, & fixed. Along with the remainder from your last post. http://ix.io/1y3G

50 2019-01-12 05:29

>>49
I didn't work much on the library today other than the bug fixes this morning, and I think I'm going to take a bona fide break tomorrow from programming generally. I did finally get around to responding to the Cartesian Product thread though, and I think my solution turned out pretty nice.

51 2019-01-12 11:37

http://ix.io/1y3G

- Radix-append looks good now. One thing I think has become unnecessary is the (> length-1 0) part of the final-remainder test, since empty radices are handled entirely separately.
- If you write a vector-fold that uses a vector and a usable length, the leaf-* helpers from "Iterators" become vector folds and they only need the accumulating lambda, without repeating the iteration code (a sketch follows this list).
- Similarly, the *-iter helpers from "Iterators" are all instances of a level-1-fold that provides the level-1 vectors and their usable length to the accumulating function.
- Are you sure you want to use 'value' as the next accumulator for leaf-cumulate-iter rather than the return of 'proc'?
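
A possible shape for that vector-fold; the name and argument order are only suggestions, not anything from your code:

(define (vector-fold proc acc vec end)
  ; folds 'proc' over the items of 'vec' at indices 0 .. end-1, where 'end'
  ; is the usable length, which may be smaller than (vector-length vec)
  (let loop ((i 0) (acc acc))
    (if (= i end)
        acc
        (loop (+ i 1) (proc acc (vector-ref vec i))))))

; summing the first three slots of a partially filled leaf:
; (vector-fold + 0 #(1 2 3 0 0) 3) => 6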

52 2019-01-12 12:35

>>50

I did finally get around to responding to the Cartesian Product thread though, and I think my solution turned out pretty nice.

While I completely agree with your assessment, you are returning the tree, not the list:

guile> (cartesian-product '(1 2 3) '(4 5 6) '(7 8 9))
((((1 4 7) (1 4 8) (1 4 9)) ((1 5 7) (1 5 8) (1 5 9)) ((1 6 7) (1 6 8) (1 6 9))) (((2 4 7) (2 4 8) (2 4 9)) ((2 5 7) (2 5 8) (2 5 9)) ((2 6 7) (2 6 8) (2 6 9))) (((3 4 7) (3 4 8) (3 4 9)) ((3 5 7) (3 5 8) (3 5 9)) ((3 6 7) (3 6 8) (3 6 9))))

and this is your call pattern:

(define (cartesian-product . An)
  ;; note: 'values' here shadows the standard procedure; it accumulates the
  ;; elements chosen so far and, at the leaf, builds one complete tuple
  (let cartesian-product-iter
    ((values (lambda (x) x)) (An An))
    ;; trace each call so the call pattern below is visible
    (simple-format #t "c-p-i ~A" An)
    (newline)
    (if (null? An)
      (values '())
      (map
        (lambda (x)
          (cartesian-product-iter
            (lambda (y) (values (cons x y)))
            (cdr An)))
        (car An)))))

guile> (cartesian-product '(1 2 3) '(4 5 6) '(7 8 9))
c-p-i ((1 2 3) (4 5 6) (7 8 9))
c-p-i ((4 5 6) (7 8 9))
c-p-i ((7 8 9))
c-p-i ()
c-p-i ()
c-p-i ()
c-p-i ((7 8 9))
c-p-i ()
c-p-i ()
c-p-i ()
c-p-i ((7 8 9))
c-p-i ()
c-p-i ()
c-p-i ()
c-p-i ((4 5 6) (7 8 9))
c-p-i ((7 8 9))
c-p-i ()
c-p-i ()
c-p-i ()
c-p-i ((7 8 9))
c-p-i ()
c-p-i ()
c-p-i ()
c-p-i ((7 8 9))
c-p-i ()
c-p-i ()
c-p-i ()
c-p-i ((4 5 6) (7 8 9))
c-p-i ((7 8 9))
c-p-i ()
c-p-i ()
c-p-i ()
c-p-i ((7 8 9))
c-p-i ()
c-p-i ()
c-p-i ()
c-p-i ((7 8 9))
c-p-i ()
c-p-i ()
c-p-i ()
53 2019-01-12 14:51

>>52
Shouldn't it return a tree so that you can keep track of where each product belongs? Should that just be done by the relative position of the products in a list?
That call pattern just confuses me even more. There were cases where, when the data was small to medium sized, the memoized function was even slower than the non-memoized version, and when the value was large they were equivalent. I can confirm that the memoize function is functioning as expected. (although due to the curried function I'm not actually memoizing the correct values)
>>51

If you write a vector-fold that uses a vector and a usable length, the leaf-* helpers from "Iterators" become vector folds and they only need the accumulating lambda, without repeating the iteration code.
Similarly, the *-iter helpers from "Iterators" are all instances of a level-1-fold that provides the level-1 vectors and their usable length to the accumulating function.

Oh man, that's super sweet, I was thinking I was going to have to use macros for this for some reason.

Radix-append looks good now. One thing I think has become unnecessary is the (> length-1 0) part of the final-remainder test, since empty radices are handled entirely separately.
Are you sure you want to use 'value' as the next accumulator for leaf-cumulate-iter rather than the return of 'proc'?

Anyway, I'll make good on my promise to myself and work on these tomorrow. Thanks for the help as always though, anon.

54 2019-01-13 00:22

>>53

Shouldn't it return a tree so that you can keep track of where each product belongs?

You can return whatever you like, but this way you are shifting the burden of tree iteration onto the consumer of your output. It also seems curious to state how you "avoid any sort of reverse or append of the lists" when you produce a tree.

That call pattern just confuses me even more

It shows that you are making 27 leaf calls to produce a collection of 27 items, so there's nothing to memoize since the lambdas contain information essential to the return value.
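
For comparison only, a flat-list cartesian product can be fairly small; this sketch pays for the flat output with an append, so it is not a drop-in for your no-append goal:

(define (flat-cartesian-product . lists)
  (if (null? lists)
      '(())                                ; a single empty tuple
      (let ((rest (apply flat-cartesian-product (cdr lists))))
        (apply append                      ; flatten one nesting level
               (map (lambda (x)
                      (map (lambda (tail) (cons x tail)) rest))
                    (car lists))))))

; guile> (flat-cartesian-product '(1 2) '(3 4))
; ((1 3) (1 4) (2 3) (2 4))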

55 2019-01-13 04:16

You can return whatever you like, but this way you are shifting the burden of tree iteration onto the consumer of your output. It also seems curious to state how you "avoid any sort of reverse or append of the lists" when you produce a tree.

Ah, well I guess I was trying to return a value for a human rather than another function. My naive implementation used a reverse rather than a curried function, and append was mentioned in the thread by a reply which I misinterpreted as attempting to return the same thing I was.

It shows that you are making 27 leaf calls to produce a collection of 27 items, so there's nothing to memoize since the lambdas contain information essential to the return value.

Yes I understand that, but I'm even more confused than before because the (timed) performance is equivalent to the memoized version despite this.

56 2019-01-13 11:02

>>55

but I'm even more confused than before because the (timed) performance is equivalent to the memoized version despite this.

Given your claim of producing a collection of 2000^5000 items, which dwarfs the number of elementary particles in our universe, your methodology is more than a little suspect.

57 2019-01-14 02:24

>>56

Given your claim of producing a collection of 2000^5000 items, which dwarfs the number of elementary particles in our universe, your methodology is more than a little suspect.

I think one of the primary things holding me back at the moment is not adequately testing things; it seems both my test and one of my functions weren't checked at all. I'll make this a priority. Also, it seems my day off turned into two; I'll focus up tomorrow.

58 2019-01-15 03:07

>>57

I'll focus up tomorrow.

I lied, perhaps three's the lucky number.

59 2019-01-16 01:18

It wasn't a very productive day but at least I did something: http://ix.io/1yqP
Other than this the only thing I did today was start to lay out a plan for migrating everything into emacs. I think I'll spend the remaining hours of the day reading though.

60 2019-01-17 04:01

>>59
Another slow day; the small changes made today should mean that the library can run on every standard-compliant R7RS implementation though, which is nice. http://ix.io/1yvE
I also finished skimming my implementation's manual for implementation specifics in standard-compliant functions, mostly reading about supported SRFIs today. I also got about halfway through the revised report, which I'll hopefully be able to finish tomorrow morning before I go out of town for a few days.

61 2019-01-18 12:33

http://ix.io/1yvE

- If null-radix? exists it might as well be used in radix-append.
- Gen-leaf-fold is not a leaf fold, it doesn't use 'leaf' at all. It folds an inlined iota. Since 'proc' will inevitably use 'pos' to extract a leaf item, and since the purpose of the fold is to factor out common work, this extraction should be in the fold and the item passed to 'proc'.
- Similarly, the level-1 fold also folds an inlined iota. It should extract the level-1 vector from the radix and pass it to 'proc-2'. It should also remove its current return asymmetry by ditching 'proc-1' and the first branch entirely.
- The lambda of leaf-for-each should return 'acc' to suppress the return of 'proc'.
- The 'begin' in the lambda of leaf-map is pointless.
- Once the level-1 fold stabilizes it should be used to drastically simplify radix->vector.

62 2019-01-22 18:19

>>61
Sorry for not replying, I was quite busy these last few days. I ended up finishing the standard the other day but I haven't done any programming other than writing some emacs configuration which is what I'll be continuing to do today.

63 2019-02-12 01:57

>>62
I saw your flower crown, it's quite beautiful and certainly a step up in complexity, it alone should've been reason to post here, I'm sorry about that. I've only been reading non-programming books since my last post, I'll take a break to do some programming after I finish the Iliad later this week though. I hope you're doing well spiral-anon.

64 2019-02-12 10:54

>>63

I saw your flower crown, it's quite beautiful

Thanks, Anon.

I hope you're doing well spiral-anon.

I am, thanks. I hope you're doing well, too.

65 2019-02-13 02:13

>>64

I am, thanks. I hope you're doing well, too.

that's good, I'm also doing quite well.

66 2019-02-15 22:13

I finished the Iliad and I'm going to move on to my next book, but I'm also going to try to be a bit more balanced about things. I wrote a different weighting procedure that solved the compromising and burying problems I was having, although I discovered that it really encourages party splitting, so I need to look into that. I tried to wrap my head around how to memoize the cartesian product function I made, to no avail. Most importantly, I implemented the suggestions you mentioned in >>61, except for the radix->vector one, because I didn't get around to it today, and the radix-fold suggestion, because I'd like it to return #<undef> rather than the accumulator and I don't know how to do that yet. I broke radix-map in the process though, and I still haven't used level-1 and gen-leaf-fold to their full potential yet. Not a massively productive day, but it's a day where I have lots of free time to work on my other studies and I got something done in CS, which I'm pretty happy about. http://ix.io/1B95 (oh, also some formatting changes due to emacs wizardry)

67 2019-02-16 21:21

>>66
oops, I see what broke radix-map now, carelessness.

68 2019-02-17 00:48

http://ix.io/1B95

- You can't return the vector-set! from leaf-map's lambda. That would lose the acc, which is the new (make-node).
- A function having an undefined return value means that the return is not specified, so the caller may not assume anything about it. It does not mean that the function guarantees to return anything in particular, like #<unspecified> or #<undef> -- that would actually make it have a defined return value.

69 2019-02-17 15:17

>>68

You can't return the vector-set! from leaf-map's lambda. That would lose the acc, which is the new (make-node).

I found this one, I was just being careless.

A function having an undefined return value means that the return is not specified, so the caller may not assume anything about it. It does not mean that the function guarantees to return anything in particular, like #<unspecified> or #<undef> -- that would actually make it have a defined return value.

Yes I know, I'd just rather be consistent with the surrounding system if at all possible (it might not be without using implementation-specific extensions).

Mostly just aesthetic changes (I think I'm done with these) and fixing radix-map yesterday: http://ix.io/1Bix

70 2019-02-18 00:58

http://ix.io/1Bix

- Leaf-cumulate-iter continues to suffer from the last point of >>51.
- Leaf-cumulate is an instance of gen-leaf-fold with a pair holding the 'proc' output and the new-leaf as accumulator, as seen on the alternate branch. There's no need to repeat the iteration code (a toy example of the pair accumulator follows this list).
- Radix-cumulate-iter and its containing 'if' are ripe for replacement by level-1-fold.
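
A self-contained toy of the pair-as-accumulator pattern, unrelated to the radix internals, showing one pass carrying two results at once:

(define (sum-and-reverse lst)
  ; the accumulator is a pair: its car is the running sum and its cdr is
  ; the input list reversed so far
  (let loop ((lst lst) (acc (cons 0 '())))
    (if (null? lst)
        acc
        (loop (cdr lst)
              (cons (+ (car lst) (car acc))
                    (cons (car lst) (cdr acc)))))))

; (sum-and-reverse '(1 2 3)) => (6 3 2 1)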

71 2019-02-18 01:00

http://ix.io/1Bix

Radix-fold-right can be unified with the normal fold by writing a function that folds an extracting map over a range iteration. A range would have start, step and end values, and would support negative steps. The extractor would receive the source object and the current index. The folding procedure would receive the accumulator, the current index and the extractor output. These steps would be combined to avoid the need to store an intermediate map, since map is eager in Scheme. This would make the normal and reversed folds differ only in the upward and downward ranges. It will also allow the unification of gen-leaf-fold and level-1-fold, since both are instances of a range-extract-fold with different ranges and extractors. You would also get some new things for free, like a radix-reversed-for-each.
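
A possible shape for the range-extract-fold, written purely from the description above and with made-up names; 'end' is exclusive and a negative step walks downward:

(define (range-extract-fold proc acc source extract start step end)
  ; visits the indices start, start+step, ... stopping before 'end';
  ; 'extract' pulls the item out of 'source' at the current index, and
  ; 'proc' receives the accumulator, the index and the extracted item
  (let loop ((i start) (acc acc))
    (if (if (negative? step) (<= i end) (>= i end))
        acc
        (loop (+ i step) (proc acc i (extract source i))))))

; the same vector folded upward and downward:
; (range-extract-fold (lambda (acc i x) (cons x acc)) '()
;                     #(1 2 3) vector-ref 0 1 3)    => (3 2 1)
; (range-extract-fold (lambda (acc i x) (cons x acc)) '()
;                     #(1 2 3) vector-ref 2 -1 -1)  => (1 2 3)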

72 2019-02-18 01:03

http://ix.io/1Bix

Radix-filter has two interleaved responsibilities. It produces a stream of items by filtering, and it consumes the stream with a radix builder. If you separate production and consumption, and interleave them in a function that links up a producer and a consumer, the radix building consumer becomes a fold. This can be written once and used to unify radix-filter, vector->radix and list->radix by using different producers. If you choose this path, copy-append! will also belong here by using an appending radix builder, which you already have. Then >>16 can be addressed by a generic buffering producer that wraps the real producer. The producer+consumer design can then be unified with range-extract-fold by making the fold a consumer and range-extract a producer.
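
One possible shape for the producer/consumer split, with a producer as a plain thunk that returns the eof object when exhausted; all names here are illustrative, not anything from the library:

(define (drive producer consume acc)
  ; pulls items from 'producer' until it returns the eof object,
  ; folding each one into 'acc' with 'consume'
  (let loop ((acc acc))
    (let ((item (producer)))
      (if (eof-object? item)
          acc
          (loop (consume acc item))))))

(define (list->producer lst)
  ; a producer backed by a list
  (lambda ()
    (if (null? lst)
        (eof-object)
        (let ((x (car lst)))
          (set! lst (cdr lst))
          x))))

(define (filtering pred producer)
  ; wraps a producer, passing through only the items satisfying 'pred'
  (lambda ()
    (let loop ((item (producer)))
      (if (or (eof-object? item) (pred item))
          item
          (loop (producer))))))

; (drive (filtering odd? (list->producer '(1 2 3 4 5)))
;        (lambda (acc x) (cons x acc))
;        '())
; => (5 3 1)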

73 2019-02-18 23:36

>>70-72
Oh man! Looks like I've got a programming day in my future! The past two days what I did was start to clean up my init.el, read a bit about literate programming, and implement a little program to convert between hsl and rgb & to measure contrast between two rgb colors: http://ix.io/1Brq I might look into generating little color swatches at some point. The goal with these random programs is to help me make a pseudo-scientific (knowledge of rods/cones & some studies) color palette to avoid eye strain.
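
For reference, one common definition of contrast between two rgb colors is the WCAG 2 relative-luminance ratio; a sketch of that definition (not necessarily what the program at the link does), with channels assumed to be 0..255:

(define (srgb->linear c8)
  ; linearise one 0..255 sRGB channel per the WCAG 2 definition
  (let ((c (/ c8 255.0)))
    (if (<= c 0.03928)
        (/ c 12.92)
        (expt (/ (+ c 0.055) 1.055) 2.4))))

(define (relative-luminance r g b)
  (+ (* 0.2126 (srgb->linear r))
     (* 0.7152 (srgb->linear g))
     (* 0.0722 (srgb->linear b))))

(define (contrast-ratio rgb-1 rgb-2)
  ; each argument is a list (r g b); the result ranges from 1 to 21
  (let ((l1 (apply relative-luminance rgb-1))
        (l2 (apply relative-luminance rgb-2)))
    (/ (+ (max l1 l2) 0.05)
       (+ (min l1 l2) 0.05))))

; (contrast-ratio '(255 255 255) '(0 0 0)) => 21.0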

74 2019-02-21 13:09

>>73
The past few days I've only been doing 'dev ops' (I really hate this kinda thing), doing some research on type design, and researching color processing further:

;; Primarily based on doi:10.1167/11.5.8 and the meta-analysis done there on
;; the state of the art. They come to the conclusion that .2 degrees is the
;; critical point for fonts, that is to say the smallest size at which the
;; majority of readers can read at maximum speed. In addition to this they
;; reviewed the fonts of print publications, finding that the majority lie
;; within the .22 to .25 degree range.

;; Programmers have several unique considerations when it comes to typography.
;; Due to the nature of programming, having more context on screen at a time
;; reduces the amount of scrolling and searching necessary to understand any
;; particular function. For this reason many programmers prefer smaller
;; typefaces than they use in their other applications. Smaller typefaces are
;; more legible when they have larger x-height ratios, which is something
;; programmers also tend to like even if they don't necessarily recognise it.
;; Programmers also tend to use monospace fonts, which make the structure of
;; programs clearer.

;; The last thing of note is the history of digital type design, and an
;; associated complaint. Digital type design was historically done on a raster
;; grid, unlike its free-flowing print cousin, due to the limited resolution of
;; screens and the physical capabilities of computers at the time. Even vector
;; fonts designed for digital usage, the so-called 'web safe fonts' such as
;; Verdana, Arial, & Georgia, were done in this way. Likely for the same reason
;; (the resolution restrictions of older displays), digital typefaces, even
;; those used for body text, tend to be sans-serif. That being said, many
;; programmers today still prefer fonts designed in this style, and even bitmap
;; fonts, due to the increased clarity they offer and the reduction in
;; anti-aliasing, which causes eye strain for some individuals.

(define (ideal-font-size x-height-ratio inches-from-screen
			 software-dpi physical-dpi)
  (let* ((ideal-x-height-radians (/ 3.141592 900)) ; .2 degrees
	 (ppi-scaling (/ software-dpi physical-dpi))
	 (points-from-screen (* inches-from-screen 72)))
    (/ (* ideal-x-height-radians points-from-screen)
       ppi-scaling x-height-ratio)))
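
With some made-up but plausible numbers (an x-height ratio of 0.5, 26 inches from the screen, 96 software dpi on a 109 dpi panel) it comes out to roughly a 14.8 point font:

; (ideal-font-size 0.5 26 96 109) => ~14.84
; i.e. ((pi/900 rad) * (26 in * 72 pt/in)) / (96/109) / 0.5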