>>20
Massively parallel systems have only recently been popularized, and I'm largely unfamiliar with the GPU terminology you've used here. Regardless, the ILLIAC IV existed, and there were a few similar designs. By constraining the argument to “capable of doing cheaply” you naturally limit any discussion to consumer hardware, so from this view consumer hardware is clearly where all the innovation occurred. I've also just realized that GPUs existed within my timeline, so there is that as well. Just as a reminder, here are my claims:
1. There hasn't been any innovation in hardware in 15+ years, just exploitation of the improved manufacturing process.
2. This is because we're optimizing not for novel experiences but for existing ones, and it would be very costly to port all the existing experiences while developing novel ones.
You seem both to disagree with my definition of innovation in hardware and to hold that optimizing for novel experiences has some sort of general utility. While my initial claims were in the context of x86, we've expanded them to GPUs, which I'm okay with. Now the basic contention is that the backwards-compatible (and therefore, by my definition, not innovative) RSX (which I know little about) is innovative by your definition, and is useful in neural networks (which I see as a mechanism for idiot software), and is therefore generally useful. I think I'm willing to leave the disagreement here; I'm sure we've both wasted enough time.
>>21
iirc -j refers to compiling multiple files that don't depend on one another concurrently on the CPU. Rust seems to be talking about concurrency as well; the crate thing would probably be done using SIMD in parallel, if I had to guess. I do wonder whether there are any compilers that run on the GPU, or do considerable work in parallel (rather than concurrently).