Now don't pitch me Hindley-Milner, because _that_ one I grok well enough to know it has some forbidding limitations that make it a poor fit for mutation-/imperative-permitting languages. Instead of the tagged sum types that ML-alikes sport, I'm hankering after something more structural (not nominal) and type-annotation-free, covering at least TypeScript's capabilities. Now I _did_ find the 2012 Flow Typing paper and an inferencer for it by ayazhafiz.com, but it's a total symbol soup and there are just no coder-lingo treatments of it out there, only Greek algebraic gibberish. I also don't see them handling imperative mutations that expand or change a variable's type at that "temporal location", holding until it's next mutated elsewhere (perhaps even in an anon closure passed off to elsewhere to run elsewhen). That "calculus" also seems to ignore recursion (mutual or direct). And even so, adapting the (quite hefty) OCaml version looks like a real cryptic soup I see no sense in adopting unless I grok it deeply. But I don't want to sign up for years of PhD study of, dunno, "type theory" or abstract algebra or whatever they call it, just to get a type inferencer done and working.
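To pin down that "temporal location" bit, here's roughly the behavior I mean, sketched in plain TypeScript (just me leaning on TS's own flow narrowing for illustration, not anything out of that paper):

```ts
let x = Math.random() > 0.5 ? "hi" : 42; // inferred as string | number, no annotation anywhere

if (typeof x === "string") {
  x.toUpperCase();      // in this branch the checker knows x is string
}

x = 7;                  // from here until the next assignment, x is just number
x.toFixed(2);           // fine: at this "temporal location" x is number

const later = () => x;  // but inside a closure that may run elsewhen,
                        // that narrowing is dropped: x is string | number again in here
```

That last line is exactly the wrinkle I want my inferencer to get right, and it's the part the formal treatments seem to gloss over.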
Feels like I'm missing the right kinda material to see the light and capture me that grokkage. Like the tuts out there when you want to learn ASM, C, or 3D graphics? Those I can stomach, in fact love. But all this type-theory stuff drives me mad and leaves me only mystified. And I need to cover the full spectrum from Never to Any, plus the logical Union/Intersection/Negation types, on top of the usual basic prim types. And all of it inferred, not via annotation.
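To make "the full spectrum" concrete (still using TypeScript as the yardstick, which as far as I can tell has Never/Any/Union/Intersection but no first-class Negation types), this is the kind of thing I mean, all inferred:

```ts
let v = Math.random() > 0.5 ? "hi" : 42; // inferred: string | number -- a union, no annotation

if (typeof v === "string") {
  // v is string here: effectively (string | number) with number subtracted,
  // i.e. the checker quietly applying a negation
  if (typeof v === "number") {
    v; // v is never: string intersected with number has no inhabitants
  }
}
```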
Oh yeah, hacking it together ad hoc and patching it up until it works really doesn't fly in this arena. Tried more than once. Will keep at it too if I have to. But it really is a weird tar-pit. Still, I think that since dynamic languages can do it at runtime (figure out the type of a var), it must be just as possible statically, from nothing but the wholly unannotated code-base at hand plus the known arg/ret types of the prim ops (your arithmetic & logical impls).
Abstract interpretation? Dunno. I feel there has to be some elegant core, akin to a bare-bones minimal Lisp that already covers its own self-interpretation, that can deliver this computation without a gigantic hairball of special-casing rules that were brute-forced into being and are inscrutable in and of themselves.
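In case it helps to have something concrete to shoot down, here's the kind of elegant core I'm imagining: a toy abstract interpreter over a tiny Lisp-ish AST that "runs" the program on types instead of values, joins branches into unions, and lets assignment flip a variable's type from that point on. Every name here (Ty, Expr, join, primSig, infer) is mine, made up for illustration; it's nothing like a real inferencer (no functions, no recursion, no narrowing on conditions), just the flavor I'm after:

```ts
// Types as plain data: a few primitives plus union. "any" is top, "never" is bottom.
type Ty = "number" | "string" | "boolean" | "never" | "any" | { union: Ty[] };

// join = least upper bound: "either branch could have happened".
function flatten(t: Ty): Ty[] {
  return typeof t === "object" ? t.union.flatMap(flatten) : [t];
}
function join(a: Ty, b: Ty): Ty {
  if (a === "never") return b;
  if (b === "never") return a;
  if (a === "any" || b === "any") return "any";
  const parts = new Set<Ty>([...flatten(a), ...flatten(b)]);
  return parts.size === 1 ? [...parts][0] : { union: [...parts] };
}

// The tiny language: literals, variable refs, assignment, if, sequencing, prim calls.
type Expr =
  | { lit: number | string | boolean }
  | { ref: string }
  | { assign: string; value: Expr }
  | { if: Expr; then: Expr; else: Expr }
  | { seq: Expr[] }
  | { prim: "+" | "<" | "concat"; args: Expr[] };

// Known arg/ret types of the prim ops -- the only "annotations" anywhere.
const primSig: Record<string, { args: Ty[]; ret: Ty }> = {
  "+":      { args: ["number", "number"], ret: "number" },
  "<":      { args: ["number", "number"], ret: "boolean" },
  "concat": { args: ["string", "string"], ret: "string" },
};

type Env = Map<string, Ty>;

// infer(e, env) returns e's type and mutates env, so env always holds
// "the type of each variable at this point in the program".
function infer(e: Expr, env: Env): Ty {
  if ("lit" in e)
    return typeof e.lit === "number" ? "number"
         : typeof e.lit === "string" ? "string" : "boolean";
  if ("ref" in e) return env.get(e.ref) ?? "never";   // unbound: nothing flows here
  if ("assign" in e) {
    const t = infer(e.value, env);
    env.set(e.assign, t);                             // from here on, the var has this type
    return t;
  }
  if ("seq" in e) {
    let t: Ty = "never";
    for (const sub of e.seq) t = infer(sub, env);
    return t;
  }
  if ("if" in e) {
    infer(e.if, env);
    const thenEnv = new Map(env);                     // each branch evolves its own view
    const elseEnv = new Map(env);
    const t = join(infer(e.then, thenEnv), infer(e.else, elseEnv));
    for (const k of new Set([...thenEnv.keys(), ...elseEnv.keys()]))
      env.set(k, join(thenEnv.get(k) ?? "never", elseEnv.get(k) ?? "never"));
    return t;                                         // after the if, each var is the union of both branches
  }
  e.args.forEach((a) => infer(a, env));               // arg checking against primSig omitted
  return primSig[e.prim].ret;
}

// x starts out as a number, one branch turns it into a string, so after the if
// it is string | number: an inferred union, with zero annotations in the "program".
const prog: Expr = {
  seq: [
    { assign: "x", value: { lit: 1 } },
    { if: { prim: "<", args: [{ ref: "x" }, { lit: 10 }] },
      then: { assign: "x", value: { lit: "small" } },
      else: { lit: false } },
    { ref: "x" },
  ],
};
console.log(infer(prog, new Map())); // -> { union: ["string", "number"] }
```

What I can't see is how to grow something like this, without it turning into that hairball, once real functions, recursion, closures that mutate, and narrowing on conditions enter the picture.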
I'm sure I'm missing a puzzle piece and one of you out here has it. Spill it!