bradrn wrote: ↑Sun Feb 26, 2023 5:19 pm
Smalltalk is a very nice language indeed! (Mostly because it implements
proper OOP, not the half-baked version popularised by C++ and Java.)
Haskell is also a very nice language. I just think that, like all languages, it has issues:
- For neural networks, we want to check, modify and drop intermediate nodes. People [weasel words] have said this is made unnecessarily difficult when mutation is disabled.
- While Haskell's syntax is minimalistic, it is so determined to distinguish even slightly different operations that it creates an explosion in the number of operators. For example, IIRC Haskell strongly distinguishes between scalar multiplication and the dot product. The maker of an AI library said this kind of thing introduces unnecessary complexity.
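To make the operator point concrete, here's a sketch with plain lists standing in for vectors; `scale` and `dot` are illustrative names I made up, not from any particular library:

```haskell
-- Plain lists standing in for vectors; 'scale' and 'dot' are
-- illustrative names, not from any particular library.
scale :: Num a => a -> [a] -> [a]
scale k = map (k *)

dot :: Num a => [a] -> [a] -> a
dot xs ys = sum (zipWith (*) xs ys)

main :: IO ()
main = do
  print (scale 2 [1, 2, 3])        -- [2,4,6]
  print (dot [1, 2, 3] [4, 5, 6])  -- 32
```

The point is that `2 * v` with the ordinary `(*)` simply doesn't typecheck when `v` is a list, so scaling and the dot product each need their own operation.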
bradrn wrote: ↑Sun Feb 26, 2023 5:19 pm
But I strongly suspect that the ‘teaching to babies’ aspect of things has more to do with its very nice live environment than anything else. Compare some other programming languages widely used as a first language: BASIC, Scratch and Logo have very little in common, except that they run in a live environment.
True, but I don't think it's just the environment. A lot of languages have BASIC-like REPLs. The concept of telling entities to do things lends itself to anthropomorphism, something humans are naturally good at. Logo leverages the same concept with the turtle.
bradrn wrote: ↑Sun Feb 26, 2023 5:19 pm
Yeah, if by ‘type theoretic’ you include stuff like ‘you can’t add a number to a string’!
Why not? A type is, roughly, a set of values with an identity function defined over them. You're mistaken if you're implying that Haskell's type system is limited to practical programming types: it was explicitly designed to be compatible with structures from category theory.
You can write Haskell in a top-down style, fixing the type signatures first and filling in the low-level details involving mundane types afterwards. It's a bit like interfaces in Java. If you're willing to put in the effort, Haskell lets you structure your code like a proof of correctness.
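As a sketch of that top-down style (the types and names here are hypothetical, just for illustration):

```haskell
-- Top-down skeleton: the signatures fix the design first; each body
-- can start as 'undefined' (or a typed hole '_') and be filled in later.
newtype Doc = Doc String
newtype Summary = Summary String deriving (Eq, Show)

summarise :: Doc -> Summary
summarise = condense . tokenise

tokenise :: Doc -> [String]
tokenise (Doc s) = words s                   -- low-level detail, filled in last

condense :: [String] -> Summary
condense ws = Summary (unwords (take 3 ws))  -- likewise

main :: IO ()
main = print (summarise (Doc "a b c d"))     -- Summary "a b c"
```

Until the bodies exist, the compiler still checks that the pieces fit together, which is the interface-like part.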
bradrn wrote: ↑Sun Feb 26, 2023 5:19 pm
I’m not a fan of
HPFPP. Starting with lambda calculus was entirely the wrong approach.
I enjoyed working through it TBH. I learned Lambda Calculus by reading Wikipedia. The untyped Lambda Calculus is very simple:
1. A function is written as: (lambda <parameters>.<expression>). The dot separates the parameter list from the expression, which gives the return value. To evaluate an application, substitute the supplied values of the parameters into the expression.
2. Apply a function to values by writing it in front of them.
Take (lambda x.x) 2. The parameter is x, the one before the dot. Since the function is applied to 2, the substitution "x = 2" is applied to the expression, in this case the "x" after the dot. The result is 2.
This means "(lambda x.x)" is the identity function. It takes the applied value and returns it.
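In Haskell notation the same example reads as follows (with the caveat that Haskell's lambdas are typed, unlike the pure untyped calculus):

```haskell
main :: IO ()
main = print ((\x -> x) 2)  -- the identity function applied to 2; prints 2
```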
3. An infinite loop, suggesting that this is a model of computation:
(lambda x.xx)(lambda x.xx)
The second "(lambda x.xx)" is the value that the first one is being applied to. In the first's expression "xx", substitute "x = (lambda x.xx)". This yields the expression:
= (lambda x.xx)(lambda x.xx)
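This step can even be checked mechanically. Below is a deliberately tiny model of the untyped calculus in Haskell (my own throwaway representation; substitution ignores variable capture, which is safe for this closed term). Note that writing `\x -> x x` directly in Haskell is rejected by the type checker (occurs check), which is one place the typed and untyped calculi part ways:

```haskell
-- Minimal untyped lambda calculus, enough to beta-reduce Omega once.
data Term = Var String | Lam String Term | App Term Term
  deriving (Eq, Show)

-- Capture-naive substitution of t for v (fine for this closed example).
subst :: String -> Term -> Term -> Term
subst v t (Var w)   | v == w    = t
                    | otherwise = Var w
subst v t (Lam w b) | v == w    = Lam w b     -- v is shadowed here
                    | otherwise = Lam w (subst v t b)
subst v t (App f a) = App (subst v t f) (subst v t a)

-- One beta step at the top level, if the term is a redex.
step :: Term -> Maybe Term
step (App (Lam v b) a) = Just (subst v a b)
step _                 = Nothing

-- Omega = (lambda x.xx)(lambda x.xx)
omega :: Term
omega = App w w
  where w = Lam "x" (App (Var "x") (Var "x"))

main :: IO ()
main = print (step omega == Just omega)  -- True: Omega steps to itself
```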
bradrn wrote: ↑Sun Feb 26, 2023 5:19 pm
But secondly: you’re confusing the mathematical formalism of algorithms with the physical substrate used to implement them. Every quantum algorithm can be implemented on a classical computer just as easily, and every classical algorithm can be run on a quantum computer too.
...
Thus, if you want to
reason about speedups, then you do need to develop a machine-specific representation, no matter how much that feels like a ‘strike against’ that representation. For classical computers, that involves thinking about assembly language, memory access times, pipelining, branch prediction, etc. For quantum computers, that involves writing algorithms with matrix algebra and thinking about error-correction codes. For the abstract algorithm, it involves interaction combinators. And so on.
As a holder of a Master's degree and a PhD candidate in Computer Science, I feel compelled to point out that this is a misunderstanding of asymptotic complexity theory. Complexity theory collapses all considerations of space and time into a single hierarchy of classes. For example, we know that logspace is contained in polynomial time (L ⊆ P):
https://en.wikipedia.org/wiki/L_(complexity)
Using a quantum processor introduces novel structure into this hierarchy (the class BQP) that, to the best of our knowledge, cannot be obtained in any other way:
https://en.wikipedia.org/wiki/BQP https://en.wikipedia.org/wiki/Quantum_complexity_theory
bradrn wrote: ↑Sun Feb 26, 2023 5:19 pm
Sure, some algorithms provide an asymptotic speedup on a quantum computer, but there’s nothing particularly special about that — the
abstract algorithm gives an asymptotic speedup on some algorithms too, and
that’s just classical lambda calculus! Meanwhile, a lot of algorithms are much easier to implement on a classical computer than on a quantum computer.
But a sequence of operations is what the substrate actually performs. No chip works on the lambda calculus directly. Since physics is dynamic, with state evolving step by step in time, it's unclear how to build a chip that evaluates lambda terms at a fundamental level.
Why make it difficult to reason about the underlying architecture by encasing it in unnecessary layers of abstraction? There are good answers to this question, and those answers indicate the use cases that Haskell is good at.