Are computer languages meaningfully... in english?

Topics that can go away
rotting bones
Posts: 1301
Joined: Tue Dec 04, 2018 5:16 pm

Re: Are computer languages meaningfully... in english?

Post by rotting bones »

Forth's implementation of reusability detracts from readability... whatever. This is a synthetic distillation of experience. It doesn't entirely come from an analysis of concepts.

---

Don't get me wrong, I think "safety" concerns (AKA proving theorems) are often important in fields like cybersecurity. Besides, I love Haskell since Curtis Yarvin hates it. If anything, I don't think Haskell goes far enough. My fatal passion is not Haskell but Agda, the true analogue of research variants of Java.

Nevertheless, there are issues with normalizing Haskell's methods:

1. If the first lesson in programming is Lambda Calculus and basic I/O is the fifteenth, that seems unreasonably elitist to me. https://youtu.be/ADqLBc1vFwI

2. Excessive rigidity negatively affects usability and modular development: https://youtube.com/playlist?list=PLLce ... Yl7wnth-bT

3. Code optimization is a mystery even to veterans: https://youtu.be/yRVjR9XcuPU

4. If your theory of computation is drawn from first principles rather than being based on physical models, then you won't be able to update it with, eg, the development of Quantum Mechanics.

...
bradrn
Posts: 5743
Joined: Fri Oct 19, 2018 1:25 am

Re: Are computer languages meaningfully... in english?

Post by bradrn »

rotting bones wrote: Mon Feb 20, 2023 10:15 pm 1. If the first lesson in programming is Lambda Calculus and basic I/O is the fifteenth, that seems unreasonably elitist to me. https://youtu.be/ADqLBc1vFwI
I’ve had this discussion many times on the FP Discord server. The consensus we’ve come to is that this is indeed a bad way of teaching, and introducing IO early would be better.
2. Excessive rigidity negatively affects usability and modular development: https://youtube.com/playlist?list=PLLce ... Yl7wnth-bT
Good Haskell code shouldn’t be rigid. I find that the design of the type system gives my Haskell code significantly more flexibility than, say, Java.
3. Code optimization is a mystery even to veterans: https://youtu.be/yRVjR9XcuPU
Fair. (Though ‘mystery’ is a bit strong. Experts can and do reason about performance in Haskell.)
4. If your theory of computation is drawn from first principles rather than being based on physical models, then you won't be able to update it with, eg, the development of Quantum Mechanics.
I have no idea how to even respond to this. How is QM at all relevant? How is Haskell different to any other current programming language in this regard?
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
rotting bones
Posts: 1301
Joined: Tue Dec 04, 2018 5:16 pm

Re: Are computer languages meaningfully... in english?

Post by rotting bones »

bradrn wrote: Mon Feb 20, 2023 11:41 pm I’ve had this discussion many times on the FP Discord server. The consensus we’ve come to is that this is indeed a bad way of teaching, and introducing IO early would be better.
So you'd just not tell them what monads are until later?
bradrn wrote: Mon Feb 20, 2023 11:41 pm Good Haskell code shouldn’t be rigid. I find that the design of the type system gives my Haskell code significantly more flexibility than, say, Java.
Well, I'm entirely self-taught in Haskell. Where can I learn non-fussy Haskell coding?
bradrn wrote: Mon Feb 20, 2023 11:41 pm I have no idea how to even respond to this. How is QM at all relevant? How is Haskell different to any other current programming language in this regard?
Quantum computing? What's the Lambda Calculus equivalent of the Quantum Turing Machine?

---

By the way, I have tried to clearly outline what's useful about classes. If anyone did the same for Haskell, I must have missed it.

- Depending on how it's written, Haskell's syntax can be elegant, maybe?

- I've heard people say that Haskell programs are short. However, I'm not sure what Haskell can accomplish that can't be reproduced in modern C++ with consts, enums, lambdas and tail recursion. The tower of abstractions, maybe? But I've heard people who have tried to implement Machine Learning libraries say that Haskell's abstractions are a hindrance.
bradrn
Posts: 5743
Joined: Fri Oct 19, 2018 1:25 am

Re: Are computer languages meaningfully... in english?

Post by bradrn »

rotting bones wrote: Tue Feb 21, 2023 12:03 am
bradrn wrote: Mon Feb 20, 2023 11:41 pm I’ve had this discussion many times on the FP Discord server. The consensus we’ve come to is that this is indeed a bad way of teaching, and introducing IO early would be better.
So you'd just not tell them what monads are until later?
Yes, pretty much. At least for teaching purposes, do-notation is entirely usable without requiring knowledge of monads.
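For instance (a toy sketch of my own, not from any course), a beginner can read this as "do these steps in order" without ever hearing the word "monad":

```haskell
-- do-notation reads like an imperative recipe; no monads in sight
greet :: String -> String
greet name = "Hello, " ++ name ++ "!"

main :: IO ()
main = do
  putStrLn (greet "world")              -- step 1: print a greeting
  let total = sum [1 .. 10 :: Int]      -- step 2: a pure computation
  putStrLn ("1 to 10 sums to " ++ show total)
```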
bradrn wrote: Mon Feb 20, 2023 11:41 pm Good Haskell code shouldn’t be rigid. I find that the design of the type system gives my Haskell code significantly more flexibility than, say, Java.
Well, I'm entirely self-taught in Haskell. Where can I learn non-fussy Haskell coding?
Er… not sure; it all depends on how much you know so far. But I like What I Wish I Knew When Learning Haskell as an overall guide to what’s possible in the ecosystem.
bradrn wrote: Mon Feb 20, 2023 11:41 pm I have no idea how to even respond to this. How is QM at all relevant? How is Haskell different to any other current programming language in this regard?
Quantum computing? What's the Lambda Calculus equivalent of the Quantum Turing Machine?
True, quantum computing would require entirely different paradigms. But that being said, I have some knowledge here, and am fairly confident in saying that quantum computing is not in any way a replacement for classical computing. I like to compare it to GPUs in how it might eventually be applied — and note that GPUs don’t necessarily use ‘standard’ paradigms either.
- Depending on how it's written, Haskell's syntax can be elegant, maybe?
Yes, very much so. Haskell’s syntax is generally simple — minimalistic, even (at least visually). Internally, it all gets desugared to only ~6 different syntactic constructions.
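As a small illustration (my own toy example), a do-block is nothing but sugar for (>>=) and lambdas, which are among those few core constructions:

```haskell
-- the same computation, sugared and desugared by hand
sugared :: Maybe Int
sugared = do
  x <- Just 2
  y <- Just 3
  pure (x + y)

desugared :: Maybe Int
desugared = Just 2 >>= \x -> Just 3 >>= \y -> pure (x + y)
```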
- I've heard people say that Haskell programs are short. However, I'm not sure what Haskell can accomplish that can't be reproduced in modern C++ with consts, enums, lambdas and tail recursion.
Strictly speaking, the answer would be ‘very little’… but that ignores the fact that what is awkward in C++ is easy in Haskell, and vice versa. Doing advanced template stuff in C++ can be excruciatingly painful.
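To give one example of the asymmetry (a toy of my own): a sum type plus a fold over it, which tends to involve visitor or variant ceremony in C++, is a few lines of Haskell:

```haskell
-- an algebraic data type and a function defined by pattern matching
data Shape = Circle Double | Rect Double Double

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Rect w h) = w * h
```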
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
User avatar
alice
Posts: 914
Joined: Mon Jul 09, 2018 11:15 am
Location: 'twixt Survival and Guilt

Re: Are computer languages meaningfully... in english?

Post by alice »

You may be interested to know that at my work I successfully introduced functional programming techniques, including a simple implementation of monads, into the Python project I'm working on, only to see the whole thing derailed by rogue I/O exceptions which no amount of correctness proofs could have explained away.
Self-referential signatures are for people too boring to come up with more interesting alternatives.
rotting bones
Posts: 1301
Joined: Tue Dec 04, 2018 5:16 pm

Re: Are computer languages meaningfully... in english?

Post by rotting bones »

alice wrote: Tue Feb 21, 2023 2:50 am You may be interested to know that at my work I successfully introduced functional programming techniques, including a simple implementation of monads, into the Python project I'm working on, only to see the whole thing derailed by rogue I/O exceptions which no amount of correctness proofs could have explained away.
Right? Meanwhile, the famous "pure" object-oriented language, Smalltalk, is so eminently usable, it was used to teach coding to babies.

I did a little Googling on uncommon languages with machine-learning capabilities. Turns out there's a book called Agile Artificial Intelligence in Pharo. Pharo Smalltalk was forked from Squeak. You can get it by installing the launcher first, then downloading the system through the launcher by clicking New.

Smalltalk is so adorable. Just interacting with it gives me the warm and cuddly feelings that keep me motivated. I wish my first language had been Smalltalk instead of the, AFAIK, much more limited Logo.

By contrast, in strict functional programming, your code doesn't run until the type theoretic proof goes through. This is what I meant by "rigidity". If Haskell becomes universal, maybe my code won't work, but hopefully Curtis Yarvin's code won't work either.

Even this rigidity has benefits in certain fields. Eg. In cybersecurity, you want your program to fail if it doesn't meet predefined security parameters no matter how convoluted it needs to be as a result. Agda uses an even more rigid and extensible version of this framework to prove actual mathematical theorems. But rigidity is not universal across functional programming. Eg. Q isn't like that. It's very practical for coding.
Last edited by rotting bones on Sun Feb 26, 2023 10:01 am, edited 1 time in total.
rotting bones
Posts: 1301
Joined: Tue Dec 04, 2018 5:16 pm

Re: Are computer languages meaningfully... in english?

Post by rotting bones »

bradrn wrote: Tue Feb 21, 2023 12:22 am Yes, pretty much. At least for teaching purposes, do-notation is entirely usable without requiring knowledge of monads.
If you need to introduce programming with an underexplained do notation, doesn't that prove imperative programming is more intuitive than functional programming? There's a reason recursion is taught after loops.
bradrn wrote: Tue Feb 21, 2023 12:22 am Er… not sure; it all depends on how much you know so far. But I like What I Wish I Knew When Learning Haskell as an overall guide to what’s possible in the ecosystem.
Thanks, I'll check it out.

I don't remember the details of how I learned Haskell. I know it involved Haskell Programming from First Principles and loads of online advice.
bradrn wrote: Tue Feb 21, 2023 12:22 am True, quantum computing would require entirely different paradigms. But that being said, I have some knowledge here, and am fairly confident in saying that quantum computing is not in any way a replacement for classical computing. I like to compare it to GPUs in how it might eventually be applied — and note that GPUs don’t necessarily use ‘standard’ paradigms either.
There are physical models of distributed computing, and so on.

If Lambda Calculus is your only theory of computation, how will you reason about the speedups from Quantum Computing? I consider the fact that we can't explicitly represent certain algorithms in it to be a strike against Lambda Calculus. Also, this objection extends to any physics we solve in the future. Eg. The models of fluid computation Terence Tao was working on.
rotting bones
Posts: 1301
Joined: Tue Dec 04, 2018 5:16 pm

Re: Are computer languages meaningfully... in english?

Post by rotting bones »

rotting bones wrote: Sun Feb 26, 2023 8:33 am In cybersecurity, you want your program to fail if it doesn't meet predefined security parameters no matter how convoluted it needs to be as a result.
Or at the very least, you want to prove that your code fulfills these properties. It's much easier to prove this if your code rigidly tracks all mutations.
User avatar
alice
Posts: 914
Joined: Mon Jul 09, 2018 11:15 am
Location: 'twixt Survival and Guilt

Re: Are computer languages meaningfully... in english?

Post by alice »

A bit off-topic, but I decided to learn Rust by reimplementing my SCA in it, and found the whole experience so frustrating that I've given up on it for the time being.
Self-referential signatures are for people too boring to come up with more interesting alternatives.
rotting bones
Posts: 1301
Joined: Tue Dec 04, 2018 5:16 pm

Re: Are computer languages meaningfully... in english?

Post by rotting bones »

alice wrote: Sun Feb 26, 2023 2:09 pm A bit off-topic, but I decided to learn Rust by reimplementing my SCA in it, and found the whole experience so frustrating that I've given up on it for the time being.
I prefer Zig even though it's not finished yet.
rotting bones
Posts: 1301
Joined: Tue Dec 04, 2018 5:16 pm

Re: Are computer languages meaningfully... in english?

Post by rotting bones »

By the way, if anyone thinks I'm exaggerating the cuddliness of Smalltalk, look at this and tell me it's not adorable:

Image
bradrn
Posts: 5743
Joined: Fri Oct 19, 2018 1:25 am

Re: Are computer languages meaningfully... in english?

Post by bradrn »

rotting bones wrote: Sun Feb 26, 2023 8:33 am
alice wrote: Tue Feb 21, 2023 2:50 am You may be interested to know that at my work I successfully introduced functional programming techniques, including a simple implementation of monads, into the Python project I'm working on, only to see the whole thing derailed by rogue I/O exceptions which no amount of correctness proofs could have explained away.
Right? Meanwhile, the famous "pure" object-oriented language, Smalltalk, is so eminently usable, it was used to teach coding to babies.
Smalltalk is a very nice language indeed! (Mostly because it implements proper OOP, not the half-baked version popularised by C++ and Java.) But I strongly suspect that the ‘teaching to babies’ aspect of things has more to do with its very nice live environment than anything else. Compare some other programming languages widely used as a first language: BASIC, Scratch and Logo have very little in common, except that they run in a live environment.
By contrast, in strict functional programming, your code doesn't run until the type theoretic proof goes through.
Yeah, if by ‘type theoretic’ you include stuff like ‘you can’t add a number to a string’!
rotting bones wrote: Sun Feb 26, 2023 8:55 am
bradrn wrote: Tue Feb 21, 2023 12:22 am Yes, pretty much. At least for teaching purposes, do-notation is entirely usable without requiring knowledge of monads.
If you need to introduce programming with an underexplained do notation, doesn't that prove imperative programming is more intuitive than functional programming? There's a reason recursion is taught after loops.
No, all it proves is that most people are more familiar with imperative programming than with functional programming. If I were teaching a person learning Haskell as their first language, I might well start off with (>>=) straight away. (In fact I know someone online who recently learnt Haskell as their first language, and they did indeed prefer (>>=) for a long time.)

Besides, I’m not arguing that imperative programming is better in some cases. If you want to sequence operations, it’s a convenient way to do things. I just think that functional programming is more generally usable.
I don't remember the details of how I learned Haskell. I know it involved Haskell Programming from First Principles and loads of online advice.
I’m not a fan of HPFFP. Starting with lambda calculus was entirely the wrong approach.
If Lambda Calculus is your only theory of computation, how will you reason about the speedups from Quantum Computing? I consider the fact that we can't explicitly represent certain algorithms in it to be a strike against Lambda Calculus.
I feel there are a couple places of confusion here. Firstly, no-one has the lambda calculus as their only theory of computation (unless they’ve not learnt about anything else, I guess). There’s also Turing machines, the pi calculus, SKI combinators, the iota and jot calculi… a whole bunch.

But secondly: you’re confusing the mathematical formalism of algorithms with the physical substrate used to implement them. Every quantum algorithm can be implemented on a classical computer just as easily, and every classical algorithm can be run on a quantum computer too. Sure, some algorithms provide an asymptotic speedup on a quantum computer, but there’s nothing particularly special about that — the abstract algorithm gives an asymptotic speedup on some algorithms too, and that’s just classical lambda calculus! Meanwhile, a lot of algorithms are much easier to implement on a classical computer than on a quantum computer.

Thus, if you want to reason about speedups, then you do need to develop a machine-specific representation, no matter how much that feels like a ‘strike against’ that representation. For classical computers, that involves thinking about assembly language, memory access times, pipelining, branch prediction, etc. For quantum computers, that involves writing algorithms with matrix algebra and thinking about error-correction codes. For the abstract algorithm, it involves interaction combinators. And so on.
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
rotting bones
Posts: 1301
Joined: Tue Dec 04, 2018 5:16 pm

Re: Are computer languages meaningfully... in english?

Post by rotting bones »

bradrn wrote: Sun Feb 26, 2023 5:19 pm Smalltalk is a very nice language indeed! (Mostly because it implements proper OOP, not the half-baked version popularised by C++ and Java.)
Haskell is also a very nice language. I just think that, like all languages, it has issues:

- For neural networks, we want to check, modify and drop intermediate nodes. People [weasel words] have said this is made unnecessarily difficult when mutation is disabled.

- While Haskell's syntax is minimalistic, it's so determined to distinguish operations that are even slightly different that it creates an explosion in the number of operators. For example, IIRC Haskell strongly distinguishes between multiplying by a scalar and a dot product. The maker of an AI library said this kind of thing introduces unnecessary complexity.
bradrn wrote: Sun Feb 26, 2023 5:19 pm But I strongly suspect that the ‘teaching to babies’ aspect of things has more to do with its very nice live environment than anything else. Compare some other programming languages widely used as a first language: BASIC, Scratch and Logo have very little in common, except that they run in a live environment.
True, but I don't think it's just the environment. A lot of languages have BASIC-like REPLs. The concept of telling entities to do things lends itself to anthropomorphism, something humans are naturally good at. Logo leverages the same concept with the turtle.
bradrn wrote: Sun Feb 26, 2023 5:19 pm Yeah, if by ‘type theoretic’ you include stuff like ‘you can’t add a number to a string’!
Why not? A type is a set with an identity function defined over its elements. You're mistaken if you're implying that Haskell's type system is limited to practical programming types. It was explicitly designed to be compatible with advanced mathematical properties in category theory.

You can write Haskell in a top-down style where you fill in low level details involving mundane types afterwards. It's a bit like interfaces in Java. If you're willing to put in the effort, Haskell allows you to structure your code like a proof of correctness.
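A sketch of what that top-down style can look like (the class and instance names here are hypothetical, just for illustration): declare the interface first, commit to a representation later, much as you would with a Java interface:

```haskell
-- Top-down: the interface is the specification...
class Queue q where
  empty :: q a
  push  :: a -> q a -> q a
  pop   :: q a -> Maybe (a, q a)

-- ...and the "mundane" representation is filled in afterwards.
newtype ListQueue a = ListQueue [a]

instance Queue ListQueue where
  empty = ListQueue []
  push x (ListQueue xs) = ListQueue (xs ++ [x])
  pop (ListQueue [])       = Nothing
  pop (ListQueue (x : xs)) = Just (x, ListQueue xs)
```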
bradrn wrote: Sun Feb 26, 2023 5:19 pm I’m not a fan of HPFFP. Starting with lambda calculus was entirely the wrong approach.
I enjoyed working through it TBH. I learned Lambda Calculus by reading Wikipedia. The untyped Lambda Calculus is very simple:

1. A function is written as: (lambda <parameters>.<expression>). The dot separates the parameter list from the expression, which is the return value. To evaluate this, substitute the supplied values of the parameters in the expression.

2. Apply a function to values by writing it in front of them.

Take (lambda x.x) 2. The parameter is x, the one before the dot. Since the function is applied to 2, the substitution "x = 2" is applied to the expression. In this case, the "x" after the dot. The result is 2.

This means "(lambda x.x)" is the identity function. It takes the applied value and returns it.

3. An infinite loop, suggesting that this is a model of computation:

(lambda x.xx)(lambda x.xx)

The second "(lambda x.xx)" is the value that the first one is being applied to. In the first's expression "xx", substitute "x = (lambda x.xx)". This yields the expression:

= (lambda x.xx)(lambda x.xx)
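The rules above can even be turned into a toy stepper (a sketch of my own; the substitution naively assumes bound names are distinct, which is enough for these examples but not for general capture-avoiding reduction):

```haskell
-- a tiny untyped lambda-calculus term type and one-step beta reducer
data Term = Var String | Lam String Term | App Term Term
  deriving (Eq, Show)

-- naive substitution: replace free occurrences of x by s
subst :: String -> Term -> Term -> Term
subst x s (Var y)   | x == y    = s
                    | otherwise = Var y
subst x s (Lam y b) | x == y    = Lam y b
                    | otherwise = Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- one leftmost-outermost beta step; Nothing means normal form
step :: Term -> Maybe Term
step (App (Lam x b) a) = Just (subst x a b)
step (App f a)         = case step f of
                           Just f' -> Just (App f' a)
                           Nothing -> App f <$> step a
step _                 = Nothing

identity, omega :: Term
identity = Lam "x" (Var "x")
omega    = App w w  where w = Lam "x" (App (Var "x") (Var "x"))
```

Stepping `omega` yields `omega` again, which is exactly the infinite loop described above.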
bradrn wrote: Sun Feb 26, 2023 5:19 pm But secondly: you’re confusing the mathematical formalism of algorithms with the physical substrate used to implement them. Every quantum algorithm can be implemented on a classical computer just as easily, and every classical algorithm can be run on a quantum computer too.

...

Thus, if you want to reason about speedups, then you do need to develop a machine-specific representation, no matter how much that feels like a ‘strike against’ that representation. For classical computers, that involves thinking about assembly language, memory access times, pipelining, branch prediction, etc. For quantum computers, that involves writing algorithms with matrix algebra and thinking about error-correction codes. For the abstract algorithm, it involves interaction combinators. And so on.
As a holder of a Master's degree and a PhD candidate in Computer Science, I feel compelled to point out that this is a misunderstanding of asymptotic complexity theory. All considerations about space and time have been collapsed into a single hierarchy of complexity classes. For example, we know that logspace is a subset of polynomial time: https://en.wikipedia.org/wiki/L_(complexity) Using a quantum processor introduces novel structure into this hierarchy that, to the best of our knowledge, cannot be obtained in any other way: https://en.wikipedia.org/wiki/BQP https://en.wikipedia.org/wiki/Quantum_complexity_theory
bradrn wrote: Sun Feb 26, 2023 5:19 pm Sure, some algorithms provide an asymptotic speedup on a quantum computer, but there’s nothing particularly special about that — the abstract algorithm gives an asymptotic speedup on some algorithms too, and that’s just classical lambda calculus! Meanwhile, a lot of algorithms are much easier to implement on a classical computer than on a quantum computer.
But a sequence of operations is what the substrate does. No chip works on Lambda Calculus. Since physics is dynamic, it's unclear how to build a chip which evaluates such operations on a fundamental level.

Why make it difficult to reason about the underlying architecture by encasing it in unnecessary layers of abstraction? There are good answers to this question, and those answers indicate the use cases that Haskell is good at.
bradrn
Posts: 5743
Joined: Fri Oct 19, 2018 1:25 am

Re: Are computer languages meaningfully... in english?

Post by bradrn »

rotting bones wrote: Sun Mar 05, 2023 10:17 pm - For neural networks, we want to check, modify and drop intermediate nodes. People [weasel words] have said this is made unnecessarily difficult when mutation is disabled.
This is a common misconception. Haskell does not disable mutation; it just forces you to be more careful with how you use it. I’ve implemented imperative algorithms in Haskell before, and it’s not hard.
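For instance (a minimal sketch), the ST monad from base gives you honest in-place mutation, with the type of runST guaranteeing that none of the mutable state leaks out:

```haskell
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, modifySTRef', readSTRef)

-- an imperative accumulator loop, written with real mutable state
sumLoop :: [Int] -> Int
sumLoop xs = runST (do
  acc <- newSTRef 0                          -- allocate a mutable cell
  mapM_ (\x -> modifySTRef' acc (+ x)) xs    -- mutate it in a loop
  readSTRef acc)                             -- read the final value
```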
- While Haskell's syntax is minimalistic, it's so determined to distinguish operations that are even slightly different that it creates an explosion in the number of operators. For example, IIRC Haskell strongly distinguishes between multiplying by a scalar and a dot product. The maker of an AI library said this kind of thing introduces unnecessary complexity.
You don’t have to distinguish scalar and dot-product multiplication if you don’t want to! Personally I can’t see how unifying them would be particularly useful, but here’s a piece of code which unifies them (and integer multiplication for that matter):

Code: Select all

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

-- one overloaded (.*) covering scalar, scaling, and dot products;
-- the functional dependency 'a b -> c' fixes the result type
class HeteroMult a b c | a b -> c where
    (.*) :: a -> b -> c
instance HeteroMult Int Int Int where
    n .* m = n * m
instance HeteroMult Int [Int] [Int] where
    n .* ms = fmap (n *) ms
instance HeteroMult [Int] [Int] Int where
    ns .* ms = sum (zipWith (*) ns ms)
More usefully, you can use this style of thing to make e.g. APL-style shape-polymorphic arrays (paper link: https://benl.ouroborus.net/papers/2010- ... fp2010.pdf).
bradrn wrote: Sun Feb 26, 2023 5:19 pm But I strongly suspect that the ‘teaching to babies’ aspect of things has more to do with its very nice live environment than anything else. Compare some other programming languages widely used as a first language: BASIC, Scratch and Logo have very little in common, except that they run in a live environment.
True, but I don't think it's just the environment. A lot of languages have BASIC-like REPLs. The concept of telling entities to do things lends itself to anthropomorphism, something humans are naturally good at. Logo leverages the same concept with the turtle.
This certainly could be the case. But Scratch isn’t so anthropomorphised, and BASIC certainly isn’t — and those two have probably been more popular than Smalltalk and Logo combined.
bradrn wrote: Sun Feb 26, 2023 5:19 pm Yeah, if by ‘type theoretic’ you include stuff like ‘you can’t add a number to a string’!
Why not? A type is a set with an identity function defined over its elements. You're mistaken if you're implying that Haskell's type system is limited to practical programming types. It was explicitly designed to be compatible with advanced mathematical properties in category theory.
I don’t understand what point you’re trying to make here.
You can write Haskell in a top-down style where you fill in low level details involving mundane types afterwards. It's a bit like interfaces in Java. If you're willing to put in the effort, Haskell allows you to structure your code like a proof of correctness.
Of course you can write it that way… but you don’t have to. And in practice, from what I’ve seen it seems like the people who do that tend to prefer Agda.
bradrn wrote: Sun Feb 26, 2023 5:19 pm I’m not a fan of HPFFP. Starting with lambda calculus was entirely the wrong approach.
I enjoyed working through it TBH. I learned Lambda Calculus by reading Wikipedia. The untyped Lambda Calculus is very simple:
I’m already very familiar with the untyped lambda calculus.
bradrn wrote: Sun Feb 26, 2023 5:19 pm But secondly: you’re confusing the mathematical formalism of algorithms with the physical substrate used to implement them. Every quantum algorithm can be implemented on a classical computer just as easily, and every classical algorithm can be run on a quantum computer too.

...

Thus, if you want to reason about speedups, then you do need to develop a machine-specific representation, no matter how much that feels like a ‘strike against’ that representation. For classical computers, that involves thinking about assembly language, memory access times, pipelining, branch prediction, etc. For quantum computers, that involves writing algorithms with matrix algebra and thinking about error-correction codes. For the abstract algorithm, it involves interaction combinators. And so on.
As a holder of a Master's degree and a PhD candidate in Computer Science, I feel compelled to point out that this is a misunderstanding of asymptotic complexity theory. All considerations about space and time have been collapsed into a single hierarchy of complexity classes. For example, we know that logspace is a subset of polynomial time: https://en.wikipedia.org/wiki/L_(complexity) Using a quantum processor introduces novel structure into this hierarchy that, to the best of our knowledge, cannot be obtained in any other way: https://en.wikipedia.org/wiki/BQP https://en.wikipedia.org/wiki/Quantum_complexity_theory
As a Master’s student in a department full of quantum computing theorists, I do understand asymptotic complexity theory. And you’re right that I didn’t really address that side of things at all.

Also, on reflection, I misunderstood your original comment that ’we can't explicitly represent certain [quantum] algorithms in [lambda calculus]’. I guess I just get annoyed at seeing people think quantum computers let you do magic, and wanted to drive the point home that you can rewrite any quantum algorithm in the lambda calculus — just with a different complexity class. But I think you were considering that to be enough to make it a ‘different algorithm’, which was different to how I interpreted your comment. Apologies for that!
bradrn wrote: Sun Feb 26, 2023 5:19 pm Sure, some algorithms provide an asymptotic speedup on a quantum computer, but there’s nothing particularly special about that — the abstract algorithm gives an asymptotic speedup on some algorithms too, and that’s just classical lambda calculus! Meanwhile, a lot of algorithms are much easier to implement on a classical computer than on a quantum computer.
But a sequence of operations is what the substrate does. No chip works on Lambda Calculus. Since physics is dynamic, it's unclear how to build a chip which evaluates such operations on a fundamental level.
True, but again, I don’t quite understand what point you’re trying to make here.
Why make it difficult to reason about the underlying architecture by encasing it in unnecessary layers of abstraction? There are good answers to this question, and those answers indicate the use cases that Haskell is good at.
I disagree with the word ‘unnecessary’ here, but otherwise you’re quite right.
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
jcb
Posts: 67
Joined: Thu Jul 07, 2022 4:36 pm

Re: Are computer languages meaningfully... in english?

Post by jcb »

Don't get me wrong, I think "safety" concerns (AKA proving theorems) are often important in fields like cybersecurity.
Indeed. I can't help but think of the Heartbleed bug in 2014, which was caused by a simple buffer over-read error. Imo, the fact that buffer over-read bugs were still happening in 2014 (and today, tbh) should make the whole field of computer science ashamed. They could be easily eliminated by better language design and using only safe functions. Even more depressing is that the "experts'" "solution" to prevent another Heartbleed-type bug from occurring was just to get more code auditors. It's only a matter of time until it happens again.

Imo, the OOP craze (Java, C++, etc) has stunted programming language evolution for a whole generation.
Besides, I love Haskell since Curtis Yarvin hates it.
Why does he hate it ? ... There are things I dislike about Haskell, but Elm fixed most of them.
If anything, I don't think Haskell goes far enough.
Such as ? I find most of Haskell's deeper features useless and needlessly complex (elm wisely never implemented them for this reason), but then again, I'm not an academic doing academic things with Haskell, which is why most of those features exist in the first place.
bradrn
Posts: 5743
Joined: Fri Oct 19, 2018 1:25 am

Re: Are computer languages meaningfully... in english?

Post by bradrn »

jcb wrote: Mon Mar 20, 2023 3:11 pm Imo, the OOP craze (Java, C++, etc) has stunted programming language evolution for a whole generation.
Not necessarily. Java and C++ have a rather terrible and distorted implementation of OOP, but the original idea of OOP (implemented in languages such as Smalltalk and Ruby) was rather good. It also introduced several ideas, such as encapsulation and late-binding, which continue to be extremely useful to this day.
Such as ? I find most of Haskell's deeper features useless and needlessly complex (elm wisely never implemented them for this reason), but then again, I'm not an academic doing academic things with Haskell, which is why most of those features exist in the first place.
As someone who uses Haskell for non-academic stuff (and is somewhat uninterested in the academic stuff anyway): my perspective on this is that those are actually surprisingly useful! You don’t need them all the time, but when you do they’re invaluable. Fundamentally, those features are there for two related reasons:
  1. Languages with poor type systems (e.g. Java) have trouble when they need to represent data with a flexible structure. One way around this is to get rid of static type checking in the first place, which is what dynamically typed languages do, but that removes all the guarantees static typing gives you. The other alternative is to enhance the type system so you can represent what you want. This is what Haskell does.
  2. A type system is only as helpful as the guarantees it can give you. Most type systems can’t give you guarantees much more complex than ‘this number is integral’. Haskell’s can enforce much more interesting properties, which is helpful for preventing mistakes in real-world programs.
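A small, self-contained illustration of point 2 (the example is mine, not from the original post): an ordinary algebraic data type can make "this list is non-empty" a compile-time guarantee, so the head function needs no runtime check and has no error case. This mirrors Data.List.NonEmpty from base, reimplemented here so the snippet stands alone:

```haskell
-- A list that provably contains at least one element.
data NonEmpty a = a :| [a]

-- Total: no pattern-match failure is possible, because the type
-- rules out the empty case that makes Prelude's 'head' partial.
safeHead :: NonEmpty a -> a
safeHead (x :| _) = x

main :: IO ()
main = print (safeHead (1 :| [2, 3]))   -- prints 1
```

Calling safeHead on an empty list isn't a runtime error; it's a program that never compiles, which is exactly the kind of guarantee described above.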
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
Travis B.
Posts: 6308
Joined: Sun Jul 15, 2018 8:52 pm

Re: Are computer languages meaningfully... in english?

Post by Travis B. »

bradrn wrote: Mon Mar 20, 2023 8:14 pm
jcb wrote: Mon Mar 20, 2023 3:11 pm Imo, the OOP craze (Java, C++, etc) has stunted programming language evolution for a whole generation.
Not necessarily. Java and C++ have a rather terrible and distorted implementation of OOP, but the original idea of OOP (implemented in languages such as Smalltalk and Ruby) was rather good. It also introduced several ideas, such as encapsulation and late-binding, which continue to be extremely useful to this day.
Such as ? I find most of Haskell's deeper features useless and needlessly complex (elm wisely never implemented them for this reason), but then again, I'm not an academic doing academic things with Haskell, which is why most of those features exist in the first place.
As someone who uses Haskell for non-academic stuff (and is somewhat uninterested in the academic stuff anyway): my perspective on this is that those are actually surprisingly useful! You don’t need them all the time, but when you do they’re invaluable. Fundamentally, those features are there for two related reasons:
  1. Languages with poor type systems (e.g. Java) have trouble when they need to represent data with a flexible structure. One way around this is to get rid of static type checking in the first place, which is what dynamically typed languages do, but that removes all the guarantees static typing gives you. The other alternative is to enhance the type system so you can represent what you want. This is what Haskell does.
  2. A type system is only as helpful as the guarantees it can give you. Most type systems can’t give you guarantees much more complex than ‘this number is integral’. Haskell’s can enforce much more interesting properties, which is helpful for preventing mistakes in real-world programs.
I agree with all of the above. If you have bad dreams about OO, that is probably because OO is synonymous with Java and C++ for you. And if you have bad memories of type systems, that is probably because of having to deal with C/C++/Java/C#/etc.-style type systems, which do not do justice to what type systems can truly do for you. In Haskell, e.g., the type system will be your friend rather than your enemy if you only let it help you; as they say, once Haskell code actually compiles, there is a good chance that it is correct.
Yaaludinuya siima d'at yiseka ha wohadetafa gaare.
Ennadinut'a gaare d'ate ha eetatadi siiman.
T'awraa t'awraa t'awraa t'awraa t'awraa t'awraa t'awraa.
jcb
Posts: 67
Joined: Thu Jul 07, 2022 4:36 pm

Re: Are computer languages meaningfully... in english?

Post by jcb »

Not necessarily. Java and C++ have a rather terrible and distorted implementation of OOP, but the original idea of OOP (implemented in languages such as Smalltalk and Ruby) was rather good. It also introduced several ideas, such as encapsulation and late-binding, which continue to be extremely useful to this day.
I know; that's why I said "Java, C++, etc". I haven't used Smalltalk or Ruby much, though.
Languages with poor type systems (e.g. Java) have trouble when they need to represent data with a flexible structure. One way around this is to get rid of static type checking in the first place, which is what dynamically typed languages do, but that removes all the guarantees static typing gives you. The other alternative is to enhance the type system so you can represent what you want. This is what Haskell does.
A type system is only as helpful as the guarantees it can give you. Most type systems can’t give you guarantees much more complex than ‘this number is integral’. Haskell’s can enforce much more interesting properties, which is helpful for preventing mistakes in real-world programs.
What I had in mind by "deeper features" was the various pragmas that Haskell has. I understand the usefulness of ADTs very well.

What I would like to see is a system that lets one add units/measurements to types. So,

Code:

3 feet + 5 meters
gives a compiler error (or auto-converts units appropriately), but still lets me use any ft or m value with any function that takes a numeric parameter, relieving me of having to duplicate function definitions for different types ((+) for both ft and m). Of course, the types would combine appropriately under multiplication and division: a function for velocity would have a return type of m/s, acceleration m/s^2, and the compiler would raise an error if the value returned didn't match the signature.
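Something in this direction can already be sketched in Haskell with phantom types. All the names below (Quantity, Metre, Foot, and so on) are made up for illustration, and this toy version handles only unit tagging and explicit conversion, not the automatic unit algebra for m/s and m/s^2:

```haskell
-- A toy sketch of unit-tagged quantities via phantom types.
newtype Quantity unit = Quantity Double
  deriving (Show, Eq)

data Metre  -- phantom tags: they exist only at the type level
data Foot

metres :: Double -> Quantity Metre
metres = Quantity

feet :: Double -> Quantity Foot
feet = Quantity

-- Addition only type-checks when both operands carry the same unit,
-- so "3 feet + 5 meters" is a compile-time error, not a runtime one.
addQ :: Quantity u -> Quantity u -> Quantity u
addQ (Quantity a) (Quantity b) = Quantity (a + b)

-- Mixing units requires an explicit, visible conversion.
feetToMetres :: Quantity Foot -> Quantity Metre
feetToMetres (Quantity f) = Quantity (f * 0.3048)

main :: IO ()
main = do
  print (addQ (metres 5) (feetToMetres (feet 3)))
  -- addQ (metres 5) (feet 3)  -- rejected: Metre does not match Foot
```

Getting units to combine under multiplication and division needs type-level arithmetic on the exponents, which is roughly what the Haskell `dimensional` library and F#'s built-in units of measure provide.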
bradrn
Posts: 5743
Joined: Fri Oct 19, 2018 1:25 am

Re: Are computer languages meaningfully... in english?

Post by bradrn »

jcb wrote: Tue Mar 21, 2023 12:12 am What I would like to see is a system that lets one add units/measurements to types. So,

Code:

3 feet + 5 meters
gives a compiler error (or auto-converts units appropriately), but still lets me use any ft or m value with any function that takes a numeric parameter, relieving me of having to duplicate function definitions for different types ((+) for both ft and m). Of course, the types would combine appropriately under multiplication and division: a function for velocity would have a return type of m/s, acceleration m/s^2, and the compiler would raise an error if the value returned didn't match the signature.
I’m working on this! There’s a couple of languages with some support for units (Frink, F#), but nothing which is both good and statically typed. I haven’t quite figured it out yet, but there’s enough literature on how such a language would work that I’m sure I’ll manage it eventually.
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
jcb
Posts: 67
Joined: Thu Jul 07, 2022 4:36 pm

Re: Are computer languages meaningfully... in english?

Post by jcb »

I’m working on this! There’s a couple of languages with some support for units (Frink, F#), but nothing which is both good and statically typed. I haven’t quite figured it out yet, but there’s enough literature on how such a language would work that I’m sure I’ll manage it eventually.
Good ! The 1999 Mars Climate Orbiter crash inspired this, and again, the fact that this kind of problem still exists over two decades later should make the whole field of computer science ashamed.
Post Reply