Are computer languages meaningfully... in english?
-
- Posts: 1421
- Joined: Tue Dec 04, 2018 5:16 pm
Re: Are computer languages meaningfully... in english?
Forth's implementation of reusability detracts from readability... whatever. This is a synthetic distillation of experience. It doesn't entirely come from an analysis of concepts.
---
Don't get me wrong, I think "safety" concerns (AKA proving theorems) are often important in fields like cybersecurity. Besides, I love Haskell since Curtis Yarvin hates it. If anything, I don't think Haskell goes far enough. My fatal passion is not Haskell but Agda, the true analogue of research variants of Java.
Nevertheless, there are issues with normalizing Haskell's methods:
1. If the first lesson in programming is Lambda Calculus and basic I/O is the fifteenth, that seems unreasonably elitist to me. https://youtu.be/ADqLBc1vFwI
2. Excessive rigidity negatively affects usability and modular development: https://youtube.com/playlist?list=PLLce ... Yl7wnth-bT
3. Code optimization is a mystery even to veterans: https://youtu.be/yRVjR9XcuPU
4. If your theory of computation is drawn from first principles rather than being based on physical models, then you won't be able to update it with, e.g., the development of Quantum Mechanics.
...
Re: Are computer languages meaningfully... in english?
rotting bones wrote: ↑Mon Feb 20, 2023 10:15 pm 1. If the first lesson in programming is Lambda Calculus and basic I/O is the fifteenth, that seems unreasonably elitist to me. https://youtu.be/ADqLBc1vFwI
I’ve had this discussion many times on the FP Discord server. The consensus we’ve come to is that this is indeed a bad way of teaching, and introducing IO early would be better.
2. Excessive rigidity negatively affects usability and modular development: https://youtube.com/playlist?list=PLLce ... Yl7wnth-bT
Good Haskell code shouldn’t be rigid. I find that the design of the type system gives my Haskell code significantly more flexibility than, say, Java.
3. Code optimization is a mystery even to veterans: https://youtu.be/yRVjR9XcuPU
Fair. (Though ‘mystery’ is a bit strong. Experts can and do reason about performance in Haskell.)
4. If your theory of computation is drawn from first principles rather than being based on physical models, then you won't be able to update it with, e.g., the development of Quantum Mechanics.
I have no idea how to even respond to this. How is QM at all relevant? How is Haskell different to any other current programming language in this regard?
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices
(Why does phpBB not let me add >5 links here?)
-
- Posts: 1421
- Joined: Tue Dec 04, 2018 5:16 pm
Re: Are computer languages meaningfully... in english?
So you'd just not tell them what monads are until later?
Well, I'm entirely self-taught in Haskell. Where can I learn non-fussy Haskell coding?
Quantum computing? What's the Lambda Calculus equivalent of the Quantum Turing Machine?
---
By the way, I have tried to clearly outline what's useful about classes. If anyone did the same for Haskell, I must have missed it.
- Depending on how it's written, Haskell's syntax can be elegant, maybe?
- I've heard people say that Haskell programs are short. However, I'm not sure what Haskell can accomplish that can't be reproduced in modern C++ with consts, enums, lambdas and tail recursion. The tower of abstractions, maybe? But I've heard people who have tried to implement Machine Learning libraries say that Haskell's abstractions are a hindrance.
Re: Are computer languages meaningfully... in english?
rotting bones wrote: ↑Tue Feb 21, 2023 12:03 am So you'd just not tell them what monads are until later?
Yes, pretty much. At least for teaching purposes, do-notation is entirely usable without requiring knowledge of monads.
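For instance, a minimal sketch of what such a first lesson could look like, with do-notation read simply as 'do this, then that' and monads never mentioned (the example itself is invented for illustration):
Code: Select all
-- Reads top to bottom, like any imperative program.
main :: IO ()
main = do
  putStrLn "What's your name?"
  name <- getLine
  putStrLn ("Hello, " ++ name ++ "!")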
Er… not sure; it all depends on how much you know so far. But I like What I Wish I Knew When Learning Haskell as an overall guide to what’s possible in the ecosystem.
True, quantum computing would require entirely different paradigms. But that being said, I have some knowledge here, and am fairly confident in saying that quantum computing is not in any way a replacement for classical computing. I like to compare it to GPUs in how it might eventually be applied — and note that GPUs don’t necessarily use ‘standard’ paradigms either.
- Depending on how it's written, Haskell's syntax can be elegant, maybe?
Yes, very much so. Haskell’s syntax is generally simple — minimalistic, even (at least visually). Internally, it all gets desugared to only ~6 different syntactic constructions.
- I've heard people say that Haskell programs are short. However, I'm not sure what Haskell can accomplish that can't be reproduced in modern C++ with consts, enums, lambdas and tail recursion.
Strictly speaking, the answer would be ‘very little’… but that ignores the fact that what is awkward in C++ is easy in Haskell, and vice versa. Doing advanced template stuff in C++ can be excruciatingly painful.
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices
(Why does phpBB not let me add >5 links here?)
Re: Are computer languages meaningfully... in english?
You may be interested to know that at my work I successfully introduced functional programming techniques, including a simple implementation of monads, into the Python project I'm working on, only to see the whole thing derailed by rogue I/O exceptions which no amount of correctness proofs could have explained away.
Self-referential signatures are for people too boring to come up with more interesting alternatives.
-
- Posts: 1421
- Joined: Tue Dec 04, 2018 5:16 pm
Re: Are computer languages meaningfully... in english?
alice wrote: ↑Tue Feb 21, 2023 2:50 am You may be interested to know that at my work I successfully introduced functional programming techniques, including a simple implementation of monads, into the Python project I'm working on, only to see the whole thing derailed by rogue I/O exceptions which no amount of correctness proofs could have explained away.
Right? Meanwhile, the famous "pure" object-oriented language, Smalltalk, is so eminently usable, it was used to teach coding to babies.
I did a little Googling on uncommon languages with Machine Learning abilities. Turns out there's a book called Agile Artificial Intelligence in Pharo. Pharo Smalltalk was forked from Squeak. You can get it by installing the launcher first and downloading the system through the launcher by clicking New.
Smalltalk is so adorable. Just interacting with it gives me the warm and cuddly feelings that keep me motivated. I wish my first language had been Smalltalk instead of the, AFAIK, much more limited Logo.
By contrast, in strict functional programming, your code doesn't run until the type theoretic proof goes through. This is what I meant by "rigidity". If Haskell becomes universal, maybe my code won't work, but hopefully Curtis Yarvin's code won't work either.
Even this rigidity has benefits in certain fields. E.g. in cybersecurity, you want your program to fail if it doesn't meet predefined security parameters no matter how convoluted it needs to be as a result. Agda uses an even more rigid and extensible version of this framework to prove actual mathematical theorems. But rigidity is not universal across functional programming. E.g. Q isn't like that. It's very practical for coding.
Last edited by rotting bones on Sun Feb 26, 2023 10:01 am, edited 1 time in total.
-
- Posts: 1421
- Joined: Tue Dec 04, 2018 5:16 pm
Re: Are computer languages meaningfully... in english?
If you need to introduce programming with an underexplained do notation, doesn't that prove imperative programming is more intuitive than functional programming? There's a reason recursion is taught after loops.
bradrn wrote: ↑Tue Feb 21, 2023 12:22 am Er… not sure; it all depends on how much you know so far. But I like What I Wish I Knew When Learning Haskell as an overall guide to what’s possible in the ecosystem.
Thanks, I'll check it out.
I don't remember the details of how I learned Haskell. I know it involved Haskell Programming from First Principles and loads of online advice.
bradrn wrote: ↑Tue Feb 21, 2023 12:22 am True, quantum computing would require entirely different paradigms. But that being said, I have some knowledge here, and am fairly confident in saying that quantum computing is not in any way a replacement for classical computing. I like to compare it to GPUs in how it might eventually be applied — and note that GPUs don’t necessarily use ‘standard’ paradigms either.
There are physical models of distributed computing, and so on.
If Lambda Calculus is your only theory of computation, how will you reason about the speedups from Quantum Computing? I consider the fact that we can't explicitly represent certain algorithms in it to be a strike against Lambda Calculus. Also, this objection extends to any physics we solve in the future, e.g. the models of fluid computation Terence Tao was working on.
-
- Posts: 1421
- Joined: Tue Dec 04, 2018 5:16 pm
Re: Are computer languages meaningfully... in english?
rotting bones wrote: ↑Sun Feb 26, 2023 8:33 am In cybersecurity, you want your program to fail if it doesn't meet predefined security parameters no matter how convoluted it needs to be as a result.
Or at the very least, you want to prove that your code fulfills these properties. It's much easier to prove this if your code rigidly tracks all mutations.
Re: Are computer languages meaningfully... in english?
A bit off-topic, but I decided to learn Rust by reimplementing my SCA in it, and found the whole experience so frustrating that I've given up on it for the time being.
Self-referential signatures are for people too boring to come up with more interesting alternatives.
-
- Posts: 1421
- Joined: Tue Dec 04, 2018 5:16 pm
Re: Are computer languages meaningfully... in english?
By the way, if anyone thinks I'm exaggerating the cuddliness of Smalltalk, look at this and tell me it's not adorable:
Re: Are computer languages meaningfully... in english?
rotting bones wrote: ↑Sun Feb 26, 2023 8:33 am
alice wrote: ↑Tue Feb 21, 2023 2:50 am You may be interested to know that at my work I successfully introduced functional programming techniques, including a simple implementation of monads, into the Python project I'm working on, only to see the whole thing derailed by rogue I/O exceptions which no amount of correctness proofs could have explained away.
Right? Meanwhile, the famous "pure" object-oriented language, Smalltalk, is so eminently usable, it was used to teach coding to babies.
Smalltalk is a very nice language indeed! (Mostly because it implements proper OOP, not the half-baked version popularised by C++ and Java.) But I strongly suspect that the ‘teaching to babies’ aspect of things has more to do with its very nice live environment than anything else. Compare some other programming languages widely used as a first language: BASIC, Scratch and Logo have very little in common, except that they run in a live environment.
By contrast, in strict functional programming, your code doesn't run until the type theoretic proof goes through.
Yeah, if by ‘type theoretic’ you include stuff like ‘you can’t add a number to a string’!
rotting bones wrote: ↑Sun Feb 26, 2023 8:55 am If you need to introduce programming with an underexplained do notation, doesn't that prove imperative programming is more intuitive than functional programming? There's a reason recursion is taught after loops.
No, all it proves is that most people are more familiar with imperative programming than with functional programming. If I were teaching a person learning Haskell as their first language, I might well start off with (>>=) straight away. (In fact I know someone online who recently learnt Haskell as their first language, and they did indeed prefer (>>=) for a long time.)
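For illustration, the same sort of beginner program written directly with (>>=) instead of do-notation; a sketch, not anyone's actual curriculum:
Code: Select all
-- (>>) sequences two actions; (>>=) feeds one action's result to the next.
main :: IO ()
main = putStrLn "What's your name?"
    >> getLine
    >>= \name -> putStrLn ("Hello, " ++ name ++ "!")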
Besides, I’m not denying that imperative programming is better in some cases. If you want to sequence operations, it’s a convenient way to do things. I just think that functional programming is more generally usable.
I don't remember the details of how I learned Haskell. I know it involved Haskell Programming from First Principles and loads of online advice.
I’m not a fan of HPFPP. Starting with lambda calculus was entirely the wrong approach.
If Lambda Calculus is your only theory of computation, how will you reason about the speedups from Quantum Computing? I consider the fact that we can't explicitly represent certain algorithms in it to be a strike against Lambda Calculus.
I feel there are a couple places of confusion here. Firstly, no-one has the lambda calculus as their only theory of computation (unless they’ve not learnt about anything else, I guess). There’s also Turing machines, the pi calculus, SKI combinators, the iota and jot calculi… a whole bunch.
But secondly: you’re confusing the mathematical formalism of algorithms with the physical substrate used to implement them. Every quantum algorithm can be implemented on a classical computer just as easily, and every classical algorithm can be run on a quantum computer too. Sure, some algorithms provide an asymptotic speedup on a quantum computer, but there’s nothing particularly special about that — the abstract algorithm gives an asymptotic speedup on some algorithms too, and that’s just classical lambda calculus! Meanwhile, a lot of algorithms are much easier to implement on a classical computer than on a quantum computer.
Thus, if you want to reason about speedups, then you do need to develop a machine-specific representation, no matter how much that feels like a ‘strike against’ that representation. For classical computers, that involves thinking about assembly language, memory access times, pipelining, branch prediction, etc. For quantum computers, that involves writing algorithms with matrix algebra and thinking about error-correction codes. For the abstract algorithm, it involves interaction combinators. And so on.
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices
(Why does phpBB not let me add >5 links here?)
-
- Posts: 1421
- Joined: Tue Dec 04, 2018 5:16 pm
Re: Are computer languages meaningfully... in english?
Haskell is also a very nice language. I just think that, like all languages, it has issues:
- For neural networks, we want to check, modify and drop intermediate nodes. People [weasel words] have said this is made unnecessarily difficult when mutation is disabled.
- While Haskell's syntax is minimalistic, it's so determined to distinguish operations that are even slightly different that it creates an explosion in the number of operators. For example, IIRC Haskell strongly distinguishes between multiplying by a scalar and a dot product. The maker of an AI library said this kind of thing introduces unnecessary complexity.
bradrn wrote: ↑Sun Feb 26, 2023 5:19 pm But I strongly suspect that the ‘teaching to babies’ aspect of things has more to do with its very nice live environment than anything else. Compare some other programming languages widely used as a first language: BASIC, Scratch and Logo have very little in common, except that they run in a live environment.
True, but I don't think it's just the environment. A lot of languages have BASIC-like REPLs. The concept of telling entities to do things lends itself to anthropomorphism, something humans are naturally good at. Logo leverages the same concept with the turtle.
Why not? A type is a set with an identity function defined over its elements. You're mistaken if you're implying that Haskell's type system is limited to practical programming types. It was explicitly designed to be compatible with advanced mathematical properties in category theory.
You can write Haskell in a top-down style where you fill in low level details involving mundane types afterwards. It's a bit like interfaces in Java. If you're willing to put in the effort, Haskell allows you to structure your code like a proof of correctness.
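For example, here's a minimal sketch of that top-down style (all names invented for illustration): the signatures and overall structure come first and type-check immediately, while the mundane details stay as placeholders to fill in later.
Code: Select all
-- The high-level structure type-checks before the details exist.
report :: FilePath -> IO ()
report path = do
  raw <- readFile path
  putStrLn (summarise (parse raw))

parse :: String -> [(String, Int)]
parse = undefined      -- low-level detail, filled in later

summarise :: [(String, Int)] -> String
summarise = undefined  -- likewise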
I enjoyed working through it TBH. I learned Lambda Calculus by reading Wikipedia. The untyped Lambda Calculus is very simple:
1. A function is written as: (lambda <parameters>.<expression>). The dot separates the parameter list from the expression, which is the return value. To evaluate this, substitute the supplied values of the parameters in the expression.
2. Apply a function to values by writing it in front of them.
Take (lambda x.x) 2. The parameter is x, the one before the dot. Since the function is applied to 2, the substitution "x = 2" is applied to the expression. In this case, the "x" after the dot. The result is 2.
This means "(lambda x.x)" is the identity function. It takes the applied value and returns it.
3. An infinite loop, suggesting that this is a model of computation:
(lambda x.xx)(lambda x.xx)
The second "(lambda x.xx)" is the value that the first one is being applied to. In the first's expression "xx", substitute "x = (lambda x.xx)". This yields the expression:
= (lambda x.xx)(lambda x.xx)
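These rules are small enough to turn into a program. Here's a sketch of them as a Haskell interpreter (my own illustration; the substitution is naive, i.e. it doesn't rename bound variables to avoid capture, so it's only safe for examples like the ones above):
Code: Select all
data Term = Var String | Lam String Term | App Term Term
  deriving (Eq, Show)

-- subst x s t: replace free occurrences of x in t with s (naive).
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- One step of normal-order reduction, if a redex exists.
step :: Term -> Maybe Term
step (App (Lam x b) a) = Just (subst x a b)   -- beta-reduction
step (App f a) = case step f of
  Just f' -> Just (App f' a)
  Nothing -> App f <$> step a
step (Lam x b) = Lam x <$> step b
step (Var _)   = Nothing

-- The identity example: step (App (Lam "x" (Var "x")) (Var "2")) == Just (Var "2").
-- The looping example: step omega == Just omega, forever.
omega :: Term
omega = App w w where w = Lam "x" (App (Var "x") (Var "x"))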
bradrn wrote: ↑Sun Feb 26, 2023 5:19 pm But secondly: you’re confusing the mathematical formalism of algorithms with the physical substrate used to implement them. Every quantum algorithm can be implemented on a classical computer just as easily, and every classical algorithm can be run on a quantum computer too.
...
Thus, if you want to reason about speedups, then you do need to develop a machine-specific representation, no matter how much that feels like a ‘strike against’ that representation. For classical computers, that involves thinking about assembly language, memory access times, pipelining, branch prediction, etc. For quantum computers, that involves writing algorithms with matrix algebra and thinking about error-correction codes. For the abstract algorithm, it involves interaction combinators. And so on.
As a holder of a Masters degree and a PhD candidate in Computer Science, I feel compelled to point out that this is a misunderstanding of asymptotic complexity theory. All considerations about space and time have been collapsed into a single complexity tree hierarchy. For example, we know that logspace is a subset of polynomial time: https://en.wikipedia.org/wiki/L_(complexity) Using a quantum processor introduces novel structure into this tree that, to the best of our knowledge, cannot be obtained in any other way: https://en.wikipedia.org/wiki/BQP https://en.wikipedia.org/wiki/Quantum_complexity_theory
bradrn wrote: ↑Sun Feb 26, 2023 5:19 pm Sure, some algorithms provide an asymptotic speedup on a quantum computer, but there’s nothing particularly special about that — the abstract algorithm gives an asymptotic speedup on some algorithms too, and that’s just classical lambda calculus! Meanwhile, a lot of algorithms are much easier to implement on a classical computer than on a quantum computer.
But a sequence of operations is what the substrate does. No chip works on Lambda Calculus. Since physics is dynamic, it's unclear how to build a chip which evaluates such operations on a fundamental level.
Why make it difficult to reason about the underlying architecture by encasing it in unnecessary layers of abstraction? There are good answers to this question, and those answers indicate the use cases that Haskell is good at.
Re: Are computer languages meaningfully... in english?
rotting bones wrote: ↑Sun Mar 05, 2023 10:17 pm - For neural networks, we want to check, modify and drop intermediate nodes. People [weasel words] have said this is made unnecessarily difficult when mutation is disabled.
This is a common misconception. Haskell does not disable mutation; it just forces you to be more careful with how you use it. I’ve implemented imperative algorithms in Haskell before, and it’s not hard.
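For instance, here's a minimal sketch (my own example, not from any particular algorithm) of genuine mutation in Haskell: an imperative accumulator loop in the ST monad. The function's type stays pure, because runST guarantees the mutable state cannot leak out:
Code: Select all
import Control.Monad (forM_)
import Control.Monad.ST (runST)
import Data.STRef (newSTRef, modifySTRef', readSTRef)

-- Imperative in style, pure at the type level: sumST [1,2,3] == 6.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0                        -- allocate a mutable cell
  forM_ xs $ \x -> modifySTRef' acc (+ x)  -- mutate it in a loop
  readSTRef acc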
- While Haskell's syntax is minimalistic, it's so determined to distinguish operations that are even slightly different that it creates an explosion in the number of operators. For example, IIRC Haskell strongly distinguishes between multiplying by a scalar and a dot product. The maker of an AI library said this kind of thing introduces unnecessary complexity.
You don’t have to distinguish scalar and dot-product multiplication if you don’t want to! Personally I can’t see how unifying them would be particularly useful, but here’s a piece of code which unifies them (and integer multiplication for that matter):
Code: Select all
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

class HeteroMult a b c | a b -> c where
    (.*) :: a -> b -> c

instance HeteroMult Int Int Int where        -- integer multiplication
    n .* m = n * m

instance HeteroMult Int [Int] [Int] where    -- scalar times vector
    n .* ms = fmap (n*) ms

instance HeteroMult [Int] [Int] Int where    -- dot product
    ns .* ms = sum (zipWith (*) ns ms)

-- e.g. (2 :: Int) .* [3, 4, 5 :: Int]  ==  [6, 8, 10]
rotting bones wrote:
bradrn wrote: ↑Sun Feb 26, 2023 5:19 pm But I strongly suspect that the ‘teaching to babies’ aspect of things has more to do with its very nice live environment than anything else. Compare some other programming languages widely used as a first language: BASIC, Scratch and Logo have very little in common, except that they run in a live environment.
True, but I don't think it's just the environment. A lot of languages have BASIC-like REPLs. The concept of telling entities to do things lends itself to anthropomorphism, something humans are naturally good at. Logo leverages the same concept with the turtle.
This certainly could be the case. But Scratch isn’t so anthropomorphised, and BASIC certainly isn’t — and those two have probably been more popular than Smalltalk and Logo combined.
Why not? A type is a set with an identity function defined over its elements. You're mistaken if you're implying that Haskell's type system is limited to practical programming types. It was explicitly designed to be compatible with advanced mathematical properties in category theory.
I don’t understand what point you’re trying to make here.
You can write Haskell in a top-down style where you fill in low level details involving mundane types afterwards. It's a bit like interfaces in Java. If you're willing to put in the effort, Haskell allows you to structure your code like a proof of correctness.
Of course you can write it that way… but you don’t have to. And in practice, from what I’ve seen it seems like the people who do that tend to prefer Agda.
I’m already very familiar with the untyped lambda calculus.
rotting bones wrote:
bradrn wrote: ↑Sun Feb 26, 2023 5:19 pm But secondly: you’re confusing the mathematical formalism of algorithms with the physical substrate used to implement them. Every quantum algorithm can be implemented on a classical computer just as easily, and every classical algorithm can be run on a quantum computer too.
...
Thus, if you want to reason about speedups, then you do need to develop a machine-specific representation, no matter how much that feels like a ‘strike against’ that representation. For classical computers, that involves thinking about assembly language, memory access times, pipelining, branch prediction, etc. For quantum computers, that involves writing algorithms with matrix algebra and thinking about error-correction codes. For the abstract algorithm, it involves interaction combinators. And so on.
As a holder of a Masters degree and a PhD candidate in Computer Science, I feel compelled to point out that this is a misunderstanding of asymptotic complexity theory. All considerations about space and time have been collapsed into a single complexity tree hierarchy. For example, we know that logspace is a subset of polynomial time: https://en.wikipedia.org/wiki/L_(complexity) Using a quantum processor introduces novel structure into this tree that, to the best of our knowledge, cannot be obtained in any other way: https://en.wikipedia.org/wiki/BQP https://en.wikipedia.org/wiki/Quantum_complexity_theory
As a Master’s student in a department full of quantum computing theorists, I do understand asymptotic complexity theory. And you’re right that I didn’t really address that side of things at all.
Also, on reflection, I misunderstood your original comment that ’we can't explicitly represent certain [quantum] algorithms in [lambda calculus]’. I guess I just get annoyed at seeing people think quantum computers let you do magic, and wanted to drive the point home that you can rewrite any quantum algorithm in the lambda calculus — just with a different complexity class. But I think you were considering that to be enough to make it a ‘different algorithm’, which was different to how I interpreted your comment. Apologies for that!
rotting bones wrote:
bradrn wrote: ↑Sun Feb 26, 2023 5:19 pm Sure, some algorithms provide an asymptotic speedup on a quantum computer, but there’s nothing particularly special about that — the abstract algorithm gives an asymptotic speedup on some algorithms too, and that’s just classical lambda calculus! Meanwhile, a lot of algorithms are much easier to implement on a classical computer than on a quantum computer.
But a sequence of operations is what the substrate does. No chip works on Lambda Calculus. Since physics is dynamic, it's unclear how to build a chip which evaluates such operations on a fundamental level.
True, but again, I don’t quite understand what point you’re trying to make here.
Why make it difficult to reason about the underlying architecture by encasing it in unnecessary layers of abstraction? There are good answers to this question, and those answers indicate the use cases that Haskell is good at.
I disagree with the word ‘unnecessary’ here, but otherwise you’re quite right.
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices
(Why does phpBB not let me add >5 links here?)
Re: Are computer languages meaningfully... in english?
Don't get me wrong, I think "safety" concerns (AKA proving theorems) are often important in fields like cybersecurity.
Indeed. I can't help but think of the Heartbleed bug in 2014, which was caused by a simple buffer over-read error. Imo, the fact that buffer over-read bugs were still happening in 2014 (and today, tbh) should make the whole field of computer science ashamed. They could be easily eliminated by better language design and using only safe functions. Even more depressing is that the "experts'" "solution" to prevent another Heartbleed-type bug from occurring was to just get more code auditors. It's only a matter of time until it happens again.
Imo, the OOP craze (Java, C++, etc) has stunted programming language evolution for a whole generation.
Besides, I love Haskell since Curtis Yarvin hates it.
Why does he hate it? ... There are things I dislike about Haskell, but Elm fixed most of them.
If anything, I don't think Haskell goes far enough.
Such as? I find most of Haskell's deeper features useless and needlessly complex (Elm wisely never implemented them for this reason), but then again, I'm not an academic doing academic things with Haskell, which is why most of those features exist in the first place.
Re: Are computer languages meaningfully... in english?
Not necessarily. Java and C++ have a rather terrible and distorted implementation of OOP, but the original idea of OOP (implemented in languages such as Smalltalk and Ruby) was rather good. It also introduced several ideas, such as encapsulation and late-binding, which continue to be extremely useful to this day.
Such as? I find most of Haskell's deeper features useless and needlessly complex (Elm wisely never implemented them for this reason), but then again, I'm not an academic doing academic things with Haskell, which is why most of those features exist in the first place.
As someone who uses Haskell for non-academic stuff (and is somewhat uninterested in the academic stuff anyway): my perspective on this is that those are actually surprisingly useful! You don’t need them all the time, but when you do they’re invaluable. Fundamentally, those features are there for two related reasons:
- Languages with poor type systems (e.g. Java) have trouble when they need to represent data with a flexible structure. One way around this is to get rid of static type checking in the first place, which is what dynamically typed languages do, but that removes all the guarantees static typing gives you. The other alternative is to enhance the type system so you can represent what you want. This is what Haskell does.
- A type system is only as helpful as the guarantees it can give you. Most type systems can’t give you guarantees much more complex than ‘this number is integral’. Haskell’s can enforce much more interesting properties, which is helpful for preventing mistakes in real-world programs.
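As a small illustration of that second point, here's a sketch of a guarantee beyond 'this number is integral': a list which the type system knows is non-empty, so taking its first element can never fail at runtime. (This is an illustrative reimplementation; base provides a full version as Data.List.NonEmpty.)
Code: Select all
-- A value of NonEmpty a always contains at least one element.
data NonEmpty a = a :| [a]

-- No runtime check, no possible crash: the empty case cannot be constructed.
safeHead :: NonEmpty a -> a
safeHead (x :| _) = x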
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices
(Why does phpBB not let me add >5 links here?)
Re: Are computer languages meaningfully... in english?
bradrn wrote: ↑Mon Mar 20, 2023 8:14 pm
Not necessarily. Java and C++ have a rather terrible and distorted implementation of OOP, but the original idea of OOP (implemented in languages such as Smalltalk and Ruby) was rather good. It also introduced several ideas, such as encapsulation and late-binding, which continue to be extremely useful to this day.
Such as? I find most of Haskell's deeper features useless and needlessly complex (Elm wisely never implemented them for this reason), but then again, I'm not an academic doing academic things with Haskell, which is why most of those features exist in the first place.
As someone who uses Haskell for non-academic stuff (and is somewhat uninterested in the academic stuff anyway): my perspective on this is that those are actually surprisingly useful! You don’t need them all the time, but when you do they’re invaluable. Fundamentally, those features are there for two related reasons:
- Languages with poor type systems (e.g. Java) have trouble when they need to represent data with a flexible structure. One way around this is to get rid of static type checking in the first place, which is what dynamically typed languages do, but that removes all the guarantees static typing gives you. The other alternative is to enhance the type system so you can represent what you want. This is what Haskell does.
- A type system is only as helpful as the guarantees it can give you. Most type systems can’t give you guarantees much more complex than ‘this number is integral’. Haskell’s can enforce much more interesting properties, which is helpful for preventing mistakes in real-world programs.
I agree with all of the above. If you have bad dreams about OO, that is probably because of OO being synonymous with Java and C++ to you. And if you have bad memories of typing systems, that is probably because of having to deal with C/C++/Java/C#/etc.-style typing systems, which do not do justice to what typing systems can truly do for you. In Haskell, e.g., the typing system will be your friend rather than your enemy if you only let it help you; as they say, once code written in Haskell actually compiles, there is a good chance that it is correct.
Yaaludinuya siima d'at yiseka wohadetafa gaare.
Ennadinut'a gaare d'ate eetatadi siiman.
T'awraa t'awraa t'awraa t'awraa t'awraa t'awraa t'awraa.
Re: Are computer languages meaningfully... in english?
Not necessarily. Java and C++ have a rather terrible and distorted implementation of OOP, but the original idea of OOP (implemented in languages such as Smalltalk and Ruby) was rather good. It also introduced several ideas, such as encapsulation and late-binding, which continue to be extremely useful to this day.
I know; that's why I said "Java, C++, etc". I've not used Smalltalk or Ruby much though.
Languages with poor type systems (e.g. Java) have trouble when they need to represent data with a flexible structure. One way around this is to get rid of static type checking in the first place, which is what dynamically typed languages do, but that removes all the guarantees static typing gives you. The other alternative is to enhance the type system so you can represent what you want. This is what Haskell does.
What I had in mind by "deeper features" was the various pragmas that Haskell has. I understand the usefulness of ADTs very well.
A type system is only as helpful as the guarantees it can give you. Most type systems can’t give you guarantees much more complex than ‘this number is integral’. Haskell’s can enforce much more interesting properties, which is helpful for preventing mistakes in real-world programs.
What I would like to see is a system that lets one add units/measurements to types. So,
Code: Select all
3 feet + 5 meters
gives a compiler error (or auto converts units appropriately), but still lets me use any ft and m type with any function with an integer parameter, thus relieving me of having to duplicate function definitions for different types. ( (+) for both ft and m). Of course, the types would combine appropriately when multiplied and divided. Thus, a function for velocity would have a return type of m/s, and acceleration m/s^2, and would raise a compiler error if the value returned didn't match the signature.
Re: Are computer languages meaningfully... in english?
jcb wrote: ↑Tue Mar 21, 2023 12:12 am What I would like to see is a system that lets one add units/measurements to types. So,
Code: Select all
3 feet + 5 meters
gives a compiler error (or auto converts units appropriately), but still lets me use any ft and m type with any function with an integer parameter, thus relieving me of having to duplicate function definitions for different types. ( (+) for both ft and m). Of course, the types would combine appropriately when multiplied and divided. Thus, a function for velocity would have a return type of m/s, and acceleration m/s^2, and would raise a compiler error if the value returned didn't match the signature.
I’m working on this! There’s a couple of languages with some support for units (Frink, F#), but nothing which is both good and statically typed. I haven’t quite figured it out yet, but there’s enough literature on how such a language would work that I’m sure I’ll manage it eventually.
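In the meantime, a minimal sketch of the compile-error half of the wish in today's Haskell, using a phantom type parameter for the unit; all the names here (Quantity, feet, meters) are invented for illustration, and real unit arithmetic (m/s^2 and so on) takes much more type-level machinery than this:
Code: Select all
data Feet
data Meters

-- The unit parameter exists only at the type level; at runtime a
-- Quantity is just a Double.
newtype Quantity unit = Quantity Double

-- Addition only works between quantities of the same unit.
add :: Quantity u -> Quantity u -> Quantity u
add (Quantity x) (Quantity y) = Quantity (x + y)

feet :: Double -> Quantity Feet
feet = Quantity

meters :: Double -> Quantity Meters
meters = Quantity

-- add (feet 3) (feet 5)    -- fine
-- add (feet 3) (meters 5)  -- rejected by the type checker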
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices
(Why does phpBB not let me add >5 links here?)
Re: Are computer languages meaningfully... in english?
I’m working on this! There’s a couple of languages with some support for units (Frink, F#), but nothing which is both good and statically typed. I haven’t quite figured it out yet, but there’s enough literature on how such a language would work that I’m sure I’ll manage it eventually.
Good! The Mars Climate Orbiter crash in 1999 inspired this, and again, the fact that this kind of problem still exists nearly 25 years later should make the whole field of computer science ashamed.