TomHChappell wrote: ↑Sat Jan 25, 2025 11:56 am If I had the budget to do so, I’d get one example of each type of tool mentioned in this thread! But, I don’t. It’s nice to at least read about them!
I’m almost certain that most (if not all) of them are freely available.
Beyond the SCA: strategies for improving conlanger productivity
Re: Beyond the SCA: strategies for improving conlanger productivity
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices
(Why does phpBB not let me add >5 links here?)
Re: Beyond the SCA: strategies for improving conlanger productivity
Detailed tracking of each word from the proto-lexicon to the daughters, allowing the user to apply one-off sound changes and analogy, and to see the lexical relationships between words at each stage of the language.
That is to say, a tightly integrated SCA/lexicon. Come to think of it, maybe PolyGlot already does most of that, I'm just not hot on the file format or the PDF export.
Yaa unák thual na !
Re: Beyond the SCA: strategies for improving conlanger productivity
People say that a lot, but I’m not entirely sure how useful such a tool would actually be. Each of those tasks (dictionary management, regular sound change, irregular sound change, diachronic word tracking) is involved on their own, and software which combines them all is likely to end up either ferociously complicated or unusably simplified.
That said… I have a thought that this could be done in a more modular way. It would have to be built on a common dictionary format — I think MDF is well-suited to the task. Brassica already supports MDF input and output, and it should be possible to build something on top of that which can track which descendants are derived from which ancestors. With such a tool one could, say, change some words in the ancestor and get a diff showing what words must be changed in the descendant.
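The ‘diff’ idea in that last sentence is easy to prototype. A toy sketch in Python (the real tooling would presumably be Haskell; all the names and data here are invented):

```python
# Toy sketch: given an ancestor lexicon, a descendant lexicon, and a record of
# which descendant word derives from which etymon, report which descendant
# entries are stale after the ancestor is edited.

def stale_entries(ancestor, descendant, derived_from, changed):
    """Return descendant headwords whose etymon was just edited."""
    return sorted(
        word for word, etymon in derived_from.items()
        if etymon in changed and word in descendant
    )

ancestor = {"*kapa": "head", "*tuli": "fire"}
descendant = {"kava": "head", "tuli": "fire"}
derived_from = {"kava": "*kapa", "tuli": "*tuli"}

# Suppose we edit *kapa in the ancestor dictionary:
print(stale_entries(ancestor, descendant, derived_from, {"*kapa"}))  # ['kava']
```

Here `derived_from` is the etymological tracking the dictionary format would have to store; in MDF it could plausibly live in an etymology field.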
The trickiest task out of what you describe is that of one-off sound changes and analogy. I honestly don’t think there’s any better way to do that than manually altering individual words yourself. (Brassica supports sporadic sound changes to some extent, but ultimately expects the user to pick their preferred outcome.) If we had a dictionary-tracking tool such as the one I described, it would be reasonable for it to recognise when an irregular change has occurred, but I wouldn’t expect it to apply those changes itself.
PolyGlot is often mentioned but rarely used. I think the main problems are (a) its unappealing and complex UI, and (b) its limited support for practically anything beyond the basics in dictionary management. That is, in the trade-off of ‘ferociously complicated or unusably simplified’, PolyGlot ends up being both.
EDIT: a third problem with PolyGlot — it’s very ‘opinionated’, to use the programmers’ term. I feel that it has very definite and inflexible expectations about how a dictionary and grammar should be organised, and they’re not always how I’d like to do it. For instance, ‘Phonology & Text’ are grouped in a single section for some reason, but ‘Phrasebook’ is separate even though you might want to include phrases in the dictionary more generally. The ‘Lexicon’ section assumes that each word has only a single pronunciation, a single definition, and nothing more. And so on.
Furthermore, most of its features are things that I just don’t care about. I have no need for a logographic engine, or automated inference of pronunciation, or conjugation tables… I just want to organise my dictionary in a way which reflects the language. Which is precisely what it doesn’t let me do, because it’s so inflexible.
Re: Beyond the SCA: strategies for improving conlanger productivity
It would include no hardcoded tasks: it'd allow you to define just about any task you'd like, as you need it. Still a pipe dream, I know...
/j/ <j>
Ɂaləɂahina asəkipaɂə ileku omkiroro salka.
Loɂ ɂerleku asəɂulŋusikraɂə seləɂahina əɂətlahɂun əiŋɂiɂŋa.
Hərlaɂ. Hərlaɂ. Hərlaɂ. Hərlaɂ. Hərlaɂ. Hərlaɂ. Hərlaɂ.
Re: Beyond the SCA: strategies for improving conlanger productivity
So then, what sort of tasks would you want to define in it?
Re: Beyond the SCA: strategies for improving conlanger productivity
bradrn wrote: ↑Sun Jan 26, 2025 5:31 am That said… I have a thought that this could be done in a more modular way. It would have to be built on a common dictionary format — I think MDF is well-suited to the task. Brassica already supports MDF input and output, and it should be possible to build something on top of that which can track which descendants are derived from which ancestors. With such a tool one could, say, change some words in the ancestor and get a diff showing what words must be changed in the descendant.
The trickiest task out of what you describe is that of one-off sound changes and analogy. I honestly don’t think there’s any better way to do that than manually altering individual words yourself. (Brassica supports sporadic sound changes to some extent, but ultimately expects the user to pick their preferred outcome.) If we had a dictionary-tracking tool such as the one I described, it would be reasonable for it to recognise when an irregular change has occurred, but I wouldn’t expect it to apply those changes itself.
The more I think about this, the more I want it. I’m imagining a tool which — let’s say — can show me a dictionary for the ancestor on the left, and a dictionary for the descendant on the right. Or perhaps it just lists the headwords, with an extra space to edit the selected dictionary entries. Either way, with this display, the interface would be able to match up descendant words with their ancestral etyma (or vice versa). Or conversely, it could show me, say, which words in the ancestor haven’t made it into the descendant for whatever reason.
Of course the whole thing would be connected to an SCA. With this, I could use the tool to actually add new entries to the dictionary, by running their ancestors through the sound changes. Perhaps it could use the SCA to detect irregular correspondences too. Thinking this through, there should probably be some mechanism to select different forms depending on word class — no need for a full paradigm builder, just something which can apply transformation rules before/after the sound changes.
So now I just need to build the thing. (Mind you, the word ‘just’ there is doing a lot of work…)
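The ‘transformation rules before/after the sound changes’ mechanism can be sketched in a few lines. A toy Python sketch, with a stubbed-out SCA and invented rules (a real version would call out to Brassica or similar):

```python
# Toy sketch of word-class-conditioned transforms around a sound-change pass.

def apply_sca(word):
    # Stand-in for a real sound change applier: say, *p > v between a-vowels.
    return word.replace("apa", "ava")

def evolve(word, word_class, pre_rules, post_rules):
    """Run class-specific pre-rules, then the SCA, then class-specific post-rules."""
    for cls, fn in pre_rules:
        if cls == word_class:
            word = fn(word)
    word = apply_sca(word)
    for cls, fn in post_rules:
        if cls == word_class:
            word = fn(word)
    return word

# e.g. verbs go through the changes as bare stems, then get re-suffixed:
pre_rules = [("verb", lambda w: w.removesuffix("-ta"))]
post_rules = [("verb", lambda w: w + "da")]

print(evolve("kapa-ta", "verb", pre_rules, post_rules))  # 'kavada'
print(evolve("kapa", "noun", pre_rules, post_rules))     # 'kava'
```

This is exactly the ‘no full paradigm builder’ shape: the rules are ordinary string transforms keyed on word class, nothing more.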
Re: Beyond the SCA: strategies for improving conlanger productivity
bradrn wrote: ↑Thu Jan 30, 2025 2:10 am Of course the whole thing would be connected to an SCA. With this, I could use the tool to actually add new entries to the dictionary, by running their ancestors through the sound changes. Perhaps it could use the SCA to detect irregular correspondences too. Thinking this through, there should probably be some mechanism to select different forms depending on word class — no need for a full paradigm builder, just something which can apply transformation rules before/after the sound changes.
So now I just need to build the thing. (Mind you, the word ‘just’ there is doing a lot of work…)
I've been trying to do something like this for years. Let me know how you get on!
*I* used to be a front high unrounded vowel. *You* are just an accidental diphthong.
Re: Beyond the SCA: strategies for improving conlanger productivity
bradrn wrote: ↑Thu Jan 30, 2025 2:10 am The more I think about this, the more I want it. I’m imagining a tool which — let’s say — can show me a dictionary for the ancestor on the left, and a dictionary for the descendant on the right. Or perhaps it just lists the headwords, with an extra space to edit the selected dictionary entries. Either way, with this display, the interface would be able to match up descendant words with their ancestral etyma (or vice versa). Or conversely, it could show me, say, which words in the ancestor haven’t made it into the descendant for whatever reason.
Of course the whole thing would be connected to an SCA. With this, I could use the tool to actually add new entries to the dictionary, by running their ancestors through the sound changes. Perhaps it could use the SCA to detect irregular correspondences too. Thinking this through, there should probably be some mechanism to select different forms depending on word class — no need for a full paradigm builder, just something which can apply transformation rules before/after the sound changes.
So now I just need to build the thing. (Mind you, the word ‘just’ there is doing a lot of work…)
Just remember -- it's only a simple matter of programming!
Yaaludinuya siima d'at yiseka wohadetafa gaare.
Ennadinut'a gaare d'ate eetatadi siiman.
T'awraa t'awraa t'awraa t'awraa t'awraa t'awraa t'awraa.
Re: Beyond the SCA: strategies for improving conlanger productivity
alice wrote: ↑Thu Jan 30, 2025 2:11 pm I've been trying to do something like this for years. Let me know how you get on!
Well, the nice thing is that most of the components are built already. I have an MDF parser, and I have an SCA, and they can already talk to each other. The hardest part will be the UI. (It’s all in Haskell, which doesn’t have a huge number of GUI libraries…)
Re: Beyond the SCA: strategies for improving conlanger productivity
bradrn wrote: ↑Thu Jan 30, 2025 2:10 am Of course the whole thing would be connected to an SCA. With this, I could use the tool to actually add new entries to the dictionary, by running their ancestors through the sound changes. Perhaps it could use the SCA to detect irregular correspondences too. Thinking this through, there should probably be some mechanism to select different forms depending on word class — no need for a full paradigm builder, just something which can apply transformation rules before/after the sound changes.
I'd be interested in this, but I don't think MDF¹ as-is is the best option for conlangs (or some natural languages, come to think of it). The basic format is good², but for LZ I need to include both the romanisation and the native script for every word, and I can't just automatically generate one of them because the relationship is many:many. It gets worse when ideally I'd like to include the romanisation for supplementary forms and examples too.
¹ Source: reading Coward and Grimes 2000, which Bradrn linked to elsewhere on the forum. I'm not linking it myself; just search for Grimes mdf or similar on the forum. I don't fully understand it, and it may be that there is some way to work around this
² I think. I'm not sure exactly how it's processed between being structured text and a formatted dictionary. Is it easily searchable? By each field? Coward and Grimes was not written in an era where such things were as important³, and I don't have access to any modern MDF-using tool⁴
³ It includes the now hilarious line A dictionary only on the computer is of little use to anybody but yourself (and then only when you are sitting at the computer).
⁴ I don't have a computer I can easily download things onto
Last edited by Lērisama on Fri Jan 31, 2025 12:39 pm, edited 1 time in total.
LZ – Lēri Ziwi
PS – Proto Sāzlakuic (ancestor of LZ)
PRk – Proto Rākēwuic
XI – Xú Iạlan
VN – verbal noun
SUP – supine
DIRECT – verbal directional
My language stuff
Re: Beyond the SCA: strategies for improving conlanger productivity
Lērisama wrote: ↑Fri Jan 31, 2025 1:42 am […] for LZ I need to include both the romanisation and the native script for every word, and I can't just automatically generate one of them because the relationship is many:many. It gets worse when ideally I'd like to include the romanisation for supplementary forms and examples too.
Ah yeah… MDF assumes that you have one script per language. Not an unreasonable assumption as natlangs go, but I see how it could become difficult for conlangs. I suppose you could repurpose the \ph ‘phonetic form’ field for romanisations, but beyond that it could be difficult.
The other option is to add your own fields, which is supported by at least SIL Toolbox. Fundamentally, MDF is just a specific application of the SIL Standard Format Marker (SFM) format, which places no restrictions on field names or ordering. (It’s not even the only such application: for instance it’s also used for glossing, with a completely different set of fields.)
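To illustrate how light the SFM format is: a record is just backslash-coded fields, one per line. A minimal Python sketch (the sample entry and the custom \rom romanisation field are invented for the example):

```python
# Minimal sketch of reading SFM-style backslash-coded fields, as in MDF.
# Real MDF defines a standard set of markers (\lx headword, \ps part of
# speech, \ge gloss, ...); SFM itself imposes no restrictions on names.

def parse_sfm(text):
    """Split an SFM record into (marker, value) pairs, order preserved."""
    fields = []
    for line in text.strip().splitlines():
        marker, _, value = line.partition(" ")
        fields.append((marker.lstrip("\\"), value.strip()))
    return fields

entry = """\\lx kava
\\ps n
\\ge head
\\rom kava"""  # \rom: a hypothetical custom romanisation field

print(parse_sfm(entry))
# [('lx', 'kava'), ('ps', 'n'), ('ge', 'head'), ('rom', 'kava')]
```

Since the parser doesn't care what the markers are, adding your own fields costs nothing at this level; it's only the formatting/printing stage that has to know what to do with them.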
Then again, is a dictionary really the best tool for this kind of thing? If the native script is this complicated, perhaps a drawing program would work better.
Lērisama wrote: ¹ Source: reading Coward and Grimes 2000
Coward and Grimes’s paper is excellent, but beware that it can be slightly outdated in parts. (Though nothing related to this discussion, I think.) For technical documentation I often refer to the ‘MDF Documentation’ on the Toolbox website.
Lērisama wrote: ² I think. I'm not sure exactly how it's processed between being structured text and a formatted dictionary. Is it easily searchable? By each field?
The format itself is essentially structured text, but usually you’d use it through software which lets you edit and search it like a dictionary.
Lērisama wrote: ⁴ I don't have a computer I can easily download things onto
In this case the whole discussion is moot anyway — you wouldn’t be able to use any MDF software, including mine.
(Mind you, MDF does have the distinction that it’s trivial to edit manually, so I would probably use it even if I had no access to special software. But then I have a fondness for plain-text formats anyway.)
Re: Beyond the SCA: strategies for improving conlanger productivity
bradrn wrote: ↑Fri Jan 31, 2025 3:26 am Ah yeah… MDF assumes that you have one script per language. Not an unreasonable assumption as natlangs go, but I see how it could become difficult for conlangs. I suppose you could repurpose the \ph ‘phonetic form’ field for romanisations, but beyond that it could be difficult. The other option is to add your own fields, which is supported by at least SIL Toolbox.
Thank you, that's helpful. Good to know Toolbox allows custom fields, which I'd probably have to use in this scenario (I'd like to keep \ph free for IPA).
bradrn wrote: Then again, is a dictionary really the best tool for this kind of thing? If the native script is this complicated, perhaps a drawing program would work better.
I think so, but I'm not exactly sure what you mean by “drawing program” – I don't store the native script as images.
bradrn wrote: Coward and Grimes’s paper is excellent, but beware that it can be slightly outdated in parts. (Though nothing related to this discussion, I think.) For technical documentation I often refer to the ‘MDF Documentation’ on the Toolbox website.
Don't worry, it's pretty obvious how old the paper is¹. Thank you for the actual documentation though, that will be helpful².
bradrn wrote: The format itself is essentially structured text, but usually you’d use it through software which lets you edit and search it like a dictionary.
So does Toolbox create a database from the MDF file somehow? It seems inefficient to do all the processing on it as text⁴.
bradrn wrote: In this case the whole discussion is moot anyway — you wouldn’t be able to use any MDF software, including mine.
I will have one in the (hopefully very) near future, and I can start writing a script to convert to MDF now if I decide to use it.
¹ Its frequent references to MS-DOS help considerably here
² My ultimate plan at this point is to have Toolbox on the computer once it's working, and maybe write a script to convert it into something pretty³ for use not on the computer when I don't have access to Toolbox
³ Probably html
⁴ Although my computer abilities can be safely
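The conversion script sketched in footnotes ² and ³ could start very small. A hypothetical Python sketch (the field markers and CSS class names are invented; a real converter would need per-field formatting and entry grouping):

```python
# Toy MDF-to-HTML conversion: render one entry's fields as styled spans.
import html

def entry_to_html(fields):
    """fields: list of (marker, value) pairs for one dictionary entry."""
    parts = ['<div class="entry">']
    for marker, value in fields:
        # Use the SFM marker itself as the CSS class, so styling lives in CSS.
        parts.append(f'<span class="{marker}">{html.escape(value)}</span>')
    parts.append('</div>')
    return ''.join(parts)

print(entry_to_html([("lx", "kava"), ("ge", "head")]))
# <div class="entry"><span class="lx">kava</span><span class="ge">head</span></div>
```

Reusing the marker names as CSS classes means the script never needs to know the field inventory, custom fields included.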
Re: Beyond the SCA: strategies for improving conlanger productivity
alice wrote: ↑Thu Jan 30, 2025 2:11 pm I've been trying to do something like this for years. Let me know how you get on!
Me three. Coincidentally, I've been pondering whether I should get to it if I keep being idle at work.
bradrn wrote: ↑Thu Jan 30, 2025 8:27 pm […] Well, the nice thing is that most of the components are built already. I have an MDF parser, and I have an SCA, and they can already talk to each other. The hardest part will be the UI. (It’s all in Haskell, which doesn’t have a huge number of GUI libraries…)
Just don't get bogged down in UIs. No, but seriously. We have truckloads of spreadsheets and text editors already. You can delegate the UI work to them and just have your backend hook them to an SCA of your choice.
From some preliminary research, LibreOffice can have its file contents modified by an external source, and it'll load the changes on the fly without needing a restart. VSCode, at least, auto-refreshes files when that occurs.
But when you already have some backend that takes data from a file, transfers it to an SCA, and returns it to the file, why stop there? You could have nodes for filtering, random word generation, suffixation with derivational or inflectional morphology, etc.
Heck, you could even hook it up politely to that online service that tells you the semantic neighbours to an input word. So that you could have suggestions for semantic shifts.
Inputs and outputs would be to just about any format there's a node for - MDF, PDF, CSV, plaintext, offline and online spreadsheets, whatever. SCAs and other external components would be used as libraries or APIs.
Before you know it, you'd be defining custom pipelines on the go, for use cases or problems that you stumbled upon just moments ago.
Of course, all that workflow would be tedious and error-prone to define in text, so why not define it in drawio or similar?
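The node idea in miniature: if every node shares one signature (a list of words in, a list of words out), a pipeline is just function composition. A toy Python sketch with invented nodes:

```python
# Toy pipeline: each "node" (filter, SCA, suffixation, export, ...) is a
# function from a word list to a word list, applied in order.

def run_pipeline(words, nodes):
    for node in nodes:
        words = node(words)
    return words

# Hypothetical nodes:
drop_long = lambda ws: [w for w in ws if len(w) <= 5]       # filtering
sca_stub  = lambda ws: [w.replace("p", "v") for w in ws]    # stand-in SCA
suffix_ta = lambda ws: [w + "ta" for w in ws]               # suffixation

print(run_pipeline(["kapa", "tulimari"], [drop_long, sca_stub, suffix_ta]))
# ['kavata']
```

A drawio-style graph editor would then just be a front end for assembling such node lists (or, more generally, DAGs of them).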

Re: Beyond the SCA: strategies for improving conlanger productivity
Lērisama wrote: ↑Fri Jan 31, 2025 10:49 am I think so, but I'm not exactly sure what you mean by “drawing program” – I don't store the native script as images.
Yes, I meant images. But then how do you store it? In a font?
Lērisama wrote: So does Toolbox create a database from the MDF file somehow? It seems inefficient to do all the processing on it as text
I don’t know how it works underlyingly — it’s not open-source. I suspect it does have some kind of internal representation which it uses for sorting and suchlike. But the user interface certainly behaves as if you’re editing text, which is nice.
Zju wrote: ↑Fri Jan 31, 2025 3:20 pm Just don't get bogged down in UIs. No, but seriously. We have truckloads of spreadsheets and text editors already. You can delegate the UI work to them and just have your backend hook them to an SCA of your choice.
No, no, why on Earth would I do that‽ I’m trying to get away from spreadsheets and word processors, not use them even more.
Besides, I don’t see how I could implement the UI I want within those applications. I want something which lets me edit two dictionaries at once; those applications are designed to edit one file at a time.
Zju wrote: But when you already have some backend that takes data from a file, transfers it to an SCA, and returns it to the file, why stop there? You could have nodes for filtering, random word generation, suffixation with derivational or inflectional morphology, etc.
This is considerably more complex than anything I was imagining. I also think it hits the problem of diminishing returns pretty quickly: I just don’t need most of these things. Far better to just implement something simple which does 95% of what I need, then maybe consider if there’s a way to extend it to deal with the other 5%. That way at least something can get built.
Re: Beyond the SCA: strategies for improving conlanger productivity
bradrn wrote: Yes, I meant images. But then how do you store it? In a font?
Yes (as soon as the characters are actually made), piggybacking off the Kana and CJK Unicode ranges.
bradrn wrote: I don’t know how it works underlyingly — it’s not open-source. I suspect it does have some kind of internal representation which it uses for sorting and suchlike.
Ah, I was fooled by a licence saying it could be modified. Although I can't find that now, so maybe I just misremembered.
Re: Beyond the SCA: strategies for improving conlanger productivity
This is starting to remind me of a modular analog synthesizer. Which it is, in a way.
- WeepingElf
- Posts: 1652
- Joined: Sun Jul 15, 2018 12:39 pm
- Location: Braunschweig, Germany
- Contact:
Re: Beyond the SCA: strategies for improving conlanger productivity
Indeed the human voice is an acoustic synthesizer. It works in a very similar way.
Re: Beyond the SCA: strategies for improving conlanger productivity
WeepingElf wrote: ↑Sun Feb 02, 2025 5:35 am Indeed the human voice is an acoustic synthesizer. It works in a very similar way.
That was my point. Modify a Moog, add some storage and a bit of intelligence, and you can do it all without software.

Re: Beyond the SCA: strategies for improving conlanger productivity
alice wrote: ↑Sun Feb 02, 2025 2:13 pm That was my point. Modify a Moog, add some storage and a bit of intelligence, and you can do it all without software.
Are you suggesting… a human being as a dictionary?
Edit: less flippantly, that would require custom hardware, which scares me, and I'm not sure if it has that much of an advantage over a text-based format.
Re: Beyond the SCA: strategies for improving conlanger productivity
It doesn't require much hardware to get a synthesizer to speak. In the 1980s, the Commodore 64 home computer had a simple built-in synthesizer, and there was a program called SAM/Reciter for it which made this synthesizer speak, with an unnatural-sounding but intelligible voice.
But we are digressing.