Beyond the SCA: strategies for improving conlanger productivity

bradrn
Posts: 6717
Joined: Fri Oct 19, 2018 1:25 am

Re: Beyond the SCA: strategies for improving conlanger productivity

Post by bradrn »

TomHChappell wrote: Sat Jan 25, 2025 11:56 am If I had the budget to do so, I’d get one example of each type of tool mentioned in this thread!
But, I don’t.
It’s nice to at least read about them!
I’m almost certain that most (if not all) of them are freely available.
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
Vilike
Posts: 166
Joined: Thu Jul 12, 2018 2:10 am
Location: Elsàss

Re: Beyond the SCA: strategies for improving conlanger productivity

Post by Vilike »

alice wrote: Sat Jan 25, 2025 2:39 pm To get back on topic: What (hypothetical) tools would you like to have or find useful?
Detailed tracking of each word from the proto-lexicon to the daughters, allowing one to apply one-off sound changes and analogy, and to see the lexical relationships between words at each stage of the language.

That is to say, a tightly integrated SCA/lexicon. Come to think of it, maybe PolyGlot already does most of that; I'm just not hot on the file format or the PDF export.
Yaa unák thual na !
bradrn
Posts: 6717
Joined: Fri Oct 19, 2018 1:25 am

Re: Beyond the SCA: strategies for improving conlanger productivity

Post by bradrn »

Vilike wrote: Sun Jan 26, 2025 5:01 am
alice wrote: Sat Jan 25, 2025 2:39 pm To get back on topic: What (hypothetical) tools would you like to have or find useful?
Detailed tracking of each word from the proto-lexicon to the daughters, allowing one to apply one-off sound changes and analogy, and to see the lexical relationships between words at each stage of the language.
People say that a lot, but I’m not entirely sure how useful such a tool would actually be. Each of those tasks (dictionary management, regular sound change, irregular sound change, diachronic word tracking) is involved on its own, and software which combines them all is likely to end up either ferociously complicated or unusably simplified.

That said… I have a thought that this could be done in a more modular way. It would have to be built on a common dictionary format — I think MDF is well-suited to the task. Brassica already supports MDF input and output, and it should be possible to build something on top of that which can track which descendants are derived from which ancestors. With such a tool one could, say, change some words in the ancestor and get a diff showing what words must be changed in the descendant.
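
To make that concrete, here is a minimal sketch of the tracking part, with every name invented and nothing to do with Brassica’s actual internals: once each descendant entry records its etymon, the ‘what needs updating’ list falls out of comparing two versions of the ancestor dictionary.

import qualified Data.Map as Map
import Data.Map (Map)

-- A radically simplified dictionary: headword -> gloss.
type Dictionary = Map String String

-- Etymology links: descendant headword -> the ancestor headword it derives from.
type Links = Map String String

-- Given an old and a new version of the ancestor dictionary, list the
-- descendant entries whose etyma have changed and so may need revisiting.
staleDescendants :: Dictionary -> Dictionary -> Links -> [String]
staleDescendants oldAncestor newAncestor links =
  [ descendant
  | (descendant, etymon) <- Map.toList links
  , Map.lookup etymon oldAncestor /= Map.lookup etymon newAncestor
  ]

A real version would compare whole MDF entries rather than bare glosses, but the principle is the same.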

The trickiest task out of what you describe is that of one-off sound changes and analogy. I honestly don’t think there’s any better way to do that than manually altering individual words yourself. (Brassica supports sporadic sound changes to some extent, but ultimately expects the user to pick their preferred outcome.) If we had a dictionary-tracking tool such as the one I described, it would be reasonable for it to recognise when an irregular change has occurred, but I wouldn’t expect it to apply those changes itself.
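
(As for recognising irregularity: the check itself is nothing more than comparing the regular outcome with the attested form. A sketch, with the SCA abstracted to a plain function and every name made up; the hard part is deciding what to present to the user afterwards.)

-- Flag entries where the attested descendant form differs from what the
-- regular sound changes would give. 'evolve' stands in for a call to the SCA;
-- each input pair is (ancestor form, attested descendant form).
findIrregular :: (String -> String) -> [(String, String)] -> [(String, String, String)]
findIrregular evolve entries =
  [ (ancestor, expected, attested)
  | (ancestor, attested) <- entries
  , let expected = evolve ancestor
  , expected /= attested
  ]
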
That is to say, a tightly integrated SCA/lexicon. Come to think of it, maybe PolyGlot already does most of that; I'm just not hot on the file format or the PDF export.
PolyGlot is often mentioned but rarely used. I think the main problems are (a) its unappealing and complex UI, and (b) its limited support for practically anything beyond the basics in dictionary management. That is, in the trade-off of ‘ferociously complicated or unusably simplified’, PolyGlot ends up being both.

EDIT: a third problem with PolyGlot — it’s very ‘opinionated’, to use the programmers’ term. I feel that it has very definite and inflexible expectations about how a dictionary and grammar should be organised, and they’re not always how I’d like to do it. For instance, ‘Phonology & Text’ are grouped in a single section for some reason, but ‘Phrasebook’ is separate even though you might want to include phrases in the dictionary more generally. The ‘Lexicon’ section assumes that each word has only a single pronunciation, a single definition, and nothing more. And so on.

Furthermore, most of its features are things that I just don’t care about. I have no need for a logographic engine, or automated inference of pronunciation, or conjugation tables… I just want to organise my dictionary in a way which reflects the language. Which is precisely what it doesn’t let me do, because it’s so inflexible.
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
Zju
Posts: 944
Joined: Fri Aug 03, 2018 4:05 pm

Re: Beyond the SCA: strategies for improving conlanger productivity

Post by Zju »

alice wrote: Tue Jan 21, 2025 2:50 pm Now imagine you have a program which uses this SCA to automate as many of your conlanging tasks as possible. What tasks would these include?
It would include no hardcoded tasks - it'd allow you to define just about any task you'd like as you need it. Still a pipe dream, I know...
/j/ <j>

Ɂaləɂahina asəkipaɂə ileku omkiroro salka.
Loɂ ɂerleku asəɂulŋusikraɂə seləɂahina əɂətlahɂun əiŋɂiɂŋa.
Hərlaɂ. Hərlaɂ. Hərlaɂ. Hərlaɂ. Hərlaɂ. Hərlaɂ. Hərlaɂ.
bradrn
Posts: 6717
Joined: Fri Oct 19, 2018 1:25 am

Re: Beyond the SCA: strategies for improving conlanger productivity

Post by bradrn »

Zju wrote: Sun Jan 26, 2025 4:18 pm
alice wrote: Tue Jan 21, 2025 2:50 pm Now imagine you have a program which uses this SCA to automate as many of your conlanging tasks as possible. What tasks would these include?
It would include no hardcoded tasks - it'd allow you to define just about any task you'd like as you need it. Still a pipe dream, I know...
So then, what sort of tasks would you want to define in it?
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
bradrn
Posts: 6717
Joined: Fri Oct 19, 2018 1:25 am

Re: Beyond the SCA: strategies for improving conlanger productivity

Post by bradrn »

bradrn wrote: Sun Jan 26, 2025 5:31 am That said… I have a thought that this could be done in a more modular way. It would have to be built on a common dictionary format — I think MDF is well-suited to the task. Brassica already supports MDF input and output, and it should be possible to build something on top of that which can track which descendants are derived from which ancestors. With such a tool one could, say, change some words in the ancestor and get a diff showing what words must be changed in the descendant.

The trickiest task out of what you describe is that of one-off sound changes and analogy. I honestly don’t think there’s any better way to do that than manually altering individual words yourself. (Brassica supports sporadic sound changes to some extent, but ultimately expects the user to pick their preferred outcome.) If we had a dictionary-tracking tool such as the one I described, it would be reasonable for it to recognise when an irregular change has occurred, but I wouldn’t expect it to apply those changes itself.
The more I think about this, the more I want it. I’m imagining a tool which — let’s say — can show me a dictionary for the ancestor on the left, and a dictionary for the descendant on the right. Or perhaps it just lists the headwords, with an extra space to edit the selected dictionary entries. Either way, with this display, the interface would be able to match up descendant words with their ancestral etyma (or vice versa). Or conversely, it could show me, say, which words in the ancestor haven’t made it into the descendant for whatever reason.

Of course the whole thing would be connected to an SCA. With this, I could use the tool to actually add new entries to the dictionary, by running their ancestors through the sound changes. Perhaps it could use the SCA to detect irregular correspondences too. Thinking this through, there should probably be some mechanism to select different forms depending on word class — no need for a full paradigm builder, just something which can apply transformation rules before/after the sound changes.
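
The word-class mechanism could stay very small. A sketch of what I mean, in deliberately naive Haskell; WordClass and the transform tables are placeholders here, not anything from Brassica:

import qualified Data.Map as Map
import Data.Map (Map)

-- Placeholder word classes; a real tool would read these from the dictionary.
data WordClass = Noun | Verb | Particle
  deriving (Eq, Ord, Show)

type Transform = String -> String

-- Wrap the regular sound changes in optional pre- and post-transformations
-- chosen by word class (defaulting to the identity if no rule is given).
deriveForm
  :: Map WordClass Transform   -- applied before the sound changes
  -> Map WordClass Transform   -- applied after the sound changes
  -> (String -> String)        -- the regular sound changes themselves
  -> WordClass
  -> String                    -- ancestor form
  -> String                    -- descendant form
deriveForm pre post evolve cls =
  transformFrom post . evolve . transformFrom pre
  where
    transformFrom table = Map.findWithDefault id cls table

The diff view could then call deriveForm for each ancestor entry and compare the result against whatever is in the descendant dictionary.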

So now I just need to build the thing. (Mind you, the word ‘just’ there is doing a lot of work…)
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
alice
Posts: 1072
Joined: Mon Jul 09, 2018 11:15 am
Location: 'twixt Survival and Guilt

Re: Beyond the SCA: strategies for improving conlanger productivity

Post by alice »

bradrn wrote: Thu Jan 30, 2025 2:10 am Of course the whole thing would be connected to an SCA. With this, I could use the tool to actually add new entries to the dictionary, by running their ancestors through the sound changes. Perhaps it could use the SCA to detect irregular correspondences too. Thinking this through, there should probably be some mechanism to select different forms depending on word class — no need for a full paradigm builder, just something which can apply transformation rules before/after the sound changes.

So now I just need to build the thing. (Mind you, the word ‘just’ there is doing a lot of work…)
I've been trying to do something like this for years. Let me know how you get on!
*I* used to be a front high unrounded vowel. *You* are just an accidental diphthong.
Travis B.
Posts: 7647
Joined: Sun Jul 15, 2018 8:52 pm

Re: Beyond the SCA: strategies for improving conlanger productivity

Post by Travis B. »

bradrn wrote: Thu Jan 30, 2025 2:10 am
bradrn wrote: Sun Jan 26, 2025 5:31 am That said… I have a thought that this could be done in a more modular way. It would have to be built on a common dictionary format — I think MDF is well-suited to the task. Brassica already supports MDF input and output, and it should be possible to build something on top of that which can track which descendants are derived from which ancestors. With such a tool one could, say, change some words in the ancestor and get a diff showing what words must be changed in the descendant.

The trickiest task out of what you describe is that of one-off sound changes and analogy. I honestly don’t think there’s any better way to do that than manually altering individual words yourself. (Brassica supports sporadic sound changes to some extent, but ultimately expects the user to pick their preferred outcome.) If we had a dictionary-tracking tool such as the one I described, it would be reasonable for it to recognise when an irregular change has occurred, but I wouldn’t expect it to apply those changes itself.
The more I think about this, the more I want it. I’m imagining a tool which — let’s say — can show me a dictionary for the ancestor on the left, and a dictionary for the descendant on the right. Or perhaps it just lists the headwords, with an extra space to edit the selected dictionary entries. Either way, with this display, the interface would be able to match up descendant words with their ancestral etyma (or vice versa). Or conversely, it could show me, say, which words in the ancestor haven’t made it into the descendant for whatever reason.

Of course the whole thing would be connected to an SCA. With this, I could use the tool to actually add new entries to the dictionary, by running their ancestors through the sound changes. Perhaps it could use the SCA to detect irregular correspondences too. Thinking this through, there should probably be some mechanism to select different forms depending on word class — no need for a full paradigm builder, just something which can apply transformation rules before/after the sound changes.

So now I just need to build the thing. (Mind you, the word ‘just’ there is doing a lot of work…)
Just remember -- it's only a simple matter of programming!
Yaaludinuya siima d'at yiseka wohadetafa gaare.
Ennadinut'a gaare d'ate eetatadi siiman.
T'awraa t'awraa t'awraa t'awraa t'awraa t'awraa t'awraa.
bradrn
Posts: 6717
Joined: Fri Oct 19, 2018 1:25 am

Re: Beyond the SCA: strategies for improving conlanger productivity

Post by bradrn »

alice wrote: Thu Jan 30, 2025 2:11 pm
bradrn wrote: Thu Jan 30, 2025 2:10 am Of course the whole thing would be connected to an SCA. With this, I could use the tool to actually add new entries to the dictionary, by running their ancestors through the sound changes. Perhaps it could use the SCA to detect irregular correspondences too. Thinking this through, there should probably be some mechanism to select different forms depending on word class — no need for a full paradigm builder, just something which can apply transformation rules before/after the sound changes.

So now I just need to build the thing. (Mind you, the word ‘just’ there is doing a lot of work…)
I've been trying to do something like this for years. Let me know how you get on!
Well, the nice thing is that most of the components are built already. I have an MDF parser, and I have an SCA, and they can already talk to each other. The hardest part will be the UI. (It’s all in Haskell, which doesn’t have a huge number of GUI libraries…)
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
Lērisama
Posts: 258
Joined: Fri Oct 18, 2024 9:51 am

Re: Beyond the SCA: strategies for improving conlanger productivity

Post by Lērisama »

bradrn wrote: Thu Jan 30, 2025 2:10 am Of course the whole thing would be connected to an SCA. With this, I could use the tool to actually add new entries to the dictionary, by running their ancestors through the sound changes. Perhaps it could use the SCA to detect irregular correspondences too. Thinking this through, there should probably be some mechanism to select different forms depending on word class — no need for a full paradigm builder, just something which can apply transformation rules before/after the sound changes.
I'd be interested in this, but I don't think MDF¹ as it stands is the best option for conlangs (or some natural languages, come to think of it). The basic format is good², but for LZ I need to include both the romanisation and the native script for every word, and I can't just automatically generate one of them because the relationship is many:many. It gets worse when ideally I'd like to include the romanisation for supplementary forms and examples too.

¹ Source: reading Coward and Grimes 2000, which bradrn linked to elsewhere on the forum. I'm not linking it myself; just search for Grimes mdf or similar on the forum. I don't fully understand it, and it may be that there is some way to work around this
² I think. I'm not sure exactly how it's processed between being structured text and a formatted dictionary. Is it easily searchable? By each field? Coward and Grimes was not written in an era when such things were as important³, and I don't have access to any modern MDF-using tool⁴
³ It includes the now-hilarious line "A dictionary only on the computer is of little use to anybody but yourself (and then only when you are sitting at the computer)."
⁴ I don't have a computer I can easily download things onto
Last edited by Lērisama on Fri Jan 31, 2025 12:39 pm, edited 1 time in total.
LZ – Lēri Ziwi
PS – Proto Sāzlakuic (ancestor of LZ)
PRk – Proto Rākēwuic
XI – Xú Iạlan
VN – verbal noun
SUP – supine
DIRECT – verbal directional
My language stuff
bradrn
Posts: 6717
Joined: Fri Oct 19, 2018 1:25 am

Re: Beyond the SCA: strategies for improving conlanger productivity

Post by bradrn »

Lērisama wrote: Fri Jan 31, 2025 1:42 am […] for LZ I need to include both the romanisation and the native script for every word, and I can't just automatically generate one of them because the relationship is many:many. It gets worse when ideally I'd like to include the romanisation for supplementary forms and examples too.
Ah yeah… MDF assumes that you have one script per language. Not an unreasonable assumption as natlangs go, but I see how it could become difficult for conlangs. I suppose you could repurpose the \ph ‘phonetic form’ field for romanisations, but beyond that it could be difficult.

The other option is to add your own fields, which is supported by at least SIL Toolbox. Fundamentally, MDF is just a specific application of the SIL Standard Format Marker (SFM) format, which places no restrictions on field names or ordering. (It’s not even the only such application: for instance it’s also used for glossing, with a completely different set of fields.)
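
For concreteness, you could keep \lx as the romanised headword and \ph as IPA, and put the native script in a marker of your own. Something like the following sketch, where \script is an invented custom marker and the words are made up; \lx, \ph, \ps, \ge, \xv and \xe are the standard MDF fields:

\lx kaswi
\ph ˈka.swi
\script (native-script form goes here)
\ps n
\ge water
\xv kaswi lumo
\xe the water is cold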

Then again, is a dictionary really the best tool for this kind of thing? If the native script is this complicated, perhaps a drawing program would work better.
¹ Source: reading Coward and Grimes 2000
Coward and Grimes’s paper is excellent, but beware that it can be slightly outdated in parts. (Though nothing related to this discussion, I think.) For technical documentation I often refer to the ‘MDF Documentation’ on the Toolbox website.
² I think. I'm not sure exactly how it's processed between being structured text and a formatted dictionary. Is it easily searchable? By each field?
The format itself is essentially structured text, but usually you’d use it through software which lets you edit and search it like a dictionary.
⁴ I don't have a computer I can easily download things onto
In this case the whole discussion is moot anyway — you wouldn’t be able to use any MDF software, including mine.

(Mind you, MDF does have the distinction that it’s trivial to edit manually, so I would probably use it even if I had no access to special software. But then I have a fondness for plain-text formats anyway.)
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
Lērisama
Posts: 258
Joined: Fri Oct 18, 2024 9:51 am

Re: Beyond the SCA: strategies for improving conlanger productivity

Post by Lērisama »

bradrn wrote: Fri Jan 31, 2025 3:26 am
Lērisama wrote: Fri Jan 31, 2025 1:42 am […] for LZ I need to include both the romanisation and the native script for every word, and I can't just automatically generate one of them because the relationship is many:many. It gets worse when ideally I'd like to include the romanisation for supplementary forms and examples too.
Ah yeah… MDF assumes that you have one script per language. Not an unreasonable assumption as natlangs go, but I see how it could become difficult for conlangs. I suppose you could repurpose the \ph ‘phonetic form’ field for romanisations, but beyond that it could be difficult.

The other option is to add your own fields, which is supported by at least SIL Toolbox. Fundamentally, MDF is just a specific application of the SIL Standard Format Marker (SFM) format, which places no restrictions on field names or ordering. (It’s not even the only such application: for instance it’s also used for glossing, with a completely different set of fields.)
Thank you, that's helpful. Good to know Toolbox allows custom fields, which I'd probably have to use in this scenario (I'd like to keep \ph free for IPA).

Then again, is a dictionary really the best tool for this kind of thing? If the native script is this complicated, perhaps a drawing program would work better.
I think so, but I'm not exactly sure what you mean by “drawing program” – I don't store the native script as images.

¹ Source: reading Coward and Grimes 2000
Coward and Grimes’s paper is excellent, but beware that it can be slightly outdated in parts. (Though nothing related to this discussion, I think.) For technical documentation I often refer to the ‘MDF Documentation’ on the Toolbox website.
Don't worry, it's pretty obvious how old the paper is¹. Thank you for the actual documentation, though; that will be helpful²

² I think. I'm not sure exactly how it's processed between being structured text and a formatted dictionary. Is it easily searchable? By each field?
The format itself is essentially structured text, but usually you’d use it through software which lets you edit and search it like a dictionary.
So does Toolbox create a database from the MDF file somehow? It seems inefficient to do all the processing on it as text⁴

⁴ I don't have a computer I can easily download things onto
In this case the whole discussion is moot anyway — you wouldn’t be able to use any MDF software, including mine.

(Mind you, MDF does have the distinction that it’s trivial to edit manually, so I would probably use it even if I had no access to special software. But then I have a fondness for plain-text formats anyway.)
I will have one in the (hopefully very) near future, and I can start writing a script to convert to MDF now if I decide to use it


¹ Its frequent references to MS-DOS help considerably here
² My ultimate plan at this point is to have Toolbox on the computer once it's working, and maybe write a script to convert it into something pretty³ for use away from the computer, when I don't have access to Toolbox
³ Probably html
⁴ Although my computer abilities can be safely
LZ – Lēri Ziwi
PS – Proto Sāzlakuic (ancestor of LZ)
PRk – Proto Rākēwuic
XI – Xú Iạlan
VN – verbal noun
SUP – supine
DIRECT – verbal directional
My language stuff
Zju
Posts: 944
Joined: Fri Aug 03, 2018 4:05 pm

Re: Beyond the SCA: strategies for improving conlanger productivity

Post by Zju »

alice wrote: Thu Jan 30, 2025 2:11 pm
bradrn wrote: Thu Jan 30, 2025 2:10 am Of course the whole thing would be connected to an SCA. With this, I could use the tool to actually add new entries to the dictionary, by running their ancestors through the sound changes. Perhaps it could use the SCA to detect irregular correspondences too. Thinking this through, there should probably be some mechanism to select different forms depending on word class — no need for a full paradigm builder, just something which can apply transformation rules before/after the sound changes.

So now I just need to build the thing. (Mind you, the word ‘just’ there is doing a lot of work…)
I've been trying to do something like this for years. Let me know how you get on!
Me three. Coincidentally, I've been pondering whether I should get to it if I keep being idle at work.
bradrn wrote: Thu Jan 30, 2025 8:27 pm [...]
Well, the nice thing is that most of the components are built already. I have an MDF parser, and I have an SCA, and they can already talk to each other. The hardest part will be the UI. (It’s all in Haskell, which doesn’t have a huge number of GUI libraries…)
Just don't get bogged down in UIs. No, but seriously. We have truckloads of spreadsheets and text editors already. You can delegate the UI work to them and just have your backend hook them to an SCA of your choice.

From some preliminary research, LibreOffice can have its file contents modified by an external source and it'd load the changes on the fly, without needing a restart. At least VSCode auto-refreshes files when that occurs.

But when you already have some backend that takes data from a file, transfers it to an SCA, and returns it to the file, why stop there? You could have nodes for filtering, random word generation, suffixation with derivational or inflectional morphology, etc.
Heck, you could even hook it up politely to that online service that tells you the semantic neighbours of an input word, so that you could get suggestions for semantic shifts.

Inputs and outputs would be to just about any format there's a node for - MDF, PDF, CSV, plaintext, offline and online spreadsheets, whatever. SCAs and other external components would be used as libraries or APIs.

Before you know it, you'd be defining custom pipelines on the go, for use cases or problems that you stumbled upon just moments ago.
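
In text it might look roughly like this (a sketch in Haskell only because that's what your backend is in; every name here is invented):

-- Each node is just a function over a word list; a pipeline is a composition
-- of nodes. Everything here is a toy stand-in for a real component.
type Node = [String] -> [String]

filterNode :: (String -> Bool) -> Node
filterNode = filter

suffixNode :: String -> Node
suffixNode suffix = map (++ suffix)

scaNode :: (String -> String) -> Node   -- wraps whichever SCA you plug in
scaNode = map

-- A toy 'sound change' standing in for a real SCA call: voice every k.
toySCA :: String -> String
toySCA = map (\c -> if c == 'k' then 'g' else c)

-- Example pipeline: drop empty entries, add a suffix, then run the sound changes.
-- e.g. pipeline ["tasu", "", "kelo"] == ["tasuga", "geloga"]
pipeline :: Node
pipeline = scaNode toySCA . suffixNode "ka" . filterNode (not . null)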

Of course, all that workflow would be tedious and error-prone to define in text, so why not define it in drawio or similar?

[Image: drawio-style mock-up of such a pipeline]
/j/ <j>

Ɂaləɂahina asəkipaɂə ileku omkiroro salka.
Loɂ ɂerleku asəɂulŋusikraɂə seləɂahina əɂətlahɂun əiŋɂiɂŋa.
Hərlaɂ. Hərlaɂ. Hərlaɂ. Hərlaɂ. Hərlaɂ. Hərlaɂ. Hərlaɂ.
bradrn
Posts: 6717
Joined: Fri Oct 19, 2018 1:25 am

Re: Beyond the SCA: strategies for improving conlanger productivity

Post by bradrn »

Lērisama wrote: Fri Jan 31, 2025 10:49 am
Then again, is a dictionary really the best tool for this kind of thing? If the native script is this complicated, perhaps a drawing program would work better.
I think so, but I'm not exactly sure what you mean by “drawing program” – I don't store the native script as images.
Yes, I meant images. But then how do you store it? In a font?
² I think. I'm not sure exactly how it's processed between being structured text and a formatted dictionary. Is it easily searchable? By each field?
The format itself is essentially structured text, but usually you’d use it through software which lets you edit and search it like a dictionary.
So does toolbox create a database from the MDF file somehow? It seems inefficient to do all the processing on it as text⁴
I don’t know how it works underlyingly — it’s not open-source. I suspect it does have some kind of internal representation which it uses for sorting and suchlike. But the user interface certainly behaves as if you’re editing text, which is nice.
Zju wrote: Fri Jan 31, 2025 3:20 pm
bradrn wrote: Thu Jan 30, 2025 8:27 pm [...]
Well, the nice thing is that most of the components are built already. I have an MDF parser, and I have an SCA, and they can already talk to each other. The hardest part will be the UI. (It’s all in Haskell, which doesn’t have a huge number of GUI libraries…)
Just don't get bogged down in UIs. No, but seriously. We have truckloads of spreadsheets and text editors already. You can delegate the UI work to them and just have your backend hook them to an SCA of your choice.
No, no, why on Earth would I do that‽ I’m trying to get away from spreadsheets and word processors, not use them even more.

Besides, I don’t see how I could implement the UI I want within those applications. I want something which lets me edit two dictionaries at once; those applications are designed to edit one file at a time.
But when you already have some backend that takes data from a file, transfers it to an SCA, and returns it to the file, why stop there? You could have nodes for filtering, random word generation, suffixation with derivational or inflectional morphology, etc.
Heck, you could even hook it up politely to that online service that tells you the semantic neighbours to an input word. So that you could have suggestions for semantic shifts.

Inputs and outputs would be to just about any format there's a node for - MDF, PDF, CSV, plaintext, offline and online spreadsheets, whatever. SCAs and other external components would be used as libraries or APIs.

Before you know it, you'd be defining custom pipelines on the go, for usecases or problems that you stumbled upon just moments ago.
This is considerably more complex than anything I was imagining. I also think it hits the problem of diminishing returns pretty quickly: I just don’t need most of these things. Far better to just implement something simple which does 95% of what I need, then maybe consider if there’s a way to extend it to deal with the other 5%. That way at least something can get built.
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
Lērisama
Posts: 258
Joined: Fri Oct 18, 2024 9:51 am

Re: Beyond the SCA: strategies for improving conlanger productivity

Post by Lērisama »

bradrn wrote: Fri Jan 31, 2025 6:29 pm Yes, I meant images. But then how do you store it? In a font?
Yes (as soon as the characters are actually made), piggybacking off the Kana and CJK Unicode ranges.

I don’t know how it works underlyingly — it’s not open-source. I suspect it does have some kind of internal representation which it uses for sorting and suchlike.
Ah, I was fooled by a licence saying it could be modified. I can't find that now, though, so maybe I just misremembered.
LZ – Lēri Ziwi
PS – Proto Sāzlakuic (ancestor of LZ)
PRk – Proto Rākēwuic
XI – Xú Iạlan
VN – verbal noun
SUP – supine
DIRECT – verbal directional
My language stuff
alice
Posts: 1072
Joined: Mon Jul 09, 2018 11:15 am
Location: 'twixt Survival and Guilt

Re: Beyond the SCA: strategies for improving conlanger productivity

Post by alice »

This is starting to remind me of a modular analog synthesizer. Which it is, in a way.
*I* used to be a front high unrounded vowel. *You* are just an accidental diphthong.
WeepingElf
Posts: 1652
Joined: Sun Jul 15, 2018 12:39 pm
Location: Braunschweig, Germany

Re: Beyond the SCA: strategies for improving conlanger productivity

Post by WeepingElf »

alice wrote: Sat Feb 01, 2025 2:22 pm This is starting to remind me of a modular analog synthesizer. Which it is, in a way.
Indeed the human voice is an acoustic synthesizer. It works in a very similar way.
... brought to you by the Weeping Elf
My conlang pages
Yrgidrámamintih!
alice
Posts: 1072
Joined: Mon Jul 09, 2018 11:15 am
Location: 'twixt Survival and Guilt

Re: Beyond the SCA: strategies for improving conlanger productivity

Post by alice »

WeepingElf wrote: Sun Feb 02, 2025 5:35 am
alice wrote: Sat Feb 01, 2025 2:22 pm This is starting to remind me of a modular analog synthesizer. Which it is, in a way.
Indeed the human voice is an acoustic synthesizer. It works in a very similar way.
That was my point :-) Modify a Moog, add some storage and a bit of intelligence, and you can do it all without software.
*I* used to be a front high unrounded vowel. *You* are just an accidental diphthong.
Lērisama
Posts: 258
Joined: Fri Oct 18, 2024 9:51 am

Re: Beyond the SCA: strategies for improving conlanger productivity

Post by Lērisama »

alice wrote: Sun Feb 02, 2025 2:13 pm
WeepingElf wrote: Sun Feb 02, 2025 5:35 am
alice wrote: Sat Feb 01, 2025 2:22 pm This is starting to remind me of a modular analog synthesizer. Which it is, in a way.
Indeed the human voice is an acoustic synthesizer. It works in a very similar way.
That was my point :-) Modify a Moog, add some storage and a bit of intelligence, and you can do it all without software.
Are you suggesting…
…a human being as a dictionary?

Edit: less flippantly, that would require custom hardware, which scares me, and I'm not sure if it has that much of an advantage over a text-based format.
LZ – Lēri Ziwi
PS – Proto Sāzlakuic (ancestor of LZ)
PRk – Proto Rākēwuic
XI – Xú Iạlan
VN – verbal noun
SUP – supine
DIRECT – verbal directional
My language stuff
WeepingElf
Posts: 1652
Joined: Sun Jul 15, 2018 12:39 pm
Location: Braunschweig, Germany

Re: Beyond the SCA: strategies for improving conlanger productivity

Post by WeepingElf »

It doesn't require much hardware to get a synthesizer to speak. In the 1980s, the Commodore 64 home computer had a simple built-in synthesizer, and there was a program called SAM/Reciter for it which made this synthesizer speak, with an unnatural-sounding but intelligible voice.

But we are digressing.
... brought to you by the Weeping Elf
My conlang pages
Yrgidrámamintih!