Random Thread

Topics that can go away
rotting bones
Posts: 1408
Joined: Tue Dec 04, 2018 5:16 pm

Re: Random Thread

Post by rotting bones »

zompist wrote: Tue Nov 09, 2021 6:49 pm I don't know what that means. You realize that temperature is not homogenous in a meal?
Why have the temperature for every atom when you can have the average temperature for a region and let the computer generate atomic velocities that match the required temperatures?
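Here's a minimal sketch of what I mean (nothing canonical, just Maxwell-Boltzmann sampling; the mass and temperature are arbitrary):

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def sample_velocities(n_atoms, mass_kg, temperature_k, rng=None):
    """Regenerate per-atom velocities from a stored average temperature.

    In the Maxwell-Boltzmann distribution, each Cartesian velocity
    component is normal with mean 0 and variance k_B * T / m.
    """
    rng = rng or np.random.default_rng()
    sigma = np.sqrt(K_B * temperature_k / mass_kg)
    return rng.normal(0.0, sigma, size=(n_atoms, 3))

m_water = 2.99e-26  # kg, roughly one water molecule
v = sample_velocities(100_000, m_water, temperature_k=350.0)

# Sanity check: kinetic temperature T = m <|v|^2> / (3 k_B)
t_est = m_water * np.mean(np.sum(v**2, axis=1)) / (3 * K_B)
print(f"target 350.0 K, realized {t_est:.1f} K")
```

You store one number per region and still get physically plausible velocities back.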
zompist wrote: Tue Nov 09, 2021 6:49 pm The original quote claimed that replicators worked at the "subatomic level". You can make a fake technology do anything if you keep changing the parameters every time a point is questioned. The replication isn't precise enough? It's subatomic! The replication takes too much data storage? It's compressed a trillion-fold!
In general, scientific explanations in Star Trek are nonsense.
zompist wrote: Tue Nov 09, 2021 6:49 pm FWIW I'm happy to let the replicator store things at the molecular level. The average number of atoms in an amino acid, as an example, is 19. That saves you an order of magnitude. Out of 25. That does not make the problem go away, especially if people insist that every possible variation and every possible ingredient is also accounted for.
Say you ignore substances that occur in trace amounts. If you use statistical information about major chemicals occurring in given regions and the computer separately knows how to construct each of those molecules, then you don't need information at the molecular level either.

(You can generate small graphs that summarize relevant properties of larger graphs: https://arxiv.org/abs/1802.03480 You can generate graphs from summaries: https://arxiv.org/abs/1805.11973 Etc.)
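For concreteness, here's a guess at what that scheme could look like as data (everything below is invented; it's just the shape of the idea):

```python
from dataclasses import dataclass

@dataclass
class Region:
    temperature_k: float   # average only; velocities can be resampled as above
    mole_fractions: dict   # molecule name -> fraction of the region

# Assembly instructions are stored once and shared by every region:
recipes = {
    "water":   "how to assemble H2O ...",
    "glucose": "how to assemble C6H12O6 ...",
}

seared_surface = Region(temperature_k=420.0,
                        mole_fractions={"water": 0.55, "glucose": 0.002})
```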
zompist
Site Admin
Posts: 2949
Joined: Sun Jul 08, 2018 5:46 am
Location: Right here, probably

Re: Random Thread

Post by zompist »

rotting bones wrote: Tue Nov 09, 2021 7:27 pm Say you ignore substances that occur in trace amounts. If you use statistical information about major chemicals occurring in given regions [...]
I don't know what you're trying to show here. The discussion was about whether the replicator is distinguishable by taste from non-replicator food. You can't just throw out "trace amounts" and assume that the flavor is unaffected. I already provided an example (the Maillard reaction) where trace amounts of substances are important.
rotting bones
Posts: 1408
Joined: Tue Dec 04, 2018 5:16 pm

Re: Random Thread

Post by rotting bones »

zompist wrote: Tue Nov 09, 2021 9:33 pm I don't know what you're trying to show here. The discussion was about whether the replicator is distinguishable by taste from non-replicator food. You can't just throw out "trace amounts" and assume that the flavor is unaffected. I already provided an example (the Maillard reaction) where trace amounts of substances are important.
You're right, "trace amount" would have a different definition in the context of cooking. Maybe I should have used a different expression.

My point is that you don't need to pinpoint exactly where relatively rare molecules are, only concentrations per "region", where "region" is much larger than the molecular scale.

The best counterargument I can think of is that some important chemicals could be so rare that it's difficult for science to tell them apart from unimportant ones. 1. If you argued for this, then I missed it. 2. I don't know of any specific reason to think so. 3. This is more of a problem with analysis than reconstruction anyway.

Another possible counterargument is that there could be a dish that requires specific molecules at highly specific locations instead of distributions, tilings, etc. I don't know of any reason to think so either, but with alien species, who knows?
zompist
Site Admin
Posts: 2949
Joined: Sun Jul 08, 2018 5:46 am
Location: Right here, probably

Re: Random Thread

Post by zompist »

rotting bones wrote: Wed Nov 10, 2021 5:37 am
zompist wrote: Tue Nov 09, 2021 9:33 pm I don't know what you're trying to show here. The discussion was about whether the replicator is distinguishable by taste from non-replicator food. You can't just throw out "trace amounts" and assume that the flavor is unaffected. I already provided an example (the Maillard reaction) where trace amounts of substances are important.
You're right, "trace amount" would have a different definition in the context of cooking. Maybe I should have used a different expression.

My point is that you don't need to pinpoint exactly where relatively rare molecules are, only concentrations per "region", where "region" is much larger than the molecular scale.
OK, try this. Take a cooked steak. Leave half of it alone; put the rest in a blender. Eat both parts. I think you could tell the difference.

I guess you were thinking of, I dunno, a much smaller scale of randomization. What scale is that and why do you think it would be undetectable?

We already have processes that do various sorts of damage to cells in food; we call it "cooking". You seem very sure that any sort of cellular sludge is identical to the results of cooking. I think that's unlikely, because as I've pointed out multiple times, smell and digestion respond to individual molecules. If I'm reading some Googled documents correctly, certain substances can be smelled at a concentration of 1 in 1 trillion molecules. What is the reason for thinking that undifferentiated sludge will fool the digestive system?
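For scale, a back-of-the-envelope check of what 1 in 1 trillion means in absolute numbers (the half-litre sniff is my assumption):

```python
AVOGADRO = 6.022e23    # molecules per mole
MOLAR_VOLUME_L = 24.0  # litres per mole of gas near room temperature
SNIFF_L = 0.5          # assumed volume of one sniff

molecules_per_sniff = SNIFF_L / MOLAR_VOLUME_L * AVOGADRO  # ~1.3e22
detectable = molecules_per_sniff * 1e-12                   # 1-in-a-trillion
print(f"{detectable:.1e} odorant molecules per sniff")     # ~1.3e+10
```

Even at that threshold the nose is still seeing about ten billion molecules; "trace" does not mean negligible.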

If all you mean is "this particular glycosylamine molecule could be located a micron away", that may well be true. Or it may not be true. Maybe the glycosylamine has to be next to a ketosamine molecule to taste right.

I don't know the effect of every possible change to a cell. But I'll give you an everyday example of a change at the cell level being detectable. Compare torn with cut lettuce: the difference is quite evident. That's because cutting, but not tearing, destroys a swath of cell walls.
bradrn
Posts: 6263
Joined: Fri Oct 19, 2018 1:25 am

Re: Random Thread

Post by bradrn »

I consider this topic slightly differently. To me, the question is not: can a replicator create a given food exactly? But: can a replicator create food which is good enough? The former problem is, as has already been intimated, very difficult to solve. But the latter is substantially easier. Lossy compression is a thing, after all, and humans aren’t infinitely sensitive to molecules. Furthermore, some foods lend themselves well to compression: think of a piece of salmon, for instance, organised in nearly-identical layers.

(This is also convenient from a literary perspective: with such a replicator no-one would need to go hungry, but real, non-synthesised, chef-cooked food would still be of superior quality and therefore a luxury.)

Besides, many biomolecules lend themselves well to compression. DNA famously has only four bases; similarly there are only 22 naturally occurring amino acids — that’s only five bits per amino acid! Perhaps there are thousands of different flavour molecules, but they’ll all have pretty similar structures, and there’s only so many ways to arrange atoms into a molecule. (We even have compression codes right now which take advantage of that: to specify e.g. glucose, I don’t need to specify the exact position and momentum of all 24 atoms, but can simply use the SMILES code OC[C@H]1OC(O)[C@H](O)[C@@H](O)[C@@H]1O.) I’d say the really tricky problem would be making sure that all proteins end up correctly folded: recall that mad cow disease is caused by ingestion of misfolded proteins.
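To make the five-bit figure concrete, here's a minimal packing sketch (the residue alphabet is one common choice for the 22):

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWYUO"  # 20 standard + selenocysteine + pyrrolysine
CODE = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def pack(seq: str) -> bytes:
    """Pack a protein sequence at 5 bits per residue (2^5 = 32 >= 22)."""
    bits = 0
    for aa in seq:
        bits = (bits << 5) | CODE[aa]
    return bits.to_bytes((5 * len(seq) + 7) // 8, "big")

print(len(pack("MKWVTFISLLFLFSSAYS")))  # 18 residues -> 12 bytes, vs 18 as ASCII
```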

EDIT: Actually, now that I think about it, does it even matter if the food is incompressible? The advantage of a replicator isn’t that it’s small; the advantage is that it can make food and keep on making it in the same way. For this purpose it’s irrelevant whether it takes a hard drive the size of a building to store the information for one piece of fish. And there is such a thing as the Internet — if the ‘recipe’ is large enough, it can just be stored off-site and downloaded piece by piece.
Conlangs: Scratchpad | Texts | antilanguage
Software: See http://bradrn.com/projects.html
Other: Ergativity for Novices

(Why does phpBB not let me add >5 links here?)
rotting bones
Posts: 1408
Joined: Tue Dec 04, 2018 5:16 pm

Re: Random Thread

Post by rotting bones »

Could you expand on why you think I need to specify the scale of reconstruction? For example, don't you think that each patch of lettuce corresponds to a regional tiling at a scale much larger than individual molecules? It seems to me the tiling information is much more compact than the positions of molecules.

I don't understand why a tiling would taste like mush. Also, see the papers showing how structure can be reliably generated from statistical distributions, such as the generative adversarial network paper by Wang et al. (2018).
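As a toy example of what I mean by a tiling (everything here is invented):

```python
import numpy as np

# Tile contents would each be described once; the patch itself is just IDs.
TILES = {0: "cell-wall patch", 1: "cytoplasm patch", 2: "vein patch"}

rng = np.random.default_rng(0)
grid = rng.choice(len(TILES), size=(1000, 1000), p=[0.25, 0.70, 0.05])

print(grid.astype(np.uint8).nbytes, "bytes of tile IDs")  # 1,000,000 bytes
# Explicit coordinates for every molecule in the same patch would be many
# orders of magnitude larger.
```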


Of course, none of this matters if a meal comes along that requires so many different types of important molecules that storing the reconstruction information for all of them breaks the computer's memory banks.
rotting bones
Posts: 1408
Joined: Tue Dec 04, 2018 5:16 pm

Re: Random Thread

Post by rotting bones »

bradrn wrote: Wed Nov 10, 2021 6:34 am EDIT: Actually, now that I think about it, does it even matter if the food is incompressible? The advantage of a replicator isn’t that it’s small; the advantage is that it can make food and keep on making it in the same way. For this purpose it’s irrelevant whether it takes a hard drive the size of a building to store the information for one piece of fish. And there is such a thing as the Internet — if the ‘recipe’ is large enough, it can just be stored off-site and downloaded piece by piece.
Can Star Trek science instantly send messages over long distances? Does it cost anything?

Edit: I think zompist's objection has to do with the positions of molecules, not the storage space required for each type of molecule.
bradrn
Posts: 6263
Joined: Fri Oct 19, 2018 1:25 am

Re: Random Thread

Post by bradrn »

rotting bones wrote: Wed Nov 10, 2021 7:01 am Can Star Trek science instantly send messages long distances away? Does it cost anything?
I haven’t the foggiest. I’ve never watched the show.
rotting bones wrote: Wed Nov 10, 2021 7:01 am Edit: I think zompist's objection has to do with the positions of molecules, not the storage space required for each type of molecule.
I thought his objection was that storing this sort of information about each and every molecule would take up too much space.
Ares Land
Posts: 3024
Joined: Sun Jul 08, 2018 12:35 pm

Re: Random Thread

Post by Ares Land »

I think the idea is more that people could tell when it's replicator food. And, presumably, complain about it!
Zju
Posts: 912
Joined: Fri Aug 03, 2018 4:05 pm

Re: Random Thread

Post by Zju »

Quote: The idea that highly massaged data is undetectable is another form of engineer's disease.
Highly massaged data isn't, but adequately compressed data is undetectable to humans. People cannot tell an appropriately compressed JPEG apart from an uncompressed image. (Otherwise we wouldn't be using JPEGs at all! And they're compressed by an order of magnitude or more.) As you said, fooling the eyes is easy.
As for fooling the tongue, I guess we'll never know for certain, unless it turns out our knowledge of subatomic particles is fundamentally wrong, or we somehow come up with bioprinting even though it's correct.
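For instance, with Pillow (the filename is a placeholder):

```python
from PIL import Image

img = Image.open("photo.png").convert("RGB")
for q in (95, 75, 40):
    img.save(f"photo_q{q}.jpg", quality=q)

# At moderate settings the file is roughly an order of magnitude smaller
# and most viewers can't tell; only aggressive settings show artefacts.
```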
Quote: Um, trompe-l'œil dates back to ancient times.
I don't get how that follows. Producing a piece of trompe-l'œil depends on an artist's skills, their availability in the first place, and a non-trivial amount of time. Printing one depends only on data availability (and pesky details like toner and actually having a printer).
Quote: But then you can't posit that a dish has thousands of variants and can be tweaked to individual taste.
We already have ad customisation; maybe for once we could use customisation algorithms for good.
Quote: Scan the molecules, store everything; reproduce it all layer by layer. (Well, I do have questions about how you "scan" something like a soup. I guess the process has to be extremely fast (so the food doesn't have time to change), and destructive (as it's barely plausible that you can "scan" a surface, less so that you can scan the insides of something).)
For all we know, that's the single biggest problem with replicators.
/j/ <j>

Ɂaləɂahina asəkipaɂə ileku omkiroro salka.
Loɂ ɂerleku asəɂulŋusikraɂə seləɂahina əɂətlahɂun əiŋɂiɂŋa.
Hərlaɂ. Hərlaɂ. Hərlaɂ. Hərlaɂ. Hərlaɂ. Hərlaɂ. Hərlaɂ.
rotting bones
Posts: 1408
Joined: Tue Dec 04, 2018 5:16 pm

Re: Random Thread

Post by rotting bones »

Ares Land wrote: Wed Nov 10, 2021 8:02 am I think the idea is more that people could tell when it's replicator food. And, presumably, complain about it!
IIRC disliking replicated food is canon in Star Trek.
rotting bones
Posts: 1408
Joined: Tue Dec 04, 2018 5:16 pm

Re: Random Thread

Post by rotting bones »

bradrn wrote: Wed Nov 10, 2021 7:10 am I thought his objection was that storing this sort of information about each and every molecule would take up too much space.
Could be, but I started out by asking why we should repeat the information for each molecule when we can normalize it:

Molecule 1 has structure: ...

Molecule 2 has structure: ...

Molecule types by position: 1 1 2 1 2 2 1 ...

Why repeat the structure information at every position?
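In code, that's just dictionary coding (the contents are placeholders):

```python
structures = [
    "full description of molecule type 1 ...",  # stored once, however long
    "full description of molecule type 2 ...",
]
positions = [0, 0, 1, 0, 1, 1, 0]  # one small integer per site

reconstruction = [structures[i] for i in positions]
# Cost: 2 full descriptions + 7 small indices,
# instead of 7 full descriptions.
```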

But I'll let zompist clarify this point if he wants.
Zju
Posts: 912
Joined: Fri Aug 03, 2018 4:05 pm

Re: Random Thread

Post by Zju »

As Zompist said, that'd save you only about an order of magnitude of information. So instead of storing 100 tonnes worth of dish information, you'd have to store "only" about 10 tonnes worth of dish information.

On the other hand, I'm still not sure why one couldn't store that information as slices and whatnot.
rotting bones
Posts: 1408
Joined: Tue Dec 04, 2018 5:16 pm

Re: Random Thread

Post by rotting bones »

Zju wrote: Wed Nov 10, 2021 12:50 pm As Zompist said, that'd save you only about an order of magnitude of information. So instead of storing 100 tonnes worth of dish information, you'd have to store "only" about 10 tonnes worth of dish information.

On the other hand, I'm still not sure why one couldn't store that information as slices and whatnot.
I tried to explain why I don't understand that objection:
rotting bones wrote: Wed Nov 10, 2021 6:59 am Could you expand on why you think I need to specify the scale of reconstruction? For example, don't you think that each patch of lettuce corresponds to a regional tiling at a scale much larger than individual molecules? It seems to me the tiling information is much more compact than the positions of molecules.

I don't understand why a tiling would taste like mush. Also, see the papers showing how structure can be reliably generated from statistical distributions, such as the generative adversarial network paper by Wang et al. (2018).


Of course, none of this matters if a meal comes along that requires so many different types of important molecules that storing the reconstruction information for all of them breaks the computer's memory banks.
BTW I also posted multiple papers showing that structure can be reliably reconstructed from statistical information, though there might be scalability issues.
Zju
Posts: 912
Joined: Fri Aug 03, 2018 4:05 pm

Re: Random Thread

Post by Zju »

Right, I didn't infer that that paragraph was about an argument of mine. I agree that for lettuce a patch much smaller than slices will suffice, provided that one also specifies information about lettuce "veins" and whatnot.

As for the papers, I might actually add them to my winter holiday reading list.
zompist
Site Admin
Posts: 2949
Joined: Sun Jul 08, 2018 5:46 am
Location: Right here, probably

Re: Random Thread

Post by zompist »

bradrn wrote: Wed Nov 10, 2021 6:34 am I consider this topic slightly differently. To me, the question is not: can a replicator create a given food exactly? But: can a replicator create food which is good enough? The former problem is, as has already been intimated, very difficult to solve. But the latter is substantially easier. Lossy compression is a thing, after all, and humans aren’t infinitely sensitive to molecules. Furthermore, some foods lend themselves well to compression: think of a piece of salmon, for instance, organised in nearly-identical layers.

(This is also convenient from a literary perspective: with such a replicator no-one would need to go hungry, but real, non-synthesised, chef-cooked food would still be of superior quality and therefore a luxury.)
I agree with all this. I have no problem with replicators that are "pretty good" but imperfect enough to complain about.
DNA famously has only four bases; similarly there are only 22 naturally occurring amino acids — that’s only five bits per amino acid! Perhaps there are thousands of different flavour molecules, but they’ll all have pretty similar structures, and there’s only so many ways to arrange atoms into a molecule. (We even have compression codes right now which take advantage of that: to specify e.g. glucose, I don’t need to specify the exact position and momentum of all 24 atoms, but can simply use the SMILES code OC[C@H]1OC(O)[C@H](O)[C@@H](O)[C@@H]1O.)
I already mentioned amino acids— you're saving just one order of magnitude there. As for glucose, you've provided a 38-byte code to store 24 atoms. :P How many atoms does it take to store those bytes?
zompist
Site Admin
Posts: 2949
Joined: Sun Jul 08, 2018 5:46 am
Location: Right here, probably

Re: Random Thread

Post by zompist »

rotting bones wrote: Tue Nov 09, 2021 7:27 pm (You can generate small graphs that summarize relevant properties of larger graphs: https://arxiv.org/abs/1802.03480
OK, I looked over this one— it's almost all over my head. But my immediate reaction is, "You'd eat that?" Did you look at the accuracy levels in their tables? Brad just told you that an incorrectly folded protein can give you a fatal disease. But you'd eat food where the proteins have been cobbled together by a machine algorithm so that the protein diagrams sometimes look like the real ones?

When we talk about "compression", we really need to specify for what purpose. I'm sure the authors' algorithm has some use for chemists. Perhaps sometimes you have a shitload of data, and finding patterns in it quickly is useful. Creating food has much lower tolerances.
bradrn
Posts: 6263
Joined: Fri Oct 19, 2018 1:25 am

Re: Random Thread

Post by bradrn »

zompist wrote: Wed Nov 10, 2021 4:13 pm
DNA famously has only four bases; similarly there are only 22 naturally occurring amino acids — that’s only five bits per amino acid! Perhaps there are thousands of different flavour molecules, but they’ll all have pretty similar structures, and there’s only so many ways to arrange atoms into a molecule. (We even have compression codes right now which take advantage of that: to specify e.g. glucose, I don’t need to specify the exact position and momentum of all 24 atoms, but can simply use the SMILES code OC[C@H]1OC(O)[C@H](O)[C@@H](O)[C@@H]1O.)
I already mentioned amino acids— you're saving just one order of magnitude there. As for glucose, you've provided a 38-byte code to store 24 atoms. :P
Well, the comparison isn’t quite that simple, because it’s not just 24 atoms, but also the bonding between them: e.g. fructose has exactly the same atoms, but arranged in a different way.
zompist wrote: Wed Nov 10, 2021 4:13 pm How many atoms does it take to store those bytes?
Looking up ‘hard drive storage density’, it appears that the best as of 2015 could store 1.34 TBit/in², which works out to about 2×10⁻³ Bit/nm². That gives roughly 1.5×10⁵ nm² for the 304-bit SMILES code, compared to 1.4 nm² for the glucose molecule (source: PubChem), which if anything is an overestimate. But then, storage densities have gone up since 2015, and are continuing to increase. I’m not sure if we’ll ever be able to describe glucose in a space smaller than an actual glucose molecule, but I can’t rule it out.

EDIT: My first calculation was wrong, sorry; the figures above are the corrected ones.
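Redoing the conversion explicitly, taking the 1.34 TBit/in² figure on trust:

```python
NM_PER_INCH = 2.54e7
bits_per_nm2 = 1.34e12 / NM_PER_INCH**2   # ~2.1e-3 bit/nm^2
smiles_bits = 38 * 8                      # the 38-character code as ASCII
print(smiles_bits / bits_per_nm2, "nm^2") # ~1.5e5 nm^2
```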
rotting bones
Posts: 1408
Joined: Tue Dec 04, 2018 5:16 pm

Re: Random Thread

Post by rotting bones »

zompist wrote: Wed Nov 10, 2021 4:20 pm OK, I looked over this one— it's almost all over my head. But my immediate reaction is, "You'd eat that?" Did you look at the accuracy levels in their tables? Brad just told you that an incorrectly folded protein can give you a fatal disease. But you'd eat food where the proteins have been cobbled together by a machine algorithm so that the protein diagrams sometimes look like the real ones?

When we talk about "compression", we really need to specify for what purpose. I'm sure the authors' algorithm has some use for chemists. Perhaps sometimes you have a shitload of data, and finding patterns in it quickly is useful. Creating food has much lower tolerances.
That's the simpler model. The second one I linked in the same post has near 100% accuracy: https://arxiv.org/abs/1805.11973 There's better work out there, but I don't understand it.
zompist
Site Admin
Posts: 2949
Joined: Sun Jul 08, 2018 5:46 am
Location: Right here, probably

Re: Random Thread

Post by zompist »

Zju wrote: Wed Nov 10, 2021 12:50 pm On the other hand, I'm still not sure why one couldn't store that information as slices and whatnot.
Sure, you can do this. I just think it's way harder than people are thinking.

Try this: take a steak. Cut it carefully into slices or cubes of the size you think is best for your replicator.

Congrats, you've invented steak tartare. Which tastes different from an uncut steak.

Cook it, and you've invented hamburger. (To be more precise, hamburger is a combination of belly fat from certain cattle, and lean meat from certain other cattle. Both sources on their own are unpalatable, but the combination is tasty enough. And also doesn't taste like a cooked steak.)

I granted the slices idea in the original post— I said you could use 1 cm³ samples instead. I think if people are thinking "all I need are ten molecules", they're fooling themselves— most dishes have more than ten ingredients, and none of them are homogenous in a cooked dish.

But for fun, let's look at the other side of the problem: how many variations do you need? It's trivial to identify replicator food if every time you run it, you get Anton Ego's ratatouille. How much variety is needed so that people don't complain it's either repetitive, or not as good as Mama's food?

1. Multiple varieties of each source ingredient. As a starter, there's 250 breeds of cattle.
2. Sub-varieties: sex, age, gelding. Yes, these taste different.
3. Individual variation— think wine or cheese.
4. Parts of the ingredient— e.g. cuts of beef.
5. Variation within the part (beef has connective tissue, bits of fat, etc.)
6. The appearance of the cell at various temperatures from frozen to seared.
7. The effect of various pre-heating techniques: tenderizing, cutting (cf. my lettuce example), marination, aging, fermentation.
8. The effect of different heating techniques: slow-cooked is very different from flash-fried.
9. The effect of distance from the heat— we probably want the outside of things seared, the inside far less so.
10. The effect of post-heating techniques— glazing, pickling, etc.

Let's say each of these has 20 samples. These are independent variables, so their effects multiply. That's a factor of 10 trillion, or 13 orders of magnitude. And a real cook or food scientist could probably double my list.
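Spelling out that multiplication, ten independent variables at 20 samples each:

$$20^{10} = 1.024 \times 10^{13} \approx 10^{13}$$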

And please, folks, don't go all engineer's-disease on me and say "oh, the temperature differences don't need separate samples, that's just a simple filter." Cooking is a complex process. It's not "all the molecules stay the same but get hotter." It breaks down molecules and creates new ones in complex ways that aren't even fully understood.

I understand, by the way, that we've got some warring intuitions going. You folks are thinking "that zompist, he's not seeing all the repetitiveness in the data." And I'm thinking "These folks keep forgetting how complex any food dish is, and forget how badly engineers can slip up by making inappropriate simplifications."