
Re: Random Thread

Posted: Wed Nov 10, 2021 6:05 pm
by bradrn
zompist wrote: Wed Nov 10, 2021 5:29 pm
Zju wrote: Wed Nov 10, 2021 12:50 pm On the other hand, I'm still not sure why one couldn't store that information as slices and whatnot.
Sure, you can do this. I just think it's way harder than people are thinking.

Try this: take a steak. Cut it carefully into slices or cubes of the size you think is best for your replicator.

Congrats, you've invented steak tartare. Which tastes different from an uncut steak.
Steak tartare is ground or minced meat. This is different to cutting it up into chunks, which is a perfectly normal way of eating steak.
I granted the slices idea in the original post— I said you could use 1 cm³ samples instead. I think if people are thinking "all I need are ten molecules", they're fooling themselves— most dishes have more than ten ingredients, and none of them are homogeneous in a cooked dish.
Yes, I agree with this. I was thinking more ‘several thousand ingredients’ throughout my last few posts. (Though in practice this compresses well, see below and also my previous posts.)
How much variety is needed so that people don't complain it's either repetitive, or not as good as Mama's food?
But this is an extremely high bar. Of course replicator food is going to be repetitive. (If it weren’t, it wouldn’t be called a ‘replicator’.) And even many human cooks can’t make food ‘as good as Mama’s’.
And please, folks, don't go all engineer's-disease on me and say "oh, the temperature differences don't need separate samples, that's just a simple filter." Cooking is a complex process. It's not "all the molecules stay the same but get hotter." It breaks down molecules, creates new ones in a complex way that isn't even fully understood.
This feels a bit like a strawman. Is anyone here seriously disputing that molecules undergo chemical reactions when heated?

That being said, the process can be simplified to quite an extent. At the macro scale, these reactions can be approximated as a statistical process: after being heated, we get 20% of compound A, 10% of compound B, 2% of compound C etc. These processes are easily simulated by a computer — it’s as easy as ‘pick a random number in [0,1) and use it to choose a molecule from the list’. And because there’s only so many chemical reactions, all these molecules will have a relatively similar structure: it’s not like you get completely random molecules out of the process.
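To make this concrete, here's a rough Python sketch of that sampling step. The compound names and probabilities are invented for illustration, not real reaction data:

import random

# Hypothetical post-heating product distribution for one ingredient.
# Names and probabilities are made up for illustration.
PRODUCTS = [
    ("compound A", 0.20),
    ("compound B", 0.10),
    ("compound C", 0.02),
    ("unchanged",  0.68),
]

def sample_product(distribution):
    """Pick one reaction product, weighted by its probability."""
    r = random.random()  # uniform in [0, 1)
    cumulative = 0.0
    for molecule, probability in distribution:
        cumulative += probability
        if r < cumulative:
            return molecule
    return distribution[-1][0]  # guard against floating-point rounding

print(sample_product(PRODUCTS))

Run it enough times and the output frequencies converge on the stated distribution, which is all the replicator needs at the macro scale.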
I understand, by the way, that we've got some warring intuitions going. You folks are thinking "that zompist, he's not seeing all the repetitiveness in the data." And I'm thinking "These folks keep forgetting how complex any food dish is, and forget how badly engineers can slip up by making inappropriate simplifications."
And I’m saying: (a) there are simplifications which can be made without compromising quality too much (or, in some cases, at all); and (b) quality doesn’t matter as much as you might think; and (c) even if the data couldn’t be simplified at all, that would be irrelevant.

Re: Random Thread

Posted: Wed Nov 10, 2021 6:19 pm
by rotting bones
I found a paper that claims to have 100% accuracy and is not based on too much linear algebra: https://arxiv.org/abs/1805.09076

Disclaimer: I haven't read this one before. It's not my fault if it contains something that makes it irrelevant to this discussion.

Re: Random Thread

Posted: Wed Nov 10, 2021 6:54 pm
by zompist
bradrn wrote: Wed Nov 10, 2021 6:05 pm Steak tartare is ground or minced meat. This is different to cutting it up into chunks, which is a perfectly normal way of eating steak.
Most of your post is reasonable, so I'll try not to be snarky. You can absolutely vary the parameters of how this imaginary machine is supposed to work. You can't vary it moment-by-moment, in contradictory ways, to swat down objections as they occur.

When I say that chunks of steak take a lot of data space, people say the chunks are molecular level or below.

When I say that patterns across whole swaths of food are important (e.g. the searing patterns within a steak), then people say the chunks are small slices along a vector of the food.

When I say that beef in small slices tastes different than an uncut steak, you say the chunks are bite sized.

You can have it one of these ways; you can't have the machine do it all ways at once, depending on which criticism is being made.

I've made steak tartare. The recipe I used did not call for mincing. Rather, you chip out small bits of raw meat using a spoon. Very likely the spoon is used precisely to avoid damaging the cell structure of the meat. You remove fat and connective tissue as you go. (You also add a raw egg and various spices.)
How much variety is needed so that people don't complain it's either repetitive, or not as good as Mama's food?
But this is an extremely high bar. Of course replicator food is going to be repetitive. (If it weren’t, it wouldn’t be called a ‘replicator’.) And even many human cooks can’t make food ‘as good as Mama’s’.
It's not a high bar, it's exactly how we judge all the food we eat. But if you're saying that people can tell that it's replicator food-- great, that is my point.
That being said, the process can be simplified to quite an extent. At the macro scale, these reactions can be approximated as a statistical process: after being heated, we get 20% of compound A, 10% of compound B, 2% of compound C etc. These processes are easily simulated by a computer — it’s as easy as ‘pick a random number in [0,1) and use it to choose a molecule from the list’. And because there’s only so many chemical reactions, all these molecules will have a relatively similar structure: it’s not like you get completely random molecules out of the process.
Again, you can't just mix all the levels of analysis and pick the one that fits the last objection someone made.

Is it a molecule-level reconstruction, or is it a simulation? Does it know about bite-sized sampled chunks, or is it a sub-molecular-level physics simulator? If you have 10¹⁸ bytes of data on molecule positions, is it that easy to find things like "cell walls"?

To put it another way: there are various ways Star Trek could store a book. It could use the replicator technology to store every molecule of a printed book and 3-D print it on demand. It could store all the font and placement information like a PDF does. It could store just the text in Unicode. It could store a seed and randomly generate the text from it. On a scientific level, we can understand the interrelationships of these levels: in principle, reductionism works. In practice, you'd better pick one level, not all of them at once. If you want the text, the molecule-level reconstruction makes it very hard to get, while the seed method (while good for, say, simulating a library in a holodeck recreation of a place) loses details in multiple texts.
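As a toy illustration of the seed method, with an invented word list (nothing here pretends to be how a real replicator or holodeck works):

import random

# The 'seed' level of storage: regenerate text on demand rather than
# storing any actual book. The word list is invented for illustration.
WORDS = ["the", "captain", "ordered", "tea", "earl", "grey", "hot"]

def generate_text(seed, length=10):
    rng = random.Random(seed)  # the same seed always yields the same text
    return " ".join(rng.choice(WORDS) for _ in range(length))

assert generate_text(47) == generate_text(47)  # reproducible on demand
print(generate_text(47))

The same seed always gives the same 'book', but no particular real text can be recovered from it: that's the detail loss.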
And I’m saying: (a) there are simplifications which can be made without compromising quality too much (or, in some cases, at all); and (b) quality doesn’t matter as much as you might think; and (c) even if the data couldn’t be simplified at all, that would be irrelevant.
(a) it all depends on what "too much" is. Something can be "good enough" without being "undetectable". There's an entire subdiscipline of psychology devoted to this, and it's yet another area where engineers' simplifications are just wrong.
(b) I don't know what that even means. I'm reacting to claims that replicator food is undetectable. That's an extremely strong claim and I've given many reasons to doubt it. Saying my standards are too high is straw-manning. Yes, you can argue that replicator food is "good enough", but that is not refuting any position that I am arguing.
(c) I don't get this.

Re: Random Thread

Posted: Wed Nov 10, 2021 7:04 pm
by rotting bones
zompist wrote: Wed Nov 10, 2021 6:54 pm It's not a high bar, it's exactly how we judge all the food we eat. But if you're saying that people can tell that it's replicator food-- great, that is my point.
Again, are you sure disliking replicator food isn't canon in Star Trek? I vaguely remember scenes to that effect from DS9.

(You might have missed these posts, I might have missed your responses, etc: http://verduria.org/viewtopic.php?p=52174#p52174 http://verduria.org/viewtopic.php?p=52216#p52216)

Re: Random Thread

Posted: Wed Nov 10, 2021 7:28 pm
by zompist
rotting bones wrote: Wed Nov 10, 2021 7:04 pm
zompist wrote: Wed Nov 10, 2021 6:54 pm It's not a high bar, it's exactly how we judge all the food we eat. But if you're saying that people can tell that it's replicator food-- great, that is my point.
Again, are you sure disliking replicator food isn't canon in Star Trek? I vaguely remember scenes to that effect from DS9.
I expect it is canon. My posting said what it was responding to-- a tumblr user who said that replicator food was sub-molecular-level and undetectable, and the ensuing discussion there and on Metafilter.

(And, no disrespect to the tumblr person. It led to a great discussion, and they backed off their claim anyway.)
(You might have missed these posts, I might have missed your responses, etc: http://verduria.org/viewtopic.php?p=52174#p52174 http://verduria.org/viewtopic.php?p=52216#p52216)
I think my response to Brad should answer your question too. You can't just vary the chunk size according to the objection you're fielding at the moment.

Re: Random Thread

Posted: Wed Nov 10, 2021 7:31 pm
by bradrn
zompist wrote: Wed Nov 10, 2021 6:54 pm Most of your post is reasonable, so I'll try not to be snarky. You can absolutely vary the parameters of how this imaginary machine is supposed to work. You can't vary it moment-by-moment, in contradictory ways, to swat down objections as they occur.
I think you’re confusing what I said with what other people said:
When I say that chunks of steak take a lot of data space, people say the chunks are molecular level or below.
I don’t recall ever saying this.
When I say that patterns across whole swaths of food are important (e.g. the searing patterns within a steak), then people say the chunks are small slices along a vector of the food.

When I say that beef in small slices tastes different than an uncut steak, you say the chunks are bite sized.
I agree with both of these, since they’re pretty much synonymous. (For perspective, when I eat fish or meat, I tend to eat it in bite-size slices.) You need to take the chunk dimensions which are most ‘natural’ for each food — no-one said they have to be the same size across all possible foods.
But if you're saying that people can tell that it's replicator food-- great, that is my point.
Yes, I strongly suspect this will be the case. My expectation is that replicator food would just feel a bit bland compared to other food — not anything you could put your finger on precisely, just lacking the same ‘richness’ a bit.
That being said, the process can be simplified to quite an extent. At the macro scale, these reactions can be approximated as a statistical process: after being heated, we get 20% of compound A, 10% of compound B, 2% of compound C etc. These processes are easily simulated by a computer — it’s as easy as ‘pick a random number in [0,1) and use it to choose a molecule from the list’. And because there’s only so many chemical reactions, all these molecules will have a relatively similar structure: it’s not like you get completely random molecules out of the process.
Again, you can't just mix all the levels of analysis and pick the one that fits the last objection someone made.

Is it a molecule-level reconstruction, or is it a simulation? Does it know about bite-sized sampled chunks, or is it a sub-molecular-level physics simulator? If you have 10¹⁸ bytes of data on molecule positions, is it that easy to find things like "cell walls"?
How I’ve been thinking of it all along is that the replicator has two things: a library of molecules and proteins, and a (possibly pseudorandom) specification for where they should go. As it goes through the specification, it samples compounds from the library and places them in the correct spots. (Which is an even bigger challenge than what we’ve been talking about, but without this you have no replicators at all, so we have to ignore it.)
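A minimal Python sketch of that two-part model, with an invented compound library and a toy grid standing in for the placement specification:

import random

# Toy model: a compound library plus a seeded (pseudorandom) placement
# specification. The library entries and grid size are invented.
LIBRARY = ["water", "myoglobin", "collagen", "maillard products"]

def replicate(seed, rows=4, cols=4):
    """Fill a small grid with compounds, deterministically from a seed."""
    rng = random.Random(seed)  # same seed, same dish, every time
    return [[rng.choice(LIBRARY) for _ in range(cols)]
            for _ in range(rows)]

# The whole 'recipe' is just the seed plus the library: two calls with
# the same seed produce identical output.
assert replicate(42) == replicate(42)

(The real placement step is of course the hard part, as I said, but this shows why the stored data can be tiny.)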
(a) it all depends on what "too much" is. Something can be "good enough" without being "undetectable".
Agreed. This is what I was saying above.
(b) I don't know what that even means. I'm reacting to claims that replicator food is undetectable. That's an extremely strong claim and I've given many reasons to doubt it.
As I said, I strongly doubt it also.
Saying my standards are too high is straw-manning. Yes, you can argue that replicator food is "good enough", but that is not refuting any position that I am arguing.
Earlier, you strongly implied that even a single molecule in the wrong place would cause a detectable difference. I’m saying that even if this were the case, it wouldn’t matter as long as the food is ‘good enough’. (Or, if you weren’t implying that, I apologise for misinterpreting.)
(c) I don't get this.
It’s basically repeating what I said earlier:
bradrn wrote: Wed Nov 10, 2021 6:34 am EDIT: Actually, now that I think about it, does it even matter if the food is incompressible? The advantage of a replicator isn’t that it’s small; the advantage is that it can make food and keep on making it in the same way. For this purpose it’s irrelevant whether it takes a hard drive the size of a building to store the information for one piece of fish. And there is such a thing as the Internet — if the ‘recipe’ is large enough, it can just be stored off-site and downloaded piece by piece.

Re: Random Thread

Posted: Wed Nov 10, 2021 8:00 pm
by rotting bones
zompist wrote: Wed Nov 10, 2021 7:28 pm I think my response to Brad should answer your question too. You can't just vary the chunk size according to the objection you're fielding at the moment.
The only levels I mentioned were statistical distributions and tiling. I thought I presented some pretty compelling evidence that translating statistical distributions into sharp structures is well within our reach.

In any case, why should I commit to a specific chunk size when I can let the model vary it within different regions, and just pick the best model? In machine learning, we try to choose hyperparameters that pick the best low level parameters wherever possible. This approach often outperforms human experts if you feed the model enough data.
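Here's a toy numeric sketch of what I mean, in Python with synthetic data (the storage penalty weight is arbitrary): approximate each region by per-chunk averages, score each chunk size by reconstruction error plus the cost of storing the averages, and keep whichever size wins per region.

import numpy as np

def cost(region, size, storage_weight=0.02):
    """Error from replacing each chunk with its average, plus storage cost."""
    n = len(region) - len(region) % size  # drop any ragged tail
    chunks = region[:n].reshape(-1, size)
    mse = float(((chunks - chunks.mean(axis=1, keepdims=True)) ** 2).mean())
    return mse + storage_weight / size   # one stored average per chunk

rng = np.random.default_rng(0)
coarse = np.repeat(rng.random(8), 16)  # smooth region: big chunks suffice
fine = rng.random(128)                 # noisy region: needs small chunks

for name, region in (("coarse", coarse), ("fine", fine)):
    best = min((1, 2, 4, 8, 16), key=lambda s: cost(region, s))
    print(name, "-> best chunk size:", best)

The smooth region ends up with big chunks and the noisy one with small ones; a real model would learn this per region instead of being handed a single global chunk size.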

Re: Random Thread

Posted: Wed Nov 10, 2021 8:23 pm
by zompist
rotting bones wrote: Wed Nov 10, 2021 8:00 pm
zompist wrote: Wed Nov 10, 2021 7:28 pm I think my response to Brad should answer your question too. You can't just vary the chunk size according to the objection you're fielding at the moment.
The only levels I mentioned were statistical distributions and tiling. I thought I presented some pretty compelling evidence that translating statistical distributions into sharp structures is well within our reach.

In any case, why should I commit to a specific chunk size when I can let the model vary it within different regions, and just pick the best model? In machine learning, we try to choose hyperparameters that pick the best low level parameters wherever possible. This approach often outperforms human experts if you feed the model enough data.
I don't know enough about machine learning to reply in detail. I do know enough about it to be more skeptical about it than you seem to be.

I mean, have you looked at (say) the pictures on https://thispersondoesnotexist.com/? On one level they're absolutely amazing. On another... well, see how many pictures you have to load till you find something really strange or disturbing. It didn't take me long to find things like eyeglasses missing on half the face, or unconvincing hair-skin transitions, to say nothing of horror-movie backgrounds.

(And yes, the pictures will get better in the next decades. Pictures are easy. If you assume that all engineering problems are no harder than image processing.... well, please tell me you don't work on self-driving cars.)

Machine learning systems are also notorious for being only as good as the data they were trained on. Give this project to a bunch of engineers and you're likely to get an uncanny simulation of the restaurant food they most like to eat, and crappy simulations of everything else.

Re: Random Thread

Posted: Wed Nov 10, 2021 8:58 pm
by rotting bones
zompist wrote: Wed Nov 10, 2021 8:23 pm I don't know enough about machine learning to reply in detail. I do know enough about it to be more skeptical about it than you seem to be.

I mean, have you looked at (say) the pictures on https://thispersondoesnotexist.com/? On one level they're absolutely amazing. On another... well, see how many pictures you have to load till you find something really strange or disturbing. It didn't take me long to find things like eyeglasses missing on half the face, or unconvincing hair-skin transitions, to say nothing of horror-movie backgrounds.

(And yes, the pictures will get better in the next decades. Pictures are easy. If you assume that all engineering problems are no harder than image processing.... well, please tell me you don't work on self-driving cars.)

Machine learning systems are also notorious for being only as good as the data they were trained on. Give this project to a bunch of engineers and you're likely to get an uncanny simulation of the restaurant food they most like to eat, and crappy simulations of everything else.
I recently mentioned the bias-variance tradeoff myself: https://verduria.org/viewtopic.php?p=51964#p51964

I was objecting to very specific criticisms like replicator food being "mush". Similarly, I think a model that tiles molecules known to be present in the required output, albeit by probability distributions, will probably perform better than one that has just seen a lot of pictures of the required output within an adversarial setting.

On the one hand, I doubt the replicator project can be left to present day machine learning models. Possibly, we'll have to go back to the drawing board and create new models from the parent discipline, pattern recognition: https://drive.google.com/file/d/1lluiTT ... p=drivesdk Most likely we'll need separate models for each ingredient, and a model that coordinates these sub-models.

On the other hand, it seems unlikely to me that no verisimilitude is possible in this field despite the size of the training samples the Federation could muster. Note that I've linked to a model that claims to translate between distribution and structure with 100% accuracy!

Re: Random Thread

Posted: Wed Nov 10, 2021 10:22 pm
by zompist
rotting bones wrote: Wed Nov 10, 2021 8:58 pm I was objecting to very specific criticisms like replicator food being "mush".
I don't feel that you're reading what I say at this point. I never said that; see my last post to Brad, specifically the discussion of "good enough".

Re: Random Thread

Posted: Thu Nov 11, 2021 6:20 am
by rotting bones
zompist wrote: Wed Nov 10, 2021 10:22 pm I don't feel that you're reading what I say at this point. I never said that; see my last post to Brad, specifically the discussion of "good enough".
That was an old objection to stuff like this: https://verduria.org/viewtopic.php?p=52171#p52171 I gave a new one in the sentence after that, for example.

I approve of your defense of common sense from Tumblr. Sorry if that's so obvious it didn't need saying. I wish I could say something more on Tumblr speculations about Star Trek. If I'm silent, it's because I know nothing about it and researching it sounds kind of boring.

Re: Random Thread

Posted: Thu Nov 11, 2021 8:46 am
by MacAnDàil
Zompist is right. Cooking is much more complex than we necessarily realise. We need to be wary of falling into the McNamara fallacy. I expect replicator food would likely be even worse than baby food. No wonder you need to choo-choo babies into eating those pots.

The mention of 250 cattle varieties reminds me that some cattle varieties have gone extinct, as Philippe Jacques Dubois mentions in La grande amnésie écologique. The "ecological amnesia" of the title refers to how modern people (farmers included) do not necessarily realise what the fauna and flora, including domesticated varieties, used to be like. So they not only become extinct but are also forgotten.

Re: Random Thread

Posted: Thu Nov 11, 2021 8:53 am
by rotting bones
MacAnDàil wrote: Thu Nov 11, 2021 8:46 am Zompist is right. Cooking is much more complex than we necessarily realise. We need to be wary of falling into the McNamara fallacy. I expect replicator food would likely be even worse than baby food. No wonder you need to choo-choo babies into eating those pots.

The mention of 250 cattle varieties reminds me that some cattle varieties have gone extinct, as Philippe Jacques Dubois mentions in La grande amnésie écologique. The "ecological amnesia" of the title refers to how modern people (farmers included) do not necessarily realise what the fauna and flora, including domesticated varieties, used to be like. So they not only become extinct but are also forgotten.
Anyone can say whatever words they want. I can say that replicator food is 100% certain to be better than what the best human chefs can imagine. The only problem is that there's no reason to think so. Your post has the same problem. I have tried my best to explain why over the last few pages.

Re: Random Thread

Posted: Thu Nov 11, 2021 9:31 am
by Moose-tache
I think we might be overthinking things when it comes to replicator food. A replicator recipe maker would start by isolating what it is consumers need from the product, and what the tolerances are for those criteria. For example, a piece of onion only needs three things: a repeating matrix of cellulose permeated by water and a selection of flavor molecules like sugars and thiosulfinates. Aside from the cellulose matrix, which can just be one shell repeated infinitely, we don't need to know anything about the position of these molecules, nor do we need to reconstruct the organelles or nucleic acids within our artificial "cells." Just whatever makes crunch, astringency, and a hint of sweetness. Voilà: onion.

A real steak is complex, but who cares? If stacks of protein filaments can give you the same texture, then just stack protein filaments and soak them in flavor molecules. If the consumer isn't fooled, just tweak the recipe until they are. There's no reason why the problem would be intractable. It's not like our human mouths are detecting molecule-sized imperfections in meat texture. We're only detecting generalized arrangements that can be recreated from repeating patterns. It's like drawing animation. You don't need to draw what your subject is doing at every point in time, only a few dozen times per second, because your animation only needs to fool a squishy flesh-camera. And you can skip the back half of whatever you're drawing entirely!

To put it another way, imagine you are eating 0.1 mol of salt and 0.1 mol of sugar, with each molecule hitting your tongue alternately: one salt, one sugar. You can detect saltiness and sweetness. Now let's say the molecules come two-by-two: two sugars, then two salts, etc. The taste is the same. Even groups of a billion molecules each would likely have no effect on the flavor. The depths of wasted information in food are enormous, and we can skip all of it that is not having a measurable impact on our experience.

The limiting factor in creating replicator food that is indistinguishable from traditional food would be scientific knowledge of flavor and mouth feel, not data packing.

Re: Random Thread

Posted: Thu Nov 11, 2021 10:16 am
by rotting bones
Moose-tache wrote: Thu Nov 11, 2021 9:31 am I think we might be overthinking things when it comes to replicator food. A replicator recipe maker would start by isolating what it is consumers need from the product, and what the tolerances are for those criteria. For example, a piece of onion only needs three things: a repeating matrix of cellulose permeated by water and a selection of flavor molecules like sugars and thiosulfinates. Aside from the cellulose matrix, which can just be one shell repeated infinitely, we don't need to know anything about the position of these molecules, nor do we need to reconstruct the organelles or nucleic acids within our artificial "cells." Just whatever makes crunch, astringency, and a hint of sweetness. Voilà: onion.

A real steak is complex, but who cares? If stacks of protein filaments can give you the same texture, then just stack protein filaments and soak them in flavor molecules. If the consumer isn't fooled, just tweak the recipe until they are. There's no reason why the problem would be intractable. It's not like our human mouths are detecting molecule-sized imperfections in meat texture. We're only detecting generalized arrangements that can be recreated from repeating patterns. It's like drawing animation. You don't need to draw what your subject is doing at every point in time, only a few dozen times per second, because your animation only needs to fool a squishy flesh-camera. And you can skip the back half of whatever you're drawing entirely!

To put it another way, imagine you are eating 0.1 mol of salt and 0.1 mol of sugar, with each molecule hitting your tongue alternately: one salt, one sugar. You can detect saltiness and sweetness. Now let's say the molecules come two-by-two: two sugars, then two salts, etc. The taste is the same. Even groups of a billion molecules each would likely have no effect on the flavor. The depths of wasted information in food are enormous, and we can skip all of it that is not having a measurable impact on our experience.

The limiting factor in creating replicator food that is indistinguishable from traditional food would be scientific knowledge of flavor and mouth feel, not data packing.
This sounds plausible, but it runs into the same engineering problems we get for meat substitutes. Are you handwaving those, or do you have evidence that things will work out in the end?

Re: Random Thread

Posted: Thu Nov 11, 2021 3:54 pm
by Moose-tache
The only problems I've seen presented are:

1) cooking and food products are microscopically very complicated, therefore replicating them by rights ought to be difficult, and
2) replicating things exactly takes a lot of processing power.

I think both arguments are flawed, because they ignore which food properties we need to replicate. The vast majority of the internal structure of a carrot is irrelevant to making a substance that is crunchy, slightly sweet, and full of specific terpenoids. You could replicate that with a repeating lattice-and-medium containing half a dozen molecules. Virtually no data is required for that, nor is it terribly complex. Are there any other arguments against replicated food?

Re: Random Thread

Posted: Thu Nov 11, 2021 4:58 pm
by rotting bones
Moose-tache wrote: Thu Nov 11, 2021 3:54 pm The only problems I've seen presented are:

1) cooking and food products are microscopically very complicated, therefore replicating them by rights ought to be difficult, and
2) replicating things exactly takes a lot of processing power.
I don't remember anyone mentioning processing power. The arguments were that storing the internal structure takes up a lot of space and that chemicals present in trace amounts make a significant difference. I argued that crisp structures can be stored as statistical distributions that don't take up much space and represent the tiling at a high level so that the scale of replication doesn't annoy us.
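To be concrete, here's a toy Python sketch of that kind of storage: one unit tile, a repeat count, and a probability for a trace compound. All names and numbers are invented for illustration, not taken from any real food.

import random

# Toy spec: a unit tile, a repeat count, and a small probability of
# swapping in a trace compound. Everything here is invented.
SPEC = {
    "tile": ["cellulose", "water", "water", "sugar"],
    "repeats": 1_000_000,
    "trace": ("thiosulfinate", 0.001),  # rare flavor compound
}

def expand(spec, seed=0):
    """Lazily reconstruct the full structure from the compact spec."""
    rng = random.Random(seed)
    compound, p = spec["trace"]
    for _ in range(spec["repeats"]):
        for molecule in spec["tile"]:
            yield compound if rng.random() < p else molecule

# The spec is a few dozen bytes; the expanded structure is millions of
# molecules, generated on demand.
print([m for m, _ in zip(expand(SPEC), range(8))])

The spec stays constant-size no matter how big the dish is; only the reconstruction is large.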
Moose-tache wrote: Thu Nov 11, 2021 3:54 pm I think both arguments are flawed, because they ignore which food properties we need to replicate. The vast majority of the internal structure of a carrot is irrelevant to making a substance that is crunchy, slightly sweet, and full of specific terpenoids. You could replicate that with a repeating lattice-and-medium containing half a dozen molecules. Virtually no data is required for that, nor is it terribly complex. Are there any other arguments against replicated food?
Overall, your argument is correct, though you don't mention substances present in trace amounts. More importantly, my understanding is that identifying all the structures that are relevant to taste and texture, not to mention replicating them in an artificial substrate, are non-trivial problems. Typically, organic systems are ruled by so many edge cases that most of the resources go to covering those. Do you have evidence that taste and texture are exceptions?

Re: Random Thread

Posted: Thu Nov 11, 2021 5:02 pm
by zompist
Moose-tache wrote: Thu Nov 11, 2021 9:31 am It's not like our human mouths are detecting molecule-sized imperfections in meat texture. We're only detecting generalized arrangements that can be recreated from repeating patterns. It's like drawing animation. You don't need to draw what your subject is doing at every point in time, only a few dozen times per second, because your animation only needs to fool a squishy flesh-camera. And you can skip the back half of whatever you're drawing entirely! [...] The limiting factor in creating replicator food that is indistinguishable from traditional food would be scientific knowledge of flavor and mouth feel, not data packing.
Once again, eye metaphors are useless here. Eyes are very easy to fool.

There's a reason I've been emphasizing smell and digestion. The mouth, though a little more complex than the four-flavors model we learned in school, is definitely limited. For most people it is mediated by a much more sensitive organ, the nose. And noses are not limited to half a dozen flavors, but detect a wide variety of individual molecules. They can be fooled, but not very easily.

This business about "six molecules" is utterly bonkers. Maybe you just don't care about differences in food, but I don't get the point of denying everything we know about cooking, food psychology, and food composition. Do you think everyone else who cares about food is just faking it? But I'll grant you that if you absolutely don't care about anything, then the food replicator is perfect.

--

And as a general note, I'm tired of the topic and will skip it. At this point I've explained a bunch of complexities that people just glitch over. I'm not quite sure why defending a non-canonical absolutist interpretation of an impossible sf device is so important that real-life science has to be denied, but whatevs.

Re: Random Thread

Posted: Thu Nov 11, 2021 5:19 pm
by rotting bones
zompist wrote: Thu Nov 11, 2021 5:02 pm Once again, eye metaphors are useless here. Eyes are very easy to fool.

There's a reason I've been emphasizing smell and digestion. The mouth, though a little more complex than the four-flavors model we learned in school, is definitely limited. For most people it is mediated by a much more sensitive organ, the nose. And noses are not limited to half a dozen flavors, but detect a wide variety of individual molecules. They can be fooled, but not very easily.

This business about "six molecules" is utterly bonkers. Maybe you just don't care about differences in food, but I don't get the point of denying everything we know about cooking, food psychology, and food composition. Do you think everyone else who cares about food is just faking it? But I'll grant you that if you absolutely don't care about anything, then the food replicator is perfect.
I don't know. As long as your ingredients don't come with off-putting odors, can't additives go a long way to making the final product smell right?

Also, why would digestion be so hard to fool?

On the other hand, it's very difficult to crystallize organic molecules that contain both hydrophilic and hydrophobic components like those that make up cell walls. If that problem remains unsolvable, then Moose-tache's idea of a general lattice may be the only practical approach.

Edit:

This is not to say that the end result will resemble the original dish.
zompist wrote: Thu Nov 11, 2021 5:02 pm And as a general note, I'm tired of the topic and will skip it. At this point I've explained a bunch of complexities that people just glitch over. I'm not quite sure why defending a non-canonical absolutist interpretation of an impossible sf device is so important that real-life science has to be denied, but whatevs.
It's not important. It's fun to argue about it because it's not important.

Re: Random Thread

Posted: Thu Nov 11, 2021 8:13 pm
by Torco
I don't think 'replicators', where we just ask for a food and they molecularly assemble it with magical laser beams, are ever going to be a thing. But making food with machines? Well... we're kind of very good at this, us humans.

A lot of what we drink is already totally artificial: you take water, carbonate it with a machine, and inject a tiny bit of a syrup with nice ingredients like sweeteners and food dyes and flavour molecules. There's no reason this can't be made into a compact, monolithic machine that you just add water to and boom, it gives you soda, other than market reasons: okay, there are some on the market, but it's niche, probably cause it's easier to just buy a Fanta. The ingredients of the drink I'm currently having don't read 'vodka' or 'rum'; they read Alcohol, which could perfectly well be relatively pure: not straight from the brewery, but rather made in some ultra-efficient bioreactor with funky additives that produces some horrible booze, then ultra-distilled to a degree of purity that no drink humans drink has, and then added to soda alongside the citric acid and the sorbitol and whatever else.

Some foods are also very artificial: I have a bottle of protein powder industrially distilled from whey.

Making food from stuff that is not food, now that's difficult, but some techniques seem promising for the future. You could grow tissue out of stem cells (this is already done), but some things are probably going to be easier to replicate than others. Beef? Difficult. Bone marrow? Possibly less so. Lard? Probably even less so. (And tbh it seems like lard is a lot healthier than previously thought.) And there are other processes too! You can have two chemicals make a spongy pongy matrix, like an acid and a base, then coat that sponge with some delicious edible material (like a flavourized paste of starch) and then remove any of the spongyspongy, or leave it if you made it with things which are themselves edible: this is basically how they make cheetos.

The bad thing about industrially processed foods is that they're made for profit, and thus are filled with harmful ingredients because they're cheaper or because they're addictive: this is why cheetos are bad for you. You could, in principle, make a nutritionally balanced cheeto that's good for you, but it'd make Evercrisp or whoever less money.

I do disagree with this tho
The limiting factor in creating replicator food that is indistinguishable from traditional food would be scientific knowledge of flavor and mouth feel, not data packing.
No, obviously not. Relevant knowledge of whether food will be eaten and enjoyed by humans, or even whether it will fool a human into thinking it's the real deal, is not difficult to obtain: you just give it to humans and ask them, maybe give them a box of it for their trouble if they like it, and measure how many boxes you have at the end. The limiting factor is the ability to, like, molecularly assemble things, or whatever implications are carried by the word 'replicator': to me it sounds like the things in Star Trek. And even if we're imagining the distant future of fantastic technology, there are always going to be ways to do something more easily than just building things atom by atom. No, you co-opt a natural process that's already fantastically efficient; they often are. (If you have a bunch of hydrogen and a bunch of oxygen, what are you going to weave them together molecule by molecule for? Better to let them react vigorously and take advantage of the energy, maybe heating up the plate in the meantime.)