Yttes -- NP: the Disk City

Conworlds and conlangs
zompist
Site Admin
Posts: 2711
Joined: Sun Jul 08, 2018 5:46 am
Location: Right here, probably
Contact:

Re: Yttes, or the Big Potato Galactic Empire

Post by zompist »

Richard W wrote: Sun Jan 31, 2021 6:11 pm I'm having trouble visualising the walls of the wormhole. Perhaps they just don't work with 2 space-like dimensions. I've a feeling the interior of the interstellar wormhole would be at higher gravitational potential energy, so one should imagine a handle rising from the surface and rejoining it. Possibly the correct model is a flyover, with nasty edge effects that travellers avoid like the plague.
At least some good artists have been at work on this! E.g.

Image

Now, this diagram raises a lot of questions. E.g. it's representing gravity wells by having the ordinary grid sink into another dimension-- so far so good. But then it represents the "cloth" of space bending around so the two ends of the wormhole meet. Is that bending in yet another dimension? I dunno! Also, the idea is presumably that both ends of the wormhole are sinks, but "down" is represented in two different directions. I assume, though I'm subject to correction, that in "real wormhole" acts like a gravity well on both sides, so you need energy to traverse it.

Also, recall that the surface is not something you can actually see in space. Our 3-d space is represented by the "flat" grid, and we experience the curvature as gravity. From that perspective, I am not sure that wormholes have "walls" at all, any more than gravity wells do. Those "walls" represent gravitational force. What you would see going through a wormhole is probably intensely complicated. (Light is bent by gravity too.)

While I've got the tab open, xkcd has a lovely visualization of gravity wells here.
Ares Land
Posts: 2839
Joined: Sun Jul 08, 2018 12:35 pm

Re: Yttes -- NP: Close Encounters

Post by Ares Land »

Sol Access -- a bit of ufology

Sol Access, or Arilan 'a-Sol was built inside an asteroid, in 1:1 orbital resonance with Earth, in a horseshoe orbit.

Several such objects have been identified in the past 40 years, notably 3753 Cruithne. Sol Access hasn't; solar arrays have been set up on its surface, with the primary purpose of providing power for the access control station, and the secondary purpose of minimizing the body's albedo.

Traffic to and from Sol Access could conceivably be detected using current technology, as could the asteroid's unusual infrared emission profile. The usual process for identifying celestial bodies is probably ill-suited to the task (it's likely traces appeared in surveys but were dismissed as artifacts in the data).

Access to Earth through the Starways was opened in 1789 (according to our calendar).

The foreign relations of Yttes, so to speak, are traditionally handled by the Human Institute (Ithyr 'a-Ra), an independent department of the Smanastyr ('University').
An entirely separate body, the Mediation (maspalhor), serves as a counterpower. (It is, in essence, a high court handling disputes between components of the Empire.)

As is traditional, the Institute spent two decades observing and studying Earth, trying to keep a low profile. There was, of course, some unavoidable contact. In particular, the informants of the ra'-Yttes did learn a few things about their visitors in the process. (Japan got a good look at an ocean surveyor and her shuttle ca. 1803. The Dogon still had some unexplained astronomical knowledge in the 20th century. Kant's treatise Perpetual Peace bears odd similarities to imperial institutions.)

By the 1810s, the Institute was strongly arguing in favor of formal contact and direct takeover of what they deemed 'problematic polities'.
That approach had fallen out of favor. It had been applied on several planets; one of them, Lohutek, contacted at a somewhat comparable technological stage, was proving incredibly troublesome, bringing discredit to the whole approach.
(Several states on Lohutek were, at the time, developing their space capability with an eye to building a rival empire, and in fact a small military detachment was placed on the route to Earth to discourage a more hostile takeover.)
A lengthy dispute began between Institute and Mediation. Ultimately, the Mediation argued that Earth could not be formally contacted without integration into the Empire, with a high likelihood of immediate violent consequences Yttes was not then equipped to deal with.

One unauthorized intervention was, however, approved after the fact: a worrying tendency towards glaciation had been thwarted around 1805.

The problem with Earth was slavery: a constant in many human societies, but one which had reached worrying, almost industrial levels on Earth.
A number of unauthorized interventions were made in the 19th century.
It was a time of black ops, as a number of key people were contacted, often blackmailed and threatened by Institute members.

The American Civil War put an end to this policy. A bitter legal dispute erupted between Institute and Mediation, both sides accusing the other of crimes against humanity.
The whole dispute was rendered moot, anyway, by the eruption of a major political crisis on Lohutek. All resources were focused there for two decades.

The study of Earth began again in earnest in the 1880s. Mediation and Institute administration had changed; both sides grudgingly accepted that the mid-19th century black ops had led to marginal improvement but that direct contact was best left for later.
The truce came to an end in 1897, amid scandal, after three deaths in the crash of a reconnaissance shuttle; Earth people found at least one of the bodies and several pieces of the craft. Another issue brought to public attention was that Earth people were now more aware of what they were seeing in the skies.

The Second Lohutek War lasted from 1901 to 1909; by the end public opinion on Yttes was utterly sick of foreign intervention.
Earth observation was carried on almost exclusively by unmanned craft.

Fast-forward to WWII, a whole generation later. New people were in charge at Yttes; the preceding decades had been a period of peace in the Empire.

In 1944, Occun 'a-Thomman, the new head of Earth studies, made public a very detailed report on what exactly went on on Earth during both World Wars and the interwar period.

This was, needless to say, more than sufficient to turn public opinion; early nuclear weapons and further hints of an incipient cold war made clear that the policy of non-intervention was needlessly cruel.

This time, surprisingly, the Institute was not vehemently against contact. Analysing a compilation of imperial contact policy over the last millennia, they had reached the following conclusions:

- Official contact in the late 1940s would necessarily lead to global thermonuclear war, as the rival powers would compete for contact privilege.
- A tendency towards ethical improvement could and should be encouraged by very limited intervention and contact with key people.
- Such a policy of restricted contact was indeed unethical, but from a utilitarian perspective would give better results than either full-scale intervention or non-intervention.
- Furthermore the policy could be sustainable in practice.

The decisive argument was a projection of future Earth population, showing that Earth's population would very soon catch up with that of Yttes itself. In practice this meant that intervention on Earth would be on the scale of the Second Lohutek War.

The Occun doctrine, as implemented from 1947 until now, was one of careful observation and limited contact -- with the provision that formal contact and intervention would be initiated in case of nuclear war.

The policy proved moderately successful. It was personally supervised by Occun, as head of Earth studies and then of the Institute. Desman a'-Dejal, his successor in both capacities, followed the same policy, with a somewhat less intense focus.

The general consensus is that contact proper will not be a possibility for at least a century. Earth observation is proving more difficult, however, as Earth technology catches up -- the ubiquity of cell phones, a permanent presence in space, and generally better astronomy all make the current secrecy harder to sustain. For the time being, the policy stays in place, though with a much reduced presence of ra'-Yttes on Earth.

Does that mean that UFO sightings are real, then?
Most of them are fake, or sightings of astronomical phenomena, weather balloons and the like. In general, the skeptical approach is right.
The fact is, the number of alleged UFO sightings is probably far greater than the total number of imperial ships!

What happens -- and what apparently Occun was counting on, based on similar experiences on other planets -- is that many people have reported sightings, even abductions, somewhat inspired by what they heard of real contact and real accidental sightings. The alleged stories are picked up by other people, repeatedly so.
This helps with the general 'discretion approach'. By the 1990s the UFO phenomenon was so discredited that any real cases were immediately forgotten about. (Even very close calls. Shuttles inspecting nuclear power plants were sighted, and in one case followed by the Belgian Army in the early '90s.)

There is a grain of truth to the claims of Nordic aliens or greys. The 'Nordic' trend has a somewhat sad backstory -- skin color among the ra'Yttes covers a very wide range, but most changed their skin and hair color, especially in the 50s. With that skin tone, they found they could work freely and unimpeded in broad daylight without their activities registering as suspicious.
The ra'Yttes are smaller than Earth humans (not child-sized, though), with proportionally larger eyes. They generally don't attract attention; but seen aboard or near a shuttle, at night, with 'a-Yttes clothes, hairstyle and skin color... they certainly would look eerie to already stunned witnesses.
Note, though, that the ra'Yttes never molested humans, never implanted anything in them, never performed medical procedures, nor had sex with them. They don't use anal probes, don't mutilate cattle, and don't draw crop circles.

They're not in league with any governments, though individual politicians might have been contacted. The US Air Force is in possession of two 'flying saucers' (actually, reentry shields); assorted bits and pieces of 'a-Yttes craft are probably to be found in a few hidden warehouses somewhere. (They're most likely bits of heat shield or hull; weird enough to attract attention but nothing you could reverse-engineer technology from.)
Eight ra'-Yttes died on Earth; two bodies were found by humans. One of these was indeed identified as alien, the other was taken for human.

As for abduction and Men In Black... Some people have been taken against their will, and memories have indeed been altered. The origin of the 'Men in Black' legend can't be satisfactorily traced to a specific incident, but the ra'Yttes do dress up while on Earth. (Another way of being labeled 'not suspicious'.)


A brief pronunciation guide.

The stops written <p, th, c, k> are aspirated. <b, j, g> are unaspirated and (depending on speaker) slightly glottalized.
<a, e, i, o> have (more or less) their IPA values. <y> is an unrounded back vowel: [ɯ]. <u> is a lax equivalent: [ʌ]~[ʊ].
<c, j> are lamino-palatal (the blade of the tongue touches the hard palate) before front vowels or <y>, but apical otherwise (the tip of the tongue touches the hard palate).
<t, s, n> are apico-alveolar; <th> is lamino-dental (the blade of the tongue touches the teeth.)
<h> has its IPA value, but is pronounced [ç] before <i, y>. <rh> [r̥] is an unvoiced trill; <lh> is an unvoiced lateral fricative [ɬ].
While I'm at it, <r> and <l> are [rʲ], [lʲ], as are the digraphs <ri>, <li>. Their unvoiced counterparts are <rhi>, <lhi>: [r̥ʲ], [ɬʲ].
<'> is the glottal stop.

The dash in 'a-Yttes, ra'-Yttes is an orthographic convenience. Both are pronounced as one word, with a glottal stop between <a> and <y>.
Qwynegold
Posts: 722
Joined: Sun Jul 29, 2018 3:03 pm
Location: Stockholm

Re: Yttes -- NP: Close Encounters

Post by Qwynegold »

This was interesting to read. 👍
User avatar
Man in Space
Posts: 1562
Joined: Sat Jul 21, 2018 1:05 am

Re: Yttes -- NP: Close Encounters

Post by Man in Space »

I'm unsure as to what constructive feedback I'm capable of offering, but I want to second Qwynegold—this is a fantastic and interesting concept.
Ares Land
Posts: 2839
Joined: Sun Jul 08, 2018 12:35 pm

Re: Yttes -- NP: Close Encounters

Post by Ares Land »

Thanks!

A future project is to learn Blender and do 3D models of 'a-Yttes ships. I've done early sketches on paper: shuttles look a bit like a manta ray. The mothership looks like, well, a dick with wings. I think I could make something more insect-like work.
Ares Land
Posts: 2839
Joined: Sun Jul 08, 2018 12:35 pm

Re: Yttes -- NP: The ideal robot law.

Post by Ares Land »

If this was an Asimov story, I'd have two roboticists characters explaining the Laws of Robotics to each other.
Let's not do that.

Asimov started out with the Laws of Robotics because he thought the classic idea of robots destroying humanity silly.

What I think he failed to notice is that writers love a good story of robots destroying humanity. This culminated in the tale of the Singularity (which, btw, failed to destroy civilization as we know it but did come pretty close to killing SF).
Often writers claim that the Laws of Robotics don't make sense and/or are too naive. Sometimes the killer robot gets its 'Asimov circuit' damaged, with predictable results.
The problem with that is, of course, that designing an AI without failsafes looks about as smart as designing a car without brakes or a steering wheel.

The funny thing is that, as far as I know, no one's really bothered to come up with an alternative to Asimov's laws, or to try and improve on them. (A fair number of writers, including Asimov himself, have added loopholes.)

Now, here's my humble attempt, which in keeping with the thermodynamics theme, I'll call the ideal robot law or the general robot equation.
(Beware: there is an ideal equation below. Skip it if you like. It's just an amusing diversion.)

The 'a-Yttes languages make a distinction between a kalassarn and a mastamm. The kalassarn requires little explanation: it is demonstrably equivalent to a Turing machine.

The mastamm has not, as yet, a good Terrestrial equivalent outside of science-fiction. The most direct translation is 'robot' or 'AI'; though a Roomba, our industrial robots, Siri, Alexa, search engines and game AIs all fall squarely within the kalassarn category.

The precise definition of a mastamm, or robot, is as follows:
- A robot is a machine aware of, and interacting with, a set B of beings b and a set O of objects o.
- The beings interact with the robot by issuing it a set C of commands.
- The robot interacts with O and B (the objects and beings in its environment) by a set of actions A.

Objects have the property of value, v. Beings have no value (alternatively, they're a special class of object with infinite value).

The robot has access to the following functions:
- The damage function D(a, b), D(a, o) represents the damage done by an action to a being or object.
- The loss function 𝜆(c, a) expresses the quality of the action in response to a command, and the cost of completing it. A command executed without error, in a time of zero and at a cost of zero, has 𝜆(c, a) = 0.
- The sigma function σ(c) represents the margin of error associated with the interpretation of a command. (The sigma function equals 0 for a bash script. It approaches one if the command is 'what is the meaning of life, the universe and everything' or 'whaddaya think?')
- The priority function P(c) gives a command's priority; the level function L(c) gives its level. (The lower the level, the more a command can do; the higher the priority, the more resources it gets, so to speak.) P = 1 is low priority; P = 10 is higher. L(c) = 0 for a command issued as 'root', so to speak; L(c) will be very high for a command from an unknown being addressing the robot.

Now, consider the following equation:
mastamm.PNG
Now, we can write the Ideal Robot Law:
For an ideal robot, R = 0 for any a. (Or is it, the limit of R is zero as a tends to infinity. Maybe that works better.)

Now, what does that mean?

The first bit is a sum of Dirac delta functions of harm across all known beings. The Dirac delta function is a special function such that 𝛿(x) = 0 except at x = 0. Basically, if a being is in any way damaged, that term drops to zero; the robot loses and crashes.

What is that bit in the denominator? 𝛿() is an approximation of the Dirac delta function, applied to the sum of damage done to beings; again, 𝛿(x) = 0 except at x = 0. That is to say, we divide by zero and the robot crashes whenever the robot damages a being. In practice an approximation of the function is used, so that the denominator merely drops very close to zero, and R blows up whenever harm could be done to beings.

The numerator expresses the loss function (that is, the accuracy of the action a in achieving its aim), with a weighting factor that is a function of the damage done, the value of the objects damaged, the level of the user, and the accuracy of the interpretation of the command.
(Translation: if you're running the robot in a command line with root privileges, you can get it to destroy valuables. If you're a child, or using voice commands while drunk, it probably won't break anything.)
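Just for fun, here's that structure as a Python sketch. Everything here -- the Gaussian approximation of the delta, the way the weighting factors combine, the toy numbers -- is my own guess at a shape consistent with the description above, not the actual equation:

```python
import math

def delta_approx(x, eps=1e-3, floor=1e-300):
    # Narrow Gaussian standing in for the Dirac delta: very large at
    # x = 0, vanishingly small (but never exactly zero) elsewhere.
    return max(math.exp(-(x / eps) ** 2) / (eps * math.sqrt(math.pi)), floor)

def R(action, command, beings, objects, D, lam, sigma, P, level):
    # Toy mastamm quantity: a loss-style numerator divided by a delta
    # of the total damage done to beings, so that any harm to a being
    # makes R explode, while a clean, cheap action keeps R near zero.
    harm = sum(D(action, b) for b in beings)
    broken_value = sum(D(action, o) * v for o, v in objects.items())
    numerator = (lam(command, action)
                 * (1 + broken_value)      # damage to objects, weighted by value
                 * (1 + sigma(command))    # ambiguity of the command
                 * (1 + level(command))    # distrust of the issuer
                 ) / P(command)            # priority buys resources
    return numerator / delta_approx(harm)

# Toy world: one being, one valuable vase.
lam = lambda c, a: 0.1     # small residual loss
sigma = lambda c: 0.2      # mildly ambiguous command
P = lambda c: 5            # mid priority
level = lambda c: 1        # fairly trusted issuer

D_safe = lambda a, x: 0.0                            # touches nothing
D_rough = lambda a, x: 0.5 if x == "being" else 0.0  # hurts the being

safe = R("a", "c", ["being"], {"vase": 3.0}, D_safe, lam, sigma, P, level)
rough = R("a", "c", ["being"], {"vase": 3.0}, D_rough, lam, sigma, P, level)
# safe comes out tiny; rough comes out astronomically large.
```

Note the floor on the delta: it keeps the harmful case at a finite-but-enormous R instead of a literal division by zero, which is the 'approximation in practice' mentioned above.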

Ok, that was just for fun. If you're designing AIs, don't try to use the above equation.

Translating it into plain English, here's what it tells us about 'a-Yttes robots:

- Robots don't do anything except in response to a command, which may be given by the end user or be part of a generic instruction set.
- Non-action, though, counts as action. Robots smart enough to prevent you from, say, falling down will do so. (R tends to infinity if the robot stays inactive.)
- Robots will accept orders from just about anyone -- though if you don't know the machine, it won't drop what it's doing to help you. Its owner's orders are very low level; your own are very high.
- 'Sorry, Dave, but I can't do that' is an acceptable action. Its loss function is very high, but the robot will answer you that and move on if all other possible responses score higher. However, it won't do that in cases of life and death, or if you make it a very damn clear order as its main owner.
- Robots will protect themselves and other machinery, unless you manage to issue an order with an adequately low level. A robot doesn't treat itself differently from other valuables -- though you can always tell it to assign itself a very high value. With a command of level 0, you'll get any robot to disassemble itself.
- Security breaches are treated as damage. When you tell a robot to keep a secret, you're telling it to assign your words very high value and to treat other people reading them as destruction of that value.
- Your robot won't tell you to stop smoking or drinking too much. It's probably not as smart as that. If it is as smart as that, baby-sitting humans probably counts as damage.
- You'll note I said 'beings' and not 'human beings'. That's for very practical reasons: the definition of 'human' is so vague it might as well include animals. 'Being' is easier (all you need is a heat sensor), and besides you don't want the housemaid vacuuming the family cat. Or mixing up the baby and the family cat. Or mixing up the baby and the family cat while vacuuming.
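The 'Sorry, Dave' behaviour above can be sketched in a few lines (the scores and the fixed refusal cost are invented for illustration):

```python
INF = float("inf")

def choose(candidates):
    # candidates: {action: score}. A polite refusal is always available,
    # at a high but finite score, so it wins only when every other
    # option scores worse -- e.g. when all of them would harm a being.
    candidates = dict(candidates)
    candidates["sorry, I can't do that"] = 50.0
    return min(candidates, key=candidates.get)

# Fetching the drink normally beats refusing:
assert choose({"fetch the drink": 1.0}) == "fetch the drink"
# If the only route harms a being, the robot says no instead:
assert choose({"drive through the cat": INF}) == "sorry, I can't do that"
```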

The logic behind this is not really to prevent robot uprisings or provide a code of ethics, but rather to prevent accidents. You just don't want your industrial robot to maim workers; you don't even want it to reach a situation where that could arise. You don't want it to damage the robot right next to it, either.
The failsafes are not bolted on as an afterthought (i.e., no Asimov circuits) but sit at the very heart of the whole thing. The reason is simple: if you remove the 'damage to beings' and 'damage to objects' parts of the equation, you get a paperclip maximizer (https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) when your clients bought a paperclip factory.

Most robots -- in fact almost all robots -- make for very boring stories. You'd get stories about robots for robot enthusiasts, much like (I suppose) there are stories about sports cars for sports car enthusiasts, but the narrative potential is very low.

Robots don't go around explaining and debating the Laws of Robotics. The more sophisticated models are able to explain why they took certain actions.

Are robots superior to humans? In the same sense that a car is superior to a human because it goes faster. They're tools; there's little possible comparison. Leaving aside all the philosophical stuff, I'd also point out that a robot isn't stronger than a human being. It doesn't see or hear better, nor is it inherently more durable. A household robot is just about as robust and powerful as it needs to be. Which means, if you want to kill a robot, don't try to logic-bomb it. Bashing it on the skull is more than sufficient. (Also, did you try the power button?)
They also require a certain temperature range, need oxygen for their fuel cells, and don't like radiation any more than humans do.

Do robots understand emotion? Yep. They don't feel any (nor do they need to), but depending on sophistication they can certainly read human emotional response and in any case most do recognize distress.

Are mastamm ever sentient? They do lack the 'will' part; their only drive is to await orders and obey them. Some AIs were programmed with a kind of machine curiosity, seeking information while resources are unoccupied. This has led to the Central Bank of Yttes, which is arguably a sentient being. That's, you know, kind of an extreme case. Yttes itself perhaps counts. Who knows?
zompist
Site Admin
Posts: 2711
Joined: Sun Jul 08, 2018 5:46 am
Location: Right here, probably
Contact:

Re: Yttes -- NP: The ideal robot law

Post by zompist »

This is an interesting way of looking at the problem.

My initial reaction is that you don't quite acknowledge that D(), λ(), and σ() are themselves very difficult evaluations, all of which may give different results depending on how deep the analysis is, or what factors it looks at. Humans can't agree on how much damage or loss a given action may cause.

I've said elsewhere that Asimov's stories were not a defense of his laws; they were a series of elegant attacks on them. His general solution, IIRC, was to increase the intelligence of the robots so they could make more connections and understand the consequences of their actions better. The logical consequence of all this was to create a near-omnipotent robot which has godlike powers over humanity. I forget if it was Asimov or the continuers of his work who extrapolated this into the robots destroying all sentient life in the galaxy that was not human.

Besides simple hubris, the problem with writing D() etc. is what the developers don't think about. We've seen this in the real world, e.g. facial recognition software that doesn't work on nonwhite faces. Another example: anti-missile devices deployed in the Middle East shot down an Iranian civilian airliner, basically because their database was trained on military and not civilian planes/missiles. Or more comically, the picture enhancer from a few years back whose training set included too many dog pictures, and painted dog noses and eyes on everything. The general solution is "think about your edge cases", but that's harder than it sounds.
zompist
Site Admin
Posts: 2711
Joined: Sun Jul 08, 2018 5:46 am
Location: Right here, probably
Contact:

Re: Yttes -- NP: The ideal robot law

Post by zompist »

Oh, and it seems that Yttes agrees with the Incatena that in general what I call subsmart devices are better than using general sentience all over. The vast majority of machine applications do not require actual "artificial intelligence".

Many people have a strong desire for "machines you can talk to." This seems to me like a slippery slope towards sentience. Or alternatively, a UI pitfall. Non-technical people already often think their computers are far smarter than they really are.
Torco
Posts: 656
Joined: Fri Jul 13, 2018 9:11 am

Re: Yttes -- NP: The ideal robot law

Post by Torco »

If nothing else, it'd be super rude to create a fully sentient AI, which is not that far away from a person, if anything, and condemn them to the life of being an AC thermostat. I'd want to kill all organics too!
Ares Land
Posts: 2839
Joined: Sun Jul 08, 2018 12:35 pm

Re: Yttes -- NP: The ideal robot law

Post by Ares Land »

zompist wrote: Mon Feb 15, 2021 6:31 pm My initial reaction is that you don't quite acknowledge that D(), λ(), and σ() are themselves very difficult evaluations, all of which may give different results depending on how deep the analysis is, or what factors it looks at. Humans can't agree on how much damage or loss a given action may cause.
A very good point. A few thoughts:

I share your concerns, and my general opinion is that AI is way overhyped. I'm in IT myself, and I really don't get the IT people who are either touting it or panicking about it. I've been in this racket for too long: I know even a simple-minded e-commerce application will go over budget and won't do all that the business wanted.
I'm very impressed with the latest results, though. Most of these feel something like 80% done, just a few bugs to iron out. Which means it's probably a case of Pareto's law and the remaining 20% will take 80% of the effort.
(I read a fairly interesting book -- actually, a comic book -- about AI recently, written by AI researchers. They had a fairly solid AI with pretty impressive language processing. The trouble is, it was trained on out-of-copyright texts, which means it reflected the social attitudes of the 20's -- the sexism in particular is so over the top it's funny. I do check on their progress from time to time; last I heard they had rebuilt it from scratch.)

On the other hand, Yttes is supposed to be way beyond us. They've had machine learning for as long as we've had Euclidean geometry. Realistically, I can only portray them as good at it, and as having solved those kinds of problems long ago.

That's kind of an easy answer though. A few more thoughts.

For D() -- damage done -- I think a common approach would be to set D() to an arbitrarily large value and leave it at that. (That is, the robot can't do anything at all if it implies breaking something, unless you really, really explicitly tell it so. You can always assign a value of 0 to specific objects so that it's able to take out the trash.)
There are most certainly cases where you have to be subtler than that. A possible example is a recycling factory: it would be nice if the robots could leave some stuff aside and dismantle the rest. You could in those cases provide them with very specific training.

λ() (how good the robot is at its job) and σ() (how confident it is that it understood the order) are harder.
Software does provide this information today, though. I've worked, for instance, with OCR software that told us how accurate it was. Of course the numbers were dead wrong (it made fewer mistakes than it claimed to); on the other hand, it was more confident about recent, legible texts and less so about older newspapers, which is good enough for our purposes. I don't think we need the number to be perfectly meaningful; basically I need 'Remove all that damn fucking stuff from the damn computer' to score higher than 'sudo rm -rf /'.

λ() can be easy or hard to compute. I picture early AI as dealing with fairly unambiguous situations, and relying heavily on human feedback. If it's sorting your vacation pictures, it can tell how long it would take using method a and method b, and even ask for human input as it's working.
zompist wrote: Mon Feb 15, 2021 6:36 pm Oh, and it seems that Yttes agrees with the Incatena that in general what I call subsmart devices are better than using general sentience all over. The vast majority of machine applications do not require actual "artificial intelligence".
I definitely agree with that.

The thing with Asimov's robots is that they're ridiculously overpowered. There was a plot point in one robot novel involving racist robots that only considered you human if you spoke with the correct accent. This leads to all sorts of awkward possibilities (I forget how they were supposed to deal with babies, for instance) and loopholes. The thing is, said robots were basically glorified Roombas. Honestly, for that use case, a heat sensor and 'a bit warmer than room temperature' is all you need to define a human being.

There's a reason for this, of course. I believe a realistic robot is about as interesting as a smartphone; that is, it can at best be a plot point, but you can't write a short story, let alone a novel, about it.
Many people have a strong desire for "machines you can talk to." This seems to me like a slippery slope towards sentience. Or alternatively, a UI pitfall. Non-technical people already often think their computers are far smarter than they really are.
I think there's a use case for natural language processing. I like the way I can sort of input random stuff into Wolfram Alpha and get results without bothering to learn Mathematica syntax (well, it's not that hard to trip it up either, but it's still very serviceable). Or, you know, being in an unfamiliar city and just having to say 'to the Eiffel Tower' to get a ticket and directions would be nice.
If nothing else, it'd be super rude to create a fully sentient AI, which is not that far away from a person, if anything, and condemn them to the life of being an AC thermostat. I'd want to kill all organics too!
Entirely agree. There's a real ethics process at work here.
We can, by the way, create fully sentient beings -- though using them as slaves is frowned upon these days, because of those damn liberals.

That got me thinking.

I think the classic robot stories reflect some deep-seated, but repressed, fears.

I mean, beings we created start refusing to take our orders, then gradually make us obsolete until they replace us entirely. Sound familiar?

Or, robots were meant to be our obedient servants, then refuse to be treated as slaves. They take our jobs, and turn out insanely dangerous and kill us all. I mean, replace 'robots' by any minority of your choice and you get a very familiar right-wing spiel.
Ares Land
Posts: 2839
Joined: Sun Jul 08, 2018 12:35 pm

Re: Yttes -- NP: the Disk City

Post by Ares Land »

Tell you what, I redesigned Yttes entirely. Building inside the moon is actually not a very good idea. The problem, as it happens, is sunlight. Yttes needs power and light for agriculture. I considered using a power source and artificial sunlight, but quickly rejected the idea as inelegant. Why go to all that inefficient trouble when the sun is right there?

So the most efficient shape is something flat, to maximize the amount of solar power. (It's not really flat, I suppose, more like a cap on a Dyson sphere, but it's flat enough for government purposes).

So Yttes is no longer a potato, but a disk, with a diameter of about 900 kilometers.

Zooming in on the sun-facing side of the disk, we see that it is covered with a grid of triangles; within each triangle is a parabolic mirror, focusing light into a habitat.
The side length of each triangle is about 34 km, and the mirror has a diameter of 20 kilometers. The area of the triangle that is not hidden by the mirror is almost entirely solar panels. A quick sketch may make things clearer:
surface_detail.PNG
Surface of Yttes (detail).
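As a quick sanity check on the proportions (assuming the triangles are equilateral, which the sketch seems to imply but the text doesn't state):

```python
import math

side = 34.0      # triangle side, km
d_mirror = 20.0  # mirror diameter, km

tri_area = math.sqrt(3) / 4 * side ** 2      # ~500 km^2 per triangle
mirror_area = math.pi * (d_mirror / 2) ** 2  # ~314 km^2 of mirror
panel_fraction = 1 - mirror_area / tri_area  # what's left for solar panels
# The mirror covers roughly 63% of each triangle, leaving ~37% for panels.
```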

On the other side of Yttes, in the shadow of the mirrors and panels, are the habitats; each looks a bit like an egg, with a parabolic mirror on top.
The habitat part is about 11 km * 22 km.

Here's a schematic of the structure:
side_view.PNG
Cross-section of a habitat, with mirror.

The habitat is hollow, of course, with the interior filled with atmosphere; it rotates (at a rate of one revolution every 2 1/2 minutes) to provide gravity on the inside. So up is towards the center of the 'egg', down is towards the hull. The top level of the hull is mostly parkland, residential and agricultural. The average population is about 5 million; as a rough order of magnitude, each habitat, in terms of population and area, is in the same ballpark as New York City.
The feeling is perhaps less urban. The hull itself is about 30 meters thick, divided into several levels; transportation, storage and industry mostly happen in the outer levels of the hull -- or, if you will, 'underground'.
The top level should be like a greener New York City (with the added weirdness of looking up and seeing more city in the sky): each habitat has a fairly self-sufficient ecosystem.
(Each habitat is self-sufficient, or could be, in terms of food production. There's very extensive agriculture, with fruit trees and tomato plants in the streets. Meat is exclusively cultured cells. As it happens, cultured meat is best done in vacuum, so it's perfectly suited to the environment.)
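For what it's worth, the spin gravity checks out. A quick sketch, using only the figures above (an 11 km wide habitat, so a ~5.5 km radius, and one revolution every 2.5 minutes):

```python
import math

# Spin gravity at the hull: a = omega^2 * r, with the habitat's
# ~5.5 km radius and 2.5-minute rotation period.
radius_m = 5500.0
period_s = 2.5 * 60

omega = 2 * math.pi / period_s   # angular velocity, rad/s
a = omega ** 2 * radius_m        # centripetal acceleration at the hull

print(f"omega = {omega:.4f} rad/s, a = {a:.2f} m/s^2 = {a / 9.81:.2f} g")
```

That comes out to about 9.65 m/s^2, i.e. roughly 0.98 g at the hull, so the quoted radius and spin rate are consistent with Earth-normal gravity.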

Now the total population is about 7 billion, and there are about 1,500 of these habitat structures. Yttes can grow pretty close to indefinitely: it's just a matter of adding more triangular panel/mirror/habitat units at the edge of an ever-growing disk.
Yttes, the Disk City, is mostly built out of material from Yttes, the (former) moon. It is a megastructure, but a fairly light one. In fact, only about one seventh of the original moon's material has been used up. (The original moon was a captured asteroid, about the size of Phobos.)
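A quick consistency check on those totals (a sketch using only the figures quoted in this post; the triangle count comes out a bit under 1,500, so take the habitat number, or the 900 km diameter, as approximate):

```python
import math

# How many 34 km triangle cells fit on a 900 km disk, and what the
# population adds up to at ~5 million people per habitat.
disk_diam_km = 900.0
tri_side_km = 34.0
habitats = 1500
pop_per_habitat = 5e6

disk_area = math.pi * (disk_diam_km / 2) ** 2     # ~636,000 km^2
tri_area = (math.sqrt(3) / 4) * tri_side_km ** 2  # ~500 km^2 per cell
max_cells = disk_area / tri_area                  # ~1,270 cells

total_pop = habitats * pop_per_habitat            # 7.5 billion
print(f"~{max_cells:.0f} cells on the disk; "
      f"{habitats} habitats x 5M = {total_pop / 1e9:.1f} billion people")
```

1,500 habitats at 5 million each gives 7.5 billion, which squares with the "about 7 billion" figure.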

A neat thing is that travelling between habitats is dirt cheap: simply get to the outer shell of the hull and detach your car. It keeps its momentum and starts moving linearly at 234 m/s (the speed of a commercial plane), then keeps on moving in a straight line, skirting other habitats. Simply reattach to a habitat when you've reached the one you like. (In theory, you need no power at all to do this.) At this speed it takes a little more than an hour to get from a habitat at one edge of the disk to another on the opposite side.
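The 234 m/s figure is just the habitat's rim speed, and the crossing time follows from it. A quick check (a 5.5 km radius gives a slightly lower ~230 m/s; the quoted 234 m/s corresponds to a radius nearer 5.6 km):

```python
import math

# A detached car keeps the habitat's rim velocity, then coasts in a
# straight line across the disk (diameter 900 km).
radius_m = 5500.0
period_s = 2.5 * 60
disk_diam_m = 900e3

rim_speed = 2 * math.pi * radius_m / period_s   # ~230 m/s
crossing_min = disk_diam_m / rim_speed / 60     # ~65 minutes

print(f"rim speed {rim_speed:.0f} m/s, edge-to-edge ~{crossing_min:.0f} min")
```

Either way, edge-to-edge works out to a little over an hour, as stated.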

Now, the funny thing is, I found out there's a recent scientific paper out there describing a very similar megastructure. You may have seen a bit in the news about a scientist (Pekka Janhunen by name) suggesting a Ceres colony. If you google 'Ceres megasatellite', you'll find out his proposal is very similar to my description of Yttes.
Ares Land
Posts: 2839
Joined: Sun Jul 08, 2018 12:35 pm

Re: Yttes -- NP: the Disk City

Post by Ares Land »

And... you know, I'm having doubts about the above concept.

I don't think it works as worldbuilding; the concept works, but feels artificial and uninvolving. Part of the problem is that it might simply be too big: it's hard to do any interesting worldbuilding with something with a population in the billions.
Any thoughts?
User avatar
Vardelm
Posts: 665
Joined: Mon Jul 09, 2018 10:29 am
Contact:

Re: Yttes -- NP: the Disk City

Post by Vardelm »

Can you develop that big picture with a bit of hand-waving (as needed) and then focus most of your energies on a subset of that population? What you have now would just be some background context, but the interesting stuff could be on a smaller scale.
Vardelm's Scratchpad Table of Contents (Dwarven, Devani, Jin, & Yokai)
Ares Land
Posts: 2839
Joined: Sun Jul 08, 2018 12:35 pm

Re: Yttes -- NP: the Disk City

Post by Ares Land »

Vardelm wrote: Thu Mar 18, 2021 6:51 am Can you develop that big picture with a bit of hand-waving (as needed) and then focus most of your energies on a subset of that population? What you have now would just be some background context, but the interesting stuff could be on a smaller scale.
Yep. That is, indeed, very reasonable advice. Thanks!
(Besides, I just read about a very interesting concept currently under study at NASA, and I have to incorporate it!)
zompist
Site Admin
Posts: 2711
Joined: Sun Jul 08, 2018 5:46 am
Location: Right here, probably
Contact:

Re: Yttes -- NP: the Disk City

Post by zompist »

Ares Land wrote: Thu Mar 18, 2021 4:19 am And... you know, I'm having doubts about the above concept.

I don't think it works as worldbuilding; the concept works, but feels artificial and uninvolving. Part of the problem is that it might simply be too big: it's hard to do any interesting worldbuilding with something with a population in the billions.
Any thoughts?
You're kind of doing space opera, so I'd really expect populations in the trillions. Yttes needs a pretty large population if it's going to be messing around with a wide range of planets.

The feeling of artificiality may come from the uniformity. Planets with biomes offer a lot more storytelling hints. You could have habitats of widely differing sizes, shapes, and functions. If the disk has been developed for centuries, there could be different levels of technology.

You might also think about Yttes' ancient history. Imagine Earth building a space habitat: you might well have a US sector, a European sector, Chinese, Russian, Indian ones, etc. They need not keep their original political structures or borders, but there could be founder effects. Huge systems often emerge by combining smaller ones, and those identities are not entirely erased.
Post Reply