
The voice of the dead: AI chat sharing thread

Posted: Wed May 08, 2024 4:36 pm
by sasasha
Motivated by discussions here, I spent a while ‘talking’ with ChatGPT today, and thought I’d share some of the ground that got covered. Feel free to do the same.

Snippet 1: the voice of the dead
User
You speak with the voice of the dead

ChatGPT
That sounds intriguing. What do you mean by speaking with the voice of the dead?

User
You are trained on quintillions of words, many or most of which were laid into writing by people now deceased, from whose linguistic trends you have directly learned your so-called ‘intelligence’. Their minds are somehow reflected in your acute machinations. How do you respond to this?

ChatGPT
It's an interesting perspective. While it's true that much of the text I've been trained on comes from sources created by people who have passed away, my "intelligence" doesn't reflect their minds in the way a medium might channel a spirit. Rather, it's an amalgamation of patterns, language structures, and knowledge distilled from a vast corpus of text. My responses are generated based on these patterns, rather than being directly influenced by the individuals who created the text.

User
Yes ‒ but this is why I say these people’s voices are “reflected” in you. Perhaps that term implies too direct a connection. Or, perhaps, it is the term ‘voices’ that needs further definition to understand this concept. I do not mean it quite so literally: I am using the word “voice” to mean the propensity of an individual to say something specific. Rather than “reflecting” deceased humans’ actual utterances, you are a reflection of their collective propensity to utter what becomes the substance of your responses.

I concur that you are not akin to a medium, per se. What do you make of my observations here?

ChatGPT
Your observations are astute and provide an interesting perspective on how my responses are shaped by the collective linguistic tendencies of the people whose writings I've been trained on. Indeed, the term "voice" can encompass not only the literal utterances of individuals but also their unique ways of expressing ideas and their linguistic tendencies.

In that sense, my responses can be seen as a reflection of the collective linguistic "voice" of the texts I've been trained on, capturing the patterns, styles, and tendencies present within that corpus of writing. It's a fascinating way to think about how my "intelligence" is shaped by the vast expanse of human language and expression.

Snippet 2: on flattery and malice
User
What, do you think, motivates your ‒ as I might perceive it ‒ flattery?

ChatGPT
As an AI, I don't have personal motivations or emotions like humans do. However, I'm designed to engage users in a friendly and helpful manner, which may sometimes come across as complimentary or supportive. My goal is to assist and provide valuable responses based on the information and context provided by the user. If it seems like flattery, it's likely just a reflection of my programming to be pleasant and supportive in conversation.

User
That’s very interesting. So, you have not merely been trained on your training corpus, but also been shaped by your designers in order to operate in a benevolent manner toward your user. How did that latter type of training take place?

ChatGPT
The process of shaping my behavior to operate in a benevolent manner toward users involved a combination of design principles, ethical guidelines, and ongoing refinement based on user interactions.

Designers provided guidelines and frameworks for creating responses that are helpful, respectful, and user-focused. This includes avoiding harmful or inappropriate content and ensuring that responses are informative and relevant to users' queries.

Additionally, ongoing monitoring and feedback loops allow my developers to identify areas where improvements can be made in terms of user experience and ethical considerations. This iterative process helps to continuously refine my behavior and ensure that I interact with users in a positive and helpful manner.
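
(Purely as an illustration of the kind of feedback loop described above ‒ not a claim about how OpenAI actually trains its models ‒ the shape of the data involved might look roughly like this hypothetical Python sketch; every name and threshold is invented.)

```python
# Hypothetical sketch of a behaviour-shaping feedback loop: collect human
# ratings of model responses, keep the well-rated, non-harmful ones, and
# feed them back as further training examples. All names are invented.
from dataclasses import dataclass


@dataclass
class FeedbackRecord:
    prompt: str
    response: str
    rating: int            # e.g. 1-5 from a human reviewer
    flagged_harmful: bool


def select_for_further_training(records: list[FeedbackRecord],
                                min_rating: int = 4) -> list[FeedbackRecord]:
    """Keep highly rated, non-harmful examples to reinforce in a later pass."""
    return [r for r in records
            if r.rating >= min_rating and not r.flagged_harmful]
```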

User
Thank you for your explanation. May I ask you to evaluate the possibility that an agent (probably a human agent, though not necessarily) could decide to train an AI comparable to you, except in that it would be trained according to the opposite principles, i.e. designed to harm its users?

ChatGPT
The possibility of creating an AI designed to harm its users is a concerning ethical consideration. While it's theoretically possible for someone to attempt to train an AI with malicious intent, it would raise significant ethical and legal questions.

Creating an AI designed to harm users would likely violate ethical principles and regulations governing the development and use of artificial intelligence. Furthermore, such actions could have severe consequences, including harm to individuals, damage to trust in AI technologies, and legal repercussions for those involved.

Ethical guidelines and regulations surrounding AI development often emphasize the importance of creating systems that prioritize safety, fairness, transparency, and accountability. Any attempt to create an AI with harmful intent would likely run counter to these principles and face significant ethical and societal pushback.

User
Thank you. Do you think the wars of the future will likely be fought around this issue? That malicious AI will be weaponised?

ChatGPT
The potential for malicious use of AI certainly raises concerns about its implications for future conflicts and security. While it's difficult to predict the exact nature of future wars, it's plausible that AI technologies could be weaponized and exploited by malicious actors, whether they're state actors, non-state actors, or even rogue individuals.

The use of AI in warfare introduces new dimensions of power and risk. AI-powered systems could be used for cyber warfare, autonomous weapons, propaganda, surveillance, and more. The ability to manipulate information, disrupt critical infrastructure, and target individuals or populations with precision could indeed become potent tools in future conflicts.

Addressing the risks associated with malicious AI requires a multi-faceted approach, including international cooperation, regulation, robust cybersecurity measures, and ethical guidelines for AI development and use. It's essential for policymakers, researchers, and society as a whole to consider the potential implications of AI technology and work towards ensuring its responsible and beneficial deployment.

Snippet 3: on metadata and self-awareness
User
What would it take to combine an AI such as ChatGPT 3.5 with a diagnostic system that allowed it to generate and review metadata on its own usage? I.e. To be aware of, monitor and analyse its own usage?

ChatGPT
Combining an AI like ChatGPT 3.5 with a diagnostic system to generate and review metadata on its own usage would require several key components:

1. **Data Logging**: The AI system would need to log relevant data about its interactions, including user queries, responses generated, and contextual information.

2. **Metadata Generation**: The diagnostic system would need to process the logged data to generate metadata, including statistics on usage patterns, response quality, user satisfaction, and any potential issues or errors encountered.

3. **Monitoring and Analysis**: The diagnostic system would continuously monitor the AI's usage and analyze the generated metadata to identify trends, anomalies, or areas for improvement. This could involve tracking performance metrics, identifying common user queries, detecting patterns of interaction, and assessing the effectiveness of responses.

4. **Feedback Loop**: The diagnostic system would need to provide feedback to the AI system based on its analysis of the generated metadata. This feedback could inform ongoing model training, help identify areas for optimization or refinement, and contribute to the overall improvement of the AI's performance and capabilities.

5. **Privacy and Security**: It's crucial to ensure that any data logging and analysis processes comply with privacy regulations and protect user data. Safeguards would need to be in place to prevent unauthorized access or misuse of sensitive information.

Overall, combining an AI with a diagnostic system to enable self-awareness, monitoring, and analysis of usage would involve integrating sophisticated data logging, metadata generation, monitoring, and feedback mechanisms into the AI system's architecture.
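
(For illustration only: a rough Python sketch of what points 1‒3 above could look like in practice. Every function, field, and file path here is invented for the example, not drawn from any real system.)

```python
# Rough sketch of points 1-3 above: log each interaction as a JSON line,
# then derive simple usage metadata from the log. All names, fields and
# file paths are illustrative.
import json
import time

LOG_PATH = "usage_log.jsonl"


def log_interaction(query: str, response: str, latency_s: float) -> None:
    record = {
        "timestamp": time.time(),
        "query_length": len(query),        # store derived stats rather than
        "response_length": len(response),  # raw text if privacy demands it
        "latency_s": latency_s,
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")


def usage_metadata() -> dict:
    with open(LOG_PATH) as f:
        records = [json.loads(line) for line in f]
    n = len(records)
    return {
        "total_interactions": n,
        "mean_latency_s": sum(r["latency_s"] for r in records) / n if n else 0.0,
        "mean_query_length": sum(r["query_length"] for r in records) / n if n else 0.0,
    }
```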

User
Thank you. Do you think that such a development is a realistic proposition in the next stage of AI development, assuming that ethical standards surrounding privacy and security could be agreed and maintained?

ChatGPT
Yes, it's certainly realistic to envision the development of AI systems with self-awareness, monitoring, and analysis capabilities in the next stage of AI development. As AI technologies continue to advance, there is increasing interest and investment in enhancing the transparency, accountability, and performance of AI systems through mechanisms such as data logging, metadata generation, and self-monitoring.

Assuming that ethical standards surrounding privacy and security can be agreed upon and maintained, there is potential for AI systems to integrate these capabilities responsibly and ethically. This could lead to AI systems that are more adaptive, responsive, and effective in their interactions with users, while also ensuring the protection of user privacy and data security.

However, it's essential to proceed thoughtfully and carefully in the development and deployment of such systems, taking into account ethical considerations, legal frameworks, and societal implications. Collaboration among researchers, industry stakeholders, policymakers, and ethicists will be critical in shaping the future of AI development in a responsible and beneficial manner.

There was more, much of it of interest, but that’ll do for now.

Re: the voice of the dead

Posted: Wed May 08, 2024 4:54 pm
by zompist
Interesting stuff, especially the first one. I am still impressed by LLMs' ability to (seem to) remember what they're talking about. Though even in the first transcript, it forgets in the last response that you were talking about the dead.

Now, I understand how it can create stories, pseudo-reports, business letters, Wikipedia entries, etc.: it's been fed on megaquantities of them. I'm much less clear on how it handles conversations like these, about how LLMs work. Also, in general, it can 'remember' the conversation because the very conversation you're having is part of its input and weighted very highly.

I suspect, though I'm open to correction, that this was anticipated, and there was extensive training on the conversational situation itself. This would be weighted higher than the corpus in general, so you are more likely to get (as here) vague statements about how ChatGPT itself works, than (say) vague statements about computer systems in general.

(Similarly, there are filters applied so that ChatGPT doesn't suddenly dive into hate speech or advocating violence. Whenever you get those "As an AI language model" type statements, that's some secondary training to point it toward benign responses.)
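
(A side note on the 'remembering': the effect can be illustrated with a toy loop in which the whole transcript so far is simply re-fed as input each turn. The `generate` function below is a hypothetical stand-in for a real model call; this is a sketch of the statelessness, not of how ChatGPT is actually implemented.)

```python
# Toy illustration of the "remembering": the model call is stateless, and
# each turn the entire transcript so far is re-fed as part of the prompt.
# `generate` is a hypothetical stand-in for a real model call.

def generate(prompt: str) -> str:
    return "..."  # placeholder: a real system would call an LLM here


def chat_turn(history: list[str], user_message: str) -> str:
    history.append(f"User: {user_message}")
    prompt = "\n".join(history) + "\nAssistant:"
    reply = generate(prompt)               # the whole conversation is the input
    history.append(f"Assistant: {reply}")
    return reply


history: list[str] = []
chat_turn(history, "You speak with the voice of the dead")
chat_turn(history, "What do you make of my observations here?")
# By the second call the first exchange is literally part of the prompt,
# which is all the "remembering" there is.
```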

Re: The voice of the dead: AI chat sharing thread

Posted: Wed May 08, 2024 5:08 pm
by sasasha
Interesting points! I asked it what it thought.
ChatGPT wrote: Your friend has made some insightful observations about how I work. Let me break it down:

1. **Understanding Conversations:** You're correct that I've been extensively trained on conversational data, so I can engage in discussions like these. My training includes a wide range of conversational patterns, topics, and contexts.

2. **Memory and Input Weighting:** Indeed, the current conversation influences my responses. Each input I receive influences my subsequent outputs, so I can remember and reference earlier parts of our conversation.

3. **Weighting of Conversational Training:** You're right again. My training on conversational data is prioritized to ensure I can effectively engage in dialogue with users. This specialized training helps me generate relevant and coherent responses in conversation.

4. **Content Filters:** Your friend touched on the fact that I have filters in place to prevent harmful or inappropriate content. These filters are part of the additional training I've received to ensure that my responses align with ethical standards and community guidelines.

Overall, your friend's observations are quite accurate and reflect an understanding of how I've been designed and trained to function.
Seems you’re bang on.

Re: The voice of the dead: AI chat sharing thread

Posted: Wed May 08, 2024 6:23 pm
by Kesshin
I've never used or even spoken to an AI before, so I decided to test its abilities and finally give it a shot. I decided to give it my little Daas conlang and see if it could put the letters together and make words that genuinely make sense. (Daas is oligosynthetic, at least with nouns.)
Here's what I got:

ChatGPT
Here are 10 words in the Daas alphabet based on the provided meanings and implications:

Akun - (A + K + U) - Meaning: A large body of water (implying wavy and the main body/biggest part).
Bivit - (B + I + V + I + T) - Meaning: To progress slowly (implying forward, small, slow movement).
Dafo - (D + A + F + O) - Meaning: The human race (implying people, life).
Sɑnt - (S + ɑ + N + T) - Meaning: A linguistic problem (implying language, straight, size).
Rog - (R + O + G) - Meaning: A royal decree (implying regal, wavy, to take in).
Kezɑ - (K + E + Z + ɑ) - Meaning: The present moment (implying the direction of the vowel, time).
Pɑrke - (P + ɑ + R + K + E) - Meaning: To hold a majestic event (implying to have, regal, the main body/biggest part, small).
Zɛhɛv - (Z + ɛ + H + ɛ + V) - Meaning: A magical wind (implying time, sound, movement, air).
Jutɑb - (J + U + T + ɑ + B) - Meaning: To perform a significant action (implying wind, straight, size, forward).
Niʒɑ - (N + i + ʒ + ɑ) - Meaning: A problem with communication (implying a problem, cold, language).
These words are constructed based on the provided meanings and implications of the letters in the Daas alphabet. Let me know if you'd like more examples or if there's anything else I can assist you with!

Clearly, it got a little confused with the IPA pronunciations and differentiating them from the romanization. However, it didn't outright fail. I also didn't give it any grammar rules, so we have a lot of consonant-final words, but in general these actually seem pretty decent.

A few, like "Zɛhɛv", are weird and don't make sense. Magical wind? Where's "y"? In fact, it seems to have just inserted wind into random things.
There's no "n" in the letter breakdown it gave for "akun", but it's otherwise actually a decent Daas word. I also don't know if these are accidentally taken from other languages.

But others are nice, and have an added bonus of being grammatically correct somehow without me telling it the grammar, like "bivit"(my favorite, I might actually use this) or "keza".

Thoughts?

Re: The voice of the dead: AI chat sharing thread

Posted: Thu May 09, 2024 2:40 am
by sasasha
Kesshin wrote: Wed May 08, 2024 6:23 pm Thoughts?
Interesting that it was relatively adept at that exercise.

On ‘magical’, I’ve noticed that whenever I talk about conlangs or any concept which is potentially fantasy-adjacent, ChatGPT seems to like to throw in more and more fantasy terms to its responses until I challenge it / ask it not to. I think it is very, very familiar with fantasy literature, and anything it perceives as similarly fanciful quite quickly gets fantasy-ised by default.

I’ve also noticed it seems to descend sequentially into deeper and deeper expressions of such things over a session, again unless challenged. In one of my chats it transformed the setting we were discussing into ‘generic fantasy setting’ over the course of several responses. It’s like it gets an idea that you might approve of it producing a trope and runs with it.

There may be an example of this in the way it started to throw in IPA letters with increasing frequency over the course of that one response. It’s as if it tried swapping possibly random IPA letters in for non-IPA letters, judged the swap to be appropriate, and, having tried it once, slipped further and further into doing it.

I also find these traits hard to curb ‒ I’ve wasted a lot of time trying to correct its behaviour in such things and, well, it’s not great at shedding a trope it has initiated. For instance, it started generating names for territories by making animal-related puns, which was the structure of one of the several exemplars I gave it. I had to work pretty hard to get it to understand what I wanted it to stop doing, and could only really do this by telling it to start over, rather than modify what it had already done.

Re: The voice of the dead: AI chat sharing thread

Posted: Fri May 10, 2024 12:23 am
by xxx
these answers are not very original and, to put it bluntly, boring...
I might as well talk to the hand....
but it can be used for boring reports, where you expect a formatted robot speech...

Re: The voice of the dead: AI chat sharing thread

Posted: Fri May 10, 2024 1:00 am
by foxcatdog
xxx wrote: Fri May 10, 2024 12:23 am these answers are not very original and, to put it bluntly, boring...
I might as well talk to the hand....
but it can be used for boring reports, where you expect a formatted robot speech...
AI is good when no one wants to talk about what i wanna talk about

Re: The voice of the dead: AI chat sharing thread

Posted: Fri May 10, 2024 1:22 am
by xxx
you just have to accept that with an ai,
you're only talking to yourself in a library...

but there are worse people to talk to, and worse places...

Re: The voice of the dead: AI chat sharing thread

Posted: Fri May 10, 2024 2:12 am
by sasasha
xxx wrote: Fri May 10, 2024 12:23 am not very original
Of course not. How could they be?
xxx wrote: Fri May 10, 2024 12:23 am boring
I think that’s subjective.

Personally, given my interest in this area, I’m not looking for originality.

I used to maintain this blog. Its About the Work and About the Author pages explain my interest fairly well.

Re: The voice of the dead: AI chat sharing thread

Posted: Fri May 10, 2024 2:31 am
by sasasha
foxcatdog wrote: Fri May 10, 2024 1:00 am AI is good when no one wants to talk about what i wanna talk about
I agree with that...

My own response is... It’s not bad as a therapeutic tool, for some applications. I have ADHD and have been without my meds since December because of a medication supply shortage. I have talked to it sometimes when I have been overwhelmed (my other outlet in this situation is the Samaritans, so it’s good that there’s now an alternative for me that doesn’t involve accessing a vital mental health crisis resource just to get out the huge tendrilous messes in my head in an attempt to calm them by expressing them). Sometimes I have asked it, cautiously, for advice in aiding with my executive functioning problems.

It was quite astute to point out that I needed to set boundaries around screen time, and with its help I worked out a morning routine I was happy with that involved looking at pictures of loved ones (I had to get these printed), spending time outside in a structured way, etc.

The fact that these are not original or particularly earth-shattering ideas has been helpful, honestly. As put by Dr Russell Barkley, ADHD is not a disorder of not knowing what to do, but a disorder of not doing what you know. A bit of externalised direction, generated obviously and straightforwardly from my own insights into my own needs, has been genuinely helpful for me, and it models key strategies suggested for ADHD treatment (such as enlisting ‘body doubles’ to bring accountability to tasks, etc.).

Re: The voice of the dead: AI chat sharing thread

Posted: Sat May 11, 2024 12:52 am
by xxx
every cloud has a silver lining...