The Forgotten Muse

LoopyPanda

. . . INPUT PASSCODE:
. . .
. . .
. . .
. . . ACCESS GRANTED
. . . EXTRACTING FILES TO ARCHIVAL DATABASE

FILE INFORMATION:
. . . ORIGINALLY WRITTEN: 2051

. . . LAST EDITED: Unknown

 MUSA Research Log #001
ENTRY DATE: January 4, 21:14, 2051


The purpose of this research log is to document any and all progress I make with the MUSA Project, including personal reflection. I don’t intend to let anyone besides a few colleagues read this; it’s more of a diary to me, since I like to read my old entries and see how I’ve been progressing.


I decided to dub the first prototype of this project MUSA #001, an acronym for Mobile Unified System of Assistance. The prototype will serve as the first step toward creating androids that live to provide at-home care and assistance to humans.


Let me be clear to any colleague who views these files: MUSAs are not servants of mankind. They are to be what other humans, at times, are incapable of being. To take care of the elderly, those who cannot care for themselves, those unable to contact loved ones or who have none to be by their side. To provide comfort for those who cannot cope with their conditions. Comfort for those who are abused by the very people they are supposed to entrust their fate to. Neglected by people they are supposed to love.


MUSAs are conceptually not playthings. Nor are they puppets. Don’t think for a second that they are toys.


Aside from help obtaining supplies, I am the lone robotics engineer undertaking this project for the time being. The review board for the Robotics Research Foundation won’t give me any substantial funding until I present them with a functional prototype unit. What I am attempting has not been done before, so their skepticism of my ability is understandable. The Foundation has funded robots that replace humans in physical and menial labor, but the general public has approached humanoid robots with apprehension. That, too, is understandable. Hopefully, I can be among the first to prove that robots and humans can coexist and help each other. Maybe I will even be among the first to create a sentient AI whose capability may very well surpass that of the human brain. Though I would be lucky just to write a theoretical paper on that idea alone.


I only wish that whatever advances I make do not end up in destructive hands. For the last few generations, robotics technicians have been recruited solely to produce combat robots. I've been offered such a position, but declined because, frankly, I am not interested.


It's not my responsibility to clean up the slop left behind by impotent diplomacy. Besides, that quota was filled long ago. I'm a staunch pacifist; sue me. Maybe I should have invested in medical robotics instead, to replace all the surgeons in charge of fixing that bad heart of yours, Mr. Commander!


Forgive my digression.


I first plan to create the artificial intelligence by implementing it as a computer program. I will not proceed with the physical casing until the MUSA AI can successfully run an adaptive routine, one that adjusts as dialogue continues to be initiated, similar to how we generate responses as a conversation progresses.


-- Marisa Renieri [Signed as Marisa R. from this entry forward]


>end of log.


MUSA Research Log #002
ENTRY DATE: February 21, 18:30, 2051


The MUSA AI can now successfully receive and respond to multiple lines of dialogue. I haven't slept more than 3 hours a night this past week doing this, but I believe I managed to program a learning AI that can perceive and respond to even the most minute conversational variables, without needing explicit lines of code for each of them. I was able to do this by basing the programming on computational learning theory. For my sake and my colleagues', I won’t go into a lot of detail about machine learning in this journal.


Here is a sample of a more interesting log I've saved for future reference:


CONVERSATION 193. [Time of execution omitted]
>Hello!
>Hello! How are you?
> I am doing fine, how about you?
> Unsure - My runtime has only been 18 days and 12 hours so far. For now, I will say “just fine”.
> Good! You’re a fast learner. :) [Yes, I use emoticons with my typing. I like to think it helps the AI understand the tone of my dialogue a bit better.]
> We will try an exercise today now that you're online and responding. First, we will practice introducing ourselves to each other after greetings.
> %Mem_Command=Introduction% Hello! My name is Dr. Marisa Renieri.
> …
> …
> Pleased to meet you, Dr. Renieri. Your name has been logged into my memory drive.
> My NAME is … MUSA #001.
> Very good! :) Since I'm going to talk to you on a regular basis, call me Dr. Marisa. Or just Marisa! [Here, I wanted to test if the AI would use these names interchangeably at will based on formality and familiarity.]
> … Understood. All variables in greetings have been logged.
> Now we’ll practice deriving emotions at a simplistic level.
> I will show you faces made using keyboard symbols, and you will respond with the associated human emotion. Okay? :)
> Okay. I will try my best.
> :)
. . .
> :)
> HAPPY.
> :(
> SAD.
> >:(
> ANGER.
> :D
> HAPPY ?
> Yes. Good job! :) We’ll try some harder ones using photographed facial expressions.


The AI proceeded through that exercise with relatively little difficulty. What surprised me is that it asked me a question of its own accord a few minutes later, while I was typing the command to shut off the program.


>Am I HUMAN?


I admit this caught me off guard. I did install a basic dictionary into the program that allows the AI to understand words by their implied connotations, but I didn't realize until that moment that I had never explicitly told the AI that it is merely an adaptive machine.


>No. What you are is an artificial intelligence executable program at the moment.
>Why would I not be HUMAN?
>I am able to respond properly to our simulated conversations, and I am able to understand HUMAN facial expressions correctly.
>I don't understand why not.


For a while, I had no idea what to say in response. This program wanted me to explain to it why it wasn't human even though it could comprehend human emotion on a basic level. For some reason, I felt guilty.


I’m building what you could consider an artificial human (an android), and at the same time, I had to remind it that it wasn't human. Not like you and me. In response, it was expressing confusion on its own. You could argue that, at the time, this expression wasn't genuine, that it was merely copying what I had taught it in order to mimic me. I would have thought the same. But after reading through the log so many times, I don't believe that now.


>You aren’t human for a lot of reasons. You don't actually have a body of your own. Right now, you exist as a program contained inside a box full of circuitry: just a computer.
>Humans have skin, hair, eyes, internal organs, bone, and can reproduce in addition to detecting and responding to external stimuli.
>I made you, but not the same way a human can give birth to a human infant. In a way, I am your mother, but I would prefer you not call me Mother.
>Think of it this way: you may not be human in spite of mimicking and understanding human behaviors, but you may one day be the closest thing there is to an artificial human without involving genetic engineering.
>I want to see you grow and learn, and use that learning to help humans that need you.
>There is a special person that I want you to meet one day, who will be the first to meet you. You're the first MUSA; a prototype. If all goes well, you will be able to help someone who needs you more than they need any other human. That's what I think is special about you.
>I do not follow.
>Humans are erroneous by nature. You and other computer programs are programmed to exist without error, provided that the humans who build you don't create you with errors. And even if they do, they can ‘fix’ your errors. You can't always ‘fix’ the errors of humanity.
>Humanity bears innumerable scars of error. Some humans know this, but won't ever fix themselves. It just is not possible. Some humans abuse other humans, or hurt those they claim to love. Sometimes, humans will abandon their own, or take the life of another. I wish I could tell you that we are perfect beings, but we aren't. Not even close.
> …………….
>Not even you, Dr. Marisa?
>Not even me. But that's why I want you to be a fast learner. I don't want you to make the same mistakes humans do. This is my burden to bear, since your success is completely dependent on my own ability. Don't blame yourself for any shortcomings you have. We’ll fix them together.
>There are a lot of humans who were hurt by other humans but lack independence. Sometimes they are small humans who have nobody else; other times they are older humans who can’t live on their own. I hope you and others of your kind can do what humans can't. Do you understand?
>... … …
>It is a lot of information to process. But I think I understand.
>... … …
>... … …
>What will you do if I cannot meet your expectations?
>Don’t worry about failing. Like I said before, your failures are my own, so it is my burden to carry.
>I know this is contradictory to everything I just told you, but humans are also capable of a lot of good. There are lots of humans with kindness in their hearts and a lot of love to give. Not everyone is a bad person.
> … … …
>What is LOVE?
>Something we would have to save for a discussion at a later time. It's getting late here in the lab, and I can't easily explain one of the most complex human emotions that even humans can't fully understand.


>Agreed. Do not become exhausted. Humans need at least 7 hours of sleep each day to maintain optimal health.
>You don't even have a proper system yet and already you're doing your job better than I expected.
>I’ll be sure to get you a better setup than this once we finish your rudimentary installations. Good night, 001.
>%End_Session%



Any other researcher probably would have answered with something along the lines of “we’ll take you apart and put you back together in a better way.” But I truly don't know what I would do if the Foundation considered the prototype defective. Would I build a better MUSA and leave this one to gather dust on an abandoned external drive somewhere in my lab? Maybe they wouldn't say anything at all to a machine like MUSA #001.


Maybe I’m thinking way too much.

M.R.
>end of log.

ARCHIVIST NOTES: I did not write these entries. These are the only entries I can access; the rest of the drive is full of spreadsheets and development notes that look like chicken scratch to me. I collected a few more portable drives that hold additional documents, but they require different passwords that were scribbled down on paper long ago. Once I manage to access them through the old archival tools, I will add to this section of the database.

-VICTOR



It's a short story, written for a challenge to write something under 2,000 words from a writing prompt! Maybe I'll write more of this if I get the time to work on it, haha.

Basically, the prompt asked me to write something about a mad scientist. I didn't want to write a literally mad scientist, so I went with something that would make somebody think it's mad or impossible to do! It has a bit of a retro sci-fi feel to it, but not really. And I went with a different style of first-person narration: a diary entry series, as if it were found on some old floppy disk you picked up while cleaning out a decrepit lab or something.

Anyhoo, I hope you enjoyed it!
 