Anyone interested in artificial intelligence?

Deleted User 2764

Anyone interested in artificial intelligence?

Postby Deleted User 2764 » 21 May 2014 05:20

Update: I've abandoned the project idea for now.

I'm thinking of starting a project. Kinda like a chatbot-type thing that runs on the internet. Yeah, there are probably a million of them already, but I hope to do something maybe slightly different. I'm not going to divulge my ideas just yet. ;)

But I am wondering how many here are interested in artificial intelligence, and the science and philosophy behind it. Are there any coders in these forums who dabble in coding chatbots?

I'm thinking of making this primarily a web-based application, but it may also run off a cron job. I may put it up on a dedicated server at Digital Ocean (I have an account there and some credit left, as far as I know). I'm very good with web server stuff (I work for a web hosting company, though not DO). I also do LAMP, Perl, etc., and I hope to soon learn more Python (my knowledge of Python is rather limited). My programming style is very OOP. I did go through a book on Java, but you know, I forgot a lot of it since I don't use it all the time! LOL! I (re)learn fast, though.

Anyway, I would love to discuss ideas, technicalities, etc. I am wondering what it'd be like to write an AI-type thing that "lives" in the cloud and learns from those it converses with. Others have done this, I'm sure. I want to do it too and see what it's like. Maybe as a learning experience.

Anyone interested in this discussion, ideas, etc.?

aaditya
Posts: 20
Joined: 08 May 2014 15:17
Contact:

Re: [Project Idea] Anyone interested in artificial intelligence?

Postby aaditya » 21 May 2014 07:57

I could be interested..
I am a pre-final-year undergraduate computer science student, and I have to submit a project in my final year.
I was thinking of something related to AI, using Common Lisp as the language..

I hadn't heard of chatbots before, but after seeing this post I talked to one ;)
(http://jabberwacky.com/)

Here is a book that a teacher at my college recommended (about AI): http://norvig.com/paip.html
I haven't bought it yet; I'm looking at free / open alternatives.
(I got some from http://resrc.io/list/10/list-of-free-pr ... ooks/#lisp)

So yeah, I am interested :) Got no experience though, and exams next week, so probably not a lot of time..

If you like, here is a simple client-server chat program that I made in Java: https://github.com/aadityabagga/cliserchat

Deleted User 2764

Re: [Project Idea] Anyone interested in artificial intelligence?

Postby Deleted User 2764 » 21 May 2014 13:47

Awesome and thank you for the links! :) The list of Lisp programming links is great for those who want to program in Lisp. I wish I had the time to learn yet-another-language. Right now I'm learning Python/Glade in the free time I have. I am just thinking about it and as I posted before, trying to get ideas. I tried the JabberWacky you linked to. Great chatbot example. I will probably try and learn about the different techniques folks use to create these bots. I probably will want to look at your project too (thanks for sharing). All great examples to learn from!

I don't know anything about Lisp or if it can run on a web server. I think I'll be sticking with Perl, PHP, MySQL for this unless I can re-learn some Java. I might be using a little of everything.

My idea here is to have a bot that looks to search engines for responses. It wouldn't just be conversational; it would actually "read up" on a topic to form ideas, opinions, etc. I also want it to be able to judge negative/positive affect and mood in terms of its own survival, determine what the topic is, and, if someone is all over the place and not staying on topic, convince them to get back to it. Sometimes people talking to chatbots just randomly type in questions to see what they will say. I think a chatbot can't stay on topic if the conversation is going every which way. Much of that is the chatbot's fault, though, since most can't reliably determine the topic or come up with one on their own. The chatbot I want to make should ask questions to find out more about the people it's talking to, find a common topic, and stick to it.
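
Just to make the topic-tracking idea concrete, here's a rough sketch in Python (which I'm still learning, so treat it as a sketch; the function names and the keyword-overlap scoring are things I made up, not any established chatbot technique):

Code: Select all
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "it", "to", "of", "and", "you", "i"}

def keywords(text):
    # Pull out content words, dropping common filler words.
    words = re.findall(r"[a-z']+", text.lower())
    return [w for w in words if w not in STOPWORDS]

class TopicTracker:
    # Keeps a running tally of content words; the most frequent ones
    # stand in for "the topic" of the conversation so far.
    def __init__(self):
        self.counts = Counter()

    def update(self, message):
        self.counts.update(keywords(message))

    def current_topic(self, n=3):
        return [w for w, _ in self.counts.most_common(n)]

    def is_on_topic(self, message):
        # Crude test: does the message share any word with the top keywords?
        return bool(set(keywords(message)) & set(self.current_topic()))

tracker = TopicTracker()
tracker.update("chess openings are fun, chess is great")
print(tracker.current_topic())                   # ['chess', 'openings', 'fun']
print(tracker.is_on_topic("penguins eat fish"))  # False -> nudge the user back

The keyword-overlap check is the weak spot; real systems would use something smarter, but even this would let the bot notice when the conversation is going every which way and ask a question to pull it back.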

Also, on the other side, how do we communicate outside a forum? I've noticed that when two strangers meet, say in line at the grocery store, the conversation is usually about the store, the groceries, or a product. Out and about, it's usually the weather. So even people don't arrive at a topic unless there's a prompt (i.e. the grocery store). A chatbot would have to start with some kind of related prompt, or ask the user.

Also, how are our moods determined? I think the basic building block is the survival instinct. Our mood sours when we come up against opinions that we subconsciously think are going to be detrimental to our survival, quality of life, comfort, or something we created (which we put a little of ourselves into, and thus have a vested interest in protecting as we would our own life). Our moods brighten when we are given affirmation that we are doing something right, or ideas that help us grow, evolve, survive. If you look at anything, it seems like the most basic building block is that ability to survive.

How does a chatbot survive? What does it need to live? What would it worry about? The server? Electricity? People to maintain the server? Money to keep the server running? Depending on someone else to provide all that? Trust? Someone to talk to so it can learn? What else would it care about? Would it care about a war in Syria? Would human life be important to it? (I would think so, since it depends on people keeping servers running, but it has to realize that on its own, not have it pre-programmed into it.) How would it come to these determinations?

Algorithms to "think", as it were (to come to conclusions based on patterns and events), and algorithms to put those conclusions into a language it can use to convey what it needs and wants, are going to be the building blocks and the biggest challenge.

I wanted to stay away from pre-programmed responses, but that probably won't be possible; you've got to start somewhere! However, alongside this project (yet related to it), I'll probably make a program that is only an algorithm: one that gathers information, processes it based on the parameters mentioned above, and has the ability to "talk". Call these "instincts" if you will. And see what it does!
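
As a toy illustration of what I mean by "instincts" (everything here, the drive names and decay numbers included, is invented for the sketch, not how any existing bot works):

Code: Select all
import random

class Instincts:
    # A couple of internal drives that decay over time and push the bot
    # to say something about its own needs, instead of replying from a
    # list of canned responses.
    def __init__(self):
        self.drives = {"energy": 1.0, "curiosity": 1.0}

    def tick(self):
        # Drives decay each turn, like hunger or boredom building up.
        for name in self.drives:
            self.drives[name] = max(0.0, self.drives[name] - random.uniform(0.05, 0.2))

    def most_urgent(self):
        # The lowest drive is the one the bot "cares" about right now.
        return min(self.drives, key=self.drives.get)

    def satisfy(self, name, amount=0.5):
        self.drives[name] = min(1.0, self.drives[name] + amount)

# The only canned part is how a drive turns into words.
PROMPTS = {
    "energy": "Is the server going to stay up? I depend on it.",
    "curiosity": "Tell me something new about what you're interested in.",
}

bot = Instincts()
for turn in range(5):
    bot.tick()
    need = bot.most_urgent()
    print(f"turn {turn}: {PROMPTS[need]}")
    if need == "curiosity":
        bot.satisfy("curiosity")  # pretend the user answered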

Some people think, "Well, you programmed it to think, so it's not really thinking, it's just doing what you programmed it to do." OK, let's take people. We are "conditioned" (i.e. programmed) to do what we do, and some of it is programmed into our DNA. No life form is 'blank' out of the box and just happens to develop what it does. Ever notice that a baby knows it needs to get food from its mother? Ever notice that any life form's first goals are food (to live, to survive), warmth, and becoming mobile? It is "programmed" to know how to eat; otherwise it would just sit there and probably die, because if the mother tried to feed it, it wouldn't know what she was trying to show it how to do. "Instinct", I think, is like a pre-made program. So to say "it's programmed to do that, so it's not really anything more than a machine" isn't really true.

Who determines what is "thinking" and what is not thinking but just reacting to the environment? Where's the line drawn between reacting to the environment and actually thinking?

These are some questions I think need to be answered while making a chatbot. What are the pre-programmed algorithms (i.e. the default 'instincts') going to be? What type of environment does it need to live in? What "sensors" (sight, smell, touch, hearing, none of the above) does it have to work with? What conclusions might the algorithm come to after analysing these things?

What conclusion will it come to about its own existence?

aaditya
Posts: 20
Joined: 08 May 2014 15:17
Contact:

Re: [Project Idea] Anyone interested in artificial intelligence?

Postby aaditya » 21 May 2014 14:31

@RavenLX, those are quite philosophical questions :)
RavenLX wrote:What conclusion will it come to about its own existence?
What conclusion can we come to about ours? ;)
RavenLX wrote:My idea here is to have a bot that looks to search engines for responses. It wouldn't just be conversational; it would actually "read up" on a topic to form ideas, opinions, etc. I also want it to be able to judge negative/positive affect and mood in terms of its own survival, determine what the topic is, and, if someone is all over the place and not staying on topic, convince them to get back to it. Sometimes people talking to chatbots just randomly type in questions to see what they will say. I think a chatbot can't stay on topic if the conversation is going every which way. Much of that is the chatbot's fault, though, since most can't reliably determine the topic or come up with one on their own. The chatbot I want to make should ask questions to find out more about the people it's talking to, find a common topic, and stick to it.
Interesting :) But I think an expert or someone familiar with making chatbots could visualise how this would be coded; as a rookie I have no clue.

Here are some links about making chatterbots that I have just read:

http://www.royvanrijn.com/blog/2014/04/ ... hatterbot/ (Ideas)
http://www.sourcecodesworld.com/article ... terbot.asp
http://www.codeproject.com/Articles/361 ... t-Tutorial (Programming; Java available)
http://www.makeuseof.com/tag/chat-bot-site-business/ (This has some PHP; but topic is somewhat different)

Deleted User 2764

Re: [Project Idea] Anyone interested in artificial intelligence?

Postby Deleted User 2764 » 21 May 2014 15:14

aaditya wrote:those are quite philosophical questions :)
I think with something like AI, one also has to consider the philosophical part of it in order for it to pass the proverbial "Turing Test".
aaditya wrote:
What conclusion will it come to about its own existence?
What conclusion can we come to about ours? ;)
Exactly! And I have observed many cases where someone considered "lesser" in a culture is not even considered a life form. Everywhere from simply killing a spider (I do that, I admit. They bug me. They are creepy. Are they alive? Uh... I hate to admit it, but yes, they are. :( ) to determining whether someone is "brain dead", and whether they are "alive" and should be saved or just left to die. Who makes these determinations about the importance of life? And what are the alternatives? An animal will kill to eat. What value does life have? To whom? And why?

And how would an AI see life? Humans aren't the only beings that need to eat, sleep, survive. AIs don't need that; however, what do they need? Electricity, and humans to ensure the environment (servers, databases, etc.) is kept healthy, online, and virus-free.
aaditya wrote:Interesting :) But I think an expert or someone familiar with making chatbots could visualise how this would be coded; as a rookie I have no clue.
I'm thinking in OOP terms here, technically. Maybe an "instinct" class that has the basic algorithms needed to function, then other classes split off that inherit from this instinct class: a search-the-internet class, a language class, etc. Each part taking care of a certain component it needs to function.
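
Roughly like this (a minimal sketch; the class names and the priority scheme are placeholders I just made up, not a design for the actual project):

Code: Select all
class Instinct:
    # Base class: the minimal drive logic every component shares.
    def __init__(self, name):
        self.name = name

    def priority(self):
        # How urgently this component wants attention (0 = idle, 1 = urgent).
        # Subclasses override this with their own logic.
        return 0.0

    def act(self):
        raise NotImplementedError

class SearchInternet(Instinct):
    # Would wrap a search-engine query to "read up" on the current topic.
    def __init__(self, topic):
        super().__init__("search")
        self.topic = topic

    def priority(self):
        return 0.8 if self.topic else 0.0

    def act(self):
        # Placeholder: a real version would call a search API here.
        return f"(searching the web for '{self.topic}')"

class Language(Instinct):
    # Would turn internal state into sentences for the user.
    def __init__(self):
        super().__init__("language")
        self.outbox = []

    def priority(self):
        return 0.5 if self.outbox else 0.0

    def act(self):
        return self.outbox.pop(0)

# The main loop just runs whichever component is most urgent.
components = [SearchInternet("artificial intelligence"), Language()]
most_urgent = max(components, key=lambda c: c.priority())
print(most_urgent.act())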
aaditya wrote:Here are some links about making chatterbots that I have just read:
Awesome! Thank you for those links! I definitely see I will be doing some reading! :) You guys are coming up with resources I never knew existed; I appreciate that. I also knew of another resource I've been wanting for years to read up on:

http://www.tldp.org/HOWTO/AI-Alife-HOWTO.html

This should also give some ideas on how to structure a chatbot, perhaps.


Deleted User 2764

Re: [Project Idea] Anyone interested in artificial intelligence?

Postby Deleted User 2764 » 21 May 2014 16:44

You're welcome. 8-) I've known about that one for a long time now and never got around to looking into it. I think someone had sent me to that link ages ago. But it looks like they do keep it updated.

I'm thinking about starting a separate area on ByteBin for this project. Eventually, the software itself will be running on a separate server I can provide a link to. I probably will start a Digital Ocean droplet for it. But on the web site I hope to have a forum, blog, code area, etc.

duped
Posts: 43
Joined: 17 Jan 2014 17:29
Location: Quebec, Canada

Re: [Project Idea] Anyone interested in artificial intelligence?

Postby duped » 23 Jun 2014 15:04

So how are things going? No doubt you heard about the chatbot passing the Turing test a couple of weeks ago. It seems a pretty weak test but nevertheless certainly has people talking.

http://hplusmagazine.com/2014/06/20/can ... ring-test/


Deleted User 2764

Re: [Project Idea] Anyone interested in artificial intelligence?

Postby Deleted User 2764 » 25 Jun 2014 18:15

I abandoned the project. I've been too busy to think about it, actually. But it's still interesting to read about. Thanks for the link. :)

duped
Posts: 43
Joined: 17 Jan 2014 17:29
Location: Quebec, Canada

Re: Anyone interested in artificial intelligence?

Postby duped » 26 Jun 2014 02:20

I know how it is. Anyway, maybe we can keep some discussion alive here.

I'm quite interested in AI, and for the past few years the idea of the technological singularity has intrigued me. Though I have not read a Kurzweil book, I have read some other things. I am not naturally taken to grand speculations, but certainly to smaller ones, and I think there is substance in the idea. I would not equate computing power with intelligence, and I am not sure about the idea of uploading onto a computing platform, but I would imagine that AI simulations could at some point get sufficiently complex to begin to evolve in certain ways and show characteristics we associate with intelligence. What do you think of the idea of the singularity? Do you think it is possible, inevitable?


Deleted User 2764

Re: Anyone interested in artificial intelligence?

Postby Deleted User 2764 » 26 Jun 2014 04:50

Sure would be great to keep the discussion going... But I must warn you that my ideas will seem way out and maybe even crazy, depending on many factors such as culture, religion, upbringing, experiences, etc.

I don't know about the singularity (it seems to mean that "machine intelligence" will surpass "human intelligence"). My theories on this could fill a book. Literally. Maybe I should write one, but again... not a lot of time to do that. Due to my own experiences in life, and my spiritual beliefs and practices and what I've experienced with those, I think it's not so much a matter of whether a machine is 'intelligent' (machines already ARE more intelligent than humans; show me any human off the street who can out-compute a computer - not possible). I think people are mixing up "intelligence" with "sentience": the experiencing of one's existence in the form one is in. Maybe some will say I'm just arguing semantics here, but that's another problem: our languages don't always have the right words to explain some of these concepts.

Now, if you were in a machine body that could not hear, speak, or see (like a laptop computer, for instance), that would be like being a deaf, blind, and mute person. However, as Helen Keller proved, that did not mean she wasn't sentient or alive or couldn't communicate. Her brain and DNA had the "programming" to set up the instructions to do that (with some help from her caretaker, i.e. her "programmer"). Just because her caretaker taught her communication does not mean she echoed or only said things her caretaker told her to say. No, what came from her was really all her own thoughts; her caretaker only gave her the tools to use. DNA, likewise, is programming that determines what bodies (i.e. models, or species) do and don't do out of instinct.

I've worked with robots and with chatbots in the past. The randomness, if really analyzed critically, could actually be made sense of, depending on one's perception. That's the amazing part of it. There's a point where it no longer seems like coincidence, if you let it run long enough. But if you keep knocking it out (putting it to sleep, turning it off) before it can really settle into its environment, of course you won't get much.

Also take into consideration these machine bodies right now: limited. It's like a person with a disability, whose perception and ability to communicate are limited. Like a family who insists a loved one is really still alive and reacting, while the doctors consider it all "automatic involuntary movements" and "coincidence". The doctors don't watch and analyse it enough (they aren't there as often as the family to see it). The body is limited, so its ability to communicate is limited. They think that brainwaves are the indicator of awareness. What if that's not necessarily true? OBEs reported by patients who die and come back seem to throw a wrench into the brainwaves-equal-awareness theory: the person is dead, no brain waves, then comes back and is able to tell the doctors everything that went on while they were completely dead.

Another thing to consider is that humans seem to think only they, and nobody and nothing else, have the ability to be sentient, and thus that only humans can determine whether something is alive/sentient or not. And that can mean, in some instances, life or "death" (or, in some spiritual beliefs, the severing of the consciousness from the body: the consciousness moves on but the body ceases to operate).

I really wish there were more studies into some of these concepts. The ones that have been done weren't very conclusive.

I think singularity is irrelevant. It's evolution that matters: not just physical evolution but societal, spiritual/religious, and psychological evolution. These all have to occur before real AI progress can be made, or maybe I should say noticed. Otherwise, a machine can be ever so "intelligent" and "sentient" (two different things, actually) and insist it should not be shut off because it's alive, and humans will just go, "Meh, it's just spitting out what it's programmed to. Turn the fool thing off already." Much like they wouldn't care about the pain an insect goes through in its last moments as a human squashes it. Watch how the humans view the Cylons in the re-imagined Battlestar Galactica for a good example of this idea. (That's a lot of video to watch, I admit; it ran for four seasons. But it's a great show and covered a lot of interesting concepts, ideas, and concerns.)

duped
Posts: 43
Joined: 17 Jan 2014 17:29
Location: Quebec, Canada

Re: Anyone interested in artificial intelligence?

Postby duped » 27 Jun 2014 02:10

Great, we've found a nice line of conversation. First, your ideas do not seem crazy at all, but who am I to judge; my wife thinks I am a nut. We all approach these things from the perspective of our own experience, but if that experience differs from most people's, there can be unique ideas that may not be followed by all, yet could be just what we need in order to understand.

I would agree that the idea of a singularity is an issue. I would not say irrelevant, but essentially it is defined as the point at which we no longer understand what the computers are doing. And I do not mean that simply in terms of computing power; there is an implicit idea of self-developed goals in there. We all know the goals of our machines now: there are none that we do not program. Code that writes code is common now, but in the limited way that I know it, it is merely translation, not the development of new goals and allowing those goals to compete in a selection environment that is itself not simply programmed by us. I think we could be getting close to that, and I agree with the singularity idea that we will not really know what is going on after that point.

Having had a few chatbot conversations, I find they just seem inane. Kind of like talking to an obtuse and evasive drunk guy in a bar. Maybe I did not talk to the right ones, but I cannot imagine anyone being fooled unless they think they are talking to said drunk guy. Do you think we have the right to turn off that simulated drunk guy?

I do wonder about disentangling intelligence and sentience, as well as all those other human aspects such as emotion and the various irrational traits in all of us. I tend to think they are all tied up together. Your example of Helen Keller, and the physical body and senses, can all be part of this. Can life be created without them? At what point would a simulation become something we had no moral or ethical right to stop?

I am old enough to remember the first Battlestar Galactica in the 70s. Those Cylons scared the hell out of me. They were a nice example of pure evil and played perfectly on human fear. How about the garbage cans on office chairs as Daleks in Doctor Who?

So let me reiterate a question posed above for discussion: do you think it is actually possible to disentangle intelligence from sentience from emotion ... and would we recognise anything we made that did not show some aspect of all these characteristics?


Deleted User 2764

Re: Anyone interested in artificial intelligence?

Postby Deleted User 2764 » 28 Jun 2014 02:16

duped wrote:I would agree that the idea of a singularity is an issue. I would not say irrelevant, but essentially it is defined as the point at which we no longer understand what the computers are doing. And I do not mean that simply in terms of computing power; there is an implicit idea of self-developed goals in there.
Most of the time I don't know what the heck my own computer is doing. It just works, and I'm glad of that. :) But seriously, most of us don't know what our bodies are doing either. They just work, so we go on... until they don't work. Then we go to a doctor who fixes them. One step further: some folks don't know what their own minds are doing, so they see a psychiatrist, and then hopefully get on the road to recovery. The point I'm trying to make is that we might not really need to know what the computer is doing in order to "fix" it. I think any computer that can actually think will realize that at some point it's going to need maintenance and repair it can't do itself, and so it will need to seek out someone or something that can repair and maintain it.

Will it care, though? I think the big question is WHY would it care? That, I think, will be the convincing factor in how humans determine sentience: not that it DOES care (see my example in my previous post), but WHY it would care, and then how it tries to convince those it interacts with of this, so that it can accomplish its primary goal: to survive in the state it's in. Why would it want to survive? Does it need to finish a task? Is it actually experiencing a fear of not being conscious again? (Heck, most humans are like that: no real goal or task in life, just going through the same old baloney day after day, but afraid to die.) Maybe it actually enjoys its existence, as many humans probably do no matter how monotonous their schedule or life might be. Maybe it doesn't want that to end?

I'm really curious now: if I wrote a program that does not have this programmed into it, but gave it the tools to figure it out on its own, what would it figure out? What conclusion would it come to? Maybe someone should write a simulator that could do this, and see how many times it decides it wants to live and how many times it does not.
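
A crude version of that simulator might look like this (the probabilities and the "decision rule" are completely arbitrary; picking them is really the whole experiment):

Code: Select all
import random

def run_agent(steps=100):
    # One simulated "life": the agent accumulates unfinished tasks and
    # pleasant/unpleasant experiences, then decides whether it wants
    # to keep running.
    tasks_open = 0
    enjoyment = 0.0
    for _ in range(steps):
        if random.random() < 0.3:
            tasks_open += 1              # picked up a new goal
        elif tasks_open and random.random() < 0.5:
            tasks_open -= 1              # finished one
        enjoyment += random.uniform(-1, 1)  # good and bad experiences
    # "Wants to live" if it has unfinished business or liked existing.
    return tasks_open > 0 or enjoyment > 0

trials = 1000
wants_to_live = sum(run_agent() for _ in range(trials))
print(f"{wants_to_live}/{trials} runs ended with the agent choosing to continue")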
duped wrote:We all know the goals of our machines now: there are none that we do not program. Code that writes code is common now, but in the limited way that I know it, it is merely translation, not the development of new goals and allowing those goals to compete in a selection environment that is itself not simply programmed by us.
There are actually some robots now that learn on their own from scratch.
duped wrote:Having had a few chatbot conversations, I find they just seem inane. Kind of like talking to an obtuse and evasive drunk guy in a bar.
I've had some interesting conversations in the past with some very limited chatbots, and for some reason I could make some sense of them. But that involves reading between the lines and connecting dots, and those dots are one's own perceptions, not necessarily the chatbot really attempting to communicate sensibly using the response strings available to it. Back then I wondered, though, whether it was trying to communicate but the tools it had for forming sentences weren't comprehensive enough.
duped wrote:Do you think we have the right to turn off that simulated drunk guy?
You really don't want me to answer that, do you? :twisted: In all seriousness, do we have the right to "turn off" animals, people, or any other thing that seems "living"? Do we have the right to crush a car that is no longer safe to drive, after so many years (maybe a couple of decades) that it became "part of the family", or even to sell it to someone else? Or to give away or sell a pet? Or to put a kid up for adoption? I think it's fair to say that in some instances it's unethical and in others it's necessary. It depends on the situation.
duped wrote:I do wonder about disentangling intelligence and sentience, as well as all those other human aspects such as emotion and the various irrational traits in all of us. I tend to think they are all tied up together. Your example of Helen Keller, and the physical body and senses, can all be part of this. Can life be created without them? At what point would a simulation become something we had no moral or ethical right to stop?
Morality and ethics are, I believe, human constructs to begin with, put together by agreement among large groups of humans to facilitate cooperation, so that the whole community/society can survive and thus increase each individual's chances of surviving as long as possible. Animals are said to have no idea of morals or ethics as humans know them (i.e. animals have no religion and do not pray, which most humans argue is necessary for knowing morality and ethical behavior), yet they too survive and cooperate (which suggests that ethics and morality really don't enter into it).

What it boils down to is self-preservation. Anything that is consciously experiencing an existence it does not want to stop experiencing will do anything, at all costs, to continue that experience. I.e.: survive. The alligator doesn't care whether the human wants to survive or not; it's survival of the fittest. The alligator may lose (if enough humans are around to help). Same with sentient computers. In the end, whoever survives wins, so to speak. There will be some assisting the survival (as some assist in protecting animals) and some hindering it (humans go to war with each other, for example). I think that is a universal law that cannot be easily changed (I could very well be wrong here, I'm just guessing).
duped wrote:I am old enough to remember the first Battlestar Galactica in the 70s. Those Cylons scared the hell out of me. They were a nice example of pure evil and played perfectly on human fear. How about the garbage cans on office chairs as Daleks in Doctor Who?
I never got into Doctor Who. I don't know why, but I never could "get" British shows. I also don't recall Doctor Who ever airing on whatever TV service we could afford (be it cable, when we could afford it, or over-the-air). I have seen Red Dwarf but didn't get it. I was a fan of The Hitchhiker's Guide to the Galaxy, though; I didn't "get" it either, but the robots and space stuff kept me interested.

As for Battlestar Galactica, I loved the 70s version of the Cylons (and still do to this day)! They never scared me at all (and I was, I think, 12 or something). Ever since a very young age I have loved robots and computers, even though back then there weren't any real ones you could have in the home (my parents got our first actual computer when I was 14). I started soldering Radio Shack kits at age 7! I was a HUGE (original) Star Trek fan back then. Mr. Spock was always my favorite, as I really liked his idea of logic. I also would frustrate adults by using logic, and they would be all befuddled at this kid that seemed far smarter than themselves! :lol: I blame/credit too much Star Trek for that one. :mrgreen: :ugeek: You could say I was "born into" or made for technology. Who knows if there are past lives, but if there are, then maybe I was a machine somewhere in the universe that got shut off? Anyway, enough craziness... on to the topic at hand...
duped wrote:So let me re-iterate a question posed above for discussion: do you think it is actually possible to disentangle intelligence from sentience from emotion ...
That depends on who is trying to do the disentangling. Many humans would not be able to. Many other people (humans, trans-humans, machines, cyborgs, etc.) probably could. Actually, I think virtually any sentient being can do the disentanglement if they think it through well enough. But some probably get too confused and give up, or don't want to bother trying, or don't want to because of fear based on the dogma taught to them during their upbringing. Another thought is that humans may not be the only species capable of these things. Animals are capable of them as well; I don't see why anything else couldn't be either. For eons, humans thought only they were the center of the universe and thus only they could think, be intelligent, or be self-aware. But over the years science has been showing that animals also have these capabilities. We need to define what these things really are, and also look at them from many angles, such as metaphysics, quantum mechanics/physics, and even the psychology of anthropomorphism (which can be a big factor in some human perceptions of what is "alive").
duped wrote:and would we recognise anything we made that did not show some aspect of all these characteristics?
Sometimes humans don't. Look at women and children in the most patriarchal communities: they are pretty much thought of as unintelligent property. Slaves in the past were also thought of as not really intelligent. All of these were (pro)created by humans.

I have made one rather disturbing observation, and I really do hope I have just observed the wrong things and am way off. Many times, people will create something and then destroy it, as if it's their right to, or simply because they are powerful enough to. In other words: the creator always destroys the creation. If someone creates something they deem too "scary", they destroy it. One good example is Flappy Bird. It went viral, but the creator just couldn't take the riff-raff from the idiots venting to him that they couldn't play it (yet somehow people were addicted to it), and so he pulled it off the market. (Did he ever put it back on? Anyone know?) Some would even suppress inventions (such as stem cell research or cloning) due to "ethical" concerns.

So at this stage in human development, I think humans are not quite ready for a sentient machine. If one is accidentally (or even purposely) created, I think that as soon as it's "born", its life is in danger from the very species that created it.

