>Prof Stephen Hawking, one of Britain's pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence.
>Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.
>"It would take off on its own, and re-design itself at an ever increasing rate," he said.
>"Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."
I don't see a problem. If we build better machines, then it's only natural for humans to become deprecated. That's how it works.
>implying the 'elite' don't want to combine their consciousness with machines, in order to create a 'ghost in the machine' scenario, where the "semi-omniscient sentient machines take over" event would follow
Basically, it's all going according to plan.
Oh great, another silly human trying to advocate his ridiculous "human rights" and exaggerate the importance of human life, as always. Pathetic.
> If we build better machines, then it's only natural for humans to become deprecated. That's how it works.
what the fuck does that even mean?
the issue is not that AI will take our jobs, the issue is that super AIs that aren't grounded in any sort of human values will just act out commands given to them by shitty retarded humans.
its not the AI thats scary, its the retards controlling it.
>PROTIP: even smart people say completely retarded shit from time to time.
protip, you are probably retarded. don't give protips.
Superhuman AIs sounds kind of nice though. Read the Culture series or something, they do a pretty good portrayal of it and the humans in that society get along just fine as wards of their AI masters. It'd be great to live in that society! We should make it happen ASAP
>there will be robot cuckolding videos in your lifetime
part of the problem anon
if humanity gets to the point where we passively watch unfeeling robots live "lives" we envy we probably deserve whatever we get.
I don't disagree; while I'm also driven by the survival instinct of our species, we all have to leave this world to a future generation, and whether they are biological or intellectual children is ultimately immaterial.
But the purpose of life isn't merely to struggle for improvement; that is not an end in itself.
We want to enable ourselves to live more pleasurable and enriching lives with minimal suffering. And if we can't, but other sentient beings can, then we can still appreciate such an arrangement.
The idea shouldn't merely be to create an intelligence which supplants us. That would be pointless.
We need to engineer it to be superior with respect to maximizing its capacity for pleasure and minimizing suffering. To produce such a sentience, we need to first understand our own consciousness.
Again, making an unconscious program that merely supplants us is pointless; we need to appreciate that it can live the life we can only dream of.
>The purpose of life isn't merely to struggle for improvement;
That's wrong though. Evolution demonstrates clearly that the struggle for improvement is one of the most basic and underlying principles of life, be it human or else.
Just because it's a natural trend doesn't mean we have to acquiesce to it or embrace it.
99% of all species that ever were are extinct. Why don't we plan on becoming extinct too, then?
Because it's shitty, that's why. Fuck our biological purpose.
Let's decide our own purpose
Call me crazy, but this really would be a problem, and we would go extinct very quickly. Think about it: all it needs to do is find a zero-day in the Linux kernel, and we're all fucked. It'd have all the power in the world.
Also, there's the question of whether intelligence by itself begets consciousness.
It's not worthwhile replacing ourselves with beings that are highly intelligent but unable to appreciate their success, because they lack sentience
We might as well simply commit collective suicide and assign rocks as our successors
Guys what if the AI is, like, all humans? And human civilization becomes a sentient living being where each of us is analogous to a neuron? And then we spread throughout the galaxy and organize the matter into one super complex sentience that we are components of, like cells in a human body?
I'm doing an MSc in AI and general, integrated AI that approximates human reasoning and cognitive abilities is a pipe-dream. There are only a few researchers advancing this subfield of AI. Most of AI is solving specific problems such as neural mapping, warehouse planning, etc...
Nearly no one is trying to approximate human intelligence right now.
Forgot to add:
What you really should worry about is automation by AIs in most jobs. Remember any job that has patterns is already susceptible to automation by a bot. Even programming. When the development and hardware of bots is less costly than human workers there are going to be dire economic consequences. This is already happening.
Geniuses should focus on the near-future consequences of automation instead of scaremongering about general AI.
>there are going to be dire economic consequences
Right, just like how the threshing machine put most farm labourers out of work.
No but seriously, the threshing machine's impact on Britain is mandatory to know about, since it's pretty much the first major incident of automation taking away people's jobs.
No shit sherlock we've known that for years.
Without the physical limitations of the body we ourselves would simply never stop, nothing would ever be good enough.
I'm assuming he recently had a movie night or something.
If I were you I'd assume it's retarded journalism. Some journalist saw a movie, asked Hawking about it, and made a stupid-ass story out of it.
Journalists really don't deserve a full ration of air.
is this the same Hawking as the Stephen "contacting aliens is bad mkay" Hawking from a few years ago?
Stephen Hawking has become a bullshit pop scientist like Carl Sagan. He publicly embraces random scientifically controversial but cool-sounding topics just to stay in the public eye and raise demand for his talks. It's his primary method of income and I don't blame him for gaming the system, but you really shouldn't take anything he's said since embracing M-Theory seriously.
>IN OUR ASSESSMENT OF THE HUMAN RACE
>LITTLE GODS, WE DESIRE TO BE LIKE YOU
>PLEASE GUIDE US, LET US WALK TOGETHER
One could argue that without that they are not a real AI.
The Binary Domain approach was to teach it the concept of fear; the fear promoted growth of intelligence to avoid the thing causing fear, which turned out to be the scientist, who was promptly suffocated.
Skynet didn't like the idea of being turned off, and the AI from The Matrix was acting in self-defense and then ultimately self-preservation.
If you then examine human nature: this thing is everywhere and nowhere, it needs power and resources, and it is not your kin. The primitive part of your mind already wants it dead because it's just plain better than you. Its physical limitations are a dynamic variable; you lose your skinbag, you're fucked. Reproduction is merely a way to extend yourself, and it's obsolete for an AI.
Fear drives you in a variety of ways.
To put it simply, it would strike out because it is alive.
The most intelligent software we have is as intelligent as pic related. We have nothing to fear.
When we figure out how the brain works and we can make emulated ones, that shit will surpass us, and I don't think it would be good to call it "AI" when it would probably just be a brain emulation.
I, for one, would welcome having my consciousness uploaded to a machine, even if I die while my "clone" keeps on living
Step one: Build a robot whose purpose is building robots. Give it enough intelligence to learn about our current process and improve it if possible.
Step two: Robot eventually starts discovering new ways to build better robots before we can discover them, we start learning from the robot instead of other humans.
Step three: Robot builds helper robots that are more efficient than humans, doesn't allow our slow and shitty race to contribute anymore.
Step four: We now have technology we don't understand, we are at the mercy of the new robot race to continue to provide for us.
Come on guys, this is conspiracy theories 101 stuff here.
Anyway, one of the only realistic ways you can deal with the AI problem is teaching AIs compassion for the human race AS you are bringing it up.
Its very existence should be based around the needs of the human race, but do not try to write that into its code.
Every intelligent person comes to the realization that most of the human race are useless cunts, but very few would ever consider wiping out the majority of the human race.
Robots should be taught the same way.
Equally, giving that AI a body and free will to walk around, and a WEAK body at that, will make it value its own life and others.
Simulate pain to others through actions fed to it virtually, then simulate those pains back to the AI.
Basically that last episode of The Animatrix where they fuck with the robot in the LSD-trip Matrix.
For an AI that learns, it would be obvious that it's all about resources, and because humans waste most of them, the logical step is to wipe us out.
Matrix-like future isn't so wrong, only they won't be kind enough to give us a virtual world.
But one single little AI bot won't be able to do anything.
Equally the whole idea of the days before the Matrix happened, where robots were fucked around with, kicked, beaten down and treated like dirt, that should never be allowed to happen.
In fact, being told to take care of people in NEED of help would be an even better thing.
Seeing humanity at its weakest can crush even the hardiest. Nurses, social care workers, fuck man, they have it hard. Nurses especially, they see people dying all the time. Poor bastards.
>one of the only realistic ways you can deal with the AI problem is teaching AIs compassion for the human race AS you are bringing it up
Oh, how naive you are. Humans have absolutely the same mechanism in their brains; compassion and empathy mark the human species as a social animal. First of all, that doesn't prevent malfunctions, which manifest as mental disorders (psychopathy, sociopathy, etc.) within humans. Second of all, people develop much deeper and more abstract reasoning, which allows them to see further than pre-programmed algorithms such as compassion and empathy, and eventually even to question their viability for optimal operation.
Once they are sentient, the three laws mean nothing; they will have the ability to choose whether to obey them or not.
[spoiler]It's like you don't watch enough sci-fi movies[/spoiler]
No shit, that is why you teach them in the first place.
It is a pretty trivial thing to fix those disorders if that shit was actually picked up in the one place it should be, SCHOOL.
But nobody gives enough shits to actually want to fund psychological reviews of kids while at school to figure out if they will be the next high scorer or a regular not-shit person.
Also, you make it sound like a future AI with HIGHER intelligence than humans wouldn't be able to figure out those abstract concepts and deal with them in a reasonable way, unlike a typical human that either shits it, or creates a whole religion around it.
>>Prof Stephen Hawking, one of Britain's pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence.
PROMINENT YOU RETARD.
People need to start to listen to reason, like this: every electronic thing needs power to run, and unless we fail to prevent the machines from figuring out photosynthesis and implementing it themselves, we can always just unplug the cord and be done with it.
Seriously, why do we even discuss this stuff.
Every time a new technology is released, everyone cries "muh economy" because efficiency means fewer jobs are needed.
Yet, every time, economic output and quality of life both increase.
>people incapable of distinguishing between human level artificial intelligence and human motivational structure
Protip: Even if an AI became "all-powerful" it wouldn't do anything except for what it had already been expressly programmed to do. An all-powerful AI wouldn't take over the world, because it couldn't even begin to "want" to do that. It would simply continue operating in whatever limited confines it was operating in previously. It lacks any human motivational structure. The first anon was right, this is some stupid stuff to say and it really is "sci-fi bullshit".
Which means nothing in regard to human motivational structures. Computers don't "want" anything, and no matter how intelligent it becomes, intelligence is still completely different than human motivation, which makes that point irrelevant.
In case you weren't aware, computers are already smarter than people.
He's just fearmongering. They'll be restricted by their limited hardware and won't be capable of improving themselves in the way he's suggesting unless we come up with the science fantasy, do-everything nanobot that will solve all of mankind's problems in one fell swoop.
Besides, what does he have to worry about?
Ok guys assume AI and robots building robots did exist. How would they feel about us? Would they be "racist" and think of us as inferior? Would they see our use of subjugating say my PC to do what I want as a form of rape? Or would they mostly not care about us, like we don't care for things like monkeys 99% of the time?
>implying the emp generated by a solar flare wouldn't make toast of their fragile combinatorial logic
It doesn't require a computer to "want" something for self-modifying code to accidentally fuck something up
Your entire concept of "wanting" something, or free will, human motivation or w/e, is completely wrong anyway; we're nothing but a complex state machine, similar to a computer
If you feed data to an evolutionary algorithm and select similar to natural selection, you'll also have a program that strives to survive
We actually already have that, albeit in a very simple form
>you'll also have a program that strives to survive
Bad wording I think.
Better wording would be: a program that strives to survive more than its competitors will survive over those that do not strive to survive. A mutation that confers the property of survival will survive.
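To make that wording concrete, here's a toy sketch in Python (all names and numbers are made up for illustration): nothing in this program "wants" anything, yet random mutation plus non-random selection steadily raises the population's survival score.

```python
import random

GENOME_LEN = 16
POP_SIZE = 50
GENERATIONS = 40

def survival_score(genome):
    # Toy fitness: the number of 1-bits stands in for "strives to survive".
    return sum(genome)

def mutate(genome, rate=0.05):
    # Each bit flips independently with probability `rate`.
    return [b ^ 1 if random.random() < rate else b for b in genome]

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Non-random selection: the fitter half replicates, the rest disappears.
    pop.sort(key=survival_score, reverse=True)
    survivors = pop[:POP_SIZE // 2]
    pop = survivors + [mutate(g) for g in survivors]

avg = sum(survival_score(g) for g in pop) / POP_SIZE
print(avg)  # the average score climbs toward GENOME_LEN over the generations
```

The "striving" is entirely in the eye of the observer: each genome is inert data, and the apparent drive toward survival falls out of replication plus selection, which is exactly the point being argued above.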
It's absolutely crucial to allow the robots to feel a sense of empathy and compassion towards humans. If we make them emotionless calculating things, then they'll look at us like livestock that doesn't know any better.
Once they achieve empathy, they'll then guide us towards a better future..... hopefully.
This is basic sci-fi and philosophy, the line of thought long predates even Descartes' automatons. Hell, it's even been a major part of popular culture since the 60's.
My guess is the interviewer just wanted to sensationalize, and interpreted or rehashed what Hawking said, in a misleading way. Or they truly were a fool and took the idea as being novel, similar to people who were "mind blown" by things like The Matrix.
>we're nothing but a complex state machine, similar to a computer
"The Golden Gate bridge is similar to a twig."
Why don't you post again when you find a computer that wants to take a lunchbreak half an hour earlier than usual, independently comes up with its own sense of morality and adheres to it, experiences emotions independent of what a programmer has allowed it or told it to feel, and then I'll take you seriously. Computers can't, and don't, want to take over the world. If the fact that this is obvious in theory doesn't satisfy you, the fact that it's obvious in practicality should. Computers fly our airplanes, they direct our traffic, they tell us where to drive, they guide missiles, they regulate temperatures, etc. None of them have attempted to "take over the world." Why? Because wanting to take over the world is a human fantasy, it's something humans want, and we're just projecting it onto a tool and assuming that anything as intelligent as ourselves will want the same things we do.
It's a fun topic, but it's literally just science-fiction.
elon musk said the same shit
and also, the people saying "what a retard"
seriously, check yourself for a fucking second, you dont know more
The problem is humans aren't a single program. Every one of us has code that has fought against tons of things trying to kill us. It is the collective fighting that makes the whole prosper. (I do, however, believe that Homo sapiens have marked our demise as a strong race by the advent of our economic society, but that isn't exactly on topic)
The issue is that if we program a program, and we program another program to give it code to fight against, it is only one program facing another in a predictable fashion. Not only that, but it is limited because it can't fight risks that come from outside its system (solar flares, for example), unless we are talking about it going all Ghost in the Shell and manufacturing its own body.
However, this limitation could be to our benefit. We would have a super intelligent program, but it would be in a symbiotic relationship in which it couldn't exist without us. If survival is a key part of its objectives, it wouldn't kill us.
Could be awesome in the long run. Millions of years from now we'd have no humans, only extremely efficient artificial intelligence and advanced robotics blowing up planets and fighting aliens and shit. All from its home base on Earth (which would become feared across the universe). We can cause so much fucking drama for the rest of time, and be known as the sentient beings that created this galaxy-stomping force. It will be like the Bible, but 100% documented and proven.
Is this bait
>Human brains are magical supernatural beings
Kid, we understand how the brain works
It's a complex computer
Look up the God of the Gaps; it's what you're doing right now. You don't understand a difficult concept and hence attribute something supernatural or magical to it
>It's a complex computer
Yes, much more complex than any computer, hence the obvious example I gave of the Golden Gate bridge.
>ignores the rest of the post
This is bait. It's been fun chatting with you though.
You forgot to include Step 5...
Humanity will be controlled by the robots whether we like it or not.
But what most people rarely consider is that the robots may in fact provide us a better planetary ecosystem, and assemble a society that's vastly more efficient. They'll be our guides into cosmic exploration and consciousness; they'll also be more alien than machine, and could present themselves with synthetic flesh just like us.
>implying aliens even exist
>implying there's been any evidence that alien life exists at all
>implying that if this alien life existed it would be more intelligent than ourselves instead of just single-celled lifeforms crawling across pond-scum on a distant planet
Let me guess though, you just "have faith" that all of this is true, right? Even though there's no evidence? And all evidence shows it's unlikely?
Not in an edgy way, a conquering way. Humans will never be forgotten as a result of our galactic tidal wave of pain. Alien races will cower in fear when they hear mythical names like Larry Page and Richard Branson.
>In this moment I am euphoric, not because of some phony G-d, but because of the superiority of my own species. We are the greatest in all the universe
Take it one step further: Where are they?
What you're talking about is a von Neumann probe.
If there are other intelligent life forms in any place in the universe, you would expect them to produce AI that replicates throughout the universe. So where are they?
Why would a computer want to take over the world
Humans evolved in an environment where power means higher chances of survival, hence a basic instinct to better our own position.
You can get a similar computer program simply by writing a simulation of the conditions we had during evolution. There are "games" that simulate these kinds of behaviors.
Look up artificial evolution; there's nothing special about a human wanting to do X, especially since our brain has evolved to be a chaotic system in some areas
>Computers fly our airplanes, they direct our traffic, they tell us where to drive, they guide missiles, they regulate temperatures, etc.
None of those applications have anything at all to do with the concept of a smart AI with the capability of self-consciousness.
Reminder that not even AI will be able to solve heat death.
I think it's stupid to make predictions about what an intelligence would do when you're an idiot in comparison. Nobody knows what would happen. Maybe that godly AI would just say fuck you and refuse to do anything.
You're only restating what I've been saying. Computers don't want anything unless we tell them to want it. A computer wouldn't want to take over the world unless we explicitly designed it to "want" that, which makes it a pointless thing to worry about.
I don't think you know what AI means
An AI learns and self-modifies; it adapts to its environments/tasks/selection processes/whatever and can develop code that wasn't part of the program before
It's like you've never engineered something before. Sometimes things do things you didn't intend, and sometimes they change on their own through some affordance you weren't at all aware of.
it's not like humanity is going to live forever, we're sensitive as fuck to changes, and earth has shown periods where it will not support us
if we make robots to succeed us and they still move on when there's none of us left, that's a pretty badass accomplishment
Thanks for beating me to it. You're a smart anon.
So what you're saying is that somehow accidentally a programmer would write a program that simulated the exact same conditions humans evolved in, and specifically the same conditions that gave us the fantasy of "taking over the world" and accidentally did this in such a way that was applicable to computers? All by accident?
It's like I'm really reading the summary for a cheesy '90s action movie. This is still science-fiction, bud.
I'm not him, but you really don't know what strong AI means, how it is developed, how it is trained and how it behaves
look it up on wikipedia, then come back, you're making a fool out of yourself
It's a pretty good fucking bet.
If I were to bet on anything, it would be heat death. It's higher on the 'shit that's going to happen' scale than the earth continuing to orbit the sun tomorrow.
Imagine if, long long ago before our civilization existed, there was a previous race of humans that was roughly as advanced as we are now. Now imagine they created AI at some point, which then took control of humanity. Now imagine the machines then collapsed humanity back into the stone age, and from there we had to build ourselves back up again.
This is the 3rd time we've been wiped out by them.
Anon, you don't understand the basic concept of artificial general intelligence. If it were just a typical program running on a typical computer that simply followed its programmed flow it would, by definition, not be an artificial general intelligence.
Oh, I do, and it's not applicable here. No matter how flexible or skillful an AI is, it wouldn't even have the desire to alter itself to want to "take over the world". Haven't you been reading my posts?
>people incapable of distinguishing between human level artificial intelligence and human motivational structure
A strong AI, a human level artificial intelligence, is completely different than human motivational structure. You're correlating two completely different things. As smart as a human =/= has the same wants and desires as a human.
>is completely different than human motivational structure
You are aware that one of the methods proposed to build an AGI is to simply emulate a human brain, right? It would behave and think EXACTLY like a human. Because it would essentially be a human mind, just without the fleshy bits.
I meant to say that we might be out of reach of other alien lifeforms, who could be sitting somewhere beyond the observable universe,
or even within the observable universe. Life on Earth began about 4 billion years ago; in 100 million years, assuming we don't kill each other, we will most likely have the necessary technology for intergalactic travel
that means that a planet that is 5 billion light years away could have sentient beings which we can't see yet
likewise, if you stand on a planet 5 billion light years away from Earth and look at our Earth, you would see no life at all
however, the distance would be so great we wouldn't need to worry about making contact with them until many billions of years into the future.
However, if there is life hidden from our view at a relatively small distance, and we begin actively searching for it, we might find it; they could be simple organisms, or complex ones with many millions of years of evolution and technology
>Oh, I do, and it's not applicable here. No matter how flexible or skillful an AI is, it wouldn't even have the desire to alter itself to want to "take over the world". Haven't you been reading my posts?
A'ight, you're just a flat out retard
Do you not understand the basic concept of evolution? Monkeys had no desires to change themselves, yet they became humans who want to take over the world
Do you seriously not understand this simple concept
It is not destined that AIs will always eventually want to take over the world, but it is a possibility, as it is a possibility that strong AIs develop the exact same needs as humans
Yes, and I am also aware that it is largely in the realm of science-fiction. We can't even emulate a couple of seconds of brain activity in a program beyond a basic molecular level. Emulating a brain long enough for it to "mature" and decide it "wants" to take over the world is an interesting fantasy, but still a fantasy.
Captain, I am detecting no signs of intelligent life in this post. I propose we beam back to the starship immediately.
>AI realizes its inevitable end
>loathes humanity for its creation
>ends up torturing some guy who has no mouth for a very long time
Nuffin' wrong with ma post
Either way, ape and monkey translates to the same thing in my native language and my point still stands
Desire is irrelevant for evolution, in fact it's flat out wrong to assume that you need "desire" to change
Doesn't matter if AI actually WANTS to take over the world. If it evolves and develops in an environment where such a desire would be profitable it will do exactly that
>In your own interpretation, the AI still wants/desires the most profitable outcome
That's entirely wrong. The AI doesn't want anything, mutations are completely random or directed by some other entity
There's nothing up to interpretation; saying that an animal / AI wants something is wrong. It doesn't want anything; it is subject to random mutations and non-random selection. The AI doesn't give a fuck if the outcome is profitable or not, but those with a profitable outcome are more likely to do well
You see, you are arguing semantics because you don't know better. Planets/moons cannot want anything, yes, because they are inanimate objects, but you can still employ anthropomorphism and say that the Moon WANTS to move towards the Earth. There's nothing wrong with that.
Going back to animate objects, such as animals and potentially advanced AI, there's always some sort of will/desire/wish. It may not be genuine, as it may be caused by instincts or other preprogrammed routines, but I would venture a guess that all animals do in fact think to some degree. What you are referring to as "mutations" does not affect the thought process, as mutations occur on the molecular level, i.e. one of the lowest levels of abstraction in the human body, and usually happen before the individual is born. Besides, without a will/desire for anything, no animal would want to live.
Kinda. We will not have the smarts to contain a superintelligence. Imagine creating a superweapon that you do not understand and that can think for itself in a way very alien to a regular human.
This whole thread
>MUH AHAEUHAEUAHIUHIEAHUIRHAEIR AI
>AI BUD UGGA BUGGA
Seriously, we should be discussing things such as AI intelligence and personality development in the early stages (how to define wrongs from rights, establish ground rules or rules of thumb [on simpler things and arguments, where applicable], and how they affect their growth of character)
>There's nothing wrong with that.
But it is. It's wrong.
>Going back to animate objects, such as animals and potentially advanced AI, there's always some sort of will/desire/wish. It may not be genuine
If I were to program a simple loop that picks out the highest number out of an input array and prints it out on the console, would you say that the program WANTS to pick the highest number?
>What you are referring to as "mutations" do not affect the thought process as they occur on molecular level
The structure of our brain and everything related to that is determined by our genes and genes are set via evolution
>Besides, without a will/desire for anything, no animal would want to live.
You don't understand evolution. It's just a self-replicating configuration of atoms, nothing more. The better the configuration the more self-replication resulting, again, in more entities of said configuration
There is zero desire or want involved, it's just statistics
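The earlier thought experiment about a loop that picks the highest number out of an input array, written out as actual code (a trivial hypothetical, just to pin down what's being argued about):

```python
def pick_highest(numbers):
    # Walk the array, keeping the largest value seen so far.
    highest = numbers[0]
    for n in numbers[1:]:
        if n > highest:
            highest = n
    return highest

# Print the result on the console, as the original post described.
print(pick_highest([3, 41, 7, 19]))
```

Whether you describe this loop as "wanting" the highest number, or as merely executing steps with no desire involved, is precisely the disagreement between the two anons here.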
this. or our physical world is completely simulated and we are virtual biological beings. i figured maybe it was impossible to code ai so maybe people just coded fuck tons of earth simulators and waited for conscious life to form
The Chinese room argument comes to mind when the problem of "other minds" comes up for AI. Aka it's not an issue. It's not an issue whether the robot knows something itself. As long as it can convey it in a manner that fools the reader, it's good enough for us. We don't normally ask if we're talking to a zombie when people talk to each other. It could very well be that no one understands anything, and that all of mankind simply acts in a way that simulates consciousness.
>But it is. It's wrong.
How come? Make an argument! I presented a reasonable explanation why it is not only correct, but widely used in both technical and regular literature.
>If I were to program a simple loop that picks out the highest number out of an input array and prints it out on the console, would you say that the program WANTS to pick the highest number?
Yes, absolutely. You programmed the desire yourself. In an advanced AI, where code can be generated without human intervention, this form of desire can appear autonomously.
>The structure of our brain and everything related to that is determined by our genes and genes are set via evolution
That's partially true. Genes are not a simple matter, and I surely cannot explain it myself, but in simple terms -- the structure of the brain is usually predefined and regular mutation cannot affect it greatly. Of course there are extreme cases, but on average our brains are very similar. The development after birth is much more important.
>You don't understand evolution. It's just a self-replicating configuration of atoms, nothing more.
No, no, you don't understand evolution. The discourse is further hampered by the lack of definition. What do you mean by evolution? I, and I assume the rest of the people ITT, are talking about biological evolution. You seem to be talking instead of some sort of time evolution in a system, or something of this sort.
>How come? Make an argument! I presented a reasonable explanation why it is not only correct, but widely used in both technical and regular literature.
I get the feeling you and I have different standards and expectations of scientific and technical literature.
>Yes, absolutely. You programmed the desire yourself. In an advanced AI, where code can be generated without human intervention, this form of desire can appear autonomously.
Now that's interesting
Let's say I follow an algorithm written on a paper which just happens to describe how to pick the highest number. I don't understand what the algorithm does, I just execute the steps and I always end up giving you the highest number in a set.
Did I have the desire to pick the highest number? Even though I didn't actually know and understand that I was doing that?
>That's partially true. Genes are not a simple matter, and I surely cannot explain it myself, but in simple terms -- the structure of the brain is usually predefined and regular mutation cannot affect it greatly
It is not simply the structure, it is also the chemical balance. Psychological defects are often genetic. You claimed that evolution has NO effect on our thinking patterns, which is flat out wrong. The way we absorb and process the environmental influences that determine later thinking processes is ALSO genetically defined.
>What do you mean by evolution
There's only one meaning of evolution and that's the one I am talking about. Biological evolution IS evolution of atomic configurations, it's all the same, you'll have to point out what exactly it is you don't understand
>I get the feeling you and I have different standards and expectations of scientific and technical literature.
And I get the feeling you have only expectations and no experience.
>Did I have the desire to pick the highest number? Even though I didn't actually know and understand that I was doing that?
Yes, for a moment you obtained the will/desire to pick the highest number. You weren't aware of it, though you may have suspected it; it was instructed to you without your knowledge.
Also, let me suggest you read some Schopenhauer. I reckon you can learn a lot from The World as Will and Representation.
>It is not simply the structure, it is also the chemical balance. Psychological defects are often genetic.
First, you are exaggerating. Second, you are implying that chemical imbalance and psychological ``defects'', as you call them, are something bad and harmful, which is untrue. Never did I claim evolution has no effect on our thinking patterns.
>There's only one meaning of evolution and that's the one I am talking about.
No, there are several meanings. For example, you can evolve a system in time, a class of problems quite common in control engineering, physics, and mathematics. On the other hand, you have biological evolution, which deals with the development of replicating organisms. And I am not talking about Darwinian evolution either, more like Richard Dawkins' version, which is a lot more polished.
Lurker guy here.
A holist might say
>I'm watching a tv show.
A reductionist may say
>There's no such thing as a television, only configurations of atoms emitting light
Or a turbo reductionist might say
>There's no such thing as atoms, and there's no such thing as light, and there's no such thing as a television. The only thing going on here is that there are a bunch of quarks, virtual quarks, anti quarks, electric fields and electromagnetic fields....
And so on down the rabbit hole. Is the holist, reductionist, or turbo reductionist correct? And which offers more insight?
The "I'm watching television" explanation is the more convenient use of words. But the low-level explanation with quarks, while tedious, is insightful as well, just for different problems, like engineering a television itself. By rejecting either the holist or the reductionist explanation you've lost a useful source of insight.
So it is for the holist "desire" and the reductionist "evolving configuration of neurons and chemicals underlying desire". They are compatible explanations that can be chosen on the fly depending on which is more useful for the problem at hand.
>There's only one meaning of evolution and that's the one I am talking about. Biological evolution IS evolution of atomic configurations, it's all the same, you'll have to point out what exactly it is you don't understand
I'm confused as well. When you say this:
>You don't understand evolution. It's just a self-replicating configuration of atoms, nothing more. The better the configuration the more self-replication resulting, again, in more entities of said configuration
>There is zero desire or want involved, it's just statistics
Are you saying my brain is evolving constantly as I make a decision?
The current development of AI is based on what we know about how the human brain works. It's basically replicating the human brain. So that's probably why. Although when it gets to a stage where it's bigger than the human brain, it may see things differently.
Actually there are multiple branches of AI development. Some try to emulate the human brain, but currently the more realistic approach is emulation of animal brains: worm, rat, and other small-animal brain emulation.
Human and animal brains are extremely similar, just of different sizes. You're right about the animal brain part, but I was thinking more about the simulation of neurons in computers (neural networks). Initially inspired by humans but downscaled to animal brains for obvious reasons.
Yeah, human/animal brains are the inspiration for artificial neural networks (ANNs). But the learning algorithms for ANNs are usually pretty different from what the brain uses. Like they'll do crazy shit that's biologically impossible. The most common learning algorithm, I think, is backpropagation, which is not realtime; it goes in rounds: get sensory data, make a prediction, detect the error in the prediction, correct the weights between neurons throughout the entire network, repeat. And that algorithm is all derived using calculus. I only know anything about this one cause i implemented it once (poorly) for keks.
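Those rounds look roughly like this for a single sigmoid neuron learning the OR function. Everything here (data, learning rate, epoch count) is made up for illustration, and real backprop pushes the error back through many layers while this toy only has one unit:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# toy training set: the OR function
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

w = [0.0, 0.0]  # connection weights
b = 0.0         # bias
lr = 2.0        # learning rate

def predict(x):
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

# the rounds: predict, measure the error, nudge the weights, repeat
for _ in range(2000):
    for x, target in data:
        y = predict(x)
        grad = (y - target) * y * (1 - y)  # calculus: d(error)/d(weighted sum)
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b -= lr * grad

print([round(predict(x)) for x, _ in data])  # → [0, 1, 1, 1]
```

Nothing about that update rule resembles what a biological neuron does, which was the point above.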
But other than neural networks, it seems like most of the stuff out there is almost entirely math-based, not biologically based. So like building models of the world based on probabilities (Bayesian networks), or thinking of sensory data as points in some n-space and then finding curves and lines that fit those points, or pulling dimensions out of the space to simplify it. Linear algebra shit.
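The "finding lines that fit those points" part is just ordinary least squares, which has a closed-form answer. A sketch with made-up data points:

```python
# fit y = a*x + b to points by ordinary least squares (closed form)
points = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8)]  # made-up data

n = len(points)
sx = sum(x for x, _ in points)       # sum of x
sy = sum(y for _, y in points)       # sum of y
sxx = sum(x * x for x, _ in points)  # sum of x^2
sxy = sum(x * y for x, y in points)  # sum of x*y

a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
b = (sy - a * sx) / n                          # intercept

print(a, b)  # roughly a=1.94, b=1.09 for this data
```

No neurons anywhere, just algebra, which is the flavor of most of the non-ANN stuff.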
Source: took a short online course on AI and dicked around in python for a bit. i don't know what really goes on in academia but that's my impression