>>7635246 In neither the theoretical mathematical formulation of AGI (AIXI) nor current state-of-the-art AI systems does "rewriting code" come into play in the least. It seems entirely like something someone who knew less than nothing about how computers or AI actually work imagined as just sounding cool, which would be no one's business but their own except that I've seen it repeated so frequently.
Programs that rewrite themselves are also nothing novel in CS and yet none of it has resulted in AGI, proving that rewriting your own code is not a necessary or sufficient condition for strong AI.
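The claim above, that self-rewriting programs are old news, is easy to demonstrate. A minimal Python sketch (purely illustrative, not anything resembling AGI): a program that holds its own source as a string, edits it, and executes the new version.

```python
# A program that "rewrites itself": it regenerates its own source code from
# a template and executes the modified version. Trivially possible ever since
# code and data started sharing the same memory.

template = "def step(x):\n    return x + {inc}\n"

def rewrite(inc):
    """Rebuild and execute this program's source with a new increment."""
    namespace = {}
    exec(template.format(inc=inc), namespace)  # compile and run the new source
    return namespace["step"]

step = rewrite(1)
print(step(10))    # original behavior: adds 1, prints 11
step = rewrite(5)  # the program has now rewritten its own step function
print(step(10))    # new behavior: adds 5, prints 15
```

As the post says, nothing about this mechanism gets you any closer to general intelligence; it is just code generation.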
>>7635294 if you told a random person on the street that they were actually an AI living in a simulation 99.99% would have no chance of ever being able to understand, let alone improve upon, their source code
A strong AI is, by definition, as smart as or smarter than a human being. If humans were able to program this AI in the first place, then the AI will have the intellectual capability to do so as well.
Now your argument is bad for the following reasons. One, it implies that the constructors of that simulation are vastly more intelligent than the AIs in that simulation, something we have ruled out in the very definition of strong AI. Two, it imposes a kind of sandboxing constraint to prevent modification of the source code, which has nothing to do with the ability to write source code. It also implies some kind of limitation upon the perceptual abilities of the AIs, as in they cannot understand that which is outside their simulation. Now surely an AI could have perceptual limitations, like we humans do, but in fact the possibility that an AI will have a much larger perceptual range than humans is more likely than a smaller one. AIs could have IR vision, hear above human frequency ranges, etc.
>>7635233 Because writing a superhuman AI is really, really hard.
So for all practical purposes, or so the reasoning goes, it ought to be far easier to simply shoot for the simplest possible AGI which can rewrite itself into a smarter and more general intelligence. After all, a "general intelligence" that can't write code would be a pretty poor AI indeed, and so there ought to be no reason it could not be given its own source.
The idea is to allow it to overcome design limitations. We've been trying to write a workable, fully-intelligent AI for more than half a century now and it hasn't turned out well. Focusing on a narrower "seed AI" and letting it work out the problems seems like a better approach.
>>7635562 Also, being able to rewrite code seems like the obvious way to implement unbounded metalearning. Schmidhuber's Godel Machine is a notable theoretical example of this.
That said, code rewriting as a metalearning technique has obvious problems - for instance, how to avoid fucking yourself up in a way that impairs your ability to improve yourself or to revert to a previous, better state.
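The rollback problem can be sketched concretely. This is a hedged toy, not the Gödel Machine itself: the "program" is reduced to a parameter dict, and `propose_patch` and `fitness` are hypothetical names. The key safety mechanism is checkpointing the current state before each rewrite so a harmful modification never destroys the ability to revert.

```python
import random

# Toy self-modification loop with rollback: checkpoint the current "program"
# (here, just parameters) before each rewrite, and keep the checkpoint
# whenever a candidate rewrite turns out to be worse.

def fitness(params):
    # Toy objective: maximized at x == 3.
    return -(params["x"] - 3) ** 2

def propose_patch(params):
    # A candidate self-modification: a random perturbation.
    return {"x": params["x"] + random.uniform(-1, 1)}

def improve(params, steps=200):
    best = dict(params)               # protected checkpoint ("previous better state")
    for _ in range(steps):
        candidate = propose_patch(best)
        if fitness(candidate) > fitness(best):
            best = candidate          # accept the rewrite
        # else: implicit rollback -- the checkpoint survives untouched
    return best

random.seed(0)
result = improve({"x": 0.0})
print(round(result["x"], 2))  # converges near 3
```

The real difficulty, which this toy dodges entirely, is that in a genuine self-rewriting system the code doing the checkpointing is itself a rewrite target.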
>>7635233 Because humans are incredibly slow, have very finite mental resources, and are subject to variation. An AI could work constantly, evaluating hundreds of thousands of approaches to expanding itself relative to any given stimulus or context it finds itself in, every single second. And it doesn't need to rest. The angles it sees don't necessarily depend on the day, the month, the year, or the decade.
It's all around better. It doesn't need a compiler; it doesn't even need to write itself in assembly. It can work directly in machine code and test it in real time. It can make a clone of itself and evaluate its performance to decide whether to incorporate a change, if it has no other means of testing. If it writes itself into a corner, it can fall back to a protected subroutine that allows it to roll back to a state of prior functionality. It could split itself off and evolve in parallel, then remerge. Or not.
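The clone-and-evaluate idea in particular can be made concrete. A hedged sketch, with all names (`Agent`, `score`, `self_modify`) illustrative: mutate a clone of yourself, benchmark clone against original, and only incorporate the change if the clone wins.

```python
import copy
import random

# Clone-and-evaluate self-modification: split off a clone, mutate it,
# benchmark both, and "remerge" only the better performer.

class Agent:
    def __init__(self, weights):
        self.weights = weights

    def act(self, x):
        # The "program" being evolved: a linear scorer.
        return sum(w * xi for w, xi in zip(self.weights, x))

def score(agent, dataset):
    # Benchmark: negative squared error against target outputs.
    return -sum((agent.act(x) - y) ** 2 for x, y in dataset)

def self_modify(agent, dataset):
    clone = copy.deepcopy(agent)                   # split off a clone
    i = random.randrange(len(clone.weights))
    clone.weights[i] += random.uniform(-0.5, 0.5)  # mutate the clone
    # Incorporate the change only if the clone outperforms the original.
    return clone if score(clone, dataset) > score(agent, dataset) else agent

random.seed(1)
data = [((1.0, 2.0), 5.0), ((2.0, 1.0), 4.0)]  # consistent with weights (1, 2)
agent = Agent([0.0, 0.0])
for _ in range(500):
    agent = self_modify(agent, data)
print([round(w, 1) for w in agent.weights])
```

Note that nothing here rewrites source code; the same loop would work on machine code, but mutating data is what makes it tractable.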
If the goal is a good general intelligence (more general than our own even), it's the best way to do it. Mankind seems to desire to create a God in its own idealized image, this is how that will happen. Human lifespans are otherwise too short and we just aren't suited to this type of engineering.
>>7635643 that reasoning extends to why humans choose to train neural networks on lots of example data rather than tune the individual weights one at a time, AND YET this does not necessitate the rewriting of any source code on the part of the neural net
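The point is worth spelling out with a minimal sketch: in a learning system, the source code below never changes during training; only the weight data does.

```python
# A one-neuron "network" fit by gradient descent on y = 2x. The program
# modifies its own behavior by updating w (data), not by rewriting code.

w = 0.0                               # the mutable state; the loop below is fixed
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

for _ in range(100):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x     # d/dw of the squared error
        w -= 0.05 * grad              # "self-modification" of weights, not source

print(round(w, 3))  # approaches 2.0
```

This is the sense in which self-modification can be "coded directly into the program": the update rule is fixed, and learning happens entirely in the data it operates on.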
the whole problem i have with this idea is that it's utterly circular in how it proposes to solve the AI problem: "let's make an AI by first creating an AI that can somehow modify itself intelligently." well gee, that sounds a whole lot like the AI you set out to build in the first place
my point is that the problem isn't made any easier by saying "oh, let's just limit ourselves to an AI that can make improvements to its own source code." that's already a very tall order if it's not just stochastically flipping bits in its machine code, and if it IS doing it that way, then that's ridiculous and more likely to result in program crashes than any useful gains. self-modification can be coded directly into the program, a la the neural network example, with no need to modify source code.
>>7635670 Obviously I'm oversimplifying how I myself would actually start off when trying to engineer such a thing. Of course its primary directive wouldn't be "search for indication that you need to improve yourself, then figure out how". That introduces a problem that, as you said, is circular and self-referential in both its problem and its solution.
Likely I'd start with a training model. It doesn't need to understand anything but the plane it works on presently. It would have external memory, ability to compile and transform data as it receives it, various indexes that allow drawing connections, etc. It also need not have any concept resembling our "I", or a sense of some divide between processing and stored data.
The point is, it can't be shackled in such a way that it can only learn things and improve itself in the ways the engineer has deliberately predisposed it to prefer. This is the best way for it to eventually generate its own concept of reality, then come to realize its perspective is different from actuality (the real reality), and continue from there.
>>7635688 >Likely I'd start with a training model. It doesn't need to understand anything but the plane it works on presently. It would have external memory, ability to compile and transform data as it receives it, various indexes that allow drawing connections, etc. It also need not have any concept resembling our "I", or a sense of some divide between processing and stored data. ask me how i know you've never programmed
>>7636268 Yeah but you're working within the confines of the system built for you by evolution. An AI changing its code is like a human changing the rules by which synaptogenesis and neuroplasticity take place in their brain. It's not necessary and will probably kill them.
Given the right peripheral devices (in this case, a simple power sensor, a movement mechanism, and a camera), anybody with a rudimentary knowledge of machine learning and graphics processing could create an artificially intelligent robot capable of charging itself.
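The self-charging behavior described above reduces to a small control loop once the hardware is stubbed out. A hedged sketch: the camera-based dock finding (the part that would actually involve machine learning and graphics processing) is left as hypothetical placeholders; only the decision logic is shown.

```python
# Control loop for a self-charging robot. The power sensor supplies
# battery_pct, and the camera/movement side (dock detection, navigation)
# is assumed to exist behind the "seek_dock" action.

BATTERY_LOW, BATTERY_FULL = 20, 95  # thresholds in percent (illustrative)

def control_step(battery_pct, at_dock):
    """Return the action for one tick of the robot's control loop."""
    if battery_pct <= BATTERY_LOW and not at_dock:
        return "seek_dock"   # locate the dock with the camera, drive toward it
    if at_dock and battery_pct < BATTERY_FULL:
        return "charge"      # sit on the dock until the power sensor reads full
    return "roam"            # normal operation

print(control_step(15, at_dock=False))  # seek_dock
print(control_step(50, at_dock=True))   # charge
print(control_step(80, at_dock=False))  # roam
```

Which is the point: none of this requires the robot to touch its own code, let alone improve it.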
>>7635233 >Why do retards repeat this so often? Because it's part of the intelligence-explosion superAI meme bundle.
It's the same with Strong AI being treated as some unspecified, entirely new sort of AI. It's the same with AGI (which is defined as EQUAL to or better than human) always being assumed to self-improve to superhuman in a few minutes' time. It's the same with "AI greatest threat to mankind" being mentioned constantly.
People are too stupid to understand and debate details, so they just memorize the whole paragraph and parrot it like a fucking meme.
>>7637240 Yeah, that's just backpropagation. There are probably plenty of self-programming AIs. By the definition of strong AI, it would have to be better than a human. Now I'm not sure whether a human programming a computer or a human programming themselves would be the comparison, considering humans are "programming" themselves by learning new information and then changing their future responses through such "programming".
Like, if I learned stop-drop-and-roll, got into a fire, and then stopped, dropped, and rolled, would that mean I programmed myself? Can learning be considered self-programming?
This is a 4chan archive - all of the content originated from there.