That was interesting...

Open discussion about any topic, as long as you abide by the rules of course!
andyman
Posts: 11198
Joined: Wed Feb 09, 2005 8:20 pm

That was interesting...

Post by andyman »

[youtube]qv6UVOQ0F44[/youtube]

any resident programmers care to chime in? This is awesome
Whiskey 7
Posts: 9709
Joined: Sat Jul 21, 2001 7:00 am

Re: That was interesting...

Post by Whiskey 7 »

SKYNET is born :paranoid:
[color=#FFBF00]Physicist [/color][color=#FF4000]of[/color] [color=#0000FF]Q3W[/color]
seremtan
Posts: 36012
Joined: Wed Nov 19, 2003 8:00 am

Re: That was interesting...

Post by seremtan »

andyman wrote:[youtube]qv6UVOQ0F44[/youtube]
for non-flash users
Doombrain
Posts: 23227
Joined: Sat Aug 12, 2000 7:00 am

Re: That was interesting...

Post by Doombrain »

Cracking.
PhoeniX
Posts: 4067
Joined: Fri Aug 04, 2000 7:00 am

Re: That was interesting...

Post by PhoeniX »

I'm a programmer, and have no real idea how this works. Very awesome though. :D

I covered neural networks for a module whilst at uni and they were pretty mind-blowing then. As it happens, in the next few weeks my company are sending me to work with a PhD student at a local university to develop a neural network image recognition system. I'm going to have to do some serious improvisation to make it look like I understand things.
Eraser
Posts: 19174
Joined: Fri Dec 01, 2000 8:00 am

Re: That was interesting...

Post by Eraser »

PhoeniX wrote:I'm going to have to do some serious improvisation to make it look like I understand things.
Image
Eraser
Posts: 19174
Joined: Fri Dec 01, 2000 8:00 am

Re: That was interesting...

Post by Eraser »

PhoeniX wrote:I'm a programmer, and have no real idea how this works. Very awesome though. :D
Same here btw.
Sounds like an awful lot of brute forcing is involved here though. Just try things and see how they work out. It's almost as if, by endlessly doing random things and keeping track of which of them have a positive result, you eventually stumble upon the desired end result by accident. When put like that, it doesn't seem all that impressive anymore.

Also, where I think the comparison with evolution is a bit wonky is that evolution has no predefined goal. Evolution can't think and go "oh, this random mutation has a more positive effect so I'll keep that and discard the other random thing". Maybe I'm wrong there, as I'm not a biologist or anything, but it sounds to me like it's an important difference.
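Sketched in code, that "keep the random things that work" loop is just random search with a fitness score. Everything here is a made-up toy (the action names, the fitness function), not what the video actually runs:

```python
import random

ACTIONS = ["left", "right", "jump"]

def fitness(actions):
    # toy stand-in for "how far right did Mario get":
    # simply count the presses of right
    return actions.count("right")

best, best_score = None, -1
for _ in range(1000):              # endless random trials, in miniature
    candidate = [random.choice(ACTIONS) for _ in range(20)]
    score = fitness(candidate)
    if score > best_score:         # keep only what improved the result
        best, best_score = candidate, score
```

Dumb as it looks, after enough trials the best sequence ends up mostly "right" - which is roughly the accidental stumbling described above.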
PhoeniX
Posts: 4067
Joined: Fri Aug 04, 2000 7:00 am

Re: That was interesting...

Post by PhoeniX »

No I suppose not, but ultimately I gather that's how you learn to do things. Consider someone who's never played Mario before and has no idea how it works, but is just told their only aim is to move Mario to the right and survive as long as possible - I imagine they'd learn in a similar way. They'd eventually figure out that holding right gets them moving to the right, but sooner or later they'd hit things and die, and they'd then realise they need to avoid those obstacles to extend their life.

With some neural networks you train them using labelled data - e.g. neural networks which perform OCR. You'd show them hundreds or thousands of photos of the letters A-Z written by lots of people in lots of different handwriting styles. The network then starts to learn the subtle characteristics of, say, the letter 'a', and when shown an image it's never seen before it's able to output a confidence of whether it thinks it's an 'a' or not.
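To make that concrete, here's a toy version of labelled-data training: a single logistic "neuron" telling apart crude 3x3 "letters", which stand in for real handwriting photos. All the data and numbers are invented for illustration:

```python
import math
import random

# Toy "images": 3x3 letters as flat bit vectors (hypothetical data,
# standing in for thousands of real handwriting samples)
A = [0, 1, 0, 1, 1, 1, 1, 0, 1]   # crude 'A'
L = [1, 0, 0, 1, 0, 0, 1, 1, 1]   # crude 'L'

def noisy(img):
    # flip one random pixel to imitate handwriting variation
    out = img[:]
    i = random.randrange(len(out))
    out[i] = 1 - out[i]
    return out

# labelled training set: (image, 1 if 'A' else 0)
data = [(noisy(A), 1) for _ in range(200)] + [(noisy(L), 0) for _ in range(200)]

w = [0.0] * 9
b = 0.0

def confidence(img):
    # logistic output: how confident the model is that img is an 'A'
    z = sum(wi * xi for wi, xi in zip(w, img)) + b
    return 1 / (1 + math.exp(-z))

# simple gradient-descent training on the labelled examples
for _ in range(50):
    for img, label in data:
        err = confidence(img) - label
        for i in range(9):
            w[i] -= 0.1 * err * img[i]
        b -= 0.1 * err
```

After training, `confidence(A)` comes out high and `confidence(L)` low, even though neither clean letter was ever shown during training - only the noisy variants were.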

The goal of evolution is survival, isn't it? People who survive longer have more children, and their genes/knowledge become more dominant in the gene pool, producing longer-living people. Obviously it just takes a much longer time compared to Mario.
Eraser
Posts: 19174
Joined: Fri Dec 01, 2000 8:00 am

Re: That was interesting...

Post by Eraser »

Right now it looks like it ties information it has learned to hard points in that specific map. Would be really cool if it could use learned info across maps. For instance, it might learn how to navigate certain block setups, or eventually figure out how to kill enemies. It could learn how to complete the entire game in a much more dynamic way.

That also just shows how incredible our brains are. Give a person who has never played Mario a controller and I'm sure they'd figure stuff out a lot faster than this neural network. It takes only a single run into a single enemy to learn to avoid all of those enemies, and perhaps even other types of enemy, because the player now knows there are harmful things in this world. The appearance of a dynamic, moving object might actually tell the player whether it's a good item or a bad enemy. For a computer, learning that takes a lot longer.
mrd
Posts: 4289
Joined: Sat Mar 25, 2000 8:00 am

Re: That was interesting...

Post by mrd »

PhoeniX wrote:The goal of evolution is survival isn't it? People who survive longer have more children and their genes/knowledge become more dominant in the gene pool, producing longer living people. Obviously it just takes a much longer time compared to Mario.
Don't really think evolution has a goal, strictly speaking. The goal of the human race may be survival, but evolution on its own is just adaptation to environmental pressure. That adaptation just happens to involve living. I guess it's kind of a grey area... where do you draw the line?
losCHUNK
Posts: 16019
Joined: Thu May 09, 2002 7:00 am

Re: That was interesting...

Post by losCHUNK »

Clever shit, think it's a neural network with a genetic algorithm.

There are only so many inputs, so you'll have to assign a value to each input, with multiple generations (loops) trying to find an optimal fitness.

So I think basically random values (inputs) are generated in the 1st generation and assigned a fitness depending on game progression (output); the fitter the value, the higher the likelihood of it being selected for the following generation. So when it learned to move, a random value was created for X in a node and that node's fitness was increased and carried forward to the next generation; on further progression in the next generation, the node's fitness was increased again. Eventually it figured out that jumping avoids enemies etc. by assigning new node values and calculating fitness through trial and error.

There's mutation between nodes involved to find the optimum solution; that's what gets the algorithm out of the stagnation in his little generation graph when it has settled on a suboptimal fitness.

I'm wondering if it can actually see the enemy or whether it just knows when to jump. I might sit down with a pen and paper and do this after some sleeps :)
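For anyone reaching for that pen and paper, the generation loop described above - random first generation, fitness-proportional selection, mutation to escape stagnation - looks roughly like this. The fitness function and all the numbers are invented; nothing here is taken from the actual video:

```python
import random

ACTIONS = ["right", "jump", "left", "none"]
GENOME_LEN = 30
POP = 40

def fitness(genome):
    # toy stand-in for "game progression": reward moving right,
    # with a bonus for jumping at a hypothetical enemy position
    score = genome.count("right")
    if genome[10] == "jump":       # pretend there's an enemy at step 10
        score += 5
    return score

def select(pop, scores):
    # fitness-proportional ("roulette wheel") selection:
    # the fitter a genome, the likelier it seeds the next generation
    return random.choices(pop, weights=scores, k=1)[0]

def mutate(genome, rate=0.05):
    # random mutation keeps the search from stagnating on a
    # suboptimal fitness
    return [random.choice(ACTIONS) if random.random() < rate else a
            for a in genome]

# 1st generation: pure random values
pop = [[random.choice(ACTIONS) for _ in range(GENOME_LEN)]
       for _ in range(POP)]

for gen in range(60):
    scores = [fitness(g) for g in pop]
    pop = [mutate(select(pop, scores)) for _ in range(POP)]

best = max(pop, key=fitness)
```

After a few dozen generations the best genome is mostly "right" presses with a jump where the pretend enemy sits - trial and error plus selection, exactly as described.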
[color=red] . : [/color][size=85] You knows you knows [/size]