Neural models in virtual environments
This project investigates neural network modeling by comparing different neural network architectures in the context of behavior modeling and reinforcement learning. Several architectures, including densely connected, recurrent (RNN), and convolutional (CNN) networks, are analyzed in terms of their parameters and performance within the "snake game". The experiments indicate that the more biologically detailed models, such as spiking neural networks and liquid state machines, demand more computation and are harder to tune, yet show greater stability on some metrics, revealing training dynamics that contrast with those of better-known networks such as RNNs and CNNs. Despite these costs, the biologically detailed models performed satisfactorily on the tasks, achieving high scores and producing strategies that balanced conservative and exploitative behaviors. We conclude that this comparative analysis deepens the verification and understanding of neural network training, complementing the standard metrics used to evaluate artificial neural networks, such as total loss or entropy.
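As one illustration of the entropy metric mentioned above, the following is a minimal sketch (not taken from the project's code) of how policy entropy can be computed from a network's action logits, assuming a four-action snake policy; high entropy corresponds to exploratory behavior and low entropy to exploitative behavior:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over action logits
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def policy_entropy(logits):
    # Shannon entropy of the action distribution, in nats
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum())

# Near-uniform logits: an exploratory policy with entropy near ln(4)
print(policy_entropy(np.array([0.1, 0.0, 0.1, 0.0])))

# Strongly peaked logits: an exploitative policy with entropy near 0
print(policy_entropy(np.array([8.0, 0.0, 0.0, 0.0])))
```

Tracking this quantity over training, alongside the total loss, is one way to observe the conservative/exploitative balance the abstract refers to.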