Py neurolab: a detailed guide to the neurolab Python library (introduction, installation, and usage)
Contents
Introduction to neurolab
Installing neurolab
How to use neurolab
Introduction to neurolab
neurolab is a simple yet powerful neural network library for Python. It contains basic neural network and training algorithms, together with a flexible framework for creating and exploring other neural network types. NeuroLab is a library of basic neural network algorithms with flexible network configurations and learning algorithms for Python. To simplify its use, the interface is modeled on the Neural Network Toolbox (NNT) of MATLAB (c). The library is based on the NumPy package (http://numpy.scipy.org), and some of its learning algorithms use scipy.optimize.
Installing neurolab
pip install neurolab
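After installation, a quick import check confirms the package is available. This is a minimal sketch; the __version__ attribute is assumed to be present in the installed release and is not mentioned in the original article:
>>> import neurolab as nl   # importing without an error means the install succeeded
>>> nl.__version__          # assumed version attribute; printing it is optional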
Supported neural network types:
- Single layer perceptron
  - create function: neurolab.net.newp()
  - example of use: newp (a minimal sketch follows this list)
  - default train function: neurolab.train.train_delta()
  - support train functions: train_gd, train_gda, train_gdm, train_gdx, train_rprop, train_bfgs, train_cg
- Multilayer feed forward perceptron
  - create function: neurolab.net.newff()
  - example of use: newff
  - default train function: neurolab.train.train_gdx()
  - support train functions: train_gd, train_gda, train_gdm, train_rprop, train_bfgs, train_cg
- Competing layer (Kohonen Layer)
  - create function: neurolab.net.newc()
  - example of use: newc
  - default train function: neurolab.train.train_cwta()
  - support train functions: train_wta
- Learning Vector Quantization (LVQ)
  - create function: neurolab.net.newlvq()
  - example of use: newlvq
  - default train function: neurolab.train.train_lvq()
- Elman Recurrent network
  - create function: neurolab.net.newelm()
  - example of use: newelm
  - default train function: neurolab.train.train_gdx()
  - support train functions: train_gd, train_gda, train_gdm, train_rprop, train_bfgs, train_cg
- Hopfield Recurrent network
  - create function: neurolab.net.newhop()
  - example of use: newhop
- Hemming Recurrent network
  - create function: neurolab.net.newhem()
  - example of use: newhem
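As a quick illustration of the first entry in the list above, here is a minimal single-layer perceptron sketch built with neurolab.net.newp(). The logical-OR training data and the epochs/show/lr values are illustrative choices, not taken from the original article:
>>> import neurolab as nl
>>> # Truth table of logical OR as train samples (illustrative data)
>>> input = [[0, 0], [0, 1], [1, 0], [1, 1]]
>>> target = [[0], [1], [1], [1]]
>>> # Single layer perceptron: 2 inputs in range [0, 1], 1 output neuron
>>> net = nl.net.newp([[0, 1], [0, 1]], 1)
>>> # Train with the default delta rule (train_delta)
>>> error = net.train(input, target, epochs=100, show=10, lr=0.1)
>>> # Simulate the trained network on one sample
>>> net.sim([[1, 0]])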
Features:
- Pure python + numpy
- API like Neural Network Toolbox (NNT) from MATLAB
- Interface to use train algorithms from scipy.optimize
- Flexible network configurations and learning algorithms. You may change: train, error, initialization and activation functions (see the sketch after this list)
- Unlimited number of neural layers and number of neurons in layers
- Variety of supported types of Artificial Neural Network and learning algorithms
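To illustrate that flexibility, the following sketch swaps the training and error functions of a feed-forward network. nl.train.train_bfgs and nl.error.MSE come from neurolab's train and error modules; the [[-1, 1], [-1, 1]] input ranges and the [5, 1] layer sizes are illustrative values, not from the original article:
>>> import neurolab as nl
>>> net = nl.net.newff([[-1, 1], [-1, 1]], [5, 1])
>>> # Replace the default training function (train_gdx) with BFGS via scipy.optimize
>>> net.trainf = nl.train.train_bfgs
>>> # Use mean squared error instead of the default error function
>>> net.errorf = nl.error.MSE()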
How to use neurolab
>>> import numpy as np
>>> import neurolab as nl
>>> # Create train samples
>>> input = np.random.uniform(-0.5, 0.5, (10, 2))
>>> target = (input[:, 0] + input[:, 1]).reshape(10, 1)
>>> # Create network with 2 inputs, 5 neurons in the hidden layer and 1 neuron in the output layer
>>> net = nl.net.newff([[-0.5, 0.5], [-0.5, 0.5]], [5, 1])
>>> # Train process
>>> err = net.train(input, target, show=15)
Epoch: 15; Error: 0.150308402918;
Epoch: 30; Error: 0.072265865089;
Epoch: 45; Error: 0.016931355131;
The goal of learning is reached
>>> # Test
>>> net.sim([[0.2, 0.1]]) # 0.2 + 0.1
array([[ 0.28757596]])
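The list returned by net.train() holds the training error recorded at each epoch, so the learning curve can be plotted directly. This is a minimal sketch using matplotlib and is not part of the original example:
>>> import matplotlib.pyplot as plt
>>> plt.plot(err)            # err is the per-epoch error list returned by net.train()
>>> plt.xlabel('Epoch')
>>> plt.ylabel('Train error')
>>> plt.show()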