
0 votes
300 views
in Technique by (71.8m points)

Python TFLearn machine learning: optimizer, loss, and parameters

After fixing my code and preparing my data for training, I've found myself facing two questions.

Background: My data consists of a date (one entry per minute) in the first column and a congestion value (between 0 and 200) in the second. My goal is to feed it to my neural network so that it can predict the congestion at each minute of the following week (my dataset has more than 10M entries, so a lack of training data shouldn't be a problem).
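For concreteness, here is a minimal sketch of one way to frame such data as supervised training pairs. The `make_windows` helper, the 60-minute window, and the scaling to [0, 1] are illustrative assumptions, not details from the original setup.

```python
import numpy as np

# Hypothetical helper: turn a 1-D minute-level congestion series into
# (window -> next value) training pairs for a regression network.
def make_windows(series, window=60):
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])   # the past `window` minutes
        y.append(series[i + window])     # the minute to predict
    return np.array(X), np.array(y)

# Example with synthetic data: congestion scaled from [0, 200] to [0, 1],
# which usually helps training stability.
congestion = np.random.randint(0, 201, size=10000) / 200.0
X, y = make_windows(congestion)
print(X.shape, y.shape)  # (9940, 60) (9940,)
```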

Problem: I now have two questions. The first is about the loss, the optimizer, and the activation. There seem to be quite a few of each, and each has a domain where it outperforms the others; which would you recommend for this project? (In my current tests I use Adam as the optimizer, mean_square as the loss, and linear as the activation.)
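As a point of reference, a minimal TFLearn sketch of that setup (Adam optimizer, mean_square loss, linear output activation) might look like the following; the layer sizes and the 60-minute input window are assumptions for illustration only.

```python
import tflearn

# Feed-forward regression net: 60 past minutes in, next minute out.
net = tflearn.input_data(shape=[None, 60])
net = tflearn.fully_connected(net, 64, activation='relu')
net = tflearn.fully_connected(net, 1, activation='linear')  # linear output
net = tflearn.regression(net, optimizer='adam', loss='mean_square',
                         learning_rate=0.001)

model = tflearn.DNN(net)
# X: (n_samples, 60) windows, y: (n_samples,) next-minute targets
# model.fit(X, y.reshape(-1, 1), n_epoch=10, batch_size=128, show_metric=True)
```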

My second question is more of an error I'm running into (it may be linked to using the wrong loss/optimizer). Training on 10,000 samples for now, I get an accuracy of 0, a low loss (0.00X), and bad predictions (not even close to reality). Do you have any idea where this could come from?
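One way to inspect that symptom, assuming a trained `model` and the windowed `X`, `y` from the sketches above, is to compare raw predictions against targets directly instead of relying on the reported accuracy:

```python
import numpy as np

# Look at a few predictions side by side with the targets, plus a
# scale-aware error measure (mean absolute error in scaled units).
preds = np.array(model.predict(X[:5])).ravel()
print(np.c_[preds, y[:5]])

mae = np.mean(np.abs(np.array(model.predict(X)).ravel() - y))
print('MAE:', mae)
```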


1 Answer

0 votes
by (71.8m points)

What you are trying to do is called time series prediction (given data at times t-n, t-(n-1), ..., t-1, predict the state at time t), and it is generally a task for a recurrent neural network. There is a great blog post by Andrej Karpathy about the topic that you should have a look at.
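For illustration, a recurrent version of the model in TFLearn could look roughly like the sketch below; the LSTM size and the 60-step window are assumed values, not recommendations.

```python
import tflearn

# LSTM over the last 60 minutes, predicting the next congestion value.
net = tflearn.input_data(shape=[None, 60, 1])   # (batch, timesteps, features)
net = tflearn.lstm(net, 128)                    # summarize the sequence
net = tflearn.fully_connected(net, 1, activation='linear')
net = tflearn.regression(net, optimizer='adam', loss='mean_square')

model = tflearn.DNN(net)
# Windowed inputs need a trailing feature axis:
# model.fit(X.reshape(-1, 60, 1), y.reshape(-1, 1), n_epoch=10, batch_size=128)
```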

About your two questions:

  1. This is hard to answer, since the choice of optimizer depends heavily on the input data. Generally speaking, the network will converge no matter which optimizer you use; the time it takes to converge will differ, however. Adaptive learning-rate methods such as Adagrad, Adadelta, and Adam tend to converge slightly faster (switching between them is a one-argument change, as shown in the sketch after this list). Here is a good write-up of the different optimizers.

  2. Basic neural networks (MLPs) don't do well at time series prediction, which would explain the low accuracy. However, I don't know why the loss would be that close to zero.
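As a practical aside on both points: in TFLearn the optimizer is a single argument to the regression layer, and the metric it reports defaults to accuracy, which is only meaningful for classification; on a real-valued regression target an accuracy of 0 is unsurprising, and a metric such as R2 is more informative. A minimal sketch, with assumed layer sizes:

```python
import tflearn

net = tflearn.input_data(shape=[None, 60])
net = tflearn.fully_connected(net, 64, activation='relu')
net = tflearn.fully_connected(net, 1, activation='linear')
# Swap 'adagrad' for 'adadelta' or 'adam' to compare convergence speed;
# R2 is a regression-appropriate metric, unlike the default accuracy.
net = tflearn.regression(net, optimizer='adagrad', loss='mean_square',
                         metric=tflearn.metrics.R2(), learning_rate=0.01)
model = tflearn.DNN(net)
```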

