What to feed to the input of the neural network? Your ideas...
Thanks for the idea.
Unfortunately, ticks are hard to work with; I work with opening prices.
I made 60% per day on a tick chart.
Tell me the rules of trading
I read Bill Williams' book "Masters of the Markets". If volume increases on rising bars, you should buy; if volume decreases on rising bars, you should sell. The logic for falling bars is the mirror image. A pin bar with a shadow at the bottom is a buy signal. A small bar with a small shadow indicates the end of the trend. You can look at tick volume; it is built into the standard indicators in MetaTrader.
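For illustration, a minimal sketch of how those bar/volume rules could be coded; the function name and the plain-list inputs are my own assumptions, not anything from the book:

```python
# Minimal sketch of the bar/volume rules described above (my own naming,
# not code from the book). Inputs are plain lists of closes and tick
# volumes, oldest first; only the last two completed bars are compared.

def bar_volume_signal(close, volume):
    """Return 'buy' or 'sell' bias for the latest bar, or None."""
    if len(close) < 2 or len(volume) < 2:
        return None
    rising_bar = close[-1] > close[-2]
    rising_volume = volume[-1] > volume[-2]
    if rising_bar:
        # effort confirms the move -> buy; effort fades -> sell
        return "buy" if rising_volume else "sell"
    # mirror logic for falling bars
    return "sell" if rising_volume else "buy"

# Example: the last bar rose and volume grew -> buy bias.
print(bar_volume_signal([1.10, 1.12, 1.15], [900, 1100, 1400]))  # buy
```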
Okay, thanks.
The red should describe the green in such a way that one looks for a global maximum of utility.
Well, of course it should! But how do we do it better? :)
You can take standard metrics, such as Accuracy or Precision, or other metrics describing the quality of classification.
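For concreteness, this is all those two metrics amount to in code (scikit-learn is my choice here, not something named in the thread):

```python
# Accuracy = share of correct predictions; Precision = share of
# predicted positives that are actually positive.
from sklearn.metrics import accuracy_score, precision_score

y_true = [1, 0, 1, 1, 0, 1]   # actual classes, e.g. next bar up/down
y_pred = [1, 0, 0, 1, 1, 1]   # classifier output

print(accuracy_score(y_true, y_pred))    # 0.666... (4 of 6 correct)
print(precision_score(y_true, y_pred))   # 0.75 (3 of 4 predicted 1s are 1)
```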
And what exactly counts as "standard", what do we compare against?
"Standard" methods here rather means well-known ones; I don't have the list at hand right now, but they fall into essentially three categories:
1. Enumeration: adding/removing predictors one at a time or in groups, i.e. relying on the resulting score (a sketch of this approach follows below).
2. Analysing tree models for the frequency and usage of predictors in them.
3. Statistical evaluation of mutual distributions, with exclusion of correlated features, and other statistical estimates of utility.
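A minimal sketch of the first category, greedy forward selection; the tree model as evaluator and all names here are my assumptions for illustration:

```python
# Category 1 (enumeration): greedy forward selection -- add predictors
# one at a time, keeping each addition only if it improves the
# validation score. X_* are 2-D numpy feature arrays, y_* class labels.
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def forward_select(X_train, y_train, X_val, y_val):
    chosen, best_score = [], 0.0
    remaining = set(range(X_train.shape[1]))
    improved = True
    while improved and remaining:
        improved = False
        for j in sorted(remaining):
            cols = chosen + [j]
            model = DecisionTreeClassifier(max_depth=3, random_state=0)
            model.fit(X_train[:, cols], y_train)
            score = accuracy_score(y_val, model.predict(X_val[:, cols]))
            if score > best_score:       # candidate improves the metric
                best_score, best_j, improved = score, j, True
        if improved:                     # keep the single best addition
            chosen.append(best_j)
            remaining.discard(best_j)
    return chosen, best_score
```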
Let's imagine that we already have a lot of predictors that we can't wait to feed to the input of the NS. But the computer may choke on the volume of incoming data, and we don't have millions of dollars for supercomputers. What to do in this case: which optimisation algorithm is best suited to the task of selecting the most useful predictors, what kind of FF could be invented for it, and will it be more efficient than the standard methods from the economic point of view? This is a question that never ceases to worry the poor :))))) What are your thoughts?
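One way to make the FF question concrete: a hypothetical fitness of my own construction that rewards validation accuracy but penalises every switched-on predictor, so the compute cost enters the FF directly. The penalty weight is an arbitrary assumption:

```python
# Hypothetical FF over a binary predictor mask: validation accuracy of a
# simple tree model minus a per-predictor penalty (crude compute-cost proxy).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def fitness(mask, X_train, y_train, X_val, y_val, penalty=0.005):
    cols = np.flatnonzero(mask)          # indices of switched-on predictors
    if cols.size == 0:
        return 0.0                       # an empty set predicts nothing
    model = DecisionTreeClassifier(max_depth=3, random_state=0)
    model.fit(X_train[:, cols], y_train)
    acc = accuracy_score(y_val, model.predict(X_val[:, cols]))
    return acc - penalty * cols.size     # trade accuracy against cost
```

Any optimisation algorithm that works on bit strings can then maximise this fitness; whether it beats the standard methods economically is exactly the open question.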
1. It is better when the global maximum of the system-efficiency FF is single and stationary. It will look like a stable, solid island of surface amid a sea of waves (well, or waves of sea). Robust parameters imply similar system performance on new data. If there are no such stable islands, there are at least two possibilities: either the system has no robust parameters at all, or the FF as a whole (or one or more of the metrics included in it) was chosen inconsistently with the process. What does "inconsistent with the process" mean? For example, a rocket is launched into space, and one of the vehicle's metrics measures the dynamics of the change in the percentage of cormorant chicks hatching over the last 100 years. How does this metric affect the success of the launch? Not at all; such a metric only dilutes the overall metrics of the launch process.
2. How does using an FF differ from the "standard" methods?
1. That's right, 100% is good. That's what we need to find out: are there islands or only oceans!
2. By algorithm? But I would rather answer differently: by "efficiency" and by the speed of finding a good solution.
1. That's easy to check: in a sliding window (sketched after these two points), the FF of non-robust parameters will look like waves, and that of robust ones like a stable island.
2. It's hard to say about speed, but as for efficiency, there is no difference: in both cases everything comes down to the FF. What differs is where there is a higher level of control over each component of the system.
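A sketch of the check from point 1, assuming the per-window FF is simply the mean return; the metric, window and step sizes are placeholders of mine:

```python
# Sliding-window robustness check: evaluate the FF of a fixed parameter
# set in overlapping windows. A flat, high series is the "island"; a
# strongly oscillating one is the "waves".
import numpy as np

def sliding_ff(returns, window=200, step=50):
    """Placeholder FF: mean return per window."""
    return np.array([np.mean(returns[s:s + window])
                     for s in range(0, len(returns) - window + 1, step)])

# Example on synthetic data: the standard deviation across windows
# measures how "wavy" the FF surface is for this parameter set.
returns = np.random.default_rng(0).normal(0.0005, 0.01, 2000)
series = sliding_ff(returns)
print(series.std())
```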
1. Let's synchronise our watches! We have a set of predictors; our task is to select the effective ones, i.e. those that improve our chosen metric, let it be Accuracy. We take a subset, build a simple tree model, and evaluate its classification performance on a validation sample; that is done for each agent. And so on: we get the result and send the agents off to explore new coordinates (see the sketch after this post).
In the end we have a set of binary variables: an on/off switch for each predictor.
How do we plot the waves here, and at what point?
2. Efficiency per number of iterations / time spent. Plus, I described three broadly different methods earlier; it would be interesting to do a full comparison between them.
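A sketch of the agent scheme, with plain random search standing in for whatever optimisation algorithm is actually chosen; `ff` could be the penalised fitness sketched earlier, and the call counter speaks to point 2 (efficiency per number of FF evaluations):

```python
# Each agent is a binary on/off mask over the predictors. Random search
# is only a baseline stand-in; the FF-call counter is what a comparison
# of methods by "efficiency per iteration" would be built on.
import numpy as np

def random_search(ff, n_features, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    best_mask, best_score, calls = None, -np.inf, 0
    for _ in range(n_iters):
        mask = rng.integers(0, 2, size=n_features)   # one agent's switches
        score = ff(mask)                             # one FF evaluation
        calls += 1
        if score > best_score:
            best_mask, best_score = mask, score
    return best_mask, best_score, calls
```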