Machine learning in trading: theory, models, practice and algo-trading - page 3066

 
library(mt5R)                  # bridge to the MetaTrader 5 terminal
library(PerformanceAnalytics)
library(TTR)
library(quantmod)

# 10 000 daily bars (iTF = 1440 minutes) of EBAY, returned as an xts object
symb <- MT5.GetSymbol(sSymbol = "EBAY", iTF = 1440, xts = T, iRows = 10000)

rsi    <- RSI(symb$Close, n = 14)        # 14-period RSI
signal <- ifelse(rsi < 30, 1, 0)         # go long when oversold
trade  <- Lag(signal)                    # act on the next bar, no look-ahead
ret    <- dailyReturn(symb) * trade      # strategy daily returns

names(ret) <- "Ebay"
charts.PerformanceSummary(ret)           # equity curve, drawdowns, daily returns

Eleven lines, from obtaining real-time data to creating and testing a trading system.


 
Женя #:

It depends on how you organise the features and targets. For example, if you use returns or any popular indicators with different windows to predict ZZ, you will easily get 60-90% accuracy even on a random walk (SB). And with the right features and targets, 60% accuracy is the grail: it gives an annual SR above 5, depending on trade frequency.

Accuracy and error rates are irrelevant in our business. The 50/50 that matters is about the balance line, i.e. the earnings.
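A minimal sketch of the effect described in the quote above (purely synthetic data, nothing from the thread): take a pure random walk, build a ZZ-style target from one smoothing window and a "feature" from another, and the nominal accuracy comes out far above 50% even though there is nothing to trade.

set.seed(1)
price   <- cumsum(rnorm(5000))                                          # pure random walk ("SB")
target  <- sign(diff(stats::filter(price, rep(1/50, 50), sides = 2)))   # direction of a centred 50-bar smooth, a crude ZZ stand-in
feature <- sign(diff(stats::filter(price, rep(1/20, 20), sides = 2)))   # same idea, 20-bar window
ok <- complete.cases(as.numeric(feature), as.numeric(target))
mean(feature[ok] == target[ok])                                         # roughly 70% "accuracy" on pure noise

The high number comes only from the overlap of the two windows, which is exactly the kind of feature/target mixing being warned about.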

 
Женя #:

The thing is that one is a function of the other. If the accuracy of the sign of the future return (ZZ, etc.) is above 51%, you can already trade it (on stock markets and crypto), at least enough to cover trading costs and commission (with reasonable risk management), and at 55% you can start picking out a Lambo and a yacht.

seriously?????
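A back-of-the-envelope version of that 51%/55% claim (all cost and move sizes below are made-up numbers, only the formula matters): with symmetric wins and losses of average size r and a round-trip cost c per trade, the expected PnL per trade is (2p - 1)*r - c, so the breakeven accuracy is p* = (1 + c/r)/2.

breakeven_acc <- function(c, r) (1 + c / r) / 2
breakeven_acc(c = 0.0002, r = 0.01)   # costs = 2% of the average move  -> 0.51
breakeven_acc(c = 0.0010, r = 0.01)   # costs = 10% of the average move -> 0.55

So whether 51% or 55% is "enough" depends entirely on how large the costs are relative to the average move captured per trade.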

 
Женя #:

The thing is that one is a function of the other. If the accuracy of the sign of the future return (ZZ, etc.) is above 51%, you can already trade it (on stock markets and crypto), at least enough to cover trading costs and commission (with reasonable risk management), and at 55% you can start picking out a Lambo and a yacht.

But if accuracy on a random walk (SB) comes out above 51% on a dataset of >100k candlesticks, it means the features and targets have been mixed up badly, and the system is hallucinating, much as ChatGPT often does when asked about something it does not know.

Generally speaking, a shitty (unoptimised) backtest should make you think that something is wrong. For example, 70% classification accuracy while the backtest gives an annual SR of less than 0.5 is nonsense, when it should be in double digits. But apparently people do not think about that.

But you can also fit the backtest itself, which is what the mass ML crowd used to busy itself with. In short, pure intellectual self-gratification, whose adherents defend themselves fiercely whenever someone tries to expose them.
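For reference, SR here is the annualised Sharpe ratio. Given a daily strategy returns series such as ret from the code at the top of the page, it can be checked directly with PerformanceAnalytics (a sketch assuming daily bars, hence scale = 252):

SharpeRatio.annualized(na.omit(ret), Rf = 0, scale = 252)  # annualised Sharpe ratio
table.AnnualizedReturns(na.omit(ret))                      # return, volatility and SR in one table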

I recently showed balance charts with 9% and 8% classification error.
The 9% one was 50/50 on returns. The 8% one was already earning something, though only small change.
The point is that one loss wipes out the profit of 10 wins: the teacher's (target) markup was TP = 50 and SL = 500.

You can also get the error down to 1% (i.e. 99% accuracy). These are all just numbers, and numbers cannot be put in your pocket. ))))

What is SR?
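The arithmetic behind those 9%/8% figures, using the stated markup of TP = 50 and SL = 500: with such asymmetric payoffs the breakeven win rate is SL / (TP + SL).

tp <- 50; sl <- 500
sl / (tp + sl)                   # ~0.909: about 9% error is breakeven, hence "50/50"
win <- 0.92                      # 8% error
win * tp - (1 - win) * sl        # 46 - 40 = 6 points per trade on average, i.e. small change

Which is why a seemingly tiny 1% improvement in classification error moves the balance line from flat to slightly profitable.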
 
Forester #:
I recently showed balance charts with 9% and 8% classification error.
The 9% one was 50/50 on returns. The 8% one was already earning something, though only small change.
The point is that one loss wipes out the profit of 10 wins: the teacher's (target) markup was TP = 50 and SL = 500.

You can also get the error down to 1% (i.e. 99% accuracy). These are all just numbers, and numbers cannot be put in your pocket. ))))

What is SR?

Are you still on your own forest? Not using packages? ) Any luck?

 

A zoo of models. Many (hundreds or more) are used to confirm there is no overfitting: if they perform well on average, it is not accidental.

The OOS period is deliberately chosen to be difficult, spanning a change in the global trend.
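A rough sketch of that check (placeholder data and model; randomForest is used here only for illustration, not because it is what the author used): fit many models that differ only in the random seed and judge the average OOS result, not the best single run.

library(randomForest)
set.seed(0)
n <- 2000
X <- data.frame(f1 = rnorm(n), f2 = rnorm(n))        # placeholder features
y <- factor(sign(X$f1 + rnorm(n)))                   # placeholder target
train <- 1:1500; oos <- 1501:n                       # OOS kept strictly after the train period
acc <- sapply(1:100, function(seed) {
  set.seed(seed)
  m <- randomForest(X[train, ], y[train], ntree = 50)
  mean(predict(m, X[oos, ]) == y[oos])
})
summary(acc)   # the distribution across the whole "zoo" matters, not one lucky model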

Some representatives, all from the same batch:


 

The train did a great job on the test.

Nobody needs to see the validation.


Isn't this scam 3.0?

 

It's about time the package users got their brains fixed.

The information doesn't sink in at all (although it didn't before either).

Maybe we could go dig a vegetable garden together? ) if you have nothing to say about ML
 

))))

as required

 
mytarmailS #:

))))

as required

Prove what, and to whom? ) You were offered a bot and you can't even put it on your account, or what?

Another one was offered a bot, but he doesn't want to trade forex at all... so what is he doing here? ) The third has been fiddling with packages for 10 years.

Real clowns.

I suggested a topic to discuss and nobody had the brains for it.

They can't program, they can't think. Nothing beyond the packages.