Systems Workshop: "Buttonwood" E-mini daytrading system

Post anything related to mechanical systems and automated trading here.
DC1
Posts: 25
Joined: Tue Aug 11, 2015 6:39 pm
Contact:

Re: Systems Workshop: "Buttonwood" E-mini daytrading system

Post by DC1 » Mon Nov 02, 2015 5:40 pm

Great to see this, Earik!

Another powerful means of helping the Wave community.

Although I have the Astro system information and have used it for some time, where do you find the color code for each planet?

Thanks again for your over-the-top input into trading!

DC

DC1
Posts: 25
Joined: Tue Aug 11, 2015 6:39 pm
Contact:

Re: Systems Workshop: "Buttonwood" E-mini daytrading system

Post by DC1 » Mon Nov 02, 2015 8:36 pm

OK, found the color codes for each of the planets: under the "Astro" tab at the top, select "Transit to Natal", and there they are under Format - Select Transiting Planet and Watch Aspects (for those new to this program).

earik
Site Admin
Posts: 474
Joined: Mon Dec 01, 2014 12:41 pm
Contact:

Re: Systems Workshop: "Buttonwood" E-mini daytrading system

Post by earik » Mon Nov 02, 2015 8:51 pm

Hi Gang,

I'm going to do a longer Buttonwood_v5 post sometime soon, but wanted to respond in the meantime to some of this before I forget...

DC: check help - wave59 help, and look for "planet colors". There's a list there.

Regarding optimization/curve-fitting/etc, yes, that's definitely an issue with any mechanical system, and is one of the big challenges when building this sort of thing. We need to select various parameters, but we also need to be careful that in doing so we aren't curve-fitting the model to past data. I've seen models with only a couple parameters that were curve fit and blew up moving into the future, but I've also seen models with lots and lots of parameters that continued working for quite a long time. So there's no way to know ahead of time if your system will continue to perform. But that's no different from any other kind of trading either - no one knows how long their method will continue to perform, or whether the drawdown experienced in the future will be much larger than the one experienced in the past, etc. Uncertainty is part of this game, and we accept the risk of uncertainty in exchange for the opportunity to make profit.

My thoughts on how to deal with test sets, etc, are a little different from what you'll find in the public domain, and I spent a good deal of time ranting about that in MTS. The issue we have is that most of the information available about how to create trading systems isn't written by people who actually know how to create trading systems. There are no credentials in this industry that you need to possess in order to write a book, aside from the willingness to write one. Because of that, I've come across system development material that is not only wrong, but completely dangerous.

For example, one well known author back in my earlier days wrote a book about system development and spent a lot of time discussing analyzing MFE and MAE (maximum favorable/adverse excursion), which is basically a plot of how far losing trades go in your favor before they turn around, as well as how far winning trades go against you before they turn profitable. The gist of what he was doing was to add to a losing position at that magic number when the winners would go against him, using that value to place stops, and taking profits at whatever level let him turn some losers into winners. In other words, slice and dice the data very fine, then use dollar-based stops and targets in order to fake the basic statistics of the system out. It made crappy systems look good, but was complete curve-fitting, and I guarantee that if you add to losers in real life trading where the outcomes aren't known, you will eventually find the streak that works just right to blow the account out of the water. That author (not surprisingly) sold a suite of software to do MAE/MFE analysis at the time, so I get why he was into it. This is a good example of curve-fitting: trying to set your parameters (stops and targets) exactly right so that a system makes money when it wouldn't otherwise.
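The MAE/MFE statistics themselves are harmless to compute; the danger described here is in using them to justify averaging down. As a minimal sketch of what the excursion numbers actually measure, here is a Python function with a made-up price path (the function name and data are illustrative, not from any real library):

```python
# Minimal sketch of MAE/MFE (maximum adverse/favorable excursion) for a
# single long trade, given the prices observed while the trade was open.
def excursions(entry_price, prices_during_trade):
    """Return (MAE, MFE) in points for a long trade."""
    moves = [p - entry_price for p in prices_during_trade]
    mae = min(0.0, min(moves))  # worst open loss while in the trade
    mfe = max(0.0, max(moves))  # best open profit while in the trade
    return mae, mfe

# Example: entered at 100, price dipped to 98.5 before rallying to 103.
mae, mfe = excursions(100.0, [99.0, 98.5, 101.0, 103.0, 102.0])
```

Plotting these two numbers across all historical trades is the analysis the author was selling; the curve-fitting begins when you pick stops and targets to sit exactly at the historical sweet spots.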

Anyway, this is just one example of many that tell you why you need to be careful when reading books about this sort of thing, and to really spend some time thinking and doing your own testing to prove everything out. If you build 20 systems that all average losers, you will eventually learn that it's a REALLY bad idea, and you will know exactly why. But I wonder how many bright-eyed newbie traders got all excited about that book, didn't test the material properly, then averaged themselves down out of their accounts. There's a picture of Paul Tudor Jones that I really like, where he's sitting at his desk and you can see a note on the wall that says "Losers average losers".

Coming to the idea of breaking data up into test/training/unseen sets, doing walk-forward optimizations, etc, that's all the same thing. Developers noticed that they were having problems with optimization, so they devised all these tricks to make sure that the optimizations that they used worked "better" than just running them on the whole set. Then they went and wrote books and articles about it. No matter how you chop your data set up, and how many passes you make on it, and how many chunks you hold back, etc, as long as you eventually use all the data in order to make a decision on whether or not a system is good, you are curve-fitting in the same way as if you just ran your system report over the entire data set one time. The only difference is that you have made the process complicated enough to trick yourself into thinking that you aren't curve-fitting. A system built in that way has no better chance of success than the one built using just one pass through the entire data set, despite the fact that we've all been told that it's "better" to do it that way.

What is more important than splitting data up, etc, is to consider the following points:

1) How many trades do we have?
2) How many degrees of freedom does the model have?
3) How do the results look for the system when looked at in aggregate, rather than just at the "special" settings?

We want lots of trades (2000 is really good), rather than fewer ones. We also want fewer degrees of freedom. Think of each parameter as a degree of freedom. We really don't want more than 10-12 of those. In this system, we've got natal dates, corrections to the natal dates, orbs, thresholds, and three average lengths. If you consider natal dates as one parameter, then that's about 8 degrees of freedom. So we're still OK, but we have to be careful not to add too many other moving parts or we're going to get in some trouble.

Finally, and most importantly, if we vary the parameters around, how does that change profitability? Do we have a whole bunch of parameters that all make money, or do we just have a narrow band of parameters that make money? That's the kicker there. If you've got 1000 different parameter sets that you could choose from, but they ALL make money, then your system is most likely going to work. If you've only got 50 different possible combinations, but they all lose money except for one special setting, then that system is most likely going to blow up. The point of optimizing is to get a feel for the range of what is going to work, not to find the one perfect parameter setting. So for that reason, it's good to look at big spreads like orionsbelt did. We just need to make sure that we're doing it with the right mindset, and not just to find the number that makes this all look good.
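That "wide plateau beats narrow spike" idea can be expressed as a simple summary step. This is a hypothetical sketch, assuming you have already run the system report once per parameter set and collected the average trade values (the function and the grid numbers are illustrative):

```python
# Sketch: judge robustness by the whole parameter neighborhood, not the peak.
def robustness(results):
    """results: dict mapping a parameter tuple -> average trade in dollars."""
    values = list(results.values())
    profitable = sum(1 for v in values if v > 0)
    return {
        "fraction_profitable": profitable / len(values),
        "mean_avg_trade": sum(values) / len(values),
        "best_avg_trade": max(values),
    }

# Hypothetical grid results: most settings make a little money -> robust.
grid = {(6, 30, 3): 15.0, (8, 34, 5): 22.0, (10, 38, 7): 18.0, (8, 30, 7): -2.0}
stats = robustness(grid)
```

The number to care about is the fraction of settings that are profitable and the mean across the grid, not `best_avg_trade`, which is exactly the value the optimizer tempts you with.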

I haven't looked at the results deeply enough yet to be able to comment on them, so I'll save that for later. I also apologize for the length of this post - kind of ended up with a life of its own... :roll:

More later,

Earik

earik
Site Admin
Posts: 474
Joined: Mon Dec 01, 2014 12:41 pm
Contact:

Re: Systems Workshop: "Buttonwood" E-mini daytrading system

Post by earik » Wed Nov 04, 2015 9:40 pm

Buttonwood_v5: changing the signals and adding a time filter

Thanks to orionsbelt for running the big optimization run. It can be difficult when trying to determine which way to go with potential improvements like that, because (as a few people mentioned) there's a definite concern about potential curve-fitting when we start trying to find "better" solutions. I put better in quotes, because the optimizer thinks better means higher profit, when in reality we are more concerned with stability and robustness than profit numbers from a backtest.

Because of that, when presented with the question of whether to stick with our original signals, or go to different ones, I felt a little overwhelmed by the numbers. We have a new solution that gives us a higher average trade value, which is great, but is it really better than our other one, or did we just stumble onto a higher value since the optimization run was so long? In situations like this, I find the best approach is to try to think about things in the most simplistic ways possible, as that sometimes makes the decisions easier.

In that light, I built a quick and dirty system that traded only the signals themselves, without any of the astro filters at all, just like what we did at the very beginning. I then did an optimization that was designed to explore the area around our solutions. For example, our original system used an 8/34 simple average, and a 5-period simple channel. So you can think of that as an 8-34-5 solution. I want to know how the results look when each of those numbers is smaller, as well as larger than our current choices. So I "dithered" around those values, checking three settings on each parameter to give me an idea. I looked at 6/8/10 for the first, 30/34/38 for the second, and 3/5/7 for the third parameter. In other words, put our chosen settings in the middle, then go up from there, and down from there on each one. You can think of it as sort of a stress test for the parameters we have selected.
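The dithering grid itself is easy to generate. Here's a sketch in Python (the function name and step sizes are illustrative; steps of 2/4/2 reproduce the 6/8/10, 30/34/38, 3/5/7 scan described above):

```python
from itertools import product

# Sketch of the "dithering" stress test: put the chosen setting in the middle
# and step each parameter one notch down and up, giving a 3 x 3 x 3 = 27-run
# grid around the current solution.
def dither_grid(centers, steps):
    """centers/steps: one (center, step) pair per parameter."""
    axes = [(c - s, c, c + s) for c, s in zip(centers, steps)]
    return list(product(*axes))

# The 8/34/5 solution, stepped by 2/4/2.
grid = dither_grid(centers=(8, 34, 5), steps=(2, 4, 2))
```

Each tuple in `grid` is one backtest run; the chosen solution sits in the middle of the grid rather than at an edge, so you see performance on both sides of it.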

Here are the results:
simple_avg_explore.png
Dithering around the 8/34/5 solution
First of all, don't pay any attention to the highest average trade or profit value here. Consider it a guarantee that whatever parameter selection we end up with won't be the optimal choice in the future. That's the nature of optimizing numbers. More important than the best value is the spread of values over a particular set of numbers. We'll end up somewhere in the range, probably towards the middle, so we need to think in terms of averages. In this case, it looks like the average trade tends to be around $18-$19, at least from my scan of the numbers. That's not bad - there's an edge to this entry/exit method that I wasn't expecting to find, and although it's not large enough to trade on it's own, it's roughly enough to overcome commissions and a tick of slippage, using most of the parameters we're looking at. Maybe I've been too hard on these averages all this time. :lol:

So that gives you an idea of performance of the "family" of simple averages in this system, which is the important part. Now let's implement orion's idea about using AMAs in the channel part, using his 12-46-7 starting point, and doing the same thing, testing 8/12/16 for the first, 40/46/52 for the second, and 4/7/10 for the third parameter.

The results:
ama_avg_explore.png
Dithering around orionsbelt's solution
So what we're doing is comparing the simple average family of solutions to the simple+ama family of solutions. You can see that the ama family in general has a lower number of trades, as well as average trade values that are slightly higher. It looks to be about $2 per trade higher on average, which is pretty easy to see if you just scan through the two reports one after another. I'd expect a higher average trade value as the number of trades dropped, because that means the second set is just going a little slower than the first, which will eliminate some whipsaws, etc. Fewer trades also means lower total profit numbers, which is also obvious in the reports. That's OK though - average trade is more important to us than total profit, so that's the metric that wins.

What this means is that, ignoring the astro filters, it's going to be safer for us to trade orion's modified approach than our original, which in my mind makes it a preferable approach. Goodbye Fibonacci parameters!

Here's the system report with the new triggers, which is very close to what orion reported earlier:
with_new_triggers.png
System report with the new triggers
Now let's add one more filter, since we're building a new version. I had left this for some of you all to add since it's such a low-hanging piece of fruit, but it's really time for us to get it in there. ;) What I did was filter out all trades that happened after 14:30 ET. That's a pretty obvious way to get rid of a bunch of trades that we really don't want to take. Makes sense that you wouldn't place a trade 5min before the close, knowing you have to exit on the very next bar, but computers aren't smart like that, and will just blindly do what you tell them, so we've been placing some pretty bone-headed trades thus far when it comes to time.
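The filter logic is just a time comparison per bar. A Python sketch, under the assumption that bar timestamps are already in ET (all names here are illustrative, not the actual QScript):

```python
from datetime import time

# Sketch of the time-of-day filter: suppress new entries after 14:30 ET so
# the system isn't forced into trades it must exit minutes later at the close.
CUTOFF = time(14, 30)

def allow_entry(bar_time):
    """Return True if a new trade may be opened on this bar."""
    return bar_time < CUTOFF

# Hypothetical (bar time, signal) pairs; the 15:55 entry gets dropped.
signals = [(time(10, 0), 1), (time(14, 25), -1), (time(15, 55), 1)]
filtered = [(t, s) for t, s in signals if allow_entry(t)]
```

Note the filter only blocks new entries; exits still fire normally so any open position is flattened at the close as before.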
with_time_filter.png
System report with new triggers and time filter
You can see what a big difference this makes. We gave up $2075 in profits by chopping out 334 trades. Divide those numbers and you find that trades after 14:30 ET have an average trade value of only $6.21, which illustrates how much of a drag they are on the overall results. Taking them out bumped our average trade up to $48, and reduced our drawdown by almost a third. This is a very simple way of improving the results of an intraday system, and almost always helps.
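The $6.21 figure is just the profit given up divided by the number of trades removed, which is easy to verify:

```python
# Checking the arithmetic on the trades removed by the 14:30 ET cutoff:
# they contributed $2075 over 334 trades, i.e. a weak average trade.
profit_given_up = 2075.0
trades_removed = 334
avg_trade_removed = profit_given_up / trades_removed  # roughly $6.21
```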

Making good progress! All for now.

Earik
Attachments
buttonwood_v5.zip
(6.03 KiB) Downloaded 421 times

DC1
Posts: 25
Joined: Tue Aug 11, 2015 6:39 pm
Contact:

Re: Systems Workshop: "Buttonwood" E-mini daytrading system

Post by DC1 » Thu Nov 05, 2015 8:29 pm

Earik

I understand that the first two parameters are measures of the Sun's position.

I would be very happy to know what the Sun's position is currently, and how it feeds into the decision to buy or sell.

Is it recommending buy or sell on a particular day?

DC

earik
Site Admin
Posts: 474
Joined: Mon Dec 01, 2014 12:41 pm
Contact:

Re: Systems Workshop: "Buttonwood" E-mini daytrading system

Post by earik » Thu Nov 05, 2015 10:02 pm

Hi DC,

The astro in this system doesn't say whether to buy or sell - it says whether or not the market will be volatile enough for trend-following systems to work, which is a big difference. I've been referring to it as "market weather" - basically, if there are going to be big waves, let's trade, and if not, then we'll stand aside. The actual entries come down to moving averages in this scheme. You could actually use anything though, as long as you go with the trend. Check out the first couple of posts in the thread and it should be more clear.

As far as your question about what the Sun's current position is, you can just call that with the astro( ) function, like this:

Code: Select all

sun_right_now = astro(year,month,day,time,astro_sun,true,astro_long,true);
The two "true" parameters are for geocentric and tropical positions, respectively. Is that what you meant?

Regards,

Earik

pleiterr
Posts: 10
Joined: Wed Oct 28, 2015 4:54 pm
Contact:

Re: Systems Workshop: "Buttonwood" E-mini daytrading system

Post by pleiterr » Fri Nov 06, 2015 3:03 pm

What a great addition, Earik! :D

I want to ask: is 6 years of data (since 2009) enough? And why did you start with 2009 and not any other year?

Thanks !

kurthulse
Posts: 15
Joined: Tue Jul 21, 2015 9:16 pm
Contact:

Thoughts about the date ranges for input data

Post by kurthulse » Fri Nov 06, 2015 4:10 pm

First, addressing the question posted above:
I want to ask you if 6 years of data ( since 2009 ) is enough ? And why did you start with 2009 and not any other year ?
Earik mentioned this concern briefly in an earlier comment, saying:
I started in 1/1/2009, which gives us almost 8 years to test on. 2008 also works, but it tends to skew results a little, since there was too much money to be made that year. You can use it or not, but I think 8 years is plenty for an intraday system like this.
viewtopic.php?f=2&t=37&start=10#p166

Second, some thoughts about choosing input dates...

Since we are developing a system that trades only intraday, it isn't really necessary to use contiguous blocks of days for the trading simulation. An entry or exit signal on one day is not informed by entries or exits that took place on an earlier day, even though some of the underlying data that governs trading decisions does come from earlier days (e.g., a slow moving average).

When I used to do statistical analysis on enormous data sets in environmental consulting, we developed a way of artificially expanding our initial data set by separating it into "chunks" and choosing randomly from among those chunks. Then we'd redo the simulation with a different randomly chosen set of chunks. And again and again. This allowed us to have enough statistical samples to narrow the confidence intervals of our estimates. Usually I have seen this approach called "bootstrapping." That's how we mapped out how much garbage is thrown away in the State of California, and precisely what it consists of.

In the case of market data and trading decisions for the current system, a chunk would be equivalent to a single trading day. By trading on a simulation of constantly-shifting price terrain -- a different and non-contiguous set of days for each run -- I think many of the problems of curve-fitting can be avoided. (I don't actually have the mathematical chops to prove that hypothesis.)
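The day-chunk bootstrap described above can be sketched in a few lines of Python, assuming a hypothetical list of per-day P&L values (one number per trading day, which works here precisely because every position closes the same day it opens):

```python
import random

# Sketch of day-level bootstrapping for an intraday system: since days are
# independent once all trades are closed by the bell, we can resample whole
# days with replacement and look at the spread of outcomes across runs.
def bootstrap_avg_pnl(daily_pnl, n_runs, rng):
    """Resample trading days with replacement; return avg P&L per run."""
    averages = []
    for _ in range(n_runs):
        sample = [rng.choice(daily_pnl) for _ in daily_pnl]
        averages.append(sum(sample) / len(sample))
    return averages

rng = random.Random(59)  # fixed seed so runs are repeatable
runs = bootstrap_avg_pnl([120.0, -45.0, 30.0, 80.0, -10.0], n_runs=200, rng=rng)
```

The spread of `runs` gives a rough confidence interval on the system's average daily P&L, which is the "narrowing the confidence intervals" step mentioned above.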

Along that line of thinking, I am just now beginning a set of trading simulations in a genetic algo engine I built (not yours, Earik, although I'm looking forward to trying yours). It pulls a randomly chosen set of intraday data for each run. Perhaps in a few days/weeks (/months/years) we'll see if evolution starts producing interesting results. :)

I don't have much else to contribute to this thread yet, but I'm reading it with great interest. Thanks for starting it.

----------------------
Update: Okay, I looked at the output of the natal functions, and I see why it wouldn't be so easy to randomize the days that go into the trading simulation. When the natal functions are used as filters, there aren't all that many eligible days left, and multiple sampling of the small-ish population probably would not help in avoiding curve-fitting. Also, the technique would interfere with the ability to detect advantages to using the filter.

orionsbelt
Posts: 33
Joined: Tue Jul 21, 2015 9:19 pm
Contact:

Re: Systems Workshop: "Buttonwood" E-mini daytrading system

Post by orionsbelt » Thu Nov 12, 2015 1:56 am

Does anyone know how to write Ultra_Smooth_Momentum into a "function" that I can then place into our system here

Code: Select all

#compute trading signal --------------------------------------

signal1 = Buttonwood_Signal(8,34);
signal2 = Channel_Breakout(5);
signal = signal1 + signal2; 
as maybe a 3rd signal, or to replace signal 1 or 2, so I can test different combinations of the signals we have to see if it helps our overall performance? I have attempted this several times with no real luck.

also, has anyone been testing anything else with any decent results that they are willing to share?

thanks

earik
Site Admin
Posts: 474
Joined: Mon Dec 01, 2014 12:41 pm
Contact:

Re: Systems Workshop: "Buttonwood" E-mini daytrading system

Post by earik » Thu Nov 12, 2015 9:55 pm

Hi orionsbelt,

The function is ultramom(price, length, smooth). That returns the USM curve itself. To get a signal out of it, you have to figure out how to have the function return a +1 or -1, which is what the system expects. +1 means buy, -1 means sell.

So if it's something simple, like buy if USM > 0, or sell if USM < 0, you would do it like:

Code: Select all

input: len;

# start the signal at 0 on the first bar of the chart
if (barnum == barsback) {
  signal = 0;
}

# go long while USM is above zero, short while it is below
usm = ultramom(close, len, 4);
if (usm > 0) signal = 1;
if (usm < 0) signal = -1;

return signal;
Name it something (QScript - QScript properties), and make sure to check the "function" checkbox. Then, go to the Buttonwood_v5 script, and you can call your function there. Let's assume you named it "USM_Signal". In that case, you'd add it to the signal section like this:

Code: Select all

#compute trading signal --------------------------------------

signal1 = Buttonwood_Signal(8,34);
signal2 = Channel_Breakout(5);
signal3 = USM_Signal(8);
signal = signal1 + signal2 + signal3;

Hopefully that gets you started.

Regards,

Earik
