Don't get me wrong - you CAN use stops/limits with hives. You just can't include them as part of the algorithm itself. So you'd build your hive, then bolt the stops/targets on afterward. You can run the whole thing through W59's brute force optimizer and find the most profitable combinations over a set lookback period.
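To make that concrete, here's a minimal sketch of what a brute force optimizer is doing under the hood (plain Python rather than W59 script, and the trade data and grid values are made up purely for illustration): exhaustively score every stop/target pair against your historical trades and keep the most profitable one.

```python
from itertools import product

def simulate_trade(path, entry, stop_pts, target_pts):
    """Walk a long trade forward bar by bar; exit at the stop,
    the target, or the last price if neither is hit."""
    for price in path:
        if price <= entry - stop_pts:
            return -stop_pts
        if price >= entry + target_pts:
            return target_pts
    return path[-1] - entry

def brute_force(trades, stop_grid, target_grid):
    """Score every stop/target pair over all historical trades
    and return the most profitable combination."""
    best = None
    for stop_pts, target_pts in product(stop_grid, target_grid):
        pnl = sum(simulate_trade(path, entry, stop_pts, target_pts)
                  for entry, path in trades)
        if best is None or pnl > best[0]:
            best = (pnl, stop_pts, target_pts)
    return best

# Hypothetical data: (entry price, prices on the bars after entry)
trades = [(100.0, [101.0, 99.5, 103.0]),
          (102.0, [101.0, 100.0, 104.5])]
print(brute_force(trades, stop_grid=[1.0, 2.0, 3.0],
                  target_grid=[2.0, 4.0, 6.0]))
```

Note this toy only checks closes; a real optimizer would work with intrabar highs/lows, but the shape of the search is the same.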
The difficulties come when you want those stops/targets to be dynamic and self-adjust to market conditions. That can be done too; it's just a lot harder to pull off. In that case, you'd write a script with a built-in backtester, and every bar (or every X bars) you'd run the routine, gather the profit/loss metrics for a range of stops, and then pick the best one moving forward in time (see the sketch below). That's called an "adaptive" system, since all the variables dynamically adapt to market conditions. The classic example is a channel breakout system where you adjust the length of the channel lookback to conform to whatever the dominant cycle is at the time. It was all the rage 15-20 years ago in currency futures.
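A bare-bones version of that adaptive loop might look like the following (again plain Python; the scoring function, the 100-bar window, and the 20-bar refit interval are all placeholder assumptions, not anyone's production settings): every X bars, re-score each candidate stop over a trailing window and trade the winner until the next refit.

```python
import random

def backtest_stop(window, stop_pts):
    """Toy scoring function: P&L of a naive always-long strategy
    over the window, cutting each bar-to-bar loss at stop_pts."""
    return sum(max(window[i + 1] - window[i], -stop_pts)
               for i in range(len(window) - 1))

def adaptive_stop(prices, candidates, lookback=100, refit_every=20):
    """Every refit_every bars, re-score every candidate stop over the
    trailing lookback window and adopt the best one going forward."""
    current = candidates[0]
    schedule = []
    for bar in range(lookback, len(prices)):
        if (bar - lookback) % refit_every == 0:
            window = prices[bar - lookback:bar]
            current = max(candidates,
                          key=lambda s: backtest_stop(window, s))
        schedule.append((bar, current))   # stop in force on this bar
    return schedule

# Demo on a random walk, just to show the refit cycle in action
random.seed(1)
prices = [100.0]
for _ in range(299):
    prices.append(prices[-1] + random.gauss(0, 1))
print(adaptive_stop(prices, candidates=[0.5, 1.0, 2.0])[-1])
```

The key design point is that each refit only ever looks backward from the current bar, so the stop in force on any given day never peeks at data it couldn't have had.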
Coming back to a more theoretical level... The stops/targets rule that gets hammered into everyone's head is meant to get traders to focus on risk management. That is ABSOLUTELY essential, and anyone who doesn't do it eventually blows up. So I'm not challenging that by any means. It's more important than almost everything else. Where I disagree is on the best way to do it. I think it should be done through position sizing, rather than stops. If you trade a position too large for your account and try to use a tight stop to limit that risk, those stops won't save the account from blowing up - you'll just blow up over a sequence of losing trades rather than all at once. Similarly, using limits to take quick profits only leaves lots of money on the table. All of this flies in the face of what those educators like to teach, but all of it is easily tested with systems and an optimizer, so I'm not saying anything I can't back up with hard data. Again, don't take my word for it - it's better to prove it to yourself. I just tend to be opinionated on some of these things, so I take the opportunity to lecture people when I get the chance.
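The position-sizing approach comes down to simple arithmetic: decide how much of the account a single trade is allowed to lose, and let the stop distance set the share count rather than the other way around. A minimal sketch (the 1% risk figure and the prices are illustrative, not a recommendation):

```python
def position_size(account_equity, risk_fraction, entry, stop):
    """Fixed-fractional sizing: risk a set fraction of the account
    per trade, so the stop distance determines the share count."""
    risk_dollars = account_equity * risk_fraction
    per_share_risk = abs(entry - stop)   # assumes a nonzero stop distance
    return int(risk_dollars // per_share_risk)

# Risking 1% of a $50,000 account with a $2 stop distance:
print(position_size(50_000, 0.01, entry=100.0, stop=98.0))  # -> 250 shares
```

Sized this way, a wide stop just means a smaller position, and no single trade - or realistic string of them - can take out the account.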
The most amazing system I ever built was a genetic algorithm that evolved fuzzy-logic-enabled neural nets. In other words, a neural network that used fuzzy logic to determine whether inputs were high/medium/low/etc., with a GA to tweak every possible parameter in the entire model. The results were unbelievable. Unfortunately, they were only unbelievable on the training data, and those models all immediately crashed and burned on unseen data because they were insanely overfit. It's easy to make ideas work in the past if you let the optimizer make a million passes over the data and tweak everything. It's much harder to make ideas work when you don't get any passes over the data set and have to accept the results as-is in your account. There's an appropriate use for genetic algorithms, and there's an inappropriate one. They aren't magic - they're just fancy search algorithms. They won't turn bad ideas into good ones, at least not on the unseen data set. I could hook one up to the times of day my dog scratches at the door to go out, and a GA would happily map that data onto the EUR/USD exchange rate and convince me I'd solved the currency markets. But since my dog has no actual connection to currency markets, those systems are all going to fail miserably when run on future data sets.
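For the curious, here's a deliberately tiny toy in the same spirit (pure Python, everything scaled way down, and none of it is the actual model described above): triangular fuzzy sets turn a raw input into low/medium/high degrees, those feed a one-layer net, and a mutation-only GA tweaks every weight and breakpoint to fit the training data. Point it at pure noise and it will still "fit" in-sample, which is exactly the trap.

```python
import random

def memberships(x, lo, hi):
    """Triangular fuzzy sets: degree to which x is low/medium/high,
    with the lo and hi breakpoints themselves evolvable."""
    span = max(hi - lo, 1e-9)
    low = max(0.0, min(1.0, (hi - x) / span))
    high = max(0.0, min(1.0, (x - lo) / span))
    med = max(0.0, 1.0 - low - high)
    return [low, med, high]

def predict(genome, x):
    """Genome layout: [lo, hi, w_low, w_med, w_high, bias]."""
    lo, hi, *rest = genome
    feats = memberships(x, lo, hi)
    weights, bias = rest[:3], rest[3]
    return sum(f * w for f, w in zip(feats, weights)) + bias

def fitness(genome, data):
    """Negative squared error; higher is better."""
    return -sum((predict(genome, x) - y) ** 2 for x, y in data)

def evolve(data, pop_size=50, generations=200):
    """Mutation-only GA over every parameter in the model."""
    pop = [[random.uniform(-2, 2) for _ in range(6)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, data), reverse=True)
        survivors = pop[:pop_size // 2]
        children = [[g + random.gauss(0, 0.1)
                     for g in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return pop[0]

# Pure noise in, confident-looking "fit" out; that's the trap
random.seed(2)
train = [(random.uniform(0, 1), random.uniform(-1, 1)) for _ in range(30)]
best = evolve(train)
print("in-sample fit:", fitness(best, train))
```

Score that evolved genome on a fresh sample of noise and the "edge" evaporates, which is the whole point of the paragraph above.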