Determining how much data a given statistical method needs is not a simple problem, and there is no universally correct answer. Some rules of thumb exist, but any such rule is likely to be an oversimplification, because the amount of data required depends on the characteristics of the data in question.
The most important factor is the signal-to-noise ratio in the data. If there is a clear relationship between the input predictors and the target, not obscured by noise, a model may be able to identify the relationship with relatively little data. On the other hand, when there is a lot of noise, you may require large amounts of data.
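The effect of the signal-to-noise ratio can be illustrated with a small simulation. The sketch below (an illustration, not a method from the text) repeatedly fits a straight line to ten points drawn from the same true relationship y = 2x, at a low and a high noise level, and compares how widely the estimated slopes scatter:

```python
import numpy as np

rng = np.random.default_rng(1)

def slope_spread(noise_sd, n_points=10, n_trials=500):
    """Std. dev. of estimated slopes of y = 2x + noise across many refits."""
    estimates = []
    for _ in range(n_trials):
        x = rng.uniform(0, 1, size=n_points)
        y = 2 * x + rng.normal(0, noise_sd, size=n_points)
        estimates.append(np.polyfit(x, y, 1)[0])  # fitted slope
    return float(np.std(estimates))

# Same 10 data points per fit; only the noise level differs.
print(slope_spread(0.1))  # low noise: estimates cluster tightly around 2
print(slope_spread(2.0))  # high noise: estimates scatter widely
```

With noisy data, pinning the slope down as precisely as in the low-noise case would require many more points per fit.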
Another important factor is the model you choose to use, and in particular the degrees of freedom of that model. For univariate linear regression with just one input predictor, a handful of points can give good results. Each input predictor you add to this model increases the degrees of freedom by 1, requiring more data points to avoid overfitting. For ARIMA, most rules of thumb recommend something closer to 30 data points, and if you include seasonal components as well, 50 or even 100 data points might be a reasonable minimum.
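The link between degrees of freedom and overfitting is easy to demonstrate. In the sketch below (an illustration, not from the text), we fit ordinary least squares to 20 points of pure noise, so there is no true relationship at all, and watch the in-sample R^2 climb as predictors are added:

```python
import numpy as np

rng = np.random.default_rng(0)

def in_sample_r2(n_points, n_predictors):
    """Fit OLS with an intercept and return the in-sample R^2."""
    X = rng.normal(size=(n_points, n_predictors))
    y = rng.normal(size=n_points)  # pure noise: no true relationship at all
    A = np.column_stack([np.ones(n_points), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# 20 points of pure noise: the in-sample fit improves with every predictor,
# and with 19 predictors plus an intercept the "fit" is perfect.
for p in (1, 5, 10, 19):
    print(p, round(in_sample_r2(20, p), 3))
```

A perfect in-sample fit to noise is the textbook symptom of too many degrees of freedom for the available data.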
It also depends on the purpose of your analysis. If your aim is to quantify the relationships between the time series so well that you can use them for predicting future data points, you will typically need more data. An extreme example is the task of predicting stock prices. In an efficient market, the available knowledge you could potentially put into the model has, to a large extent, already been priced into the market, and it is likely to be difficult to extract whatever signal remains in the data, as it is swamped in noise. In such cases you may need an extraordinary amount of data.
Note also that it is not always better to include more data. If your time series are stationary, meaning that the properties of the time series do not depend on the time at which they are observed, it never hurts to add more data. However, many time series you encounter in the real world are not stationary. If you go sufficiently far back in time, the situation may have been markedly different from what it is today, and relations that held a few years ago may no longer be relevant to understanding the current market.
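A crude way to see the difference is to compare the level of a series in its first and second halves; for a stationary series the two halves agree, while a non-stationary one wanders. The sketch below is only an illustration of the idea, not a formal test (in practice you would use something like the augmented Dickey-Fuller test from statsmodels):

```python
import numpy as np

rng = np.random.default_rng(2)

def half_mean_gap(series):
    """Gap between the mean levels of the first and second halves of a series."""
    half = len(series) // 2
    return abs(series[:half].mean() - series[half:].mean())

noise_gap = half_mean_gap(rng.normal(size=2000))            # stationary white noise
walk_gap = half_mean_gap(np.cumsum(rng.normal(size=2000)))  # non-stationary random walk

# The white noise keeps the same level throughout; the random walk drifts,
# so its two halves typically sit at very different levels.
print(noise_gap, walk_gap)
```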
A stark example of this is the financial crisis of 2007 and 2008. Many macroeconomic indicators and other economic time series show conspicuous behaviour during the crisis, and for many data sets you cannot expect behaviour from this period to be informative about the present. Worse, the magnitude of the movements during such chaotic periods is often large compared to other periods, leading many statistical models to give them disproportionate weight. You should therefore consider leaving aberrant time periods out of the training data.
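Excluding such a window is straightforward with pandas. The sketch below uses a hypothetical monthly series (the data here are just simulated noise) and masks out the 2007-2008 period before any training:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Hypothetical monthly macroeconomic series spanning the financial crisis.
idx = pd.date_range("2000-01-01", "2015-12-01", freq="MS")
series = pd.Series(rng.normal(size=len(idx)), index=idx)

# Mask out the aberrant 2007-2008 window before using the data for training.
crisis = (series.index >= "2007-01-01") & (series.index <= "2008-12-31")
training = series[~crisis]

print(len(series), len(training))  # 192 months before, 168 after
```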
In some cases it may make sense to create a model with a lower resolution, even if you have access to data with higher resolution.
One example is when the time series consist of the differences of other time series that do not change very often, such as the trading prices of illiquid stocks. If you use such time series at too high a resolution, you risk that most of the values are zero, making it difficult to detect correlations between the time series. In this case the resolution should be reduced to harmonise with the rate at which the stock is actually traded.
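The sketch below simulates this situation with a hypothetical illiquid stock that trades on roughly one day in ten, so the carried-forward price makes most daily differences zero, and shows how resampling to a coarser resolution reduces the fraction of zeros:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)

# Hypothetical daily closing prices of an illiquid stock: on most days there
# is no trade, so the price is carried forward and the daily difference is 0.
days = pd.date_range("2020-01-01", periods=250, freq="B")
traded = rng.random(250) < 0.1                      # trades on ~10% of days
steps = np.where(traded, rng.normal(size=250), 0.0)
prices = pd.Series(100 + np.cumsum(steps), index=days)

daily_diffs = prices.diff().dropna()
weekly_diffs = prices.resample("W").last().diff().dropna()

print((daily_diffs == 0).mean())   # mostly zeros
print((weekly_diffs == 0).mean())  # far fewer zeros at the coarser resolution
```

The right target resolution depends on the actual trading frequency; for a stock that trades only a few times a month, even weekly differences would still be mostly zero.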
Another case is where one time series influences another, but with a delay. If, for example, you have data with daily resolution, but it takes a few days before a change in one time series propagates to the other, the model might fail to pick up the correlation; it may be better captured if you reduce the resolution to weekly.
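This effect can be simulated directly. In the sketch below (a hypothetical setup, not from the text), a daily series x drives y with a delay of two to four days; the same-day correlation is close to zero, but after aggregating both series to weekly sums most of the delayed effect lands within the same period:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical daily series where x drives y with a delay of 2-4 days.
n = 1000
x = rng.normal(size=n)
y = np.zeros(n)
for lag, weight in ((2, 0.3), (3, 0.4), (4, 0.3)):
    y[lag:] += weight * x[:-lag]
y += rng.normal(size=n)  # observation noise

def weekly_sums(series):
    """Aggregate a daily series into non-overlapping 5-day (weekly) sums."""
    usable = len(series) - len(series) % 5
    return series[:usable].reshape(-1, 5).sum(axis=1)

daily_corr = np.corrcoef(x, y)[0, 1]
weekly_corr = np.corrcoef(weekly_sums(x), weekly_sums(y))[0, 1]

# Same-day correlation is near zero because the effect arrives days later;
# at weekly resolution the relationship becomes visible.
print(round(daily_corr, 3), round(weekly_corr, 3))
```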
You certainly need at least a handful of backtests for the calculated R^2 value to be meaningful, but again it is difficult to give a hard-and-fast rule for the minimum number required.
Many of the considerations regarding the minimum amount of training data apply in this case as well. If you believe that the data have a low signal-to-noise ratio, you should have more backtests, and in general, the more backtests you have, the better.
A useful technique for certain models, especially non-linear models with a non-deterministic component, such as neural networks, is to rerun the backtests. If the backtest results remain roughly the same between reruns, the model is more likely to be robust than if they vary widely.
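The idea can be sketched as follows. The `run_backtest` function below is purely hypothetical, standing in for retraining a non-deterministic model (such as a neural network with a fresh random initialisation) and scoring it on held-out data; here the score is simply simulated so the rerun-and-compare pattern is visible:

```python
import numpy as np

def run_backtest(seed):
    """Hypothetical stand-in for a full backtest of a non-deterministic model.
    In a real setting this would retrain the model from a fresh random
    initialisation and return its out-of-sample score."""
    local_rng = np.random.default_rng(seed)
    return 0.6 + local_rng.normal(scale=0.02)  # simulated score for a robust model

# Rerun the same backtest several times and inspect the spread of the scores.
scores = [run_backtest(seed) for seed in range(10)]
print(f"mean score {np.mean(scores):.3f}, spread {np.std(scores):.3f}")
```

A spread that is small relative to the mean score suggests the result is robust; a wide spread would suggest the apparent skill depends on a single lucky run.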