How You Can Trade Better Than The Big Boys

I had a good idea. It was going to make my company millions of dollars per year. It was different, but certainly doable.

“How much do you think it will bring in?” our finance executive asked me.

“If we just focus on emerging markets, I think it could be up to $10 million next year.”

“Not interested.”

His reply was flat and caught me off-guard. 

“It’s $10 million!” I insisted.

For him, it just wasn’t worth it. A million here and there made no difference to him or (more importantly) his boss; it’s a rounding error on the balance sheet. Such is life at a multi-billion-dollar firm.

I share this because it highlights one of the critical differences between large institutions and retail players: scale.

When people say retail can’t compete with funds like Citadel, Renaissance Technologies, DE Shaw, and others – they’re right. You don’t try to play that game.

Instead you “go where they ain’t” and focus on making money in places that they aren’t interested in playing. 

Where are those places?

A matter of perspective

As the anecdote above highlights, you can find markets that can’t scale to meet the needs of a larger fund. A retail trader may be thrilled to make a few thousand dollars, or even tens of thousands, on a trade; for some people that’s anywhere from a nice bonus to paying the bills for a year. A Wall Street behemoth looks at that as the tab for lunch.

Think of it another way. My wife and I travel a lot and she always has friends in various countries who want us to bring something to them that they can’t buy in their home country. They’re often willing to pay a 50% markup for these items. Does this mean we ought to drop everything and get into an import/export business because we’re sitting on a 50% profit margin? No! This doesn’t scale. It’s nice for some one-off exchanges, but we’re talking about paying for lunch here.

You may be able to make a 50% return, but if that tops out on a few hundred dollars of profit, then it’s not worth the time and effort. For the big boys, they look at those small opportunities and scoff; not enough liquidity and not enough scale. That’s where the disciplined small investor can thrive.

Longer term for less competition

Prioritizing this month’s or this quarter’s results has long been an issue in capital markets. Fund managers are eager to keep clients happy so their AUM doesn’t take a hit, but they’re pressured to report performance figures that show growth every month, or they risk losing clients (and their hefty fees).

We can see this in the data as the average holding period for a stock has continued to fall over the years.

This short-term, chase-what’s-hot-now approach is part of human nature, so good luck trying to make money in the same realm everyone else is working in.

You can go against the flow, zig when everyone else zags, by trading more slowly and longer term. Taking long term approaches can help you avoid playing in an overcrowded market, because you’re the only investor you need to answer to. If you can develop a plan and stick to it, then ride it without apology.

How You Can Use Algos to Reduce Your Emotions

Sticking to a plan is where the rub lies. We are emotional creatures, particularly when it comes to money.

Too many live and die with every up and down day in the market, changing their “strategy” accordingly.

So how do we deal with ourselves – our biggest enemy in investing?

My answer has long been algorithmic trading.

We can program a cold, emotionless computer to trade for us. We just need to develop some rules, hand them over, and let the machine run.

This raises the question, how do we find good rules?

There are a few ways to go about that, but it always boils down to doing your own research.

This should be obvious to some extent. If you want to come up with a good strategy, you’re going to need to research it to make sure it works. Get the data, try the rules and see if your stats check out. If not (and most don’t work well) then you tweak it or come up with a new idea and try it again.

Can you take a shortcut? Like reading a book that promises to make you a stock market millionaire? 

That’s all great! But you better go test it before you start trading it. See if the claims you read actually match up with reality. There’s no shortage of authors who will print a beautiful equity curve that marches up and to the right. It gets eyeballs and sells books. But is it real or just an illusion?

So you’re back to spending time getting data, coding a strategy, debugging it, and seeing if it works (and good luck doing all of that without coding!).

And you repeat that cycle every time you want to test something new. We haven’t even gotten to deploying it so you can make money either.

It doesn’t have to be that way. At Raposa, we’re building a no-code algorithmic trading platform. We want you to be able to design and test a strategy in just a few clicks with high-quality data, and get results fast so you spend less time working through the tedious minutiae of debugging your code and can find trading strategies that fit you.

Get the tools to play where the big boys ain’t and join our waitlist here.

How Random is the Market? Testing the Random Walk Hypothesis

A mainstay of academic research into the market is the Random Walk Hypothesis (RWH). This is the idea that market moves are random and follow a normal distribution that can be easily described using a concept borrowed from physics called Brownian Motion.

This makes the market mathematics manageable, but is it true? Is the market really random?

If it is, then there’s little point to trying to beat it. But if it isn’t, then there are repeatable patterns that can be algorithmically exploited.

Thankfully, the issue of randomness is very important for fields like cryptography, so it is well studied and there are statistical tests that we can apply to market data to investigate this.

We’re going to borrow a few standard tests for randomness and apply them to historical data to see just how random the markets really are.

Measuring Market Randomness

There are a host of randomness tests that have been developed over the years which look at binary sequences to determine whether or not a random process was used to generate them. Notably, we have test suites such as the Diehard tests, TestU01, the NIST Statistical Test Suite, and others that have been published over the years.

We could run a large battery of tests against our market data (maybe we’ll get to that in a future post), but for now we’ll select three tests to see how the RWH holds up: the runs test, the discrete Fourier Transform test, and the Binary Matrix Rank test from the NIST suite.

Runs Test

If the market truly is random, then we shouldn’t see any dependence on previous prices; the market being up today should have no impact on what it will do tomorrow (and vice versa).

The runs test can help us examine this aspect of randomness. It works by counting the number of uninterrupted streaks of identical values (runs) in a sequence and checking whether that total is consistent with a random process.

We’ll take our prices and make all positive price changes into 1s and negative changes into 0s, and keep this binary vector as X. We’ll set n as the number of observations we have (e.g. n = len(X)). Then, to implement the runs test, we take the following steps (adapted from section 2.3 of the NIST Statistical Test Suite):

1. Compute the proportion of 1s in the binary sequence:

\pi = \frac{\sum_j X_j}{n}

2. Check the value \pi against the frequency test. It passes if \mid \pi - 1/2 \mid < \tau, where \tau = \frac{2}{\sqrt{n}}. If the frequency test fails, we stop: the sequence is not random and we set our P-value to 0. If it passes, we continue to step 3.

3. Compute our test statistic V_n where:

V_n = \sum_{k=1}^{n-1} r(k) + 1

where r(k) = 0 if X_k = X_{k+1}, otherwise r(k) = 1. So if we have the sequence [0, 1, 0, 0, 0, 1, 1], then this becomes: V_n = (1 + 1 + 0 + 0 + 1 + 0) + 1 = 4

4. Compute our P-value where:

p = erfc\bigg( \frac{ \mid V_n - 2n \pi (1 - \pi) \mid}{2 \pi (1-\pi) \sqrt{2n}} \bigg)

Note that erfc is the complementary error function:

erfc(z) = \frac{2}{\sqrt{\pi}} \int_z^{\infty} e^{-t^2} \, dt

Thankfully, this is available in Python with scipy.special.erfc(z):
With all of that, we can now use our P-value to determine whether or not our sequence is random. If our P-value is below our threshold (e.g. 5%), then we reject the null hypothesis, which means we have a non-random sequence on our hands.

import numpy as np
from scipy.special import erfc

def RunsTest(x):
  # Convert input to binary values
  X = np.where(x > 0, 1, 0)
  n = len(X)
  pi = X.sum() / n
  # Check frequency test
  tau = 2 / np.sqrt(n)
  if np.abs(pi - 0.5) >= tau:
    # Failed frequency test
    return 0
  r_k = X[1:] != X[:-1]
  V_n = r_k.sum() + 1
  num = np.abs(V_n - 2 * n * pi * (1 - pi))
  den = 2 * pi * (1 - pi) * np.sqrt(2 * n)
  return erfc(num / den)

The NIST documentation gives us some test data to check that our function is working properly, so let’s drop that into our function and see what happens.

# eps from NIST doc
eps = '110010010000111111011010101000100010000101101'  # sequence truncated here; see the NIST doc for the full input
x = np.array([int(i) for i in eps])

p = RunsTest(x)
H0 = p > 0.01
# NIST P-value = 0.500798
print("Runs Test\n"+"-"*78)
if H0:
  print(f"Fail to reject the Null Hypothesis (p={p:.3f}) -> random sequence")
else:
  print(f"Reject the Null Hypothesis (p={p:.3f}) -> non-random sequence.")
Runs Test
Fail to reject the Null Hypothesis (p=0.501) -> random sequence

We get the same P-value, so we can be confident that our implementation is correct. Note also that NIST recommends we have at least 100 samples in our data for this test to be valid (i.e. n \geq 100).
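We can also hand-check our implementation against the short worked example in section 2.3 of the NIST documentation (ε = 1001101011 with n = 10; far below the recommended sample size, so it’s purely an illustration). The function is repeated here so the snippet runs standalone:

```python
import numpy as np
from scipy.special import erfc

def RunsTest(x):
  # Same implementation as above, repeated so this snippet is standalone
  X = np.where(x > 0, 1, 0)
  n = len(X)
  pi = X.sum() / n
  tau = 2 / np.sqrt(n)
  if np.abs(pi - 0.5) >= tau:
    return 0  # failed the frequency pre-test
  V_n = (X[1:] != X[:-1]).sum() + 1
  num = np.abs(V_n - 2 * n * pi * (1 - pi))
  den = 2 * pi * (1 - pi) * np.sqrt(2 * n)
  return erfc(num / den)

eps = '1001101011'  # short worked example from NIST section 2.3
x = np.array([int(i) for i in eps])
p = RunsTest(x)
print(f"p = {p:.6f}")  # NIST reports a P-value of 0.147232
```

Here \pi = 0.6, the frequency pre-test passes, and V_n = 7, which reproduces the NIST P-value.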

Discrete Fourier Transformation Test

Our next test is the Discrete Fourier Transformation (DFT) test.

This test computes a Fourier Transform on the data and looks at the peak heights. If there are too many high peaks, it indicates we aren’t dealing with a random process. It would take us too far afield to dive into the specifics of Fourier Transforms, but check out this post if you’re interested in going deeper.

Let’s get to the NIST steps. Our inputs are the data x and a threshold, which is usually set to 95%.

1. We need to convert our time-series x into a sequence of 1s and -1s for positive and negative deviations. This new sequence is called \hat{x}.

2. Apply discrete Fourier Transform (DFT) to \hat{x}:

\Rightarrow S = DFT(\hat{x})

3. Calculate M = modulus(S') = \left| S' \right|, where S' is the first n/2 elements in S and the modulus yields the height of the peaks.

4. Compute the 95% peak height threshold value. If we are assuming randomness, then 95% of the values obtained from the test should not exceed T.

T = \sqrt{n \log \frac{1}{0.05}}

5. Compute N_0 = \frac{0.95n}{2}, where N_0 is the theoretical number of peaks (95%) that should be less than T (e.g. if n=10, then N_0 = \frac{10 \times 0.95}{2} = 4.75). Then count N_1, the number of peaks in M that are actually less than T, and compute the normalized difference:

d = \frac{N_1 - N_0}{\sqrt{n(0.95)(0.05)/4}}

6. Compute the P-value using the erfc function:

P = erfc \bigg( \frac{\left| d \right|}{\sqrt{2}} \bigg)

Just like we did above, we’re going to compare our P-value to our reference level and see if we can reject the null hypothesis – that we have a random sequence – or not. Note too that it is recommended that we use at least 1,000 inputs (n \geq 1000) for this test.

def DFTTest(x, threshold=0.95):
  n = len(x)
  # Convert to binary values
  X = np.where(x > 0, 1, -1)
  # Apply DFT
  S = np.fft.fft(X)
  # Calculate Modulus
  M = np.abs(S[:int(n/2)])
  T = np.sqrt(n * np.log(1 / (1 - threshold)))
  N0 = threshold * n / 2
  N1 = len(np.where(M < T)[0])
  d = (N1 - N0) / np.sqrt(n * (1-threshold) * threshold / 4)
  # Compute P-value
  return erfc(np.abs(d) / np.sqrt(2))
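Before feeding in the NIST sample data, here is a quick sanity check of my own (not from NIST): a strictly alternating sequence concentrates all of its spectral energy at a single frequency, so the observed peak counts deviate sharply from the random expectation and the test should reject. The function is repeated so the snippet runs standalone:

```python
import numpy as np
from scipy.special import erfc

def DFTTest(x, threshold=0.95):
  # Same implementation as above, repeated so this snippet is standalone
  n = len(x)
  X = np.where(x > 0, 1, -1)
  S = np.fft.fft(X)
  M = np.abs(S[:int(n / 2)])
  T = np.sqrt(n * np.log(1 / (1 - threshold)))
  N0 = threshold * n / 2
  N1 = len(np.where(M < T)[0])
  d = (N1 - N0) / np.sqrt(n * (1 - threshold) * threshold / 4)
  return erfc(np.abs(d) / np.sqrt(2))

# Deterministic, perfectly periodic "price changes": +1, -1, +1, -1, ...
periodic = np.tile([1, -1], 500)
p = DFTTest(periodic)
print(f"p = {p:.2e}")  # effectively zero -> reject randomness
```

All of the energy lands at the (excluded) midpoint frequency, so every retained peak sits below T, N_1 far exceeds N_0, and the P-value collapses toward zero.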

NIST gives us some sample data to test our implementation here too.

# Test sequence from NIST
eps = '110010010000111111011010101000100010000101101000110000'  # sequence truncated here; see the NIST doc for the full input
x = np.array([int(i) for i in eps])
p = DFTTest(x)
H0 = p > 0.01
print("DFT Test\n"+"-"*78)
if H0:
  print(f"Fail to reject the Null Hypothesis (p={p:.3f}) -> random sequence")
else:
  print(f"Reject the Null Hypothesis (p={p:.3f}) -> non-random sequence.")
DFT Test
Fail to reject the Null Hypothesis (p=0.646) -> random sequence

Just as in the NIST documentation, we fail to reject the null hypothesis.

Binary Matrix Rank Test

We’ll choose one last test out of the test suite – the Binary Matrix Rank Test.


1. Divide the sequence into 32 by 32 blocks. We’ll have N total blocks to work with, discarding any data that doesn’t fit neatly into our 32×32 blocks. Each block is a matrix consisting of our ordered data. A quick example helps illustrate: say we have a set of 10 binary data points, X = [0, 0, 0, 1, 1, 0, 1, 0, 1, 0], and we use 2×2 matrices (to make it easy) instead of 32×32. We divide this data into two blocks and discard the last two data points. Our two blocks (B_1 and B_2) then look like:

B_1 = \begin{bmatrix}0 & 0 \\0 & 1\end{bmatrix} B_2 = \begin{bmatrix}1 & 0 \\1 & 0\end{bmatrix}

2. We determine the rank of each binary matrix. If you’re not familiar with the procedure, check out this notebook here for a great explanation. In Python, we can simply use the np.linalg.matrix_rank() function to compute it quickly.

3. Now that we have the ranks, we’re going to count the number of full rank matrices (if we have 32×32 matrices, then a full rank matrix has a rank of 32) and call this number F_m. Then we’ll get the number of matrices with rank one less than full rank which will be F_{m-1}. We’ll use N to denote the total number of matrices we have.

4. Now, we compute the Chi-squared value for our data with the following equation:

\chi^2 = \frac{(F_m-0.2888N)^2}{0.2888N} + \frac{(F_{m-1} - 0.5776N)^2}{0.5776N} + \frac{(N - F_m - F_{m-1} - 0.1336N)^2}{0.1336N}
5. Calculate the P-value using the incomplete gamma function Q\big(1, \frac{\chi^2}{2} \big):

P = Q\bigg(1, \frac{\chi^2}{2}\bigg) = \frac{1}{\Gamma(1)} \int_{\chi^2/2}^{\infty} e^{-t} \, dt = e^{-\chi^2/2}

Scipy makes this last bit easy with a simple function call to scipy.special.gammaincc().

Don’t be intimidated by this! It’s actually straightforward to implement.

from scipy.special import gammaincc

def binMatrixRankTest(x, M=32):
  X = np.where(x > 0, 1, 0)
  n = len(X)
  N = np.floor(n / M**2).astype(int)
  # Create blocks
  B = X[:N * M**2].reshape(N, M, M)
  ranks = np.array([np.linalg.matrix_rank(b) for b in B])
  F_m = len(np.where(ranks==M)[0])
  F_m1 = len(np.where(ranks==M - 1)[0])
  chi_sq = (F_m - 0.2888 * N) ** 2 / (0.2888 * N) \
    + (F_m1 - 0.5776 * N) ** 2 / (0.5776 * N) \
    + (N - F_m - F_m1 - 0.1336 * N) ** 2 / (0.1336 * N)
  return gammaincc(1, chi_sq / 2)
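A degenerate input gives us an easy sanity check of our own (not from NIST): an all-zero sequence yields all-zero matrices of rank 0, so F_m = F_{m-1} = 0 and the test should flag non-randomness even with just two 32×32 blocks. The function is repeated so the snippet runs standalone:

```python
import numpy as np
from scipy.special import gammaincc

def binMatrixRankTest(x, M=32):
  # Same implementation as above, repeated so this snippet is standalone
  X = np.where(x > 0, 1, 0)
  n = len(X)
  N = np.floor(n / M**2).astype(int)
  B = X[:N * M**2].reshape(N, M, M)
  ranks = np.array([np.linalg.matrix_rank(b) for b in B])
  F_m = len(np.where(ranks == M)[0])
  F_m1 = len(np.where(ranks == M - 1)[0])
  chi_sq = (F_m - 0.2888 * N) ** 2 / (0.2888 * N) \
    + (F_m1 - 0.5776 * N) ** 2 / (0.5776 * N) \
    + (N - F_m - F_m1 - 0.1336 * N) ** 2 / (0.1336 * N)
  return gammaincc(1, chi_sq / 2)

flat = np.zeros(2 * 32 * 32)  # two full 32x32 blocks, all zeros
p = binMatrixRankTest(flat)
print(f"p = {p:.5f}")  # well below 0.01 -> reject randomness
```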

If our P-value is less than our threshold, then we have a non-random sequence. Let’s test it with the simple example given in the NIST documentation to ensure we implemented things correctly:

eps = '01011001001010101101'
X = np.array([int(i) for i in eps])
p = binMatrixRankTest(X, M=3)
H0 = p > 0.01
print("Binary Matrix Rank Test\n"+"-"*78)
if H0:
  print(f"Fail to reject the Null Hypothesis (p={p:.3f}) -> random sequence")
else:
  print(f"Reject the Null Hypothesis (p={p:.3f}) -> non-random sequence.")
Binary Matrix Rank Test
Fail to reject the Null Hypothesis (p=0.742) -> random sequence

And it works! Note that in this example we have a much smaller data set, so we set M=3 for 9-element matrices. This test is also very data hungry: NIST recommends at least 38 matrices to test. If we’re using 32×32 matrices, that means we’ll need 38 × 32 × 32 = 38,912 data points. That’s roughly 156 years of daily price data!

Only the oldest companies and commodities are going to have that kind of data available (and not likely for free). We’ll press on with this test anyway, but take the results with a grain of salt because we’re violating the data recommendations.

Testing the RWH

With our tests in place, we can get some actual market data and see how well the RWH holds up. To do this properly, we’re going to need a lot of data, so I picked out some indices with a long history, a few old and important commodities, some of the oldest stocks out there, a few currency pairs, and Bitcoin just because.

Data from:

  • Dow Jones
  • S&P 500
  • Gold
  • WTI crude oil
  • GBP/USD
  • Bitcoin

One more thing to note: we also want to run this against a baseline. For each test, I’ll benchmark the results against NumPy’s binomial sampling algorithm, which should have a high degree of randomness.

I relied only on free sources so you can replicate this too, but more and better data is going to be found in paid subscriptions. I have defined a data_catalogue as a dictionary below which will contain symbols, data sources, and the like so our code knows where to go to get the data.

data_catalogue = {
    'DJIA': {
        'source': 'csv',
        'symbol': 'DJIA',
        'url': '^dji&i=d'  # URL truncated in the original
    },
    'S&P500': {
        'source': 'csv',
        'symbol': 'SPX',
        'url': '^spx&i=d'  # URL truncated in the original
    },
    'WTI': {
        'source': 'yahoo',
        'symbol': 'CL=F'
    },
    'Gold': {
        'source': 'yahoo',
        'symbol': 'GC=F'
    },
    'GBP': {
        'source': 'yahoo',
        'symbol': 'GBPUSD=X'
    },
    'BTC': {
        'source': 'yahoo',
        'symbol': 'BTC-USD'
    }
}

Now we’ll tie all of this together into a TestBench class. This will take our data catalogue, reshape it, and run our tests. The results are going to be collected for analysis, and I wrote a helper function to organize it into a large, Pandas dataframe for easy viewing.

import pandas as pd
import pandas_datareader as pdr
import yfinance as yf
from datetime import datetime

class TestBench:

  data_catalogue = data_catalogue

  test_names = ['runs-test', 'dft-test', 'bmr-test']

  def __init__(self, p_threshold=0.05, seed=101, 
               dftThreshold=0.95, bmrRows=32):
    self.seed = seed
    self.p_threshold = p_threshold
    self.dftThreshold = dftThreshold
    self.bmrRows = bmrRows
    self.years = [1, 4, 7, 10]
    self.trading_days = 250
    self.instruments = list(self.data_catalogue.keys())

  def getData(self):
    self.data_dict = {}
    for instr in self.instruments:
      try:
        data = self._getData(instr)
      except Exception as e:
        print(f'Unable to load data for {instr}')
        continue
      self.data_dict[instr] = data.copy()
    self.data_dict['baseline'] = np.random.binomial(1, 0.5, 
      size=self.trading_days * max(self.years) * 10)

  def _getData(self, instr):
    source = self.data_catalogue[instr]['source']
    sym = self.data_catalogue[instr]['symbol']
    if source == 'yahoo':
      return self._getYFData(sym)
    elif source == 'csv':
      return self._getCSVData(self.data_catalogue[instr]['url'])
    elif source == 'fred':
      return self._getFREDData(sym)

  def _getCSVData(self, url):
    data = pd.read_csv(url)
    close_idx = [i 
      for i, j in enumerate(data.columns) if j.lower() == 'close']
    assert len(close_idx) == 1, f"Can't match column names.\n{data.columns}"
    try:
      std_data = self._standardizeData(data.iloc[:, close_idx[0]])
    except Exception as e:
      raise ValueError(f"{url}")
    return std_data

  def _getYFData(self, sym):
    yfObj = yf.Ticker(sym)
    data = yfObj.history(period='max')
    std_data = self._standardizeData(data)
    return std_data

  def _getFREDData(self, sym):
    data = pdr.DataReader(sym, 'fred')
    data.columns = ['Close']
    std_data = self._standardizeData(data)
    return std_data

  def _standardizeData(self, df):
    # Converts data from different sources into np.array of price changes
    try:
      return df['Close'].diff().dropna().values
    except KeyError:
      # Already a Series of prices
      return df.diff().dropna().values

  def runTests(self):
    self.test_results = {}
    for k, v in self.data_dict.items():
      self.test_results[k] = {}
      for t in self.years:
        self.test_results[k][t] = {}
        data = self._reshapeData(v, t)
        if data is None:
          # Insufficient data
          continue

        self.test_results[k][t]['runs-test'] = np.array(
          [self._runsTest(x) for x in data])
        self.test_results[k][t]['dft-test'] = np.array(
          [self._dftTest(x) for x in data])
        self.test_results[k][t]['bmr-test'] = np.array(
          [self._bmrTest(x) for x in data])

        print(f"Years = {t}\tSamples = {data.shape[0]}")

  def _reshapeData(self, X, years):
    d = int(self.trading_days * years) # Days per sample
    N = int(np.floor(X.shape[0] / d)) # Number of samples
    if N == 0:
      return None
    return X[-N*d:].reshape(N, -1)

  def _dftTest(self, data):
    return DFTTest(data, self.dftThreshold)

  def _runsTest(self, data):
    return RunsTest(data)

  def _bmrTest(self, data):
    return binMatrixRankTest(data, self.bmrRows)

  def tabulateResults(self):
    # Tabulate results
    table = pd.DataFrame()
    row = {}
    for k, v in self.test_results.items():
      row['Instrument'] = k
      for k1, v1 in v.items():
        row['Years'] = k1
        for k2, v2 in v1.items():
          pass_rate = sum(v2>self.p_threshold) / len(v2) * 100
          row['Test'] = k2
          row['Number of Samples'] = len(v2)
          row['Pass Rate'] = pass_rate
          row['Mean P-Value'] = v2.mean()
          row['Median P-Value'] = np.median(v2)
          table = pd.concat([table, pd.DataFrame(row, index=[0])])
    return table
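One detail worth illustrating from the class above: _reshapeData keeps only the most recent N × d observations and splits them into non-overlapping, equal-length samples, discarding the oldest leftover data. A standalone sketch of that slicing logic:

```python
import numpy as np

X = np.arange(10)  # stand-in for a series of daily price changes
d = 3              # days per sample
N = int(np.floor(X.shape[0] / d))  # number of full samples -> 3
samples = X[-N * d:].reshape(N, -1)
print(samples)
# The oldest observation (0) is dropped; each row is one sample:
# [[1 2 3]
#  [4 5 6]
#  [7 8 9]]
```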

We can initialize our test bench and call the getData() and runTests() methods to put it all together. The tabulateResults() method will give us a nice table for viewing.

When we run our tests, we print out the number of years and the number of full samples of data we have for each instrument. You’ll notice that for some of these (e.g. Bitcoin) we just don’t have much data to go on, but we’ll do our best with what we have.

tests = TestBench()
tests.getData()
tests.runTests()
Years = 1	Samples = 129
Years = 4	Samples = 32
Years = 7	Samples = 18
Years = 10	Samples = 12
Years = 1	Samples = 154
Years = 4	Samples = 38
Years = 7	Samples = 22
Years = 10	Samples = 15
Years = 1	Samples = 21
Years = 4	Samples = 5
Years = 7	Samples = 3
Years = 10	Samples = 2
Years = 1	Samples = 20
Years = 4	Samples = 5
Years = 7	Samples = 2
Years = 10	Samples = 2
Years = 1	Samples = 18
Years = 4	Samples = 4
Years = 7	Samples = 2
Years = 10	Samples = 1
Years = 1	Samples = 10
Years = 4	Samples = 2
Years = 7	Samples = 1
Years = 10	Samples = 1
Years = 1	Samples = 100
Years = 4	Samples = 25
Years = 7	Samples = 14
Years = 10	Samples = 10

We have 129 years of Dow Jones data, which gives us 12 ten-year samples, and 154 years for the S&P 500 (the index doesn’t go back that far, but our data source provides monthly data going back to 1789). This is in contrast to most of our other instruments, which have two decades of data or less.

To take a look at the results, we can run the tabulateResults() method, and do some pivoting to reshape the data frame for easier viewing.

table = tests.tabulateResults()
pivot = table.pivot_table(index=['Instrument', 'Years'], columns='Test')
samps = pivot['Number of Samples'].drop(['bmr-test', 'dft-test'], axis=1)
pivot.drop(['Number of Samples'], axis=1, inplace=True)
pivot['Number of Samples'] = samps

Let’s start with the baseline.

As expected, NumPy’s random number generator is pretty good, and it passes most of the tests without issue. The median P-values for the runs and DFT tests remain fairly high as well, although they are lower for the BMR test. Another thing to note: the 1- and 4-year BMR tests didn’t return any values because we couldn’t fill a single 32×32 matrix with such small sample sizes. Overall, the lack of data for the BMR test makes its results dubious (we could recalculate with a smaller matrix size, but we’d need to recalibrate the probabilities in the chi-squared formula for those dimensions).

The DFT test showed randomness for most cases in our test set. For what it’s worth, the P-values for our DFT tests of all sizes remained fairly high regardless of the sample size.

The runs test provides the most varied and interesting results.

import matplotlib.pyplot as plt

# Color map for the instruments (not defined in the original)
colors = plt.cm.tab10(np.linspace(0, 1, len(table['Instrument'].unique())))

plt.figure(figsize=(12, 8))
for i, instr in enumerate(tests.instruments):
  sub = table.loc[(table['Instrument']==instr) &
                  (table['Test']=='runs-test')]
  plt.plot(tests.years, sub['Pass Rate'], label=instr, 
           c=colors[i], marker='o')
plt.legend()
plt.xlabel('Years')
plt.ylabel('Pass Rate (%)')
plt.title('Runs Test Pass Rate for all Instruments')
plt.show()

The runs test tends to produce less random results as time goes on. The notable exception being our WTI data, which passes more tests for randomness over time. However, if we look at our P-values, we do see them falling towards 0 (recall, our null hypothesis is that these are random processes).

plt.figure(figsize=(12, 8))
for i, instr in enumerate(table['Instrument'].unique()):
  sub = table.loc[(table['Instrument']==instr) &
                  (table['Test']=='runs-test')]
  plt.plot(tests.years, sub['Median P-Value'], label=instr, 
           c=colors[i], marker='o')
plt.legend()
plt.xlabel('Years')
plt.ylabel('Median P-Value')
plt.title('Median P-Values for Runs Test for all Instruments')
plt.show()

We added the baseline to this plot to show that it remains high even as the time frame increases, whereas all other values become less random over time. We’re showing P-values here, which are the probabilities that the results are due to noise if the process we’re testing is random. In other words, the lower our values become, the less likely it is that we have a random process on our hands.

This downward sloping trend may provide evidence that supports the value of longer-term trading.

Jerry Parker, for example, has moved toward longer-term trend signals (e.g. >200 day breakouts) because the short term signals are no longer profitable in his system. Data is going to be limited, but it could be interesting to run this over multiple, overlapping samples as in a walk forward analysis to see if randomness in the past was lower during shorter time frames. Additionally, there are more statistical tests we could look at to try to tease this out.

Death of the Random Walk Hypothesis?

The evidence from these few tests is mixed. Some tests show randomness, others provide an element of predictability. Unfortunately, we can’t definitively say the RWH is dead (although I think it, and the theories it is based on, are more articles of academic faith than anything).

To improve our experiment we need more data and more tests. We also used a series of binary tests, although technically the RWH asserts that the changes in price are normally distributed, so statistical tests that look for these patterns could strengthen our methodology and lead to more robust conclusions.

If you’d like to see more of this, drop us a note and let us know what you think!

Why I Left Value Investing Behind

You have probably heard the old adage “buy low and sell high.”

That’s great, but the question is what is “low” and what is “high?”

To know the difference between high and low, you need an evaluation framework. Here we have two main camps — the fundamental analysis camp (also known as value investing methodology) and the quantitative analysis camp.

Both methodologies are used to grow portfolios. I started off in the fundamental value camp in my early 20s, studying the great value investors. I sorted through scores of annual reports, balance sheets, and income statements to build investment theses. Things worked out well, but blessed by excellent timing, I was more lucky than good. No matter how many reports I read or industries I analyzed, I couldn’t separate the emotional and subjective nature of this approach from my decision making. On top of that, value has continued to lag other approaches since the 2008 crash.

Over time, I began building algorithms to guide my trading decisions. The more progress I made the more I transitioned into the quantitative analysis camp. The result of this journey has become Raposa Technologies — a way to validate your trading strategies without any coding knowledge.

Fundamental Analysis: One Stock at a Time

Fundamental analysis seeks to find the “intrinsic value” of a security based on attributes like free cash flow, debt levels, book value, and so forth. To do it well — like Warren Buffett — you need to read a copious number of annual reports and quarterly earnings, and understand the drivers in the industry you’re investing in. You will become an expert on the companies you consider investing in. A core step of fundamental analysis is estimating all future cash flows and year-over-year profitability. To do this well, you must discount all future cash flows because money today is worth more than money 10 years from now.

So if you have the time to apply careful analysis to dozens of companies, read hundreds of reports, understand several industries, and carefully calculate future cash flows — you will have an estimate for the fundamental price of a security. If the market price is lower than what your fundamental analysis estimates, congratulations! You now have a good candidate to buy low while you wait for the market to realize the value of this stock and raise the price.

Value investors tend to get a lot of press (who hasn’t heard of Buffett?) because they can weave a narrative around a stock’s price journey. These narratives appeal to the emotional centers in our brains. Our brains are quick to use these stories as rationalization to support our gut feeling telling us to buy (or sell) a particular stock.

Your gut is quickly convinced by fundamental value narratives — particularly when they come from people who made fortunes riding these stocks to the top. Stories of double or triple digit returns from Amazon, Apple, Google, and even meme-stocks make it all too easy to believe the first fundamental narrative we hear.

During a bull market — it is easy to imagine the profits rolling in — but do not forget the emotional toll of holding names like Amazon through the long dark periods of doubt and uncertainty. You forget that their triumph wasn’t inevitable in the mid-2000s when the competitive landscape was forming. Could you have held on through the tech bust? What about the 2008 crash? Will you be confident in your fundamental analysis the morning you wake up to a 50–80% drop in your portfolio?


But that’s fine. It takes a LOT of research and emotional work to invest in stocks based on fundamental analysis — which is why Buffett himself recommends people just buy an index fund and let it ride.

Quantitative Analysis: Separate the Signal from the Noise

After years of trying to invest based on fundamentals, not only was I treading water trying to balance my time — but I came to the realization that most of my investment decisions boiled down to a gut feeling no matter how rational and logical I tried to be.

There are successful quants and successful value investors. But if you are reading this far, you are probably unimpressed with the prospect of spending hundreds of hours researching companies for your fundamental analysis spreadsheets. You want new strategies in your war chest.

Quants are unconcerned with the intrinsic value of a stock or security; we look at the statistical profile of its price. How is it correlated with other prices? How does volume impact it? Are there regular patterns in price that can be leveraged for profit? Once we find a pattern, we can design algorithms to automatically execute trades that will grow our portfolios over time.

These patterns often make no sense from a value perspective. Why would you buy a stock that appears to be incredibly overvalued? If you’re running a momentum or trend following strategy, you could find yourself buying near all-time highs. The value investor views that as insanity, but you do it because the algorithm shows that you have a potentially profitable pattern in your data set. That means you’re playing the odds that you can buy high and sell higher.

Do most investors have data-backed confidence for their trades? Or are decisions the results of a gut feeling? Considering most people run from math and code, I wager many trades are emotionally driven.

Break Away from the Narrative

Quantitative methods involve complicated statistical analysis, calculus, and machine learning. If you want to do it yourself, you're going to need to learn to code. The upside? Your algorithms work for you. They won't eliminate your emotions or the temptation to intervene, but the emotionless data will provide a beacon of rationality when FOMO or headline panic sets in.

For me, this was a big upside. I decided to apply my data science skills (the same skills I had honed during a PhD and applied every day in a 9–5 for many years) and found that the math and stats of the quant world suited me far better, and improved my returns.

I firmly planted myself in this camp and never looked back.

I realize, too, that these methods aren't easy, which is why I started Raposa: a no-code platform that enables investors to test and trade a variety of quantitative strategies. If you hate math and stats, then it's not for you. Otherwise, join the waitlist and sign up below.

Your Professors are Wrong: You Can Beat the Market

“I have noticed that everyone who ever told me that the markets are efficient is poor.”

Larry Hite

If you’ve had any academic training in economics, you have likely been told that nobody can “beat the market” because markets are efficient.

Right out of the gate your hopes and dreams of becoming the next Jim Simons or Warren Buffett are dashed by the Efficient Market Hypothesis (EMH). In fact, according to the EMH, their performance is the result of blind luck; if you have enough monkeys banging on typewriters, you will eventually produce some Shakespeare.

The EMH (or at least the strong version) states that all information has been priced into stocks and securities. In other words, there’s no way to consistently beat the broad market averages. Your best bet then is to passively invest in a low-cost index fund and just let the market do its thing.

One of the theory’s strongest proponents, Burton Malkiel, writes:

The theory holds that the market appears to adjust so quickly to information about individual stocks and the economy as a whole that no technique of selecting a portfolio…can consistently outperform a strategy of simply buying and holding a diversified group of securities…A blindfolded monkey throwing darts at a newspaper’s financial pages could select a portfolio that would do just as well as one carefully selected by the expert.

EMH and Random Walks

Proponents of the EMH argue that the market moves in a “random walk” that can be neatly described by equations borrowed from physics, i.e. Brownian motion. This follows from the EMH: if investors rationally price all information, both public and private, into their decisions, then new information – which is random and unforecastable by nature – is the only thing that can move prices. Thus, prices move randomly.
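The random-walk picture is easy to simulate. The sketch below generates a geometric random walk, the standard textbook model; the drift and volatility numbers are arbitrary illustration values, not estimates from any real market.

```python
import numpy as np

def random_walk_prices(s0=100.0, mu=0.0005, sigma=0.01, n=252, seed=42):
    """Simulate one year of daily prices as a geometric random walk:
    each day's log-return is an independent normal draw, so past
    prices carry no information about future moves."""
    rng = np.random.default_rng(seed)
    log_returns = rng.normal(mu, sigma, n)
    return s0 * np.exp(np.cumsum(log_returns))

prices = random_walk_prices()
print(len(prices), prices.min() > 0)  # 252 True
```

Under this model, no amount of staring at past prices helps you: every day's move is an independent draw.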

This assumption underlies most financial modeling and risk measurements, despite some spectacular failures.

The academic edifice of EMH seems daunting – Nobel Prizes have been bestowed on its developers! Despite this, the EMH remains a deeply flawed theory; thankfully so for intrepid investors seeking to beat the market.

Rational Expectations – Irrational Investors

A core problem with the EMH is that it relies on rational expectations theory, which holds that people are free from biases in their decision making and act as utility maximizers.

Unfortunately for the theory, investors are far from rational actors – take a minute or two to watch TikTok investors, or read some of the posts on just about any investing message board. To call some of these discussions “rational” stretches the meaning of the word beyond recognition.

Contra rational expectations, investors – who are just normal people after all – are fraught with cognitive biases. These biases can’t simply be assumed away to make the stock market math more manageable (Nobel Prizes have been awarded for work on these cognitive biases as well).

EMH: Simplifying Investor Decisions

If the EMH is correct, however, the current price is “correct” in the sense that all information is baked in and properly discounted. If an investor thinks a stock is undervalued (again, according to her cold, rational analysis), she will bid the price up to the point where she no longer considers it undervalued. Likewise, if an investor believes a stock is overvalued, he will push it down by selling or shorting shares until it is no longer overvalued.

There is some plausibility (and truth) to this story, but only partially, because it ignores the real constraints investors face.

Take, for example, someone who firmly believes that Tesla is massively overvalued – there seems to be no shortage of such short-selling bears. If they could really bid the price down accordingly, they would. However, there is far too much buying pressure on the other side of these trades for even large funds and groups of investors to move the price significantly lower.

To borrow another example from Bob Murphy: imagine you have a time machine, jump forward five years, and see that the best-performing stock over that period rose from $1 today to $500 five years from now. According to the EMH and rational expectations theory, you should value that stock at $500 today (ignoring discounting for simplicity’s sake) and buy every share until it reaches that price. You might leverage your home, max out your credit cards, get friends and family to invest, and throw every spare dollar into this stock – but how high could you really move it? If you’re like most people, you simply won’t have enough capital to significantly move the price on your own, despite your perfectly rational expectations.

Markets are not Random

Let’s pick on Prof. Malkiel again. In his famous book, he recounts an experiment in which he flipped a coin for his class to determine the daily price of a hypothetical security: heads yielded a slight rise, tails a slight fall. Over time this random process generated a price chart, which the good professor brought to a chartist. The chartist recommended buying and gave a strongly bullish forecast for the random stock. From this prognostication, Prof. Malkiel concluded that stocks are indistinguishable from random processes.

Is this a real or simulated stock?
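Malkiel's classroom experiment is trivial to reproduce. A sketch, with the starting price and step size chosen arbitrarily for illustration:

```python
import random

def coin_flip_chart(start=50.0, step=0.5, flips=250, seed=7):
    """Heads nudges the price up by `step`, tails nudges it down,
    recreating Malkiel's hand-generated random chart."""
    rng = random.Random(seed)
    prices = [start]
    for _ in range(flips):
        prices.append(prices[-1] + (step if rng.random() < 0.5 else -step))
    return prices

series = coin_flip_chart()
print(len(series))  # 251 points, ready to hand to a chartist
```

Plot the result and it will often look convincingly like a real stock to the naked eye, which is exactly the point of the anecdote.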

It’s a nice story, but Malkiel made a number of critical errors in reaching his conclusion. First, while the chartist was unable to visually differentiate between a random process and an actual stock, it could just as well be that the random pattern Malkiel created was an excellent forgery; repeating the test might have yielded different results. Second, even if the human eye cannot distinguish randomness from an actual time series of prices, that doesn’t mean a computer can’t. In fact, autocorrelation – the tendency for stocks to trend in a given direction over time – is a well-known phenomenon that cuts sharply against the EMH and the random walk hypothesis. Finally, even if stock prices stood up to strong tests of randomness, additional information (e.g. volume, earnings, or fundamental data) could still reveal important, non-random patterns in prices.
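As a sketch of what "a computer can" means here, the snippet below measures lag-1 autocorrelation, one of the simplest statistical tests separating a trending series from white noise. The series are synthetic examples, not market data.

```python
import numpy as np

def lag_autocorr(returns, lag=1):
    """Sample autocorrelation at the given lag: near zero for white
    noise, clearly nonzero for a series with serial dependence."""
    r = np.asarray(returns, dtype=float)
    r = r - r.mean()
    return float(np.dot(r[:-lag], r[lag:]) / np.dot(r, r))

rng = np.random.default_rng(0)
noise = rng.normal(0, 1, 5000)              # no memory at all
trending = noise + 0.5 * np.roll(noise, 1)  # built-in serial dependence
print(round(lag_autocorr(noise), 2))        # close to 0
print(round(lag_autocorr(trending), 2))     # clearly positive
```

A chartist's eye might fail to separate these two series, but a one-line statistic does not.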

In short, Malkiel’s test was too simple and he reached his conclusion far too hastily.

Too Many Outliers to Count

Perhaps one of the most difficult things to reconcile with the EMH worldview is the long list of outliers: both market events – typically crashes – and highly successful investors with long track records, neither of which should exist if the EMH were true.

There is the October 1987 crash, a single-day loss unlike anything the markets have seen before or since. There is the blow-up of Long-Term Capital Management (LTCM), a hedge fund whose partners included Nobel Prize winners, in a series of events that should never have happened if their theories were correct. My favorite example comes from the Great Financial Crisis, when the CFO of Goldman Sachs said the firm had experienced losses from a series of 25-sigma events!

If you don’t know how often a 25-sigma event should occur, we have a handy table available:

From How Unlucky is 25-Sigma?

The authors, Dowd et al., give some context for these numbers:

These numbers are on truly cosmological scales, and a natural comparison is with the number of particles in the Universe, which is believed to be between 1.0e+73 and 1.0e+85 (Clair, 2001). Thus, a 20-sigma event corresponds to an expected occurrence period measured in years that is 10 times larger than the higher of the estimates of the number of particles in the Universe. For its part, a 25-sigma event corresponds to an expected occurrence period that is equal to the higher of these estimates but with the decimal point moved 52 places to the left!

It seems safe to say that if you experience a few events, each with a 1-in-1.3019 × 10^135 chance of occurring, your model might have some faulty assumptions.
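You can verify the order of magnitude yourself. Under a normal distribution, the one-sided tail probability of a k-sigma daily move implies an expected waiting time between such events; the sketch below assumes roughly 252 trading days per year.

```python
from math import erfc, sqrt

def years_between_sigma_events(k, trading_days_per_year=252):
    """Expected years between one-sided k-sigma daily moves,
    assuming daily returns are normally distributed."""
    tail = 0.5 * erfc(k / sqrt(2))  # P(daily move > k sigma)
    return 1.0 / (tail * trading_days_per_year)

# A 3-sigma day comes along every few years; a 25-sigma day should
# happen roughly once every 10^135 years under normality.
print(f"{years_between_sigma_events(25):.3e}")
```

When your model says an event is rarer than the age of the universe raised to an absurd power, and it happens several times in one week, the problem is the model.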

On the other hand, there is a long list of investors who have consistently beaten the market year in and year out. Yes, some of this is certainly luck, but the longer the track record, the less likely luck is at play. Luck simply cannot account for everything. If the EMH were true, these people should not exist.

Inefficiencies are Available to You!

Thankfully for us, markets do exhibit inefficiencies. These enable savvy investors to outperform the market. Unfortunately, this is easier said than done.

We try to make it as easy as possible for investors by providing robust tools using high-quality data to allow you to develop your own quantitative strategies. You can design your signals, adjust your risk, and test your strategy to see how it performs in a variety of market environments. When you’re happy with the results, just deploy the system and you’re ready to trade it!

Check out our free demo here.