Code Example: Extracting Historical Data from Interactive Brokers API

Early in my trading adventure I had need of data, and I knew my broker would provide it. I have since moved on to IQFeed, but starting with Interactive Brokers’ data feed was still a valuable first step.

I found their API clunky to use. It does not reliably respond to all requests. I’ve had to restart my data collection session several times to get it working, and there were no exceptions to indicate that there was a problem.

There are also known, official limitations to using their API for historical data. You will not get much depth of history: no expired futures data older than two years, no expired options data, and no expired spread data. Beyond what they simply do not have, you also have to throttle your requests, or your connection to the historical data server will be invalidated. If you request too much data it will throw a "pacing violation" and you will have to re-initiate your connection.

All that said, you can certainly get some data out of it, and if you’re already an Interactive Brokers customer you may as well start there. You can find my code on GitHub, here.
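If you do script against it, a client-side throttle saves a lot of restarts. Here's a minimal Python sketch of one; the 60-requests-per-10-minutes budget is my assumption based on IB's documented pacing rules, so check their current limitations page for the real numbers:

```python
import time
from collections import deque

class PacingThrottle:
    """Client-side throttle for historical data requests.

    The defaults are illustrative (roughly IB's documented
    60-historical-requests-per-10-minutes pacing limit); check the
    current API docs before relying on these numbers.
    """

    def __init__(self, max_requests=60, window_seconds=600,
                 clock=time.monotonic, sleep=time.sleep):
        self.max_requests = max_requests
        self.window = window_seconds
        self.clock = clock        # injectable for testing
        self.sleep = sleep
        self.stamps = deque()     # times of recent requests

    def wait_turn(self):
        """Block until another request can be sent, then record it."""
        now = self.clock()
        # Drop timestamps that have aged out of the window.
        while self.stamps and now - self.stamps[0] >= self.window:
            self.stamps.popleft()
        if len(self.stamps) >= self.max_requests:
            # Sleep until the oldest request leaves the window.
            self.sleep(self.window - (now - self.stamps[0]))
            now = self.clock()
            while self.stamps and now - self.stamps[0] >= self.window:
                self.stamps.popleft()
        self.stamps.append(now)
```

Call wait_turn() before each historical data request and it will sleep whenever you're about to run hot.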

Complement to Standard Move Risk Management: Volatility Bands

In my last post I shared my max-size risk management process: the standard move. There is a critical flaw in this process: it assumes that the log returns of the security are normally distributed. If that assumption were true, daily moves would stay within one standard move about 68% of the time. Unfortunately markets are not so orderly, and events that should be so rare they only happen once in a thousand years are actually pretty ordinary. You've probably heard these called black swans.

To protect myself against this, I intend to purchase options to hedge my position against moves that should be very large and infrequent. Let’s say a 3-standard deviation move, which if normality held would cover 99.7% of all daily moves. By hedging with options that are reasonably far away from the current price, I shouldn’t have to pay up too much, but there’s no free lunch. I’m also considering selling options to finance the protection, but I want to backtest that first!

To help determine the strike prices for this protection, I developed this indicator for MultiCharts. It is a modification of the standard Bollinger Bands Area code included with the software. Instead of centering the bands at the average price, it centers them at the prior close. It then draws the bands based on the standard deviation of log returns and the standard-deviations parameter. This gives an at-a-glance view of the "black swan" risk.
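If you don't use MultiCharts, the core calculation is easy to port. A rough Python sketch, under the assumption that the bands sit at prior_close * exp(±n·sigma) of log returns (the indicator's exact band math may differ):

```python
import math

def volatility_bands(closes, lookback=20, num_devs=3.0):
    """Bands centered at the prior close, width from the stdev of log returns.

    Returns (lower, upper) for the next bar. lookback and num_devs mirror
    the indicator's parameters; 20 is just a placeholder default.
    """
    # Log returns over the series, then keep the lookback window.
    rets = [math.log(b / a) for a, b in zip(closes, closes[1:])]
    window = rets[-lookback:]
    mean = sum(window) / len(window)
    # Sample standard deviation of the windowed log returns.
    sd = math.sqrt(sum((r - mean) ** 2 for r in window) / (len(window) - 1))
    prior_close = closes[-1]
    return (prior_close * math.exp(-num_devs * sd),
            prior_close * math.exp(num_devs * sd))
```

With num_devs=3 the bands sketch out the "should be very rare" zone the options hedge is meant to cover.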

You can download the indicator here.

Method to Sync MultiCharts Database + Studies Across Computers

I use my desktop and a laptop for MultiCharts. I usually use the laptop in offline mode for strategy development. Naturally it’s become annoying to pick up my progress from one computer on another.

I have a working solution using Dropbox and directory symbolic links (created with mklink /d; strictly speaking these are symlinks rather than junctions, but they serve the same purpose here). Please be warned: you risk your database and studies attempting this. Back them up! I had a little scare, but it worked out okay.

General procedure:

  1. Have Dropbox
  2. Shut down MultiCharts, QuoteManager, PowerLanguage Editor, etc.
  3. Go to C:\ProgramData\TS Support\Your MultiCharts Version\
  4. Back up “Databases” and “StudyServer”
  5. Rename Databases to Databases_x
  6. Rename StudyServer to StudyServer_x
  7. Create a folder in Dropbox for this purpose – I made one called “MultiCharts Sync”
  8. Create a folder called “Databases” and another called “StudyServer” in that Dropbox folder
  9. Copy the files from Databases_x to your new Databases Dropbox folder
  10. Copy the files from StudyServer_x to your new StudyServer Dropbox folder
  11. Open an elevated command prompt (“run as administrator”)
  12. Run these commands, substituting the actual paths for your MultiCharts version and Dropbox directory:

mklink /d "C:\ProgramData\TS Support\Your MultiCharts Version\Databases" "C:\Where\Dropbox\Is\Multicharts Sync\Databases"

mklink /d "C:\ProgramData\TS Support\Your MultiCharts Version\StudyServer" "C:\Where\Dropbox\Is\Multicharts Sync\StudyServer"

One last, important step: run MultiCharts as an Administrator on the "master" computer that has these files. Do the same on the other computers once Dropbox has finished syncing. Otherwise you'll get weird errors on your first run and things won't work right.

Dropbox is pretty smart about updating the database files. Despite giving very long estimated upload times after adding new data, it actually finishes the sync quickly because it only transfers the difference.

I’ve only been using this for a little bit, so no promises! It’s working okay for me so far.

Function + Indicator for Risk Management: Standard Move

I wrote this indicator for MultiCharts .NET, but you could implement it elsewhere pretty easily.

I size my positions based on realized volatility (standard deviation of log returns). If I have several positions on for the same reason, I’ll size them so that the dollar PnL moves from expected volatility “noise” are approximately equal. I’ve developed a little function and indicator that shows this expected dollar PnL move.

I’ve done some back-testing on this and have found that a 28-day lookback period is approximately optimal. I also like that it requires relatively little data, as most realized volatility calculations are based on a 252-day lookback period.

Here’s a short illustrative example:

As of this writing, QQQ’s 28-day realized volatility was ~0.52% and GLD’s was 0.62%. Does that mean a GLD position has been more volatile? In terms of percentage returns, yes. However, QQQ’s last print was $205.10/share and GLD’s was $137.86. Apply those realized volatilities to those prices and you get what I call a “standard move”.

QQQ: $205.1/share * 0.52% = $1.07
GLD: $137.86/share * 0.62% = $0.85

So the holder of one share of QQQ would actually have experienced more dollar PnL volatility than the holder of one share of GLD. My goal is to keep those moves equal; if for some crazy reason I want to be long both of these, I’d try to keep the standard move of both positions approximately equal. In this case, if you held QQQ and GLD with a 4:5 position ratio you’d be pretty close to equal.
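For the curious, the calculation behind those numbers is tiny. Here's a Python sketch of it (not my exported MultiCharts .NET code); lookback=28 matches the back-tested value above:

```python
import math

def standard_move(prices, lookback=28, multiplier=1.0):
    """Dollar "standard move": last price times the stdev of log returns.

    multiplier covers contract sizing, e.g. $100/point COMEX gold times
    three contracts = 300. For plain shares leave it at 1.
    """
    # Log returns, restricted to the lookback window.
    rets = [math.log(b / a) for a, b in zip(prices, prices[1:])][-lookback:]
    mean = sum(rets) / len(rets)
    # Sample standard deviation of log returns (realized volatility).
    sd = math.sqrt(sum((r - mean) ** 2 for r in rets) / (len(rets) - 1))
    return prices[-1] * sd * multiplier
```

To equal-vol weight two positions, size them so their standard moves come out approximately equal, as in the QQQ/GLD 4:5 example above.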

I use this indicator to keep an eye on my risk. It has a multiplier parameter which is useful if you are trading futures. Set it to your position size times the multiplier appropriate for your instrument. For example, a full point in COMEX gold futures is worth $100, so if you have three of those the multiplier should be set to $300.

Choosing 1 standard deviation is pretty arbitrary. It doesn’t matter at all when you’re using this approach to equal-vol weight different positions, as the “2” in a 2-standard-deviation calculation would cancel out anyway. Regardless, I have chosen for myself a 1-standard-deviation dollar move that I am comfortable with, based on emotional trading experience, and I use this indicator to keep an eye on it so I know when to pare down or when I can load up.

Here’s the exported MultiCharts .NET code.

Interactive Brokers Order Types and Algo Overview (For Ordinary Folks)

I’m an Interactive Brokers (IB) client that has been placing orders in the typical way since starting with them because I did not know any better. Then I saw this page on their site and it broke my mind.

This entry is a summary of the order types and algos that IB supports, written with my own education in mind. I’m skipping all of the types and products that I consider exotic or for people with significantly more capital than me.

This is a very high-level overview meant simply to tell you that a thing exists; it will not go into great depth.

First – How Do I Pay Commission In These Fancy Ways?

On mobile you’ll often have to use IBot. I’ve shied away from IBot the same way I shy away from chat bots on business websites, but this one is alright. You get an interactive menu form that you can quickly move through to do complex things. It’s not perfect: it wasn’t available to me today, because the servers were busy. It’s also not very smart, so I’ve been frustrated at times because it didn’t understand me.

On desktop you can find all of these in the Trader Workstation (TWS) platform, in an order ticket window, in the same dropdown where you’d find LMT or MKT. If you don’t find one there, it’s probably a fancybeast that has its own window. IBot is also available in TWS.

Accumulate Distribute

This is a monster of an algo, primarily designed for filling large orders without spooking the market. I was thinking of using it to get an average price over a given period of time, which is what caught my eye and started this investigation, but it turns out they have a simpler algo for that!


This order spreads out a total order size into chunks filled across time, getting you an average price over that period. I want this often! Having it dutifully execute the full order according to your specifications takes the emotion out of your fills, and I like that.
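The idea behind this kind of time-spreading can be illustrated with a toy order slicer. This is just the concept, not IB's implementation (a real algo would also randomize sizes and timing):

```python
def twap_slices(total_qty, duration_minutes, interval_minutes):
    """Split a parent order into evenly spaced child orders, TWAP-style.

    Returns a list of (minutes_from_start, quantity) pairs. Any remainder
    from uneven division is spread across the earliest slices.
    """
    n = duration_minutes // interval_minutes
    base, extra = divmod(total_qty, n)
    # The first `extra` slices carry one additional share/contract each.
    return [(i * interval_minutes, base + (1 if i < extra else 0))
            for i in range(n)]
```

For example, 1,000 shares over an hour in 5-minute intervals becomes 12 child orders of 83-84 shares each.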

Adaptive Algo

This algo attempts to trade within the spread of a security to get fills at better prices, and should be most helpful when the spread is wide. It will scan the bid/ask at the patience level you specify to try to get you the best fill. There is a risk that the market will move away from you while it waits. IB claims that using this leads to better fill prices.


Good til date/time and good after date/time. Exactly what they sound like.


A limit order that is left incompletely filled at the end of the trading session will be automatically converted into a Market-on-Close order, ensuring a complete fill.

Box Top

A market order that, if partially filled, becomes a limit order at the filled price. This avoids eating through a couple of good prices and then filling at the stink ones behind them.
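Here's a toy model of that box-top behavior, assuming a simple list of (price, size) ask levels. This illustrates the concept only, not IB's actual matching logic:

```python
def box_top_fill(qty, asks):
    """Toy box-top buy: take liquidity at the best ask only; any unfilled
    remainder rests as a limit order at that same fill price.

    asks is a list of (price, size) levels, best first.
    Returns (fills, resting_limit_order_or_None).
    """
    price, size = asks[0]
    filled = min(qty, size)
    remainder = qty - filled
    # Instead of walking deeper levels at worse prices, rest the remainder.
    resting = (price, remainder) if remainder else None
    return [(price, filled)], resting
```

So a 100-share order against 60 shares offered at $10.00 fills 60 there and leaves a 40-share limit at $10.00, rather than sweeping the $10.50 level.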

Installing TA-Lib for Python (via the ta-lib module, not SWIG) on x64

Following my completion of Andrew Ng’s Deep Learning Specialization, I am on a quest to create a model that predicts stock prices using a Recurrent Neural Network (RNN). I built my model using a recurrent architecture followed by a dense layer, with some batch normalization to speed up optimization. I used ReLU activations for their speed, though I had a brief detour exploring the swish activation function. My network is damn simple, and most of my work has been spent on data manipulation.

I trust my model and data pipeline now, because after a few epochs of training it tells me that it can’t guess the direction of the stock market better than a coin flip. It gets it wrong 50% of the time, right on the money! I don’t think the problem is the capacity of my model; I think it’s the lack of features I’m feeding into it. Right now it gets 14 days of open, high, low, close, and volume values, and I ask it to predict what the 3-day simple moving average close price will be in two days (with the simple moving average centered on those two days). I asked it to predict the average rather than the actual close price because I wanted to set it up for success by making it easier. I believe my model when it tells me that you can’t accurately forecast that value with these features.

My next step is additional features. Staying with my purely technical approach, I think Bollinger bands, 100-SMA, RSI, and MACD are valid things to try out. To that end, I need a library I can use to add that data with reasonable performance given the size of my data set (17 million examples of 14-day sequences). For Python, the choice is ta-lib.
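If you want to sanity-check ta-lib's output, or prototype features before it's installed, the basic indicators are easy to write by hand. A rough pure-Python sketch of SMA, EMA, and the MACD line (ta-lib's defaults and warm-up handling may differ):

```python
def sma(values, period):
    """Simple moving average; None until enough data has accumulated."""
    return [None if i + 1 < period
            else sum(values[i + 1 - period:i + 1]) / period
            for i in range(len(values))]

def ema(values, period):
    """Exponential moving average with the usual 2/(period+1) smoothing."""
    k = 2.0 / (period + 1)
    out = [values[0]]  # seed with the first value
    for v in values[1:]:
        out.append(out[-1] + k * (v - out[-1]))
    return out

def macd(values, fast=12, slow=26):
    """MACD line = fast EMA minus slow EMA (signal line omitted)."""
    return [a - b for a, b in zip(ema(values, fast), ema(values, slow))]
```

At 17 million examples these naive loops would be slow, which is exactly why ta-lib's C core is the right tool for the real pipeline.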

Now that I’m done rambling, here’s exactly what I had to do to get this installed on my x64 computer, which uses Anaconda to manage my packages. Anaconda did not come into play, honestly.

  • Get the library source; select the msvc version. As of this writing, the latest is 0.4.
  • Unzip the library so that c:\ta-lib has the readme.txt and the c, dotnet, etc directories. pip will look for this later.
  • Get build tools for Visual Studio:
  • When launching the installer, select Individual Components and pick the VC++ 2015.3 v14.00 (v140) toolset for desktop and the Windows Universal CRT SDK. You’ll also need a Windows SDK; I installed the latest Windows 10 SDK and the Windows 8.1 SDK.
  • Open a fresh command prompt (so that it has the new PATH environment with the build tools in it) and go to c:\ta-lib\c\make\cdr\win32\msvc
  • Execute “nmake”
  • pip install msgpack
  • %comspec% /k "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" amd64 8.1
  • pip install TA-Lib

If you’re lucky, smart, and beautiful, you’ll have TA-Lib installed and working. Try it out with the example code from the TA-Lib GitHub repo:

import numpy
import talib

close = numpy.random.random(100)  # 100 fake closing prices
output = talib.SMA(close)  # simple moving average (default 30-period)

Building an Ada Toolchain for AVR (Arduino) on Ubuntu 14.04 LTS x64

This is an adventure! I kept all of my stupid failures in here in case someone else is also lazy and stupid. Behold, the Internet age!

A post on reddit regarding the aerospace industry’s use of Ada got me interested in the language. I’ve programmed enough in C to know the dangers inherent in the freedom it gives you, and I still have a lot to learn about “undefined behaviour”. I like that Ada is a trusted language for embedded environments, so I thought I’d try it out on my Arduino (ATMEGA-328P).

The toolchain isn’t available in the base apt repositories, unfortunately. I found two posts online that both recommend building it yourself, to ensure you have the latest patches, and detail how to do it.

There’s this post by Tero Koskinen at the site which seems to be the primary reference for building a toolchain for Ada programming on AVR platforms. Tero’s post is a couple years out of date now though, so I’ll be building against the latest versions of everything involved.

Another post is this one by rolf_ebert on the AVR-ada sourceforge site. That post specifically targets Debian-based environments, which is useful for me. Again, it is quite out of date.

The version of gcc used in the second link is 4.7.0, whereas the latest available stable release is 4.9.1. The changelog is available here, and you can see that one change is that it defaults to Ada 2012 instead of 2005. The 4.8 changelog doesn’t reference Ada at all, and the 4.7 changelog mentions that a change was made that reduces debugger overhead and slightly reduces compilation time, by default. It looks like not much has changed, really. Regardless, I proceed in my blind march for progress!

I followed the build procedure for GCC to the letter for the second link, including the “export CC=gnatgcc” line at the top which is required to run the configure script.

The installation phase is where things went wrong. I thought I was a brilliant mastermind of the all-science and decided that the dot at the beginning of the commands executing the environment setup scripts was superfluous. It is not. Executing them directly does nothing; you have to “source” them. The dot is an alias for source. I tested it separately and learned that it was true. Yay, learning.

I renamed the directory from /opt/gnat-4.7 to /opt/gnat-4.9; I should have done that in the environment setup scripts. Following along with the second link, I replaced all references to 4.7 with 4.9. I also changed any references to i386-linux-gnu to my architecture of x86_64-linux-gnu, as reported by “dpkg-architecture -qDEB_HOST_MULTIARCH”.

I eventually had to edit the environment setup files to reflect my new directory; I should have done it before my first make install.

In my blind march for progress I downloaded the latest available binutils, but the latest patches from the avr-ada-code project are for 2.20.1. This is the point where I realized I had the same problem with gcc. I applied the 4.7.2 patches successfully on gcc 4.9.1 with no errors, but I am a little concerned at this point that I’m missing something. The binutils patches failed, and so I reverted to the latest version that had the patches, 2.20.1.

I’ve had to fix patch files in the past, but I decided against trying that here, as compilers and toolchains are still basically magic to me and I don’t want to introduce bugs.

I had these fun binutils make errors greet me halfway through the process:

../../../binutils-2.20.1/bfd/doc/bfd.texinfo:326: unknown command `colophon'
../../../binutils-2.20.1/bfd/doc/bfd.texinfo:337: unknown command `cygnus'
make[3]: *** [] Error 1

Here’s a thread that seems relevant, but it’s about a more recent version. I just went into the relevant file and added an extra @ sign next to @colophon and @cygnus (@@colophon, @@cygnus), as I saw in the patch. This IRC conversation seems to indicate that this is an issue with texinfo 5’s incompatibility with texinfo 4 targets. Yay.

I downgraded to texinfo 4 by blindly accepting the advice of this GitHub issue post. A little apt-get remove texinfo here and a little dpkg -i there and I was done with that bit. I ran “make install” again and it proceeded successfully. Awesome! In hindsight, my editing of the previous error was unnecessary; what I really needed to do was downgrade texinfo.

I had a build error when making gcc on file gcc-4.9.1/gcc/ada/switch-c.adb. The case/switch structure seemed to duplicate the ‘t’ switch, so I commented out the second one because it was shorter. Clearly, I am on a path of great success here. It compiled fine after that, but maybe I’m missing a command line argument. Whatever!

The next step for me was avr-libc, which has a slightly newer version available, 1.8.1 versus 1.8.0. Onward to progress!

On my first attempt I neglected to unset CC, CXX, and CPP. This gave me an error: “Wrong C compiler found; check the PATH!”. After actually following the instructions, it was smooth sailing for avr-libc.

For AVR-Ada I did not have gprbuild installed natively, which is required. I apt-get installed gprbuild and encountered a few errors on build:

no matching compiler found for --config=blablabla

After updating my PATH (which the instructions mention, but which I didn’t understand until now) with export PATH=$PATH:/opt/gnat-4.9/bin, it compiled without error. “make install” didn’t work after that, but that’s because I was supposed to run “make install_libs”. All built, all installed! Woot! I don’t seem to have any examples to try to build yet, but this post is long enough anyway!

Sadly, the ultimate result is that the code does not compile. Instead, I am greeted with the following:

$> make
avr-gnatmake -aL/opt/avrada/avr/lib/gnat/avr_lib/atmega328p/lib -XMCU=atmega328p -p -Pblink.gpr -XAVRADA_MAIN=blink_rel
avr-gnatmake: cannot find RTS rts/avr5
make: *** [blink_rel.elf] Error 4

I found this post on the avr-ada-devel mailing list by Rolf Ebert, who wrote the main document I’ve been following; it seems he’s hit the same problem.

Synology DS416j – From Stock to pycurl

I got a Synology DS416j NAS and wanted to use it for some basic network infrastructure tasks on top of data storage. One of these tasks involves updating a DNS record with my service provider, which requires a Python script that uses curl, and therefore pycurl. You can install Python with the package manager, but it doesn’t come with much. If you pip install pycurl, it’ll complain about curl-config not existing. To fix that, you need to go down a rabbit hole into a messy world of missing dependencies; proceed at your own discretion.

First, you’ll need the community package manager “ipkg”, which will let you install most of what you need. It’s called a bootstrapper, and there’s an easy installer available here.

Dump that on your NAS, go to the package manager, and “manually install” it. You’ll also need to enable SSH.

By default the new “ipkg” binaries and associated bits will not be in your PATH, so you’ll have to modify it to include them. I did this with a PATH=$PATH:/usr/opt in a .bashrc.

You can then “ipkg install” several packages as root. You’ll need a bunch:

  • busybox
  • binutils
  • diffutils
  • gawk
  • gcc
  • libc-dev
  • libcurl
  • libcurl-dev
  • make
  • openssl
  • openssl-dev
  • patch
  • sed

This is what I needed for pycurl, anyway.

Next, you need the Python headers, which you are not going to be able to get via ipkg. You have to go to and get the appropriate source tarball for the version you have (python --version). Dump that in /tmp, extract it, and run ./configure to produce pyconfig.h. I had to modify pyconfig.h so that Py_UNICODE_SIZE was 4, not 2, as it seems the binary I have installed was compiled that way. If you don’t change this you’ll get undefined references to a symbol “PyUnicodeUCS2_Decode”.
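A quick way to check which Unicode build your interpreter is before editing anything: a wide (UCS-4) Python 2 build reports sys.maxunicode as 0x10FFFF, while a narrow (UCS-2) build reports 0xFFFF (Python 3.3+ is always wide):

```python
import sys

# Wide (UCS-4) builds report 0x10FFFF; narrow (UCS-2) builds report 0xFFFF.
# A narrow build is the one whose extensions reference PyUnicodeUCS2_* symbols.
py_unicode_size = 4 if sys.maxunicode == 0x10FFFF else 2
print(py_unicode_size)
```

Whatever this prints is the Py_UNICODE_SIZE your extension modules need to be compiled against.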

Make a directory /opt/include/python, dump everything from your extracted tarball’s Include directory in there, and also put your modified pyconfig.h in there.

Now you can pip install pycurl, except that won’t work. Run it with “pip install --no-clean pycurl” so you can manually complete it. It’ll leave a directory in “/tmp/pip-build-something/pycurl”; go there. The build script is going to look for gcc in the wrong place. There’s probably a good way to fix that, but it was really quick to just symlink the wrong place to the right one:

ln -s /opt/bin/gcc /usr/local/arm-unknown-linux-gnueabi/bin/arm-unknown-linux-gnueabi-ccache-gcc

It’s also not going to know where to find those Python includes we just collected. There’s probably a proper way to do that too, but I just modified it so that:

self.include_dirs = ['/opt/include/python']

That line goes under the instantiation definition for the ExtensionConfiguration class.

Now, you can make and make install and you’ll have pycurl.

Aruba 620 Password Reset

If you forget your password to an Aruba 620, you can perform a password reset using the RS232 (serial) console port and some magic words.

Connect to the Aruba 620 using an RS232 DB9-to-RJ45 cable. This cable is pretty standard, and is also used for a lot of Cisco equipment.

The default serial settings are as follows:

9600 baud
8 data bits
no parity bit
1 stop bit
no flow control

At the “user:” prompt, enter “password”. As the password, enter “forgetme!”.

Type “enable” to enter a more privileged mode that will allow you to make changes.

Enter the system configuration menu with “configure terminal”. Then, you can use “mgmt-user admin PASSWORD” to set the password.

You’ve set the password, but it doesn’t actually matter yet. You have to write the new settings to make them stick. Use “write memory” to accomplish that.
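Put together, the whole console session looks roughly like this (the prompts are illustrative and yours will include your hostname; NEWPASSWORD is whatever you choose):

```
user: password
password: forgetme!
> enable
# configure terminal
(config) # mgmt-user admin NEWPASSWORD
(config) # end
# write memory
```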

That’s it!