Systems update…

Currently trading the Nasdaq, the S&P 500, the Nikkei, the euro, and oil. Although you would think the first three are highly correlated (they are), all three have systems that go long/short; it is quite common to be long one or two and short one or two of these markets. This provides a necessary hedging function.

I have no opinion about the market that I'm willing to trade on. I know it is expensive and will probably get more so. I suspect next year will bring at least one 10%+ decline. I'm ready to surf…

Trade update

I remain long DJIA futures, and I've been short oil recently. Also, yesterday was so overstretched to the buy side that I bought some puts as a cheap hedge. I do not understand what is driving the market, and unlike most people I will not pretend to. It is probably still a function of central bank liquidity. But, like a remora, I follow the shark; I do not try to predict it.

I am fully expecting a fear event soon. I welcome it. One should be suspicious of any market that only goes one way. But don't sell. Ride it. Get a sell process. It does not have to be sophisticated. Use a 10-month moving average if you have nothing else. And follow the shark. You do not process information better than the market. You are not smarter than the market. Practice humility and laugh at all these "gurus" online, who are so rich they need you to buy their ideas.
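The 10-month moving-average rule mentioned above can be sketched in a few lines. This is a minimal illustration, not the author's actual system: it assumes a simple list of month-end closing prices, and the sample prices are made up.

```python
def ten_month_ma_signal(monthly_closes):
    """Return 'long' if the latest month-end close is above its 10-month
    moving average, else 'cash'. Needs at least 10 month-end closes."""
    if len(monthly_closes) < 10:
        raise ValueError("need at least 10 month-end closes")
    ma10 = sum(monthly_closes[-10:]) / 10.0
    return "long" if monthly_closes[-1] > ma10 else "cash"

# Made-up example: a rising series stays invested.
prices = [100, 102, 101, 105, 107, 110, 108, 112, 115, 118, 121]
print(ten_month_ma_signal(prices))  # -> long
```

The point is not the exact lookback; it is having a mechanical exit so that the decision to sell is made by the process, not by fear in the moment.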

Understanding systems…

There is no perfect trading system.

I have developed, coded, and tested literally thousands of trading systems. I am not exaggerating. They are all imperfect. Now, I would always have agreed with that on an intellectual level. But I don't think I really "got" it until I had done the work of developing these systems and testing them through various market conditions.

Now I think of systems in terms of yield. A system is not unlike a bond that has a yield that streams off of the instrument itself.

If you prefer things more literal, imagine you build windmills. You can maximize the efficiency of your blade shape for different wind speeds and frequencies. But one blade will not be efficient in all conditions. Just as at a real wind farm, you optimize your blade shape for the prevailing wind conditions. If the winds are significantly above or below those conditions, it is often best to simply brake-stop the windmill.

System designers obsess over the shape of their blade, but they often do not think about measuring the conditions in which that blade has maximum efficiency.
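Measuring the conditions can be as simple as gauging the "wind speed" before engaging the blade. Here is a minimal sketch of that idea, assuming realized volatility as the regime gauge; the gauge choice and the thresholds are my illustration, not the author's method.

```python
import statistics

def realized_vol(returns):
    """Sample standard deviation of recent returns: our 'wind speed' gauge."""
    return statistics.stdev(returns)

def blade_engaged(returns, low=0.005, high=0.02):
    """Trade the system only when volatility is inside the band it was
    tuned for; outside the band, brake-stop the windmill (stand aside).
    The band edges here are arbitrary illustrations."""
    vol = realized_vol(returns)
    return low <= vol <= high

calm = [0.001, -0.001, 0.0005, -0.0008, 0.0012]   # too little wind
storm = [0.03, -0.04, 0.05, -0.035, 0.045]        # too much wind
normal = [0.01, -0.008, 0.012, -0.009, 0.011]     # prevailing conditions
print(blade_engaged(calm), blade_engaged(storm), blade_engaged(normal))
# -> False False True
```

The design point is that the regime check lives outside the system itself: the same blade can be switched off without retuning it.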

Happy trading.

9/23/17

We live in interesting times.

The Bovespa and the China A50 have been the futures markets with the best momentum. I have missed this trade due to excess caution and the fact that my best models require data I cannot get for these markets. I've adjusted this and will trade these markets more in the future.

http://stockcharts.com/freecharts/perf.php?$SPX,$NDX,GLD,$INDU,IEF,$TBSP,$NIFTY,$NIKK,$FXT&n=200&O=011000


Gold and bonds are having a tough time of it, responding to the end of QE by the Fed. However, the Fed is only the fourth-largest central bank: China, the ECB, and Japan are larger by assets, and they are still printing money. I will not pretend to understand all of this. I follow markets like a remora shadowing a shark. I do not predict where the shark is going.

My current worries:

  1. Don the Con and Rocket-boy
  2. Market structure (VIX and skew)
  3. October


Have a great weekend.

Serious thinking on AI


Difficult ethical questions have been raised this month after artificial intelligence (AI) was shown to be 91% accurate at guessing whether somebody was gay or straight from a photograph of their face. If a madman with a nuclear weapon is a 20th-century apocalypse plot, climate change and AI are the two blockbuster contenders of the 21st. Both play to the tantalisingly ancient theme of humanity's hubristic desire to be greater, bringing about its own downfall in the process. Recent political setbacks aside, the risks of climate change are no longer controversial. AI, on the other hand, still seems to be.

In Oxford, an ancient city of spires and philosophers, seemingly a holdout against technological advance, sits the Future of Humanity Institute. The institute is headed by Nick Bostrom, a philosopher who believes that the advent of artificial intelligence could well bring about the destruction of civilisation. Meanwhile, in the New World, the top 'schools' are turning overwhelmingly to the study and development of artificial intelligence. At Stanford University, about 90% of undergraduates now take a computer science course. In Silicon Valley, the religion is one of self-improvement; Optimization (with a 'z') is its creed. With a strong culture of obsessing over their own 'productivity', it's little wonder that the promises of AI, the ultimate optimiser, have such a powerful draw on so many brilliant brains on the West Coast.

Back home, Bostrom is a leading source of warnings against AI. His papers have titles like “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards”. It’s clear that he is a man not shy of staring back at the abyss, though I’m not entirely sure what ‘Related Hazards’ we need to be concerned about post-Human Extinction. He’s said that our work on artificial intelligence is “like children playing with a bomb”.

Bostrom extrapolates from the way that humans have dominated the world to highlight the risks of AI. Our brains have capabilities beyond those of the other animals on this planet, and that advantage alone has made us so overwhelmingly the dominant species, in a relatively tiny amount of time. The dawn of civilisation could be described as a biological singularity, an intelligence explosion. If an artificial brain could be made more powerful than our own, why should we not see a second 'intelligence explosion'? This brings up the technological singularity: a paradigm in which humans might come to have as little power over our lives as battery-farmed chickens do now over theirs. The difference, I suppose, is that chickens didn't inadvertently create humanity, so Bostrom sees a chance for us to control our creation: we choose how and when to turn it on.

However, AI need not be inherently malignant in order to destroy us. By way of illustration, one of Bostrom's fun apocalypse scenarios (another of his light reads is Global Catastrophic Risks) is that of the end of the world by a runaway artificial intelligence algorithm initially designed to help a paperclip factory manufacture as many paperclips as possible. Within seconds, the machine has quite logically reasoned that this would be best achieved by wiping out the human race in order to make a bit more space for paperclip manufacturing, and cheerfully, obediently embarked on its task.

My engineering degree at Oxford is definitely a bit backward. Most of the course seems not to have changed since Alan Turing cracked the Enigma Machine at Bletchley Park. My decision to focus my final year on AI thus makes me – I would like to think – a dangerous maverick. Probably, the professors discuss me in hushed tones, fear mixed equally with reverence. I’m actually extremely grateful for this environment.

Away from the fervent centre of the Religion of Optimisation, it's far easier to see the bigger picture without being blinded by the light of enthusiasm. The Laboratory of Applied Artificial Intelligence, which I have just joined, sits within the Oxford Robotics Institute. The important nuance in this hierarchy is that at Oxford, artificial intelligence is more a prosaic tool to be applied to an end than a quasi-religious holy grail in itself. Say, making better driverless cars. It is only in very specific, tailored ways like this that artificial intelligence is, and can be, currently used. This concrete embodiment of AI is called machine learning, and it is nothing more glamorous than particularly clever statistics run on relatively powerful computers.

It is these mundane algorithms that match online ads to their audience, determine your sexuality from a photo, get your Uber driver to you within two minutes, or even replace that Uber driver altogether. Long before Bostrom's artificial superintelligence surpasses the human brain and crushes us like ants, civilisation will be tested by the extreme turbulence in lifestyle and employment that will be brought by this far more mundane embodiment of computer intelligence. Besides a bit of philosophy and working out how to unplug a monster brain, we should be considering a far closer future, in which boring machines that can't even hold a conversation will nonetheless have put most of us out of work.

more here