Market update…

Markets are currently being distorted by quarter-end re-balancing flows. However, the algos I rely upon do not know that, so they march forward like good soldiers.

I closed my hedge positions yesterday, ending the 10y and gold trades, both at losses. Remember, my hedges are discretionary; my core positions are algorithmic. The core positions are also programmed to trade the medium/long term, not the short term. I currently cannot compete with the intelligence of the short-term bots that the HFT firms are running. They are getting scarily good. One can avoid this fight by only trading longer term. My core position has been long /NQ and remains long /NQ, even though I have given back a third of total position profits. This is typical! It was never our money, not until we sell.

This is the summer scare, in my opinion. I’m expecting a rally into the end of the year. However, we haven’t seen a puke-day yet. So, probably more downside to go.

Note that the total option ratio is close to a sell signal on NDX (when the green line crosses firmly below the red). Not there yet.
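For readers who want to mechanize this, here is a minimal sketch of the crossover rule described above. The smoothing windows and the "firmly below" margin are my own illustrative assumptions, not the actual parameters behind the chart.

```python
import pandas as pd

def crossover_sell_signal(ratio: pd.Series, fast: int = 10, slow: int = 50,
                          margin: float = 0.02) -> pd.Series:
    """Flag days when the fast average (the 'green line') crosses
    firmly below the slow average (the 'red line').

    `margin` requires the cross to clear the slow line by 2%,
    filtering out marginal wiggles. All parameters are illustrative.
    """
    green = ratio.rolling(fast).mean()
    red = ratio.rolling(slow).mean()
    firmly_below = green < red * (1 - margin)          # firmly below today
    not_below_before = green.shift(1) >= red.shift(1)  # wasn't below yesterday
    return firmly_below & not_below_before
```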

Also, note that the early recession warning continues to climb. I believe we will have a recession in ’18 or ’19.

Look at gold versus real rates (TIPS). Gold seems oversold here, and I’ll get long again soon. I’m just waiting for my algo to say buy again. But even then, remember, it is a hedge, not a core position. If North Korea goes bang, owning gold will look like genius.
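One simple way to quantify "oversold versus real rates" is to measure how far gold has drifted from its usual relationship with TIPS yields. The sketch below is my own illustration of that idea, not my algo's actual logic; the window and the use of a rolling beta are assumptions.

```python
import pandas as pd

def gold_vs_real_rates_zscore(gold: pd.Series, tips_yield: pd.Series,
                              window: int = 252) -> pd.Series:
    """Rolling z-score of gold against its fitted relationship with
    real (TIPS) yields. Very negative values suggest gold is cheap
    relative to real rates; thresholds are illustrative."""
    df = pd.DataFrame({"gold": gold, "tips": tips_yield}).dropna()
    # Rolling beta of gold on real yields (gold usually moves inversely).
    beta = df["gold"].rolling(window).cov(df["tips"]) / df["tips"].rolling(window).var()
    residual = df["gold"] - beta * df["tips"]
    # Standardize the residual against its own recent history.
    return (residual - residual.rolling(window).mean()) / residual.rolling(window).std()
```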

If you are looking to put some slow money to work, healthcare and technology look good: one has momentum, the other is oversold. Vanguard has a great healthcare fund. I’ll be adding some tomorrow.

Finally, the current ten positions of the Momentum Model:

Happy July 4th!

Future of jobs… it’s more complicated than you think…

https://youtu.be/7Pq-S557XQU
This video is the best easily understood argument I have seen about the coming challenges of employability and future jobs with the rise of automation. I do have a problem with the use of horses as a metaphor for future human jobs, particularly because horses were never the creators of their own jobs, whereas we humans are the creators of ours. I do believe there will be tremendous deflation in the future. It is currently expressing itself as a kind of technological deflation, which is hard to measure using industrial-age tools such as GDP. This deflation will make many of our living expenses smaller. It will probably first be seen in a dramatic lowering of the cost of transportation, and it has already been seen, as mentioned above, in the decreasing cost of technological gadgets and services. That will leave housing and healthcare as our largest costs. A universal basic income may become increasingly likely. Indeed, we live in interesting times.

Atos weekend…

So I’ve been lucky enough to train at Atos in San Diego for the last few days. It’s one of jiujitsu’s centers of excellence. Some of the biggest champions in the world are either here or from here: Andre Galvao and his wife, Leo Vieira, Keenan Cornelius, the Mendes bros., and more. Also a friend of mine, Dominique Bell. I’ve been rolling with Dom for several days, and he is reminding me of the difference between good performance and world-class performance. It’s been fun. I’m feeling lucky at the moment: rolling with some of the best in the world, eating incredible Mexican food, and recovering on a balcony overlooking San Diego Bay.

Internet of things… more development…

Researchers at UC San Diego have developed a temperature sensor that runs on tiny amounts of power — just 113 picowatts, around 10 billion times less power than a watt. The sensor was described in a study recently published in Scientific Reports. “We’re building systems that have such low power requirements that they could potentially run for years on just a tiny battery,” Hui Wang, an author of the study, said in a statement.
The team created the device by reducing power in two areas. The first was the current source. To do that, they made use of a phenomenon that many researchers in their field are actually trying to get rid of. Transistors often have a gate with which they can stop the flow of electrons in a circuit, but transistors keep getting tinier and tinier. The smaller they get, the thinner the gate material becomes and electrons start to leak through it — a problem called “gate leakage.” Here, the leaked electrons are what’s powering the sensor. “Many researchers are trying to get rid of leakage current, but we are exploiting it to build an ultra-low power current source,” said Hui.
The researchers also reduced power in the way the sensor converts temperature to a digital readout. The result is a temperature sensor that uses 628 times less power than the current state-of-the-art sensors.
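The headline numbers are easy to sanity-check; a quick back-of-the-envelope in Python, using only the figures quoted above:

```python
# Sanity-checking the quoted figures (arithmetic only).
sensor_power_w = 113e-12             # 113 picowatts
print(1 / sensor_power_w)            # ~8.8e9: roughly 10 billion times less than a watt
print(628 * sensor_power_w)          # ~7.1e-8 W: implied power of prior state-of-the-art sensors
```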
Read more here

Noah effects…

This story illustrates the difference between good companies and great companies. However, in the future, as big data is integrated and the Internet of Things floods us with decision options, this kind of math will not simply be the province of gifted leaders or researchers; it will become the necessary toolkit for growth. It will be necessary…

In 1958, a brilliant young mathematician named Benoit Mandelbrot went to work as a researcher for IBM. His first assignment seemed like a straightforward problem, but turned out to be devilishly complex. He was tasked with figuring out how noise in communication lines arises and identifying some way of minimizing it.
His solution was simple but ingenious. He realized that there was not one type of effect at play but two. The first, which he called “Joseph effects,” after the biblical story about seven good years and seven bad years, was predictable. The second, which he termed “Noah effects”, was chaotic and unpredictable.
He soon found that these two effects were present not just in communication lines but in everything from the flooding of the Nile River to crashes in financial markets, and that they play havoc with our ability to see the future. Even more importantly, they can help us navigate an increasingly volatile, uncertain, and complex business environment and survive for the long term.
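To make the distinction concrete, here is a small simulation of the two effects as they are commonly modeled: a persistent, trend-like series for Joseph effects and a heavy-tailed jump series for Noah effects. This is my own illustrative sketch, not Mandelbrot's formulation.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Joseph effect: persistence. An AR(1) process with high autocorrelation
# produces long runs of "good years" and "bad years".
joseph = np.zeros(n)
for t in range(1, n):
    joseph[t] = 0.95 * joseph[t - 1] + rng.normal()

# Noah effect: wild, heavy-tailed jumps. Student-t increments with few
# degrees of freedom occasionally produce flood-sized moves.
noah = np.cumsum(rng.standard_t(df=2, size=n))

# The Joseph series drifts smoothly; the Noah series is dominated
# by a handful of extreme single-step jumps.
print(np.max(np.abs(np.diff(joseph))), np.max(np.abs(np.diff(noah))))
```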
How Google Leveraged Joseph Effects To Build The World’s Most Popular Email Service

By 2004, Google was already dominant in its category and hugely profitable. Handling almost 90% of all Internet searches, it had recently gone public at a valuation of over $20 billion (which was big money back then). After just five years in business, the company seemed unstoppable.
However, it had a problem. People would go to Google, find what they were searching for and leave. While that was great for users, the company recognized that it could benefit from having people stick around for a while. The question: How could it increase the time that people spent on its platform without undermining the core search business?
It found the answer in Gmail, a new email service that Google launched with an offer few could refuse: 1GB of storage. That didn’t just one-up the competition; it completely changed the game, delivering literally hundreds of times the 2MB-4MB that the market leaders, Hotmail and Yahoo, were offering at the time.
Google was able to leapfrog the competition because it understood Joseph effects. With very few email users at launch, offering 1GB cost Google far less than it would cost the incumbents, with their massive user bases, to match. At the same time, with storage costs decreasing rapidly, the company’s leaders could safely predict that by the time its user base grew, 1GB per user would be affordable.
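A back-of-the-envelope model shows why the bet was safe. All numbers below are invented purely to illustrate the predictable, Joseph-effect decline in storage costs; they are not historical figures.

```python
# Hypothetical illustration of the Joseph-effect bet behind Gmail's 1GB offer.
cost_per_gb = 5.0          # assumed $/GB in year 0 (illustrative)
incumbent_users = 100e6    # incumbents had huge user bases...
gmail_users = 1e6          # ...Gmail started tiny (invite-only)

print(f"incumbent cost to match today: ${cost_per_gb * incumbent_users / 1e6:,.0f}M")
print(f"Gmail cost at launch:          ${cost_per_gb * gmail_users / 1e6:,.0f}M")

# Storage prices were falling predictably (say ~40%/yr), so by the time
# Gmail's user base grew, a gigabyte per user would be cheap.
for year in range(1, 6):
    cost_per_gb *= 0.6
    print(f"year {year}: ${cost_per_gb:.2f}/GB")
```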
By understanding the predictable continuity of Joseph effects, Google had scored a coup.
Why Microsoft and IBM Are Still Thriving Today

Every enterprise is, essentially, a square-peg business waiting for a round-hole world. As much as you analyze and plan, every strategic decision is ultimately a coin flip. You do your best to narrow down the choices, but in the end you need to take a chance among several seemingly viable options.
Microsoft and IBM are primary examples of this principle. Both have been around for decades — over a century in IBM’s case — and have survived through multiple technology cycles. Most of their former competitors have either become irrelevant or gone out of business entirely, but these two still run profitable businesses with strong margins.
Perhaps not surprisingly, both have fallen prey to the discontinuities of Noah effects. Microsoft missed mobile — horribly — and more recently the shift from installed systems to the cloud has resulted in 20 straight quarters of revenue decline for IBM. Eventually, every business model fails.
Yet both are thriving in new technologies. Microsoft’s cloud business is growing 100% annually and IBM has leadership positions in both artificial intelligence and quantum computing.
By understanding the unpredictable discontinuities of Noah effects, Microsoft and IBM have managed to compete for the long term.
How Google Came To Embrace Noah Effects

As noted above, Google is a master of Joseph effects. For nearly two decades, it has dominated the search business and has leveraged its position to build other great businesses, like YouTube and the Android mobile operating system. It’s also incredibly profitable, with EBITDA over $30 billion and top line growth of more than 20%.
Yet its success in search is also its Achilles’ heel. Over 90% of the company’s revenues still come from advertising related to its core search business. Eventually, it will face the same problems that earlier tech giants like IBM and Microsoft did. What will become of Google when it can no longer make money on search?
Clearly, the company understands this and has embraced Noah effects. It regularly invites around 30 top researchers to spend a sabbatical year at Google, offering the world’s best technology and data sets for them to work with. It has also set up new organizational structures like its X division and Verily to pursue opportunities unrelated to its core business.
None of these bets have paid off yet. So from the perspective of Joseph effects, they don’t make much sense. In terms of Noah effects, though, it’s always better to build the ark before the storm.
Not All Who Wander Are Lost

In 1993, IBM successfully performed its now-famous quantum teleportation experiment. Scientifically, it was a triumph, helping to finally disprove one of Einstein’s last theories. It did little, however, to benefit IBM as a company, which at the time was near bankruptcy. Many believed IBM was a relic from an earlier age, and science experiments did little to change that perception.
Today, however, it’s beginning to pay off. As Moore’s Law nears its theoretical limits, the predictability of Joseph effects is giving way to the discontinuity of Noah effects, and the need for new computing architectures is becoming increasingly dire. So IBM’s recent announcement of a 17-qubit quantum computer makes that early work seem like a really smart bet.
And that’s the dilemma every business finds itself in. We are judged — by both customers and investors — by how we are able to handle Joseph effects. We need to accurately predict what customers want, in what quantity, and deliver it. Get that wrong and you will pay a steep price.
Over the long haul, however, Noah effects become predominant and we have to prepare for a future that is impossible to see. That’s why, in the final analysis, it’s more important to explore than predict. We never know exactly what the flood will look like, but we can be absolutely sure that it will come.

Link

Fixing CRISPR… this is a big deal…

Scientists have already learned how to use CRISPR to edit errors in almost any genome, and it’s these errors that can cause a wide range of diseases. Many forms of cancer, Huntington’s disease, and even HIV can be targeted using CRISPR. That being said, it’s not a perfect solution. Just as the autocorrect on your smartphone can cause you to send an unintentional and embarrassing text message, CRISPR can “correct” something that was actually right, and the consequences can make it a dangerous mistake: one that actually causes a disease as opposed to an embarrassing social gaffe.

The researchers developed a method for quickly testing a CRISPR molecule against a person’s entire genome, rather than only the target area, in order to predict other segments of DNA the tool might accidentally interact with. This new technique functions like an early warning system, giving doctors a chance to more closely tailor gene therapies to specific patients, while ensuring they are effective and safe.
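To give a flavor of what "testing against the entire genome" means computationally, here is a toy sketch of off-target search: scanning a sequence for sites that nearly match a guide sequence. Real tools, and the method in the study, are far more sophisticated; everything below is an illustrative assumption.

```python
def find_near_matches(genome: str, guide: str, max_mismatches: int = 2):
    """Toy off-target scan: report every position where the genome
    differs from the guide sequence by at most `max_mismatches`.
    Real off-target predictors also model bulges, PAM sites, and
    cutting efficiency; this is only the core idea."""
    k = len(guide)
    hits = []
    for i in range(len(genome) - k + 1):
        window = genome[i:i + k]
        mismatches = sum(a != b for a, b in zip(window, guide))
        if mismatches <= max_mismatches:
            hits.append((i, window, mismatches))
    return hits

# Tiny demo on a made-up sequence: finds the exact target and one near-miss.
print(find_near_matches("ACGTACGTTACGTACGAACG", "ACGTACGT", max_mismatches=1))
```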

Read more

IBM’s Artificial Brain Has Grown From 256 Neurons to 64 Million Neurons in 6 Years

More here
IBM and the US Air Force have built a brain-inspired supercomputing system with the equivalent of 64 million artificial neurons and 16 billion synapses. The system is designed to operate as close to a biological brain as possible.
The system contains 64 of IBM’s TrueNorth chips, fits inside a standard server rack, and will be able to scale to half a billion artificial neurons with its current architecture. The TrueNorth chips are different from CPUs in that each core can operate in parallel and without a clock. The system is inherently resilient: if one core stops working, the rest can continue operating without interruption.
Just like a biological brain, the system is designed for extreme energy efficiency: it requires only 10 watts to power all 64 million neurons and 16 billion synapses. The system isn’t yet as efficient as a biological system, but it is getting close; for comparison, the human brain contains 100 billion neurons and uses around 20 watts.
The ultra-low power usage will enable vastly more capable mobile artificial intelligence systems for use in self-driving cars, smartphones, and aircraft, which is the Air Force’s main interest. IBM has made impressive progress over the last six years with its TrueNorth chip architecture, far outstripping the pace of Moore’s Law.

At a lecture I attended last year, the head of IBM’s Brain-Inspired Computing division, Dharmendra Modha, spoke about his ultimate goal of a human-brain equivalent in a two-liter box. By 2020, Dr. Modha believes his team will have built a 10-billion-neuron system that can fit in a two-liter box and require only 20 watts to operate. That would give them a desktop system with roughly 10% of the compute power of the human brain that could easily run on a smartphone battery. Dr. Modha also said that in order to achieve this goal by 2020 he only needs 7nm chips, the fabs for which are already under construction and will be producing 7nm chips in 2018, two years ahead of Dr. Modha’s schedule. And as I’ve written about previously, IBM has demonstrated that 5nm chips are achievable.
Six years ago, IBM’s TrueNorth Neurosynaptic System contained only 256 neurons per system, and today it contains 64 million; that’s roughly an eightfold annual increase. If Dr. Modha’s team can maintain this rate of increase, they are well on their way to building a system with 10 billion artificial neurons by 2020, and a human-brain equivalent of 100 billion artificial neurons before 2025.
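The growth arithmetic is easy to verify, and to extrapolate under the (big) assumption that the eightfold annual pace simply continues:

```python
# Compound growth check on the quoted figures.
start, end, years = 256, 64e6, 6
annual_factor = (end / start) ** (1 / years)
print(f"annual growth factor: {annual_factor:.1f}x")   # ~7.9x, i.e. roughly eightfold

# Naive extrapolation, assuming the pace holds.
neurons, year = end, 2017
while neurons < 100e9:
    year += 1
    neurons *= annual_factor
    print(year, f"{neurons:,.0f}")   # crosses 10 billion around 2020, 100 billion around 2021
```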

Trade update…

So perhaps this is our summer scare… seems like it. Happy about it, even though it is costing money. If you believe in your process, then relax. The money in your account will fluctuate with markets. If you don’t need it tomorrow, relax. If you do need it tomorrow, you were silly to have risked it. Let price discovery be an ally, not an enemy… volatility is the price of performance in a central-bank-driven world. I know where I’ll sell, and I’ll tell you. We aren’t there yet.

Quants still trust the yield-curve… duh…

More here

Financial markets are just as excited about artificial intelligence, machine learning, and big data as the technology industry. So-called “quants” have benefitted from this enthusiasm, with money from investors steadily flowing into their algorithm-driven funds.

Quant investing spans a vast range of strategies, but one of the most talked-about methods is using artificial intelligence to sift through vast amounts of market data to uncover signals that humans can’t see. The idea is to use advanced mathematics and computing power, rather than traditional research and intuition, to gain an edge in the market.

Take PhaseCapital in Boston. Its chief investment officer has a PhD from Oxford with a focus on numerical optimization, artificial intelligence, and neural networks. Michael DePalma, its CEO, previously helped run quantitative investing strategies at AllianceBernstein.

And yet, for all its sophistication, the hedge fund says it still trusts an old-school indicator that investors have tracked for decades: the yield curve. When the spread between short- and long-term interest rates fell recently, the firm slashed its market exposure, DePalma told the Financial Times (paywall).

DePalma later told Quartz that yield-curve analysis, such as the observation that short-term interest rates rising relative to long-term rates may signal the economy is sputtering, remains useful for helping assess risk. The slide-rule-era tool is based on widely available bond price information, and economists have been documenting its predictive powers since the 1960s.

While there’s no guarantee that massive data sets crunched by algorithms will surface any sensible investing strategies, the yield curve is a tried-and-true measure. Yet the buzz around quants is so intense that investment managers may feel pressured to adopt some sort of algorithmic strategy, or else risk raising less money, DePalma said. Computer-driven hedge funds have doubled their assets under management, to $932 billion, following eight straight years of inflows since 2009, according to data-tracking firm HFR.
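The indicator itself takes only a few lines to compute. Here is a minimal sketch of the classic spread-and-inversion check; the choice of tenors is my assumption, as practitioners use various short/long pairs:

```python
import pandas as pd

def yield_curve_signal(short_rate: pd.Series, long_rate: pd.Series) -> pd.DataFrame:
    """Classic yield-curve indicator: a 10y-minus-2y style spread.
    A shrinking or negative spread (inversion) has historically
    preceded recessions, which is why a quant fund might cut
    exposure when it falls."""
    spread = long_rate - short_rate
    return pd.DataFrame({
        "spread": spread,
        "flattening": spread.diff() < 0,   # curve getting flatter
        "inverted": spread < 0,            # the classic recession warning
    })
```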

Air Force sees a neuromorphic future…

https://www.nextplatform.com/2017/06/26/u-s-military-sees-future-neuromorphic-computing/

The neuromorphic investment in military applications we first reported on in 2015 has come to fruition: the U.S. Air Force Research Laboratory (AFRL) and IBM are working together to deliver a 64-chip array based on the TrueNorth neuromorphic chip architecture. While large-scale computing applications of the technology are still on the horizon, AFRL sees a path to efficient embedded uses of the technology due to the size, weight, and power limitations of various robots, drones, and other devices.

“The scalable platform IBM is building for AFRL will feature an end-to-end software ecosystem designed to enable deep neural-network learning and information discovery. The 64-chip array’s advanced pattern recognition and sensory processing power will be the equivalent of 64 million neurons and 16 billion synapses, while the processor component will consume the energy equivalent of a dim light bulb – a mere 10 watts to power.”

The IBM TrueNorth Neurosynaptic System can efficiently convert data (such as images, video, audio and text) from multiple, distributed sensors into symbols in real time. AFRL will combine this “right-brain” perception capability of the system with the “left-brain” symbol-processing capabilities of conventional computer systems. The large scale of the system will enable both “data parallelism,” where multiple data sources can be run in parallel against the same neural network, and “model parallelism,” where independent neural networks form an ensemble that can be run in parallel on the same data.
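In conventional terms, the two modes differ in what gets replicated. A toy stand-in, my own illustration rather than AFRL's software, might look like this:

```python
# Toy contrast between the two modes described above. On TrueNorth these
# run on parallel neural cores; plain Python functions stand in here.
def network(x):
    """Stand-in for one neural network."""
    return x * 2

ensemble = [lambda x, k=k: x * k for k in (1, 2, 3)]  # stand-in for independent networks

# Data parallelism: many data sources, one network, evaluated side by side.
sources = [1, 2, 3, 4]
data_parallel = [network(x) for x in sources]

# Model parallelism: one data item, many independent networks as an ensemble.
item = 10
model_parallel = [net(item) for net in ensemble]

print(data_parallel, model_parallel)
```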

“AFRL was the earliest adopter of TrueNorth for converting data into decisions,” said Daniel S. Goddard, director, information directorate, U.S. Air Force Research Lab. “The new neurosynaptic system will be used to enable new computing capabilities important to AFRL’s mission to explore, prototype and demonstrate high-impact, game-changing technologies that enable the Air Force and the nation to maintain its superior technical advantage.”

“The evolution of the IBM TrueNorth Neurosynaptic System is a solid proof point in our quest to lead the industry in AI hardware innovation. Over the last six years, IBM has expanded the number of neurons per system from 256 to more than 64 million – an 800 percent annual increase over six years.” – Dharmendra S. Modha, IBM Fellow, chief scientist, brain-inspired computing, IBM Research – Almaden

The system fits in a 4U-high (7″) space in a standard server rack, and eight such systems will enable the unprecedented scale of 512 million neurons per rack. A single processor in the system consists of 5.4 billion transistors organized into 4,096 neural cores, creating an array of 1 million digital neurons that communicate with one another via 256 million electrical synapses. For the CIFAR-100 dataset, TrueNorth achieves near state-of-the-art accuracy while running at >1,500 frames/s and using 200 mW (effectively >7,000 frames/s per watt) – orders of magnitude lower speed and energy than a conventional computer running inference on the same neural network.
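The efficiency figure follows directly from the numbers quoted (my arithmetic below):

```python
# Frames-per-watt check on the quoted CIFAR-100 numbers.
frames_per_s = 1500
power_w = 0.200                  # 200 mW
print(frames_per_s / power_w)    # 7500 frames/s per watt, matching the ">7,000" figure
```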

The Air Force Research Lab might be eyeing larger implementations for more complex applications in the future. Christopher Carothers, director of Rensselaer Polytechnic Institute’s Center for Computational Innovations, described for The Next Platform in 2015 how TrueNorth is finding a new life as a lightweight snap-in on each node that can take in sensor data from the many components that are prone to failure inside, say, a dense 50,000-node supercomputer (like this one coming online in 2018 at Argonne National Lab) and alert administrators (and the scheduler) of potential failures. This can minimize downtime and, more importantly, allow the scheduler to route around where the possible failures lie, thus shutting down only part of a system rather than an entire rack.

A $1.3 million grant from the Air Force Research Laboratory will allow Carothers and his team to use TrueNorth as the basis for a neuromorphic processor that will be used to test large-scale cluster configurations and designs for future exascale-class systems, as well as to test how a neuromorphic processor would perform as a co-processor on a machine of that scale, managing a number of system elements, including component-failure prediction. Central to this research is the addition of new machine-learning algorithms that will help neuromorphic-processor-equipped systems not only track potential component problems via the vast array of device sensors, but learn from how these failures occur (for instance, by tracking “chatter” in these devices and recognizing that uptick in activity as indicative of certain elements weakening).
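To illustrate the "chatter" idea in conventional terms, here is a toy sketch of what such failure prediction might look like: flagging a component whose sensor event rate drifts well above its own baseline. The real system would learn this on neuromorphic hardware; everything here is a simplified, assumed stand-in.

```python
from collections import deque

def chatter_alarm(event_rates, baseline_window=50, threshold=3.0):
    """Toy failure predictor: flag time steps where a component's sensor
    event rate ('chatter') exceeds its rolling baseline mean by
    `threshold` standard deviations. All parameters are illustrative."""
    history = deque(maxlen=baseline_window)
    alarms = []
    for t, rate in enumerate(event_rates):
        if len(history) == baseline_window:
            mean = sum(history) / len(history)
            std = (sum((x - mean) ** 2 for x in history) / len(history)) ** 0.5
            if std > 0 and (rate - mean) / std > threshold:
                alarms.append(t)   # chatter uptick: possible weakening part
        history.append(rate)
    return alarms

# Usage: a flat baseline with a late burst of chatter triggers an alarm.
print(chatter_alarm([10] * 60 + [11, 10, 40, 45]))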

With actual vendor investments in neuromorphic computing, most notably from Intel and Qualcomm, we can expect more momentum around these devices in the year to come. However, as we have discussed with several of those we have interviewed on this topic over the last couple of years, building a programmable and functional software stack remains a challenge. No architecture can thrive without an ecosystem, and while AFRL’s has been custom-built for their applications running on the array, for market viability it could take years of software-stack innovation for these devices to break into the mainstream.