9/23/17

We live in interesting times.

Bovespa and China A50 have been the futures markets with the best momentum. I have missed this trade due to excess caution and the fact that my best models require data I cannot get for these markets. I’ve adjusted this and will trade these markets more in the future.

http://stockcharts.com/freecharts/perf.php?$SPX,$NDX,GLD,$INDU,IEF,$TBSP,$NIFTY,$NIKK,$FXT&n=200&O=011000

 

Gold and bonds are having a tough time of it, responding to the end of QE by the Fed. However, the Fed is only the fourth-largest central bank; China, the ECB, and Japan are larger by assets, and they are still printing money. I will not pretend to understand all of this. I follow markets like a remora shadowing a shark. I do not predict where the shark is going.

My current worries:

  1. Don the Con and Rocket-boy
  2. Market structure (VIX and SKEW)
  3. October

 

Have a great weekend.

AEI wishes upon a productivity star…

As much as I love this article, it is just an interesting way of saying maybe cool things will happen in the future.

Fresh off their failure to repeal Obamacare, Republicans are eager to pivot to tax reform. Of course they are. Tax reform is what they do. It’s been their policy safe space for 40 years. Targaryens ride dragons, Lannisters pay their debts, and Republicans cut taxes. 

But Republicans’ tax reform effort may collapse, too. The GOP has reached agreement on only broad goals. And the tax code is arguably just as complicated as the health-care system, if not more so. Both are full of economically thorny and politically unpalatable trade-offs. Republicans might have to settle for a temporary corporate tax cut, which might have little long-term impact on economic growth. Or maybe nothing.
And that might be okay — not optimal, but tolerable. After all, the U.S. is not in economic crisis. The current expansion is the third-longest ever, the economy grew at a solid-if-unspectacular 2.6 percent last quarter, and job gains continue to average nearly 200,000 a month. Policymakers don’t need to scramble to juice growth through quickie tax cuts that reduce marginal rates but also revenues.
Actually, Washington might not need to do much of anything for the economy to grow at 3 percent annually on a sustained basis — a stated GOP goal — versus the 2 percent average of the 2000s. One reason growth pessimists think the economy is stuck permanently in a low-gear New Normal is that productivity has been historically weak, both since the Great Recession and just before. If workers fail to become more productive, the economy and living standards will stagnate. And if America isn’t technologically innovative, workers won’t become more productive.
Yet America sure looks pretty innovative, at least if you pay attention to what’s happening in places like Silicon Valley, Seattle, and New York. Indeed, there’s reason to believe official stats are underestimating tech-driven innovation. As my AEI colleague Stephen Oliner, the Federal Reserve’s David Byrne, and Daniel Sichel of Wellesley College write in their new paper, “Prices of high-tech products, mismeasurement, and pace of innovation”: “We believe that these faster rates of growth in high-tech could presage a second wave of higher productivity growth spurred by the digital revolution.”
Here’s the problem: The IT revolution seems confined to a narrow group of superstar tech firms and isn’t spreading throughout corporate America. For innovation to lift productivity and the broader economy, new technologies must be broadly and efficiently used. We must spread the innovation wealth.
Of course, that still might happen. Economic history suggests such “diffusion” typically takes time. It took decades for factories to figure out how to use electric dynamos rather than steam. Likewise, economists in the 1980s wondered why the arrival of PCs wasn’t transforming firms — until that 1990s productivity boom happened.
And in the same way, “the rapid innovation and robust investment of recent years will eventually have an impact, but it could take some time for the next wave of productivity growth to become visible at the aggregate level,” concludes a new Peterson Institute paper, “The Case for an American Productivity Revival.” A similar argument is made in “The Coming Productivity Boom” by the Progressive Policy Institute’s Michael Mandel and AEI Fellow Bret Swanson: “The 10-year productivity drought is almost over. The next waves of the information revolution — where we connect the physical world and infuse it with intelligence — are beginning to emerge.”
Some researchers think the widespread and innovative use of big data, AI, and robotics in areas such as health care, education, and the service sector could eventually boost productivity growth high enough that overall 3 percent growth is doable. And this tech wave may be unstoppable as long as government doesn’t do something profoundly dumb such as banning or taxing new technologies. Instead, policymakers should be trying to hasten, enhance, and spread this transformation through a variety of public policies, such as making it easier for global tech talent to work in America, reducing regulatory barriers to the adoption of new technologies, boosting competition, and more generously funding science research.
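The arithmetic behind that 3 percent goal is worth a quick check. A simple compounding sketch (the index base and the 30-year horizon are my illustrative choices, not the article's):

```python
# Compare long-run GDP levels under 2% vs. 3% sustained annual growth.
# Figures are illustrative, not forecasts.
def grow(start, rate, years):
    """Compound `start` at annual `rate` for `years` years."""
    return start * (1 + rate) ** years

base = 100.0   # index the economy at 100 today
years = 30

low = grow(base, 0.02, years)   # roughly 181
high = grow(base, 0.03, years)  # roughly 243

print(f"After {years} years: 2% growth -> {low:.0f}, 3% growth -> {high:.0f}")
```

A single extra point of growth, compounded over three decades, leaves the economy roughly a third larger, which is why the 2-versus-3-percent debate carries so much weight.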
More here

Future of jobs… it’s more complicated than you think…

https://youtu.be/7Pq-S557XQU
This video is the best argument I have seen, in an easily understood format, about the coming challenges of employability and future jobs with the rise of automation. I do have a problem with the use of horses as a metaphor for future human jobs, though: horses were never the creators of their own jobs, whereas we humans are the creators of ours. I do believe there will be tremendous deflation in the future. It is currently expressing itself as a kind of technological deflation that is hard to measure with the tools of an industrial age, such as GDP. This deflation will make many of our living expenses smaller. It will probably first be seen in a tremendous lowering of the cost of transportation, and it has already been seen, as mentioned above, in the decreasing cost of technological gadgets and services. That will leave housing and healthcare as our largest costs. A universal basic income may become increasingly likely. Indeed, we live in interesting times.

Quants still trust the yield-curve… duh…

More here

Financial markets are just as excited about artificial intelligence, machine learning, and big data as the technology industry. So-called “quants” have benefitted from this enthusiasm, with money from investors steadily flowing into their algorithm-driven funds.

Quant investing spans a vast range of strategies, but one of the most talked-about methods is using artificial intelligence to sift through vast amounts of market data to uncover signals that humans can’t see. The idea is to use advanced mathematics and computing power, rather than traditional research and intuition, to gain an edge in the market.

Take PhaseCapital in Boston. Its chief investment officer has a PhD from Oxford with a focus on numerical optimization, artificial intelligence, and neural networks. Michael DePalma, its CEO, previously helped run quantitative investing strategies at AllianceBernstein.

And yet, for all its sophistication, the hedge fund says it still trusts an old-school indicator that investors have tracked for decades: the yield curve. When the spread between short- and long-term interest rates fell recently, the firm slashed its market exposure, DePalma told the Financial Times (paywall).

DePalma later told Quartz that yield-curve analysis—such as the observation that short-term interest rates rising relative to long-term rates may signal the economy is sputtering—remains useful for helping assess risk. The slide-rule era tool is based on widely available bond price information, and economists have been documenting its predictive powers since the 1960s.
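The kind of rule DePalma describes can be sketched in a few lines. This is a minimal illustration of a yield-curve risk signal; the rates and the caution threshold are hypothetical, not PhaseCapital's actual rule:

```python
# Toy yield-curve risk signal: trim exposure as the curve flattens,
# cut it when short rates rise above long rates (inversion).
# Threshold values are hypothetical, for illustration only.
def curve_spread(short_rate, long_rate):
    """Spread between long- and short-term yields, in percentage points."""
    return long_rate - short_rate

def exposure_signal(spread, caution_level=0.5):
    """Map the spread to a crude risk stance."""
    if spread <= 0:                # inversion: classic recession warning
        return "cut exposure"
    elif spread < caution_level:   # flattening curve: caution
        return "reduce exposure"
    return "normal exposure"

# Example: 2-year at 1.45%, 10-year at 2.25% -> spread of 0.80
print(exposure_signal(curve_spread(1.45, 2.25)))  # normal exposure
print(exposure_signal(curve_spread(2.10, 2.25)))  # reduce exposure
print(exposure_signal(curve_spread(2.50, 2.25)))  # cut exposure
```

The appeal of the indicator is exactly this simplicity: two widely available bond yields and a subtraction, no machine learning required.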

While there’s no guarantee that massive data sets crunched by algorithms will surface any sensible investing strategies, the yield curve is a tried and true measure. Yet the buzz around quants is so intense that investment managers may feel pressured to adopt some sort of algorithmic strategy, or else risk raising less money, DePalma said. Computer-driven hedge funds have doubled their assets under management, to $932 billion, following eight straight years of inflows since 2009, according to data-tracking firm HFR.

age of machine learning

This is a quite interesting article, and I agree that we are entering the age of machine learning. However, the argument is structured as though the other ages have ended. They most certainly have not. Indeed, much of humanity is still in the age of faith, and this is not an altogether bad thing. We are still in the industrial age, just in certain sectors of the economy. The technology age is not gone but still grinding ahead. History is ultimately a layer cake; on top of this large cake we now add machine learning. This is very important when valuing companies: when a company gets a value or multiple assigned to it from one age, it is almost impossible to move the valuation forward. Think, for example, of IBM.

https://venturebeat.com/2017/06/25/the-information-age-is-over-welcome-to-the-machine-learning-age/

I first used a computer to do real work in 1985.

I was in college in the Twin Cities, and I remember using the DOS version of Word and later upgrading to the first version of Windows. People used to scoff at the massive gray machines in the computer lab, but secretly they suspected something was happening.

It was. You could say the information age started in 1965, when Gordon Moore formulated Moore’s Law (a prediction that transistor counts would double every year, later revised to every 18 months). It was all about the escalation of computing power, and he was right about the coming revolution. Some would argue the information age started long before then, when electricity replaced steam power. Or maybe it was when the library system in the U.S. started to expand in the ’30s.

Who knows? My theory is it started when everyone had access to information on a personal computer. That was essentially what happened for me around 1985 — and a bit before that in high school. (Insert your own theory here about the Apple II ushering in the information age in 1977. I’d argue that was a little too much of a hobbyist machine.)

We can agree on one thing. We know that information is everywhere. That’s a given. Now, prepare for another shift.

In their book Machine, Platform, Crowd: Harnessing Our Digital Future, economic gurus Andrew McAfee and Erik Brynjolfsson suggest that we’re now in the “machine learning” age. They point to another momentous occasion that might be as significant as Moore’s Law. In March of last year, an AI finally beat a world champion player in Go, winning four out of five games.

Of course, pinpointing the start of the machine learning age is difficult. Beating Go was a milestone, but my adult kids have been relying on GPS in their phones for years. They don’t know how to read normal maps, and if they didn’t have a phone, they would get lost. They are already relying on a “machine” that essentially replaces human reasoning. I haven’t looked up showtimes for a movie theater in a browser for several years now. I leave that to Siri on my iPhone. I’ve been using an Amazon Echo speaker to control the thermostat in my home since 2015.

In their book, McAfee and Brynjolfsson make an interesting point about this radical shift. For anyone working in the field of artificial intelligence, we know that this will be a crowdsourced endeavor. It’s more than creating an account on Kickstarter. AI comes alive when it has access to the data generated by thousands or millions of users. The more data it has the better it will be. To beat the Go champion, Google DeepMind used a database of actual human-to-human games. AI cannot exist without crowdsourced data. We see this with chatbots and voicebots. The best bots know how to adapt to the user, how to use previous discussions as the basis for improved AI.

Even the term “machine learning” has crowdsourcing implications. The machine learns from the crowd, typically by gathering data. We are currently seeing this play out more vibrantly with autonomous cars than any other machine learning paradigm. Cars analyze thousands of data points using sensors that watch how people drive on the road. A Tesla Model S is constantly crowdsourcing. Now that GM is testing the self-driving Bolt on real roads, it’s clear the entire project is a way to make sure the cars understand all of the real-world variables.

The irony here? The machine age is still human-powered. In the book, the authors explain why the transition from steam power to electric power took a long time. People scoffed at the idea of using electric motors in place of a complex system of gears and pulleys. Not everyone was on board. Not everyone saw the value. As we experiment with AI, test and retest the algorithms, and deploy bots into the home and workplace, it’s important to always keep in mind that the machines will only improve as the crowdsourced data improves.

We’re still in full control. For now.

CRISPR and the asymmetry of biotech investing…

The current price spurt in biotechnology leads me to think that something is brewing in the research. Biotech is a sector with tremendous asymmetric information. The market will move before the news makes sense… this story isn’t the mover, of course, just another CRISPR story. But something is happening. I’m watching and waiting, once more on the wrong side of the asymmetrical info…

Firefly Gene Illuminates Ability of Optimized CRISPR-Cpf1 to Efficiently Edit Human Genome – Scicasts
https://apple.news/AVDguv3zBMy-5B3xidbUq0g

Over the last five years, the CRISPR gene editing system has revolutionized microbiology and renewed hopes that genetic engineering might eventually become a useful treatment for disease. But time has revealed the technology’s limitations. For one, gene therapy currently requires using a viral shell to serve as the delivery package for the therapeutic genetic material. The CRISPR molecule is simply too large to fit with multiple guide RNAs into the most popular and useful viral packaging system.

The new study from Farzan and colleagues helps solve this problem by letting scientists package multiple guide RNAs.

This advance could be important if gene therapy is to treat diseases such as hepatitis B, Farzan said. After infection, hepatitis B DNA sits in liver cells, slowly directing the production of new viruses, ultimately leading to liver damage, cirrhosis and even cancer. The improved CRISPR-Cpf1 system, with its ability to ‘multiplex,’ could more efficiently digest the viral DNA, before the liver is irrevocably damaged, he said.

“Efficiency is important. If you modify 25 cells in the liver, it is meaningless. But if you modify half the cells in the liver, that is powerful,” Farzan said. “There are other good cases—say muscular dystrophy—where if you can repair the gene in enough muscle cells, you can restore the muscle function.”

Two types of these molecular scissors are now being widely used for gene editing purposes: Cas9 and Cpf1. Farzan said he focused on Cpf1 because it is more precise in mammalian cells. The Cpf1 molecule they studied was sourced from two types of bacteria, Lachnospiraceae bacterium and Acidaminococcus sp., whose activity has been previously studied in E. coli. A key property of these molecules is that they are able to grab their guide RNAs out of a long string of such RNA; but it was not clear that this would work with RNA produced from mammalian cells. Guocai tested this idea by editing a firefly bioluminescence gene into the cell’s chromosome. The modified CRISPR-Cpf1 system worked as anticipated.

“This means we can use simpler delivery systems for directing the CRISPR effector protein plus guide RNAs,” Farzan said. “It’s going to make the CRISPR process more efficient for a variety of applications.”

Looking forward, Farzan said the Cpf1 protein needs to be more broadly understood so that its utility in delivering gene therapy vectors can be further expanded.

China and the blockchain…

This story is important and illustrates the advantage a slightly more totalitarian society has over a democracy: it can quickly push innovations rather than waiting for market forces. However, this comes at the cost of a more brittle and less resilient civil society, which depends entirely upon the relative wisdom and judgment of a small number of leaders. Think of it as an experiment on Plato’s idea of the greatest leader being a philosopher with power. Secondarily, after the risk of poor leaders exercising bad judgment, you have the chronic problem of misallocation of capital. This is something that Plato did not think of or understand, but the Western philosopher-economist Adam Smith perceived: the market is a much better allocator of capital than any manifestation of central planning. Nevertheless, watching the speed with which China pursues any technical innovation is impressive.

https://www.technologyreview.com/s/608088/chinas-central-bank-has-begun-cautiously-testing-a-digital-currency/

China’s central bank is testing a prototype digital currency with mock transactions between it and some of the country’s commercial banks.

Speeches and research papers from officials at the People’s Bank of China show that the bank’s strategy is to introduce the digital currency alongside China’s renminbi. But there is currently no timetable for this, and the bank seems to be proceeding cautiously.

Nonetheless the test is a significant step. It shows that China is seriously exploring the technical, logistical, and economic challenges involved in deploying digital money, something that could ultimately have broad implications for its economy and for the global financial system.

A digital fiat currency—one backed by the central bank and with the same legal status as a banknote—would lower the cost of financial transactions, thereby helping to make financial services more widely available. This could be especially significant in China, where millions of people still lack access to conventional banks. A digital currency should also be cheaper to operate, and ought to reduce fraud and counterfeiting.

Even more significantly, a digital currency would give the Chinese government greater oversight of digital transactions, which are already booming. And by making transactions more traceable, this could also help reduce corruption, which is a key government priority. Such a currency could even offer real-time economic insights, which would be enormously valuable to policymakers. And finally, it might facilitate cross-border transactions, as well as the use of the renminbi outside of China because the currency would be so easy to obtain.
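The traceability point is easy to illustrate with a toy hash-chained ledger, the basic structure underlying blockchains: each entry commits to the one before it, so history cannot be altered without breaking every later hash. This is an illustration only; the article does not describe the PBOC prototype's actual design:

```python
# Toy hash-chained ledger: each record stores the hash of the previous
# record, so tampering with any earlier entry invalidates the chain.
import hashlib
import json

def add_entry(chain, payer, payee, amount):
    """Append a transaction that commits to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"payer": payer, "payee": payee, "amount": amount, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)

def verify(chain):
    """Recompute every hash; any upstream tampering breaks the chain."""
    prev_hash = "0" * 64
    for record in chain:
        body = {k: record[k] for k in ("payer", "payee", "amount", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["prev"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

ledger = []
add_entry(ledger, "alice", "bob", 50)
add_entry(ledger, "bob", "carol", 20)
print(verify(ledger))        # True
ledger[0]["amount"] = 500    # tamper with history
print(verify(ledger))        # False
```

Every transaction is permanently linked to everything before it, which is precisely what makes such a currency auditable by the issuing authority.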

The flaw at the core of the EU

http://bilbo.economicoutlook.net/blog/?p=36270

Periodically, the European Commission puts out a new report or paper on how it is going to fix the unfixable mess that the Eurozone continues to wallow in. I say unfixable because all of the proposed reforms refuse to confront the original problem, which, at inception, the monetary union builders considered to be a desirable design feature – a lack of a federal fiscal capacity.

The conclusion that anyone who understands these matters would reach is that the differences between the European nations are so great that such a shift towards a true federation is highly unlikely despite the fact that the EMU could function effectively if the capacity was developed.

The other conclusion is that by failing to solve the inherent design problem either by introducing a full federal fiscal capacity or disbanding the monetary union, the European Commission is setting the Eurozone up for the next crisis.

While there is some growth now, after nearly a decade of malaise, the residual damage from the crisis remains. The private sector still has elevated levels of debt, the banking system is far from recovered (particularly in Italy), the property market is still depressed, governments have elevated levels of foreign-currency debt (euros), and the labour market remains depressed.

What that means is that when the next economic downturn comes – and economic cycles repeat – the crisis will be magnified and the mechanisms set in place as emergency measures to deal with the GFC will fail immediately.

It is only a matter of time.

That is enough for today!

 

arguing with the Fed…

http://bruegel.org/2017/06/the-feds-problem-with-inflation/

Larry Summers offers five reasons why he thinks the Fed may be making a mistake.

  1. The Fed is not credible with the markets at this point. Its dot plot predicts four rate increases over the next 18 months, compared with the market’s expectation of less than two. The markets do not share the Fed’s view that accelerating inflation is a major risk; indeed, they do not believe the Fed will attain its 2 percent inflation target for a long time to come.
  2. The Fed proclaims that it has a symmetric commitment to its 2 percent inflation target. After a full decade of sub-target inflation, policy should be set with a view to modestly raising inflation above target during a boom, with the expectation that inflation will decline during the next recession. A higher inflation target would entail easier policy than is now envisioned.
  3. Preemptive attacks on inflation, like preemptive attacks on countries, depend on the ability to judge threats accurately. The truth is we have little ability to judge when inflation will accelerate in a major way. The Phillips curve is at most barely present in data for the past 25 years.
  4. There is good reason to believe that a given level of rates is much less expansionary than it used to be, given the structural forces operating to raise saving propensities and reduce investment propensities.
  5. The Fed does not need to abandon its connection to price stability; it simply needs to assert that its objective is to assure that inflation averages 2 percent over long periods of time. It then needs to acknowledge that although inflation is persistent, it is very difficult to forecast, and signal that it will focus on inflation and inflation-expectations data, rather than measures of output and employment, in forecasting inflation.

With these principles internalized, the Fed would lower its interest-rate forecasts to those of the market and be more credible. It would allow inflation to get closer to target and give employment and output more room to run.
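Summers's "average 2 percent" point is simple arithmetic. A sketch of the catch-up logic, using a simple (non-compounded) average; the 1.5 percent figure for past inflation and the 20-year window are my assumptions for illustration:

```python
# If inflation ran below target for a stretch, what rate is needed over the
# remaining years so the whole horizon averages the target?
# Past-inflation and horizon figures below are illustrative assumptions.
def required_catchup(past_rate, past_years, target, horizon):
    """Rate needed over the remaining years so the simple average
    of inflation across `horizon` years equals `target`."""
    remaining = horizon - past_years
    return (target * horizon - past_rate * past_years) / remaining

# Ten years at 1.5% inside a 20-year window averaging 2% overall:
needed = required_catchup(past_rate=1.5, past_years=10, target=2.0, horizon=20)
print(f"{needed:.1f}% needed over the next decade")  # 2.5% needed over the next decade
```

In other words, a genuinely symmetric average-inflation commitment implies tolerating above-2-percent inflation for a while, which is easier policy than the dot plot suggests.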