Google Cofounder Sergey Brin Warns of AI’s Dark Side | WIRED

It is my current belief that AI is the next great capital cycle, similar to steam, electrical power, railroads, etc. It will play out over decades. But this is the play. Additionally, it may be the last great human invention.

Google cofounder calls advances in artificial intelligence “the most significant development in computing in my lifetime,” but warns of ethical concerns.
— Read on

Systems update…

Currently trading the Nasdaq, the S&P 500, the Nikkei, the euro, and oil. Although you would think the first three are highly correlated (they are), each has systems that go long/short; it is quite common to be long one or two and short one or two of these markets. This provides a necessary hedging function.
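A toy sketch of that hedging function, with hypothetical markets and one-contract position sizes (not my actual book): even when all three equity indexes are correlated, offsetting long/short positions keep net directional exposure small relative to gross exposure.

```python
# Hypothetical long/short positions across three correlated index futures.
# +1 = long one contract, -1 = short one contract.
positions = {
    "Nasdaq": +1,
    "SP500":  -1,
    "Nikkei": +1,
}

# Net exposure is what the correlated move hits; gross is total risk carried.
net_exposure = sum(positions.values())
gross_exposure = sum(abs(p) for p in positions.values())

print(net_exposure)    # 1
print(gross_exposure)  # 3
```

Being net long only one unit while carrying three units gross is the hedge: a broad equity selloff hurts far less than being long all three.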

I have no opinion about the market that I'm willing to trade upon. I know it is expensive and will probably get more so. I suspect next year will see at least one 10%+ decline. I'm ready to surf…


We live in interesting times.

Bovespa and China A50 have been the futures markets with the best momentum. I have missed this trade due to excess caution and the fact that my best models require data I cannot get for these markets. I've adjusted this and will trade these markets more in the future.


Gold and bonds are having a tough time of it, responding to the end of QE by the Fed. However, the Fed is the fourth largest central bank. China, ECB, and Japan are larger by assets. They are still printing money. I will not pretend to understand all of this. I follow markets like a remora shadowing a shark. I do not predict where the shark is going.

My current worries:

  1. Don the Con and Rocket-boy
  2. Market structure (vix and skew)
  3. October


Have a great weekend.

AEI wishes upon a productivity star…

As much as I love this article, it is just an interesting way of saying maybe cool things will happen in the future.

Fresh off their failure to repeal Obamacare, Republicans are eager to pivot to tax reform. Of course they are. Tax reform is what they do. It’s been their policy safe space for 40 years. Targaryens ride dragons, Lannisters pay their debts, and Republicans cut taxes. 

But Republicans’ tax reform effort may collapse, too. The GOP has reached agreement on only broad goals. And the tax code is arguably just as complicated as the health-care system, if not more so. Both are full of economically thorny and politically unpalatable trade-offs. Republicans might have to settle for a temporary corporate tax cut, which might have little long-term impact on economic growth. Or maybe nothing.
And that might be okay — not optimal, but tolerable. After all, the U.S. is not in economic crisis. The current expansion is the third-longest ever, the economy grew at a solid-if-unspectacular 2.6 percent last quarter, and job gains continue to average nearly 200,000 a month. Policymakers don’t need to scramble to juice growth through quickie tax cuts that reduce marginal rates but also revenues.
Actually, Washington might not need to do much of anything for the economy to grow at 3 percent annually on a sustained basis — a stated GOP goal — versus the 2 percent average of the 2000s. One reason growth pessimists think the economy is stuck permanently in a low-gear New Normal is that productivity has been historically weak, both since the Great Recession and just before. If workers fail to become more productive, the economy and living standards will stagnate. And if America isn’t technologically innovative, workers won’t become more productive.
Yet America sure looks pretty innovative, at least if you pay attention to what’s happening in places like Silicon Valley, Seattle, and New York. Indeed, there’s reason to believe official stats are underestimating tech-driven innovation. As my AEI colleague Stephen Oliner, the Federal Reserve’s David Byrne, and Daniel Sichel of Wellesley College write in their new paper, “Prices of high-tech products, mismeasurement, and pace of innovation”: “We believe that these faster rates of growth in high-tech could presage a second wave of higher productivity growth spurred by the digital revolution.”
Here’s the problem: The IT revolution seems confined to a narrow group of superstar tech firms and isn’t spreading throughout corporate America. For innovation to lift productivity and the broader economy, new technologies must be broadly and efficiently used. We must spread the innovation wealth.
Of course, that still might happen. Economic history suggests such “diffusion” typically takes time. It took decades for factories to figure out how to use electric dynamos rather than steam. Likewise, economists in the 1980s wondered why the arrival of PCs wasn’t transforming firms — until that 1990s productivity boom happened.
And in the same way, “the rapid innovation and robust investment of recent years will eventually have an impact, but it could take some time for the next wave of productivity growth to become visible at the aggregate level,” concludes a new Peterson Institute paper, “The Case for an American Productivity Revival.” A similar argument is made in “The Coming Productivity Boom” by the Progressive Policy Institute’s Michael Mandel and AEI Fellow Bret Swanson: “The 10-year productivity drought is almost over. The next waves of the information revolution — where we connect the physical world and infuse it with intelligence — are beginning to emerge.”
Some researchers think the widespread and innovative use of big data, AI, and robotics in areas such as health care, education, and the service sector could eventually boost productivity growth high enough that overall 3 percent growth is doable. And this tech wave may be unstoppable as long as government doesn’t do something profoundly dumb such as banning or taxing new technologies. Instead, policymakers should be trying to hasten, enhance, and spread this transformation through a variety of public policies, such as making it easier for global tech talent to work in America, reducing regulatory barriers to the adoption of new technologies, boosting competition, and more generously funding science research.
More here

Future of jobs… it’s more complicated than you think…
This video is the best argument I have seen, in an easily understood form, about the coming challenges of employability and future jobs with the rise of automation. I do have a problem with the use of horses as a metaphor for future human jobs, particularly because horses were never the creators of their own jobs, whereas we humans are the creators of ours. I do believe there will be tremendous deflation in the future, currently expressing itself as a kind of technological deflation that is hard to measure with the tools of an industrial era, such as GDP. This deflation will shrink many of our living expenses. It will probably first be seen in a dramatic lowering of the cost of transportation, and it has already been seen, as mentioned above, in the decreasing cost of technological gadgets and services. That will leave housing and healthcare as our largest costs. A universal basic income may become increasingly likely. Indeed, we live in interesting times.

Quants still trust the yield-curve… duh…

More here

Financial markets are just as excited about artificial intelligence, machine learning, and big data as the technology industry. So-called “quants” have benefitted from this enthusiasm, with money from investors steadily flowing into their algorithm-driven funds.

Quant investing spans a vast range of strategies, but one of the most talked-about methods is using artificial intelligence to sift through vast amounts of market data to uncover signals that humans can’t see. The idea is to use advanced mathematics and computing power, rather than traditional research and intuition, to gain an edge in the market.

Take PhaseCapital in Boston. Its chief investment officer has a PhD from Oxford with a focus on numerical optimization, artificial intelligence, and neural networks. Michael DePalma, its CEO, previously helped run quantitative investing strategies at AllianceBernstein.

And yet, for all its sophistication, the hedge fund says it still trusts an old-school indicator that investors have tracked for decades: the yield curve. When the spread between short- and long-term interest rates fell recently, the firm slashed its market exposure, DePalma told the Financial Times (paywall).

DePalma later told Quartz that yield-curve analysis—such as the observation that short-term interest rates rising relative to long-term rates may signal the economy is sputtering—remains useful for helping assess risk. The slide-rule era tool is based on widely available bond price information, and economists have been documenting its predictive powers since the 1960s.
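The yield-curve check DePalma describes can be sketched as a simple spread rule. The rates and the 0.5-point threshold below are illustrative assumptions for the example, not the firm's actual parameters.

```python
# Sketch of a yield-curve risk signal: a narrowing (or inverted) spread
# between long- and short-term rates is treated as a warning sign.
def curve_spread(long_rate: float, short_rate: float) -> float:
    """Spread in percentage points, e.g. 10-year minus 2-year yield."""
    return long_rate - short_rate

def risk_warning(spread: float, threshold: float = 0.5) -> bool:
    """Flag risk when the curve is flatter than the threshold."""
    return spread < threshold

print(risk_warning(curve_spread(long_rate=2.3, short_rate=1.3)))  # False: steep curve
print(risk_warning(curve_spread(long_rate=2.0, short_rate=1.8)))  # True: nearly flat
```

The appeal is exactly what the article notes: the inputs are widely available bond yields, and the rule is transparent enough to slot into any risk framework, quant or not.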

While there’s no guarantee that massive data sets crunched by algorithms will surface any sensible investing strategies, the yield curve is a tried and true measure. Yet the buzz around quants is so intense that investment managers may feel pressured to adopt some sort of algorithmic strategy, or else risk raising less money, DePalma said. Computer-driven hedge funds have doubled their assets under management, to $932 billion, following eight straight years of inflows since 2009, according to data-tracking firm HFR.

The age of machine learning…

This is a quite interesting article, and I agree that we are entering the age of machine learning. However, the argument is structured as though the other ages have ended. They most certainly have not. Indeed, much of humanity is still in the age of faith, and this is not an altogether bad thing. We are still in the industrial age, just in certain sectors of the economy. The technology age is not gone but still grinding ahead. History is ultimately a layer cake; on top of this large cake we now add machine learning. This is very important when valuing companies: when a company gets a value or multiple assigned to it from one age, it is almost impossible to move the valuation forward. Think, for example, of IBM.

I first used a computer to do real work in 1985.

I was in college in the Twin Cities, and I remember using the DOS version of Word and later upgrading to the first version of Windows. People used to scoff at the massive gray machines in the computer lab, but secretly they suspected something was happening.

It was. You could say the information age started in 1965 when Gordon Moore formulated Moore’s Law (a prediction that the number of transistors on a chip would double every year, later revised to every 18 months). It was all about escalating computing power, and he was right about the coming revolution. Some would argue the information age started long before then, when electricity replaced steam power. Or maybe it was when the library system in the U.S. began to expand in the 1930s.

Who knows? My theory is it started when everyone had access to information on a personal computer. That was essentially what happened for me around 1985 — and a bit before that in high school. (Insert your own theory here about the Apple II ushering in the information age in 1977. I’d argue that was a little too much of a hobbyist machine.)

We can agree on one thing. We know that information is everywhere. That’s a given. Now, prepare for another shift.

In their book Machine, Platform, Crowd: Harnessing Our Digital Future, economic gurus Andrew McAfee and Erik Brynjolfsson suggest that we’re now in the “machine learning” age. They point to another momentous occasion that might be as significant as Moore’s Law. In March of last year, an AI finally beat a world champion player in Go, winning three out of four games.

Of course, pinpointing the start of the machine learning age is difficult. Beating Go was a milestone, but my adult kids have been relying on GPS in their phones for years. They don’t know how to read normal maps, and if they didn’t have a phone, they would get lost. They are already relying on a “machine” that essentially replaces human reasoning. I haven’t looked up showtimes for a movie theater in a browser for several years now. I leave that to Siri on my iPhone. I’ve been using an Amazon Echo speaker to control the thermostat in my home since 2015.

In their book, McAfee and Brynjolfsson make an interesting point about this radical shift. For anyone working in the field of artificial intelligence, we know that this will be a crowdsourced endeavor. It’s more than creating an account on Kickstarter. AI comes alive when it has access to the data generated by thousands or millions of users. The more data it has the better it will be. To beat the Go champion, Google DeepMind used a database of actual human-to-human games. AI cannot exist without crowdsourced data. We see this with chatbots and voicebots. The best bots know how to adapt to the user, how to use previous discussions as the basis for improved AI.

Even the term “machine learning” has crowdsourcing implications. The machine learns from the crowd, typically by gathering data. We are currently seeing this play out more vividly with autonomous cars than with any other machine learning paradigm. Cars analyze thousands of data points using sensors that watch how people drive on the road. A Tesla Model S is constantly crowdsourcing. Now that GM is testing the self-driving Bolt on real roads, it’s clear the entire project is a way to make sure the cars understand all of the real-world variables.

The irony here? The machine age is still human-powered. In the book, the authors explain why the transition from steam power to electric power took a long time. People scoffed at the idea of using electric motors in place of a complex system of gears and pulleys. Not everyone was on board. Not everyone saw the value. As we experiment with AI, test and retest the algorithms, and deploy bots into the home and workplace, it’s important to always keep in mind that the machines will only improve as the crowdsourced data improves.

We’re still in full control. For now.