age of machine learning

This is a quite interesting article, and I agree that we are entering the age of machine learning. However, the argument is structured as though the other ages have ended. They most certainly have not. Indeed, much of humanity is still in the age of faith, and this is not an altogether bad thing. We are still in the industrial age … just in certain sectors of the economy. The information age is not gone either, but still grinding ahead. History is ultimately a layer cake … it is just that on top of this large cake we now add machine learning. This matters when valuing companies: once a company gets a value or multiple assigned to it from one age, it is almost impossible to move the valuation forward. Think, for example, of IBM.

https://venturebeat.com/2017/06/25/the-information-age-is-over-welcome-to-the-machine-learning-age/

I first used a computer to do real work in 1985.

I was in college in the Twin Cities, and I remember using the DOS version of Word and later upgrading to the first version of Windows. People used to scoff at the massive gray machines in the computer lab, but secretly they suspected something was happening.

It was. You could say the information age started in 1965, when Gordon Moore formulated Moore’s Law (a prediction that the number of transistors on a chip would double every year, a pace later revised to every 18 months). It was all about the escalation of computing power, and he was right about the coming revolution. Some would argue the information age started long before then, when electricity replaced steam power. Or maybe it was when the library system in the U.S. started to expand in the 1930s.
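
As a rough illustration of what that doubling cadence implies, here is a back-of-the-envelope sketch in Python. The 1971 Intel 4004 (about 2,300 transistors) is used purely as a starting point for scale, and the 18-month period is the popularized figure, so the later numbers run ahead of what real chips actually achieved.

```python
# Illustrative only: the compounding implied by a fixed doubling period.
# Starting point: the 1971 Intel 4004 at roughly 2,300 transistors.

def transistors(year, base_year=1971, base_count=2_300, doubling_years=1.5):
    """Project a transistor count assuming one doubling every `doubling_years`."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for y in (1971, 1985, 2000, 2017):
    print(y, f"{transistors(y):,.0f}")
```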

Who knows? My theory is it started when everyone had access to information on a personal computer. That was essentially what happened for me around 1985 — and a bit before that in high school. (Insert your own theory here about the Apple II ushering in the information age in 1977. I’d argue that was a little too much of a hobbyist machine.)

We can agree on one thing. We know that information is everywhere. That’s a given. Now, prepare for another shift.

In their book Machine, Platform, Crowd: Harnessing Our Digital Future, economic gurus Andrew McAfee and Erik Brynjolfsson suggest that we’re now in the “machine learning” age. They point to another momentous occasion that might be as significant as Moore’s Law. In March of last year, an AI finally beat a world champion Go player, winning four of the five games in the match.

Of course, pinpointing the start of the machine learning age is difficult. Beating Go was a milestone, but my adult kids have been relying on GPS in their phones for years. They don’t know how to read normal maps, and if they didn’t have a phone, they would get lost. They are already relying on a “machine” that essentially replaces human reasoning. I haven’t looked up showtimes for a movie theater in a browser for several years now. I leave that to Siri on my iPhone. I’ve been using an Amazon Echo speaker to control the thermostat in my home since 2015.

In their book, McAfee and Brynjolfsson make an interesting point about this radical shift. Anyone working in the field of artificial intelligence knows that this will be a crowdsourced endeavor. It’s more than creating an account on Kickstarter. AI comes alive when it has access to the data generated by thousands or millions of users. The more data it has, the better it will be. To beat the Go champion, Google DeepMind used a database of actual human-to-human games. AI cannot exist without crowdsourced data. We see this with chatbots and voicebots. The best bots know how to adapt to the user, how to use previous discussions as the basis for improved AI.

Even the term “machine learning” has crowdsourcing implications. The machine learns from the crowd, typically by gathering data. We are currently seeing this play out more vibrantly with autonomous cars than in any other machine learning domain. Cars analyze thousands of data points using sensors that watch how people drive on the road. A Tesla Model S is constantly crowdsourcing. Now that GM is testing the self-driving Bolt on real roads, it’s clear the entire project is a way to make sure the cars understand all of the real-world variables.

The irony here? The machine age is still human-powered. In the book, the authors explain why the transition from steam power to electric power took a long time. People scoffed at the idea of using electric motors in place of a complex system of gears and pulleys. Not everyone was on board. Not everyone saw the value. As we experiment with AI, test and retest the algorithms, and deploy bots into the home and workplace, it’s important to always keep in mind that the machines will only improve as the crowdsourced data improves.

We’re still in full control. For now.

Robo-economist… not kidding

http://www.belfasttelegraph.co.uk/business/news/pwc-predicts-roboeconomist-could-make-firm-most-accurate-forecaster-on-market-35863155.html

PwC is on the cusp of launching a robo-economist that could make the company the “most accurate” economic forecaster on the market.

The professional services firm has developed a form of artificial intelligence (AI) with a 92% strike rate when it comes to predicting UK gross domestic product (GDP) figures.

It discovered the AI’s “incredible accuracy” after testing to see if the machine could pinpoint historic GDP results without knowing the outcome.

But while Jonathan Gillham, PwC’s director of economics, joked that the AI had already started to supersede his job, the firm said there were no plans to replace staff with automation and the program would work alongside human economists.

He said: “We have been using an AI technique to forecast the UK economy and we will be launching that (…) in July.

“Each quarter, the Office for National Statistics publishes its estimate for GDP and we have been able to use an AI technology base to get that right 92% of the time for the last five years.

“We pretended five years ago what we would have forecast if we didn’t know the number, and the machine learning process that we have developed has predicted it with incredible accuracy.

“Whether that will hold up in July when we launch it, who knows, (…) but if it’s correct then it would make us the most accurate forecaster on the market.”

PwC said the programme still needed “input and judgement” from human economists to identify problems, select the best models and interpret the results.
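
The article doesn’t describe PwC’s actual model, but the validation idea in the quotes above — hindcast each historic quarter as if the outcome were unknown, then count how often the forecast lands close to the published figure — can be sketched minimally. Everything below (the placeholder model, the synthetic data, the 0.1-point tolerance) is hypothetical.

```python
import random

# Hypothetical sketch of the hindcast validation described above: forecast each
# quarter as if the outcome were unknown, then count how often the forecast
# falls within a tolerance of the published figure.

random.seed(0)
actual_gdp_growth = [round(random.uniform(0.0, 0.8), 1) for _ in range(20)]  # 5 years of quarters (synthetic)

def naive_forecast(history):
    """Placeholder model: predict the average of the last four known quarters."""
    window = history[-4:] if len(history) >= 4 else history
    return sum(window) / len(window) if window else 0.4

hits = 0
for i, outcome in enumerate(actual_gdp_growth):
    prediction = naive_forecast(actual_gdp_growth[:i])  # only data known before quarter i
    if abs(prediction - outcome) <= 0.1:                # "got it right" within tolerance
        hits += 1

print(f"strike rate: {hits / len(actual_gdp_growth):.0%}")
```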

AI checkup…

This conference adds much-needed perspective on the AI revolution and the current state of the art. Indeed, we are many breakthroughs away from artificial general intelligence. The current level of technology does not quite justify the current level of hype. However, my interest in AI stems from the fact that, of all the current narratives that can drive investors to put their money in a stock, AI and its many sub-fields (such as transportation as a service, big data, etc.) are by far the most beguiling story.

https://venturebeat.com/2017/06/23/ai-is-still-several-breakthroughs-away-from-reality/

While the growth of deep neural networks has helped propel the field of machine learning to new heights, there’s still a long road ahead when it comes to creating artificial intelligence. That’s the message from a panel of leading machine learning and AI experts who spoke at the Association for Computing Machinery’s Turing Award Celebration conference in San Francisco today.

We’re still a long way off from human-level AI, according to Michael I. Jordan, a professor of computer science at the University of California, Berkeley. He said that applications using neural nets are essentially faking true intelligence but that their current state allows for interesting development.

“Some of these domains where we’re faking intelligence with neural nets, we’re faking it well enough that you can build a company around it,” Jordan said. “So that’s interesting, but somehow not intellectually satisfying.”

Those comments come at a time of increased hype for deep learning and artificial intelligence in general, driven by interest from major technology companies like Google, Facebook, Microsoft, and Amazon.

Fei-Fei Li, who works as the chief scientist for Google Cloud, said that she sees this as “the end of the beginning” for AI, but says there are still plenty of hurdles ahead. She identified several key areas where current systems fall short, including a lack of contextual reasoning, a lack of contextual awareness of their environment, and a lack of integrated understanding and learning.

“This kind of euphoria of AI has taken over, and [the idea that] we’ve solved most of the problem is not true,” she said.

One pressing issue identified by Raquel Urtasun, who leads Uber’s self-driving car efforts in Canada, is that the algorithms used today don’t model uncertainty very well, which can prove problematic.

“So they will tell you that there is a car there, for example, with 99 percent probability, and they will tell you the same thing whether they are wrong or not,” she said. “And most of the time they are right, but when they are wrong, this is a real issue for things like self-driving [cars].”
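
Urtasun’s point is about calibration: a detector can report near-certainty whether or not it is right. A minimal sketch of the check one might run, using made-up predictions rather than any real model:

```python
# Synthetic illustration of miscalibration: a detector that reports ~99%
# confidence on every detection, right or wrong. A real check would use a
# held-out validation set.

predictions = [
    # (reported_confidence, actually_correct)
    (0.99, True), (0.99, True), (0.99, True), (0.99, True),
    (0.99, True), (0.99, True), (0.99, True), (0.99, True),
    (0.99, False), (0.99, False),
]

avg_confidence = sum(conf for conf, _ in predictions) / len(predictions)
accuracy = sum(ok for _, ok in predictions) / len(predictions)

# A well-calibrated model would have these two numbers roughly agree.
print(f"mean reported confidence: {avg_confidence:.2f}")  # 0.99
print(f"observed accuracy:        {accuracy:.2f}")        # 0.80
```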

The panelists did concur that an artificial intelligence that could match a human is possible, however.

“I think we have at least half a dozen major breakthroughs to go before we get close to human-level AI,” said Stuart Russell, a professor of computer science and engineering at the University of California, Berkeley. “But there are very many very brilliant people working on it, and I am pretty sure that those breakthroughs are going to happen.”

Crispr and the asymmetry of biotech investing…

The current price spurt in biotechnology leads me to think that something is brewing in the research. Biotech is a sector with tremendous information asymmetry. The market will move before the news makes sense… this story isn’t the mover, of course, just another CRISPR story. But something is happening. I’m watching and waiting, once more on the wrong side of the asymmetric info…

Firefly Gene Illuminates Ability of Optimized CRISPR-Cpf1 to Efficiently Edit Human Genome – Scicasts: https://apple.news/AVDguv3zBMy-5B3xidbUq0g

Over the last five years, the CRISPR gene editing system has revolutionized microbiology and renewed hopes that genetic engineering might eventually become a useful treatment for disease. But time has revealed the technology’s limitations. For one, gene therapy currently requires using a viral shell to serve as the delivery package for the therapeutic genetic material. The CRISPR molecule is simply too large to fit with multiple guide RNAs into the most popular and useful viral packaging system.

The new study from Farzan and colleagues helps solve this problem by letting scientists package multiple guide RNAs.

This advance could be important if gene therapy is to treat diseases such as hepatitis B, Farzan said. After infection, hepatitis B DNA sits in liver cells, slowly directing the production of new viruses, ultimately leading to liver damage, cirrhosis and even cancer. The improved CRISPR-Cpf1 system, with its ability to ‘multiplex,’ could more efficiently digest the viral DNA, before the liver is irrevocably damaged, he said.

“Efficiency is important. If you modify 25 cells in the liver, it is meaningless. But if you modify half the cells in the liver, that is powerful,” Farzan said. “There are other good cases—say muscular dystrophy—where if you can repair the gene in enough muscle cells, you can restore the muscle function.”

Two types of these molecular scissors are now being widely used for gene editing purposes: Cas9 and Cpf1. Farzan said he focused on Cpf1 because it is more precise in mammalian cells. The Cpf1 molecule they studied was sourced from two types of bacteria, Lachnospiraceae bacterium and Acidaminococcus sp., whose activity has been previously studied in E. coli. A key property of these molecules is that they are able to grab their guide RNAs out of a long string of such RNA, but it was not clear that this would work with RNA produced in mammalian cells. Guocai tested this idea by editing a firefly bioluminescence gene into the cell’s chromosome. The modified CRISPR-Cpf1 system worked as anticipated.

“This means we can use simpler delivery systems for directing the CRISPR effector protein plus guide RNAs,” Farzan said. “It’s going to make the CRISPR process more efficient for a variety of applications.”

Looking forward, Farzan said the Cpf1 protein needs to be more broadly understood so that its utility in delivering gene therapy vectors can be further expanded.

China and the blockchain…

This story is important and illustrates the advantage a slightly more totalitarian society has over a democracy: it can push innovations quickly rather than waiting for market forces. However, this comes at the cost of a more brittle and less resilient civil society, one that depends entirely upon the relative wisdom and judgment of a small number of leaders. Think of it as an experiment on Plato’s idea of the ideal ruler being a philosopher with power. Secondarily, after the risk of poor leaders exercising bad judgment, you have the chronic problem of misallocation of capital. This is something that Plato did not think of or understand, but that the Western philosopher-economist Adam Smith perceived: the market is a much better allocator of capital than any manifestation of central planning. Nevertheless, the speed with which China pursues any technical innovation is impressive.

https://www.technologyreview.com/s/608088/chinas-central-bank-has-begun-cautiously-testing-a-digital-currency/

China’s central bank is testing a prototype digital currency with mock transactions between it and some of the country’s commercial banks.

Speeches and research papers from officials at the People’s Bank of China show that the bank’s strategy is to introduce the digital currency alongside China’s renminbi. But there is currently no timetable for this, and the bank seems to be proceeding cautiously.

Nonetheless the test is a significant step. It shows that China is seriously exploring the technical, logistical, and economic challenges involved in deploying digital money, something that could ultimately have broad implications for its economy and for the global financial system.

A digital fiat currency—one backed by the central bank and with the same legal status as a banknote—would lower the cost of financial transactions, thereby helping to make financial services more widely available. This could be especially significant in China, where millions of people still lack access to conventional banks. A digital currency should also be cheaper to operate, and ought to reduce fraud and counterfeiting.

Even more significantly, a digital currency would give the Chinese government greater oversight of digital transactions, which are already booming. And by making transactions more traceable, this could also help reduce corruption, which is a key government priority. Such a currency could even offer real-time economic insights, which would be enormously valuable to policymakers. And finally, it might facilitate cross-border transactions, as well as the use of the renminbi outside of China because the currency would be so easy to obtain.

IBM is the most unloved exponential player

https://developer.ibm.com/dwblog/2017/quantum-computing-16-qubit-processor/

IBM announced today it has successfully built and tested its most powerful universal quantum computing processors.

The first upgraded processor will be available for use by developers, researchers, and programmers to explore quantum computing using a real quantum processor at no cost via the IBM Cloud.

The second is a new prototype of a commercial processor, which will be the core for the first IBM Q early-access commercial systems.

Launched in March 2017, IBM Q is an industry-first initiative to build commercially available universal quantum computing systems for business and science applications. IBM Q systems and services will be delivered via the IBM Cloud platform.

IBM first opened public access to its quantum processors one year ago, to serve as an enablement tool for scientific research, a resource for university classrooms, and a catalyst of enthusiasm for the field. To date users have run more than 300,000 quantum experiments on the IBM Cloud.

Bacteriophages… more crispr possibilities

https://www.nature.com/news/modified-viruses-deliver-death-to-antibiotic-resistant-bacteria-1.22173

Genetically modified viruses that cause bacteria to kill themselves could be the next step in combating antibiotic-resistant infections.

Several companies have engineered such viruses, called bacteriophages, to use the CRISPR gene-editing system to kill specific bacteria, according to a presentation at the CRISPR 2017 conference in Big Sky, Montana, last week. These companies could begin clinical trials of therapies as soon as next year.

Cameras with neurons…

https://www.scientificamerican.com/article/quick-thinking-ai-camera-mimics-the-human-brain/

Researchers in Europe are developing a camera that will literally have a mind of its own, with brainlike algorithms that process images and light sensors that mimic the human retina. Its makers hope it will prove that artificial intelligence—which today requires large, sophisticated computers—can soon be packed into small consumer electronics. But as much as an AI camera would make a nifty smartphone feature, the technology’s biggest impact may actually be speeding up the way self-driving cars and autonomous flying drones sense and react to their surroundings.

Getting all of the components of a memristor neural network onto a single microchip would be a big step, says Yoeri van de Burgt, an assistant professor of microsystems at Eindhoven University of Technology in the Netherlands, whose research includes building artificial synapses. “Since it is performing the computation locally, it will be more secure and can be dedicated for specific tasks like cameras in drones and self-driving cars,” adds van de Burgt, who was not involved in the ULPEC project.

Assuming the researchers can pull it off, such a chip would be useful well beyond smart cameras because it would be able to perform a variety of complicated computations itself, rather than off-loading that work to a supercomputer via the cloud. In this way, Posch says, the camera is an important step toward determining whether the underlying memristors and other technology will work, and how they might be integrated into future consumer devices. The camera, with its innovative sensors and memristor neural network, could demonstrate that AI can be built into a device in order to make it both smart and more energy efficient.
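
The article doesn’t spell out how the retina-like sensor works, but sensors of this kind typically report per-pixel brightness changes (events) rather than full frames, which is part of what makes local, low-power processing plausible. A toy sketch of that idea, assuming event-camera-style behavior rather than describing the ULPEC hardware:

```python
import numpy as np

# Toy model of a retina-like (event-based) sensor: instead of full frames,
# each pixel emits an event only when its log-brightness changes by more
# than a threshold. This is an assumption about how such sensors typically
# behave, not a description of the ULPEC chip.

THRESHOLD = 0.2  # log-intensity change needed to fire an event

def events(prev_frame, new_frame, threshold=THRESHOLD):
    """Return (row, col, polarity) events where brightness changed enough."""
    diff = np.log1p(new_frame.astype(float)) - np.log1p(prev_frame.astype(float))
    rows, cols = np.nonzero(np.abs(diff) > threshold)
    return [(int(r), int(c), 1 if diff[r, c] > 0 else -1) for r, c in zip(rows, cols)]

# Two mostly identical 4x4 frames: only the changed pixels produce events.
frame_a = np.full((4, 4), 100)
frame_b = frame_a.copy()
frame_b[1, 2] = 180   # brighter -> positive event
frame_b[3, 0] = 40    # darker   -> negative event
print(events(frame_a, frame_b))
```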

massively speeding up automated driving…

https://mcity.umich.edu/new-way-test-self-driving-cars-cut-99-9-validation-costs/

In essence, the new accelerated evaluation process breaks down difficult real-world driving situations into components that can be tested or simulated repeatedly, exposing automated vehicles to a condensed set of the most challenging driving situations. In this way, just 1,000 miles of testing can yield the equivalent of 300,000 to 100 million miles of real-world driving.

While 100 million miles may sound like overkill, it’s not nearly enough to give researchers the data they need to certify the safety of a driverless vehicle. That’s because the difficult scenarios they need to zero in on are rare: a crash that results in a fatality occurs only once in every 100 million miles of driving.

Yet for consumers to accept driverless vehicles, the researchers say tests will need to prove with 80 percent confidence that they’re 90 percent safer than human drivers. To get to that confidence level, test vehicles would need to be driven in simulated or real-world settings for 11 billion miles. But it would take nearly a decade of round-the-clock testing to reach just 2 million miles in typical urban conditions.

Beyond that, fully automated, driverless vehicles will require a very different type of validation than the dummies on crash sleds used for today’s cars. Even the questions researchers have to ask are more complicated. Instead of, “What happens in a crash?” they’ll need to measure how well they can prevent one from happening.

“Test methods for traditionally driven cars are something like having a doctor take a patient’s blood pressure or heart rate, while testing for automated vehicles is more like giving someone an IQ test,” said Ding Zhao, assistant research scientist in the U-M Department of Mechanical Engineering and co-author of the new white paper, along with Peng.

To develop their accelerated approach, the U-M researchers analyzed data from 25.2 million miles of real-world driving collected by two U-M Transportation Research Institute projects—Safety Pilot Model Deployment and Integrated Vehicle-Based Safety Systems. Together they involved nearly 3,000 vehicles and volunteers over the course of two years.

From that data, the researchers:

Identified events that could contain “meaningful interactions” between an automated vehicle and one driven by a human, and created a simulation that replaced all the uneventful miles with these meaningful interactions.

Programmed their simulation to consider human drivers the major threat to automated vehicles and placed human drivers randomly throughout.

Conducted mathematical tests to assess the risk and probability of certain outcomes, including crashes, injuries, and near-misses.

Interpreted the accelerated test results, using a technique called “importance sampling” to learn how the automated vehicle would perform, statistically, in everyday driving situations.
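
As a minimal illustration of the importance sampling named in that last step, the sketch below oversamples “dangerous” encounters in a stand-in simulation and then reweights each outcome by the ratio of real-world to simulated probability, recovering the everyday crash rate from far fewer trials. All of the probabilities and the toy simulator are made up for illustration; they are not from the U-M study.

```python
import random

# Importance sampling for a rare event: crashes are rare per real-world
# encounter, so naive simulation wastes almost all of its samples. Instead,
# sample from a skewed "dangerous" proposal distribution and reweight by
# p_real / p_simulated. All numbers here are hypothetical.

random.seed(1)

P_DANGEROUS_REAL = 1e-4         # assumed rate of truly dangerous encounters in real traffic
P_DANGEROUS_SIM  = 0.5          # rate at which we force them in simulation
P_CRASH_GIVEN_DANGEROUS = 0.02  # assumed chance the automated vehicle fails to handle one

def simulate_encounter(dangerous):
    """Stand-in for a driving simulation: crashes only occur in dangerous encounters."""
    return dangerous and random.random() < P_CRASH_GIVEN_DANGEROUS

N = 100_000
estimate = 0.0
for _ in range(N):
    dangerous = random.random() < P_DANGEROUS_SIM
    if simulate_encounter(dangerous):
        # Reweight: this crash came from the skewed simulator, not real traffic.
        # Safe encounters contribute zero regardless of their weight.
        estimate += (P_DANGEROUS_REAL / P_DANGEROUS_SIM) / N

print(f"estimated crash probability per encounter: {estimate:.2e}")
# True value under these assumptions: P_DANGEROUS_REAL * P_CRASH_GIVEN_DANGEROUS = 2e-6
```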

The accelerated evaluation process can be performed for different potentially dangerous maneuvers. Researchers evaluated the two most common situations they’d expect to result in serious crashes: an automated car following a human driver and a human driver merging in front of an automated car. The accuracy of the evaluation was determined by conducting and comparing accelerated and real-world simulations. More research is needed involving additional driving situations.

The paper is titled “From the Lab to the Street: Solving the Challenge of Accelerating Automated Vehicle Testing.”

hyperloop is getting serious…

https://techcrunch.com/2017/06/20/htt-signs-on-south-korea-to-build-a-full-scale-hyperloop-system/

The South Korean Hyperloop project will be called the HyperTube Express, and it’s backed by the Korean Department of Technological Innovation and Infrastructure. The schools involved are the Korea Institute of Civil Engineering and Building Technology (KICT) as well as Hanyang University, which is South Korea’s leading engineering research school.

Back in January, reports suggested that South Korea was working on a Hyperloop-like high-speed rail network for the country, spearheaded by the Korea Railroad Research Institute. The project was said to be called the Hyper Tube Express at the time, but the involvement of Hyperloop Transportation Technologies, which is a multi-year partner owing to the licensing deal, wasn’t previously announced.