Big read. Future of work…

http://www.mckinsey.com/global-themes/employment-and-growth/technology-jobs-and-the-future-of-work
The development of automation enabled by technologies including robotics and artificial intelligence brings the promise of higher productivity (and with productivity, economic growth), increased efficiencies, safety, and convenience. But these technologies also raise difficult questions about the broader impact of automation on jobs, skills, wages, and the nature of work itself.
We find that about 60 percent of all occupations have at least 30 percent of activities that are technically automatable, based on currently demonstrated technologies. This means that most occupations will change, and more people will have to work with technology. Highly skilled workers working with technology will benefit. Low-skilled workers working with technology will be able to achieve more in terms of output and productivity, but they may experience wage pressure, given the potentially larger supply of similarly low-skilled workers, unless demand for the occupation grows faster than the labor supply.

On a global scale, we calculate that the adoption of currently demonstrated automation technologies could affect 50 percent of the world economy, or 1.2 billion employees and $14.6 trillion in wages. Four countries—China, India, Japan, and the United States—account for just over half of these totals. There are sizable differences in automation potential between countries, based mainly on the structure of their economies, the relative level of wages, and the size and dynamics of the workforce.
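A quick back-of-the-envelope check on those figures (using only the two numbers quoted above; the closing comment is an inference, not from the report):

```python
# Implied average annual wage of a worker whose job is affected.
affected_workers = 1.2e9   # employees, from the McKinsey estimate
affected_wages = 14.6e12   # USD in associated wages

print(f"${affected_wages / affected_workers:,.0f} per worker")
# -> about $12,167: exposure skews toward lower-wage economies,
#    which fits China and India accounting for much of the total
```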

As machines evolve and acquire performance capabilities that match or exceed human capabilities, the adoption of automation will pick up. However, technical feasibility does not automatically translate into the deployment of automation in the workplace and the automation of jobs. Technical potential is only the first of several elements that must be considered. A second element is the cost of developing and deploying both the hardware and the software for automation. The supply-and-demand dynamics of labor are a third factor: if workers with sufficient skills for a given occupation are in abundant supply and significantly less expensive than automation, this could slow the rate of adoption. A fourth factor is the benefits of automation beyond labor substitution, including higher levels of output, better quality and fewer errors, and capabilities that surpass human ability.
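Those four elements boil down to a cost-benefit comparison. Here is a deliberately crude sketch (every variable and number is hypothetical, not McKinsey's model):

```python
def worth_automating(feasible_share, automation_cost,
                     annual_labor_cost, extra_benefit, years=5):
    """Toy decision rule mirroring the four elements: technical
    potential, deployment cost, labor economics, and benefits
    beyond labor substitution. Illustrative only."""
    labor_savings = feasible_share * annual_labor_cost * years
    return labor_savings + extra_benefit > automation_cost

# Hypothetical occupation: 30% of activities automatable, low wages.
print(worth_automating(feasible_share=0.30, automation_cost=500_000,
                       annual_labor_cost=150_000, extra_benefit=50_000))
# -> False: cheap, abundant labor slows adoption despite feasibility
```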

Even while technologies replace some jobs, they are creating new work in industries that most of us cannot even imagine, and new ways to generate income. One-third of the new jobs created in the United States in the past 25 years were of types that did not exist, or barely existed, in areas including IT development, hardware manufacturing, app creation, and IT systems management. The net impact of new technologies on employment can be strongly positive. A 2011 study by McKinsey’s Paris office found that the Internet had destroyed 500,000 jobs in France in the previous 15 years but had created 1.2 million others over the same period, a net addition of 700,000, or 2.4 jobs created for every job destroyed. The growing role of big data in the economy and business will create a significant need for statisticians and data analysts; we estimate a shortfall of up to 250,000 data scientists in the United States alone within a decade.
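The French figures are internally consistent, as a quick check shows:

```python
# Sanity check on the 2011 McKinsey France figures quoted above.
destroyed, created = 500_000, 1_200_000
print("Net jobs:", created - destroyed)   # 700,000
print("Ratio:", created / destroyed)      # 2.4 created per destroyed
```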

4th industrial revolution 

http://cio.economictimes.indiatimes.com/tech-talk/the-fourth-industrial-revolution-will-be-personal/2378
This is the first article in a three-part series discussing the Fourth Industrial Revolution, how machine learning and humans together can navigate this revolution, and what it means for banks. This first article focuses on how the Fourth Industrial Revolution will affect businesses across the world. The second will look at how machine learning is creating new expectations for how customers are treated and paving the way for new open platforms. The final article in the series will look at how humans are adapting to the new revolution, for better and for worse.

Real battery gains…

http://spectrum.ieee.org/semiconductors/design/how-to-build-a-safer-more-energydense-lithiumion-battery

We recently compared our prototype cell for a wearable device with a comparable commercial Li-ion cell by deliberately creating a precarious scenario. We overcharged a conventional 130-mAh Li-ion cell and our 100-mAh silicon Li-ion cell to 250 percent of capacity and simultaneously punctured the package of each (through the standard nail-penetration test). The conventional Li-ion cell burst into flames, but our silicon Li-ion cell did not.
To fabricate the Enovix battery, we begin with a wafer of silicon that’s 1 millimeter thick. This doesn’t have to be the chip-grade stuff—it can be the same low-cost material that is used to produce solar cells. To the wafer we apply a photolithographic mask and etch the required pattern with typical silicon etchants borrowed from the solar industry. Because the pattern can vary in shape—square, rectangular, round, oval, hexagonal—as well as in length and width, we have the ability to form a wide variety of cell designs. The silicon that’s left behind where the mask was placed forms the anodes and “backbones” of the interlaced cell structure.
Next, we selectively deposit a thin coat of metal film onto the anodes and backbones to form current collectors, and then deposit a ceramic separator around the collector on the anodes. Because the anodes and backbones are not electrically connected on the wafer, we can selectively electroplate different coatings on each. To create the cathodes, we inject a conventional cathode slurry, filling the remaining voids in the wafer. A laser then slices individual 1-mm-thick dies from the wafer, with the lateral dimensions of each die approximating those of the final battery. Positive and negative tabs are attached to each die, and the dies are baked to remove moisture and stacked to form the desired height of the battery. The tabs are all connected to form a single positive and a single negative tab for the cell, and the resulting stacked cell is pouched or inserted into a metal can, which is filled with electrolyte, sealed, and tested.
Our architecture, silicon wafer photolithography, and etching process are comparable to what is used in three-dimensional MEMS. Hence we dubbed our device the 3D Silicon Lithium-ion battery. We compared a prototype with a conventional Li-ion battery having the same form factor, one designed to fit in a smart watch (that battery was 18 by 27 by 4 mm). Our internal tests showed our battery to have much higher capacity and a corresponding increase in energy density—695 Wh/L as opposed to about 460 for the conventional cell.
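Translated into watt-hours for that 18-by-27-by-4 mm smart-watch form factor (a sketch using only the dimensions and densities quoted above):

```python
# Energy stored in the 18 x 27 x 4 mm cell at each reported density.
volume_liters = (18 * 27 * 4) / 1e6   # mm^3 to liters
for label, wh_per_liter in [("3D silicon cell", 695),
                            ("conventional Li-ion", 460)]:
    print(f"{label}: {wh_per_liter * volume_liters:.2f} Wh")
# -> about 1.35 Wh vs. 0.89 Wh: roughly 50 percent more energy
#    in the same volume
```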
Much of this manufacturing technology comes, of course, from the solar cell business. The progress in that field—fueled by immense R&D investment worldwide—at once explains the low cost of our manufacturing approach and the likelihood that it will continue to improve in efficiency and scale.
Consumers yearn for better and more powerful batteries for their mobile devices, as survey after survey attests. Most demanding of all are the wearable devices and microsensors that are being created for the Internet of Things. Such IoT devices have even less room in them for batteries than do tablets and smartphones.
This wouldn’t be the first time that photolithography and wafer production have suddenly revamped whole industries. It happened first when computers began to use integrated circuits. The same fabrication techniques were later applied to lighting, which moved from fluorescent tubes to light-emitting diodes, and to video displays, which went from cathode-ray tubes to liquid-crystal displays.
We believe that the approach we’re pioneering will bring about a similar transformation in the market for lithium-ion batteries. The change will appear first in wearables, next in IoT devices and phones, and ultimately in electric vehicles and grid storage, as volumes scale up and manufacturing costs come down, as they already have in the solar industry.
With safer, thinner, and higher-energy batteries, designers will have more flexibility to create breakthrough products. Expect mobile devices to get smaller, to last longer between charges, and to continue to deliver amazing new capabilities to enhance our lives.

Rocket Lab, a company recently valued at $1 billion by private investors, has been waiting since May 22 to test its first product, a rocket called Electron designed to launch small satellites into orbit. Its outer shell is made almost entirely of carbon fiber, and it boasts an electric turbopump and a 3-D printed engine. A successful launch will provide key data to refine the rocket’s construction, and validate the hopes of both the company’s backers and a slew of other small satellite firms desperate to see their own technology put into space.

https://qz.com/991156/rocket-labs-electron-test-flight-succeeds-a-3d-printed-carbon-fiber-rocket-flew-for-the-first-time-in-new-zealand/

How to add 2 billion people to the global economy.

https://blog.everex.io/how-to-add-2-billion-people-to-the-world-economy-using-blockchains-and-mobile-phones-73dac3637b01
Whoever figures this out is going to make a ridiculous amount of money. However, we are a long way from knowing who the winners will be. Be prepared to momentum-ride the two fastest horses, changing mounts like a monkey jockey.

Quantum is coming. 

Quantum computers have long held the promise of performing certain calculations that are impossible—or at least, entirely impractical—for even the most powerful conventional computers to perform. Now, researchers at a Google laboratory in Goleta, Calif., may finally be on the cusp of proving it, using the same kinds of quantum bits, or qubits, that one day could make up large-scale quantum machines.
By the end of this year, the team aims to increase the number of superconducting qubits it builds on integrated circuits to create a 7-by-7 array. With this quantum IC, the Google researchers aim to perform operations at the edge of what’s possible with even the best supercomputers, and so demonstrate “quantum supremacy.”
“We’ve been talking about, for many years now, how a quantum processor could be powerful because of the way that quantum mechanics works, but we want to specifically demonstrate it,” says team member John Martinis, a professor at the University of California, Santa Barbara, who joined Google in 2014.

http://spectrum.ieee.org/computing/hardware/google-plans-to-demonstrate-the-supremacy-of-quantum-computing
A system size of 49 superconducting qubits is still far away from what physicists think will be needed to perform the sorts of computations that have long motivated quantum computing research. One of those is Shor’s algorithm, a computational scheme that would enable a quantum computer to quickly factor very large numbers and thus crack one of the foundational components of modern cryptography. In a recent commentary in Nature, Martinis and colleagues estimated that a 100-million-qubit system would be needed to factor a 2,000-bit number—a not-uncommon public key length—in one day. Most of those qubits would be used to create the special quantum states that would be needed to perform the computation and to correct errors, creating a mere thousand or so stable “logical qubits” from thousands of less stable physical components, Martinis says.
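For context, the quantum speedup in Shor's algorithm is confined to one step: finding the multiplicative order of a number modulo N. The reduction from factoring to order finding is classical, as this minimal sketch shows (the order is brute-forced here, which is exactly the part a quantum computer does exponentially faster):

```python
from math import gcd

def order(a, n):
    # Smallest r > 0 with a**r = 1 (mod n). Brute force here; the
    # quantum Fourier transform finds r efficiently in Shor's algorithm.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def factor_via_order(n, a):
    # Classical reduction from factoring n to finding the order of a.
    g = gcd(a, n)
    if g > 1:
        return g, n // g          # lucky: a already shares a factor with n
    r = order(a, n)
    if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
        return None               # unlucky base; retry with another a
    p = gcd(pow(a, r // 2, n) - 1, n)
    return p, n // p

print(factor_via_order(15, a=7))  # -> (3, 5)
```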
There will be no such extra infrastructure in this 49-qubit system, which means a different computation must be performed to establish supremacy. To demonstrate the chip’s superiority over conventional computers, the Google team will execute operations on the array that will cause it to evolve chaotically and produce what looks like a random output. Classical machines can simulate this output for smaller systems. In April, for example, Lawrence Berkeley National Laboratory reported that its 29-petaflop supercomputer, Cori, had simulated the output of 45 qubits. But 49 qubits would push—if not exceed—the limits of conventional supercomputers.
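The 49-qubit threshold is driven largely by memory: a full state-vector simulation must store 2^n complex amplitudes. A rough estimate, assuming 16 bytes per amplitude (two 64-bit floats):

```python
# Memory needed to hold the full state vector of an n-qubit system,
# assuming 16 bytes per complex amplitude.
for n in (45, 49):
    petabytes = (2 ** n) * 16 / 1e15
    print(f"{n} qubits: {petabytes:,.1f} PB")
# -> 45 qubits: ~0.6 PB (within reach of a supercomputer like Cori)
#    49 qubits: ~9.0 PB (beyond the memory of any current machine)
```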
This computation does not as yet have a clear practical application. But Martinis says there are reasons beyond demonstrating quantum supremacy to pursue this approach. The qubits used to make the 49-qubit array can also be used to make larger “universal” quantum systems with error correction, the sort that could do things like decryption, so the chip should provide useful validation data.

Steps to Supremacy: Google’s quantum computing chip is a 2-by-3 array of qubits. The company hopes to make a 7-by-7 array later this year.

There may also be, the team suspects, untapped computational potential in systems with little or no error correction. “It would be wonderful if this were true, because then we could have useful products right away instead of waiting for a long time,” says Martinis. One potential application, the team suggests, could be in the simulation of chemical reactions and materials.
Google recently performed a dry run of the approach on a 9-by-1 array of qubits and tested out some fabrication technology on a 2-by-3 array. Scaling up the number of qubits will happen in stages. “This is a challenging system engineering problem,” Martinis says. “We have to scale it up, but the qubits still have to work well. We can’t have any loss in fidelity, any increase in error rates, and I would say error rates and scaling tend to kind of compete against each other.” Still, he says, the team thinks there could be a way to scale up systems well past 50 qubits even without error correction.
Google is not the only company working on building larger quantum systems without error correction. In March, IBM unveiled a plan to create such a superconducting qubit system in the next few years, also with roughly 50 qubits, and to make it accessible on the cloud. “Fifty is a magic number,” says Bob Sutor, IBM’s vice president for this area, because that’s around the point where quantum computers will start to outstrip classical computers for certain tasks.
The quality of superconducting qubits has advanced a lot over the years since D-Wave Systems began offering commercial quantum computers, says Scott Aaronson, a professor of computer science at the University of Texas at Austin. D-Wave, based in Burnaby, B.C., Canada, has claimed that its systems offer a speedup over conventional machines, but Aaronson says there has been no convincing demonstration of that. Google, he says, is clearly aiming for a demonstration of quantum supremacy that is “not something you’ll have to squint and argue about.”
It’s still unclear whether there are useful tasks a 50-or-so-qubit chip could perform, Aaronson says. Nor is it certain whether systems can be made bigger without error correction. But he says quantum supremacy will be an important milestone nonetheless, one that is a natural offshoot of the effort to make large-scale, universal quantum machines: “I think that it is absolutely worth just establishing as clearly as we can that the world does work this way. Certainly, if we can do it as a spin-off of technology that will be useful eventually in its own right, then why the hell not?”