Quantum is coming. 

Quantum computers have long held the promise of performing certain calculations that are impossible—or at least, entirely impractical—for even the most powerful conventional computers to perform. Now, researchers at a Google laboratory in Goleta, Calif., may finally be on the cusp of proving it, using the same kinds of quantum bits, or qubits, that one day could make up large-scale quantum machines.
By the end of this year, the team aims to increase the number of superconducting qubits it builds on integrated circuits to create a 7-by-7 array. With this quantum IC, the Google researchers aim to perform operations at the edge of what’s possible with even the best supercomputers, and so demonstrate “quantum supremacy.”
“We’ve been talking about, for many years now, how a quantum processor could be powerful because of the way that quantum mechanics works, but we want to specifically demonstrate it,” says team member John Martinis, a professor at the University of California, Santa Barbara, who joined Google in 2014.

http://spectrum.ieee.org/computing/hardware/google-plans-to-demonstrate-the-supremacy-of-quantum-computing
A system size of 49 superconducting qubits is still far away from what physicists think will be needed to perform the sorts of computations that have long motivated quantum computing research. One of those is Shor’s algorithm, a computational scheme that would enable a quantum computer to quickly factor very large numbers and thus crack one of the foundational components of modern cryptography. In a recent commentary in Nature, Martinis and colleagues estimated that a 100-million-qubit system would be needed to factor a 2,000-bit number—a not-uncommon public key length—in one day. Most of those qubits would be used to create the special quantum states that would be needed to perform the computation and to correct errors, creating a mere thousand or so stable “logical qubits” from thousands of less stable physical components, Martinis says.
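For context, the quantum speedup in Shor's algorithm is confined to a single subroutine, period finding; turning a period into factors is ordinary number theory. Here is a minimal, purely illustrative sketch of that classical step (toy code, not anything Google is building):

```python
from math import gcd

def factors_from_period(N, a, r):
    """Classical post-processing in Shor's algorithm: given N, a base a coprime
    to N, and the period r of a^x mod N (the part a quantum computer would find
    quickly for huge N), recover non-trivial factors when the run is 'lucky'."""
    if r % 2:                      # need an even period
        return None
    half = pow(a, r // 2, N)
    if half == N - 1:              # a^(r/2) = -1 (mod N): try another base
        return None
    p = gcd(half - 1, N)
    return (p, N // p) if 1 < p < N else None

# Toy example: 7 has period 4 modulo 15, which yields the factors of 15.
print(factors_from_period(15, 7, 4))   # -> (3, 5)
```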
There will be no such extra infrastructure in this 49-qubit system, which means a different computation must be performed to establish supremacy. To demonstrate the chip’s superiority over conventional computers, the Google team will execute operations on the array that will cause it to evolve chaotically and produce what looks like a random output. Classical machines can simulate this output for smaller systems. In April, for example, Lawrence Berkeley National Laboratory reported that its 29-petaflop supercomputer, Cori, had simulated the output of 45 qubits. But 49 qubits would push—if not exceed—the limits of conventional supercomputers.
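One way to see why roughly 49 qubits is the crossover point: brute-force simulation has to store one complex amplitude per basis state, which means 2^n of them. A rough memory count (real simulators can trade memory for time, so treat this as an illustration only):

```python
# Memory needed to hold a full n-qubit state vector at 16 bytes per complex amplitude.
for n in (45, 49):
    pib = (2 ** n) * 16 / 2 ** 50          # pebibytes
    print(f"{n} qubits: {pib:.1f} PiB")
# 45 qubits: 0.5 PiB -- roughly what the Cori run had to manage
# 49 qubits: 8.0 PiB -- beyond the memory of any current machine
```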
This computation does not as yet have a clear practical application. But Martinis says there are reasons beyond demonstrating quantum supremacy to pursue this approach. The qubits used to make the 49-qubit array can also be used to make larger “universal” quantum systems with error correction, the sort that could do things like decryption, so the chip should provide useful validation data.

Steps to Supremacy: Google’s quantum computing chip is a 2-by-3 array of qubits. The company hopes to make a 7-by-7 array later this year.

There may also be, the team suspects, untapped computational potential in systems with little or no error correction. “It would be wonderful if this were true, because then we could have useful products right away instead of waiting for a long time,” says Martinis. One potential application, the team suggests, could be in the simulation of chemical reactions and materials.
Google recently performed a dry run of the approach on a 9-by-1 array of qubits and tested out some fabrication technology on a 2-by-3 array. Scaling up the number of qubits will happen in stages. “This is a challenging system engineering problem,” Martinis says. “We have to scale it up, but the qubits still have to work well. We can’t have any loss in fidelity, any increase in error rates, and I would say error rates and scaling tend to kind of compete against each other.” Still, he says, the team thinks there could be a way to scale up systems well past 50 qubits even without error correction.
Google is not the only company working on building larger quantum systems without error correction. In March, IBM unveiled a plan to create such a superconducting qubit system in the next few years, also with roughly 50 qubits, and to make it accessible on the cloud. “Fifty is a magic number,” says Bob Sutor, IBM’s vice president for this area, because that’s around the point where quantum computers will start to outstrip classical computers for certain tasks.
The quality of superconducting qubits has advanced a lot over the years since D-Wave Systems began offering commercial quantum computers, says Scott Aaronson, a professor of computer science at the University of Texas at Austin. D-Wave, based in Burnaby, B.C., Canada, has claimed that its systems offer a speedup over conventional machines, but Aaronson says there has been no convincing demonstration of that. Google, he says, is clearly aiming for a demonstration of quantum supremacy that is “not something you’ll have to squint and argue about.”
It’s still unclear whether there are useful tasks a 50-or-so-qubit chip could perform, Aaronson says. Nor is it certain whether systems can be made bigger without error correction. But he says quantum supremacy will be an important milestone nonetheless, one that is a natural offshoot of the effort to make large-scale, universal quantum machines: “I think that it is absolutely worth just establishing as clearly as we can that the world does work this way. Certainly, if we can do it as a spin-off of technology that will be useful eventually in its own right, then why the hell not?”

Lidar breakthrough?

Lidarland is buzzing with cheap, solid-state devices that are supposedly going to shoulder aside the buckets you see revolving atop today’s experimental driverless cars. Quanergy started this solid-state patter, a score of other startups continued it, and now Velodyne, the inventor of those rooftop towers, is talking the talk, too.
Not Luminar. This company, which emerged from stealth mode earlier this month, is fielding a 5-kilogram box with a window through which you can make out not microscopic MEMS mirrors, but two honking, macroscopic mirrors, each as big as an eye. Their movement—part of a secret-sauce optical arrangement—steers a pencil of laser light around a scene so that a single receiver can measure the distance to every detail.
“There’s nothing wrong with moving parts,” says Luminar founder and CEO Austin Russell. “There are a lot of moving parts in a car, and they last for 100,000 miles or more.”
Luminar’s lidar

A key difference between Luminar and all the others is its reliance on homemade stuff rather than industry-standard parts. Most important is its use of indium gallium arsenide for the photodetector. This compound semiconductor is harder to manufacture, and thus more expensive, than silicon, but it can detect light at a wavelength of 1550 nanometers, deep in the infrared part of the spectrum. That wavelength is much safer for human eyes than today’s standard of 905 nm, so Luminar can pump out a beam with 40 times the power of rival sensors, increasing its resolution, particularly at 200 meters and beyond. That’s how far cars will have to see at highway speeds if they want to give themselves more than half a second to react to events.
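For a sense of the timescales involved (a generic time-of-flight calculation, not specific to Luminar's design): a pulse ranging a target at the 200-meter distances mentioned above makes its round trip in a bit over a microsecond.

```python
C = 299_792_458                      # speed of light, m/s

def round_trip_seconds(distance_m):
    """Lidar ranging is time-of-flight: the pulse travels out and back,
    so distance = c * t / 2."""
    return 2 * distance_m / C

print(f"{round_trip_seconds(200) * 1e6:.2f} us")   # ~1.33 microseconds for 200 m
```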

“The vast majority of companies in this space are integrating off-the-shelf components,” says Russell. “The same lasers, same receivers, same processors—and that’s why there have been no advances in lidar performance in a decade. Every couple of years a company says, ‘we have a new lidar sensor, half the size, half the price, and oh, by the way, half the performance.’ The performance of the most expensive ones has stayed the same for practically a decade; all the newer ones are orders of magnitude worse.”

http://spectrum.ieee.org/cars-that-think/transportation/sensors/22yearold-lidar-whiz-claims-breakthrough

Space planes, here we come…

http://www.darpa.mil/news-events/2017-05-24
The XS-1 program envisions a fully reusable unmanned vehicle, roughly the size of a business jet, which would take off vertically like a rocket and fly to hypersonic speeds. The vehicle would be launched with no external boosters, powered solely by self-contained cryogenic propellants. Upon reaching a high suborbital altitude, the booster would release an expendable upper stage able to deploy a 3,000-pound satellite to polar orbit. The reusable first stage would then bank and return to Earth, landing horizontally like an aircraft, and be prepared for the next flight, potentially within hours.

Fusion in our lifetimes?

Newly available superconducting materials like REBCO (a single-crystal material composed of yttrium, barium, copper, oxygen and other elements) allow the creation of unprecedentedly high-field magnets. They may enable smaller and less expensive versions of venerable tokamak-type fusion reactors (like the Alcator C-Mod, which was shuttered last year), in part because a doubling of magnetic field strength produces a 16-fold increase in fusion power density. Zach Hartwig, an MIT assistant professor (see the link below), says a fast-track high-field magnet development program, followed by the possible building of a compact, net-energy-gain tokamak in the next 5-10 years, would be a watershed in dispelling fusion’s reputation for being always in the future.
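The 16-fold figure comes from the standard tokamak scaling argument: at fixed normalized plasma pressure (beta), fusion power density rises as the fourth power of the magnetic field, so doubling the field gives 2^4 = 16.

```latex
% Standard scaling (assuming fixed beta and a reaction rate that goes roughly as T^2):
\[
  p_{\text{fusion}} \;\propto\; \beta^{2} B^{4}
  \qquad\Longrightarrow\qquad
  \frac{p_{\text{fusion}}(2B)}{p_{\text{fusion}}(B)} \;=\; 2^{4} \;=\; 16 .
\]
```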

http://news.mit.edu/2017/mit-assistant-professor-zach-hartwig-applying-diverse-skills-in-pursuit-of-nuclear-fusion-breakthrough-0522

Big read. Artificial hearts…

Growing tissues and organs in a bioreactor is a laborious business, but recent improvements in 3D printing offer the tantalising possibility of manufacturing a new heart rapidly and to order. 3D printers work by breaking down a three-dimensional object into a series of thin, two-dimensional “slices”, which are laid down one on top of another. The technology has already been employed to manufacture complex engineering components out of metal or plastic, but it is now being used to generate tissues in the laboratory. To make an aortic valve, researchers at Cornell University took a pig’s valve and X-rayed it in a high-resolution CT scanner. This gave them a precise map of its internal structure which could be used as a template. Using the data from the scan, the printer extruded thin jets of a hydrogel, a water-absorbent polymer that mimics natural tissue, gradually building up a duplicate of the pig valve layer by layer. This scaffold could then be seeded with living cells and incubated in the normal way.
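A highly simplified sketch of the slicing idea described above, assuming the CT scan has already been reconstructed into a 3D voxel grid (a hypothetical illustration; real bioprinting software plans continuous extrusion paths rather than raw voxel masks):

```python
import numpy as np

def slice_volume(ct_volume, threshold):
    """Turn a CT volume (z, y, x array of densities) into a stack of 2D binary
    masks -- the thin 'slices' a printer deposits one on top of another."""
    solid = ct_volume > threshold                 # voxels belonging to the valve
    return [solid[z] for z in range(solid.shape[0])]

# Each mask then drives the print head: extrude hydrogel wherever the mask is
# True, step the stage by one layer height, and repeat for the next slice.
demo = np.random.rand(50, 64, 64)                 # stand-in for real scan data
print(len(slice_volume(demo, threshold=0.5)))     # -> 50 layers
```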
https://www.theguardian.com/science/2017/may/23/robot-hearts-medicines-new-frontier

Big read. Huge consequences. Transportation as a service.

https://static1.squarespace.com/static/585c3439be65942f022bbf9b/t/591a2e4be6f2e1c13df930c5/1494888038959/RethinkX+Report_051517.pdf

Summary:

The impacts of TaaS disruption are far reaching:

- Savings on transportation costs will result in a permanent boost in annual disposable income for U.S. households, totaling $1 trillion by 2030. Consumer spending is by far the largest driver of the economy, comprising about 71% of total GDP and driving business and job growth throughout the economy. Productivity gains as a result of reclaimed driving hours will boost GDP by an additional $1 trillion.
- As fewer cars travel more miles, the number of passenger vehicles on American roads will drop from 247 million to 44 million, opening up vast tracts of land for other, more productive uses. Nearly 100 million existing vehicles will be abandoned as they become economically unviable.
- Demand for new vehicles will plummet: 70% fewer passenger cars and trucks will be manufactured each year. This could result in total disruption of the car value chain, with car dealers, maintenance and insurance companies suffering almost complete destruction. Car manufacturers will have options to adapt, either as low-margin, high-volume assemblers of A-EVs, or by becoming TaaS providers. Both strategies will be characterized by high levels of competition, with new entrants from other industries. The value in the sector will be mainly in the vehicle operating systems, computing platforms and the TaaS platforms.
- The transportation value chain will deliver 6 trillion passenger miles in 2030 (an increase of 50% over 2021) at a quarter of the cost ($393 billion versus $1,481 billion); a rough per-mile check follows this list.
- Oil demand will peak at 100 million barrels per day by 2020, dropping to 70 million barrels per day by 2030. That represents a drop of 30 million barrels in real terms and 40 million barrels below the Energy Information Administration’s current “business as usual” case. This will have a catastrophic effect on the oil industry through price collapse (an equilibrium cost of $25.4 per barrel), disproportionately impacting different companies, countries, oil fields and infrastructure depending on their exposure to high-cost oil.
- The impact of the collapse of oil prices throughout the oil industry value chain will be felt as soon as 2021.
- In the U.S., an estimated 65% of shale oil and tight oil — which under a “business as usual” scenario could make up over 70% of the U.S. supply in 2030 — would no longer be commercially viable.
- Approximately 70% of the potential 2030 production of Bakken shale oil would be stranded under a 70 million barrels per day demand assumption.
- Infrastructure such as the Keystone XL and Dakota Access pipelines would be stranded as well.
- Other areas facing volume collapse include offshore sites in the United Kingdom, Norway and Nigeria; Venezuelan heavy-crude fields; and the Canadian tar sands.
- Conventional energy and transportation industries will suffer substantial job loss. Policies will be needed to mitigate these adverse effects.
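As flagged in the passenger-mile bullet above, the report's own figures imply a sharp drop in cost per mile. A quick back-of-envelope check (mine, not the report's):

```python
# 6 trillion miles in 2030 is "an increase of 50% over 2021", implying ~4 trillion miles in 2021.
miles_2021, cost_2021 = 4e12, 1_481e9
miles_2030, cost_2030 = 6e12, 393e9

print(f"2021: {100 * cost_2021 / miles_2021:.1f} cents per passenger mile")  # ~37 cents
print(f"2030: {100 * cost_2030 / miles_2030:.1f} cents per passenger mile")  # ~6.5 cents
print(f"Total cost ratio: {cost_2030 / cost_2021:.2f}")                      # ~0.27, i.e. 'a quarter of the cost'
```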

The goal is to ride winners, not pick winners. Unless we have the opportunity to get in early, in the first rounds of the capital structure, our money is not locked up; we can switch horses in the middle of the race. Admittedly, most of us fail to do well here, but that is the goal. So currently the race is all about:

1.) Internet of things

2.) Machine learning (think AI or robotics)

3.) CRISPR and related genomic fields (nearly impossible to find easy trades because of government regulatory involvement)

4.) Monetizing big data