Google Cofounder Sergey Brin Warns of AI’s Dark Side | WIRED

It is my current belief that AI is the next great capital cycle, similar to steam, electrical power, railroads, etc. It will play out over decades. But this is the play. Additionally, it may be the last great human invention.

Google cofounder calls advances in artificial intelligence “the most significant development in computing in my lifetime,” but warns of ethical concerns.
— Read on www.wired.com/story/google-cofounder-sergey-brin-warns-of-ais-dark-side/

The space race is over and SpaceX won – I, Cringely

I tend to agree, but I would caution that going to space is not a network-gains type of event. It isn’t an x86 platform or a video format like VHS vs. Betamax. It is physical, so it is a race that never ends. Bezos gets this deeply, which explains his quiet style. SpaceX has the edge today, but there are no platform-legacy restrictions to protect it. A cheaper and safer technology wins. Immediately…

Elon Musk knows that for SpaceX to dominate, scale is everything
— Read on www.cringely.com/2018/04/06/the-space-race-is-over-and-spacex-won/

Serious thinking on AI

Difficult ethical questions have been raised this month after artificial intelligence (AI) was shown to be 91% accurate at guessing whether somebody was gay or straight from a photograph of their face. If a madman with a nuclear weapon is the 20th-century apocalypse plot, climate change and AI are the two blockbuster contenders of the 21st. Both play to the tantalisingly ancient theme of humanity’s hubristic desire to be greater bringing about its own downfall in the process. Barring recent political setbacks, the risks of climate change are no longer controversial. AI, on the other hand, still seems to be.

In Oxford, an ancient city of spires and philosophers that seems to stand apart from technological advance, there is the Future of Humanity Institute. The institute is headed by Nick Bostrom, a philosopher who believes that the advent of artificial intelligence could well bring about the destruction of civilisation. Meanwhile, in the New World, the top ‘schools’ are turning overwhelmingly to the study and development of artificial intelligence. At Stanford University, about 90% of undergraduates now take a computer science course. In Silicon Valley, the religion is one of self-improvement, and optimization (with a ‘z’) is its creed. With a strong culture of obsessing over their own ‘productivity’, it’s little wonder that the promises of AI, the ultimate optimiser, have such a powerful draw on so many brilliant brains on the West Coast.

Back home, Bostrom is a leading source of warnings against AI. His papers have titles like “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards”. It’s clear that he is a man not shy of staring back at the abyss, though I’m not entirely sure what ‘Related Hazards’ we need to be concerned about post-Human Extinction. He’s said that our work on artificial intelligence is “like children playing with a bomb”.

Bostrom extrapolates from the way that humans have dominated the world to highlight the risks of AI. Our brains have capabilities beyond those of the other animals on this planet, and that advantage alone has made us so overwhelmingly the dominant species, in a relatively tiny amount of time. The dawn of civilisation could be described as a biological singularity, an intelligence explosion. If an artificial brain could be made more powerful than our own, why should we not see a second ‘intelligence explosion’? This brings up the technological singularity – a paradigm in which humans might come to have as little power over our lives as battery-farmed chickens do now over theirs. The difference, I suppose, is that chickens didn’t inadvertently create humanity, so Bostrom sees a chance for us to control our creation – we choose how and when to turn it on.

However, AI need not be inherently malignant in order to destroy us. By way of illustration, one of Bostrom’s fun apocalypse scenarios (another of his light reads is Global Catastrophic Risks) is the end of the world by a runaway artificial intelligence algorithm initially designed to help a paperclip factory manufacture as many paperclips as possible. Within seconds, the machine quite logically reasons that this would be best achieved by wiping out the human race in order to make a bit more space for paperclip manufacturing, and cheerfully, obediently embarks on its task.

My engineering degree at Oxford is definitely a bit backward. Most of the course seems not to have changed since Alan Turing cracked the Enigma Machine at Bletchley Park. My decision to focus my final year on AI thus makes me – I would like to think – a dangerous maverick. Probably, the professors discuss me in hushed tones, fear mixed equally with reverence. I’m actually extremely grateful for this environment.

Away from the fervent centre of the Religion of Optimisation, it’s far easier to see the bigger picture without being blinded by the light of enthusiasm. The Laboratory of Applied Artificial Intelligence, which I have just become part of, sits within the Oxford Robotics Institute. The important nuance in this hierarchy is that at Oxford, artificial intelligence is more a prosaic tool to be applied to an end than a quasi-religious holy grail in itself. Say, making better driverless cars. It is only in very specific, tailored ways like this that artificial intelligence is, and can be, currently used. This concrete embodiment of AI is called Machine Learning, and it is nothing more glamorous than particularly clever statistics run by relatively powerful computers.
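To make the ‘clever statistics’ point concrete, here is a toy sketch of my own (purely illustrative, not anything from the Oxford lab): a logistic regression classifier fitted by gradient descent on made-up data, using nothing but Python and NumPy. Every apparently intelligent step is just repeated arithmetic on a weight vector.

import numpy as np

rng = np.random.default_rng(0)

# Made-up data: 200 examples with 2 numeric features each; the binary label
# follows a simple linear rule plus a little noise.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=200) > 0).astype(float)

w = np.zeros(2)   # weights, one per feature
b = 0.0           # bias term
lr = 0.1          # learning rate

for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of label 1
    w -= lr * (X.T @ (p - y)) / len(y)      # gradient step on the log-loss
    b -= lr * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print("training accuracy:", np.mean((p > 0.5) == y))

Scale the data, the features and the model up and you have the bones of the systems described next – more compute and more statistics, not more magic.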

It is these mundane algorithms that target online ads to their audience, determine your sexuality from a photo, get your Uber driver to you within two minutes, or even replace that Uber driver altogether. Long before Bostrom’s artificial superintelligence surpasses the human brain and crushes us like ants, civilisation will be tested by the extreme turbulence in lifestyle and employment brought by this far more mundane embodiment of computer intelligence. Besides doing a bit of philosophy and working out how to unplug a monster brain, we should be considering a far closer future, in which boring machines that can’t even hold a conversation will nonetheless have put most of us out of work.

more here

thoughts about Industry 4.0

As Industry 4.0 technology becomes smarter and more widely available, manufacturers of any size will be able to deploy cost-effective, multipurpose and collaborative machines as standard. This will lead to industrial growth and market competitiveness, with a greater understanding of production processes leading to new high-quality products and digital services.

Exactly what impact a smarter robotic workforce with the potential to operate on its own will have on the manufacturing industry is still widely disputed. Artificial intelligence as we know it from science fiction is still in its infancy. It could well be the 22nd century before robots really have the potential to make human labour obsolete by developing not just deep learning but true artificial understanding that mimics human thinking.

Ideally, Industry 4.0 will enable human workers to achieve more in their jobs by removing repetitive tasks and giving them better robotic tools. In theory, this would allow us humans to focus more on business development, creativity and science, which it would be much harder for any robot to do. Technology that has made humans redundant in the past has forced us to adapt, generally with more education.

But because Industry 4.0 robots will be able to operate largely on their own, we might see much greater human redundancy from manufacturing jobs without other sectors being able to create enough new work. Then we might see more political moves to protect human labour, such as taxing robots.

Again, in an ideal scenario, humans may be able to focus on doing the things that make us human, perhaps fuelled by a basic income generated from robotic work. Ultimately, it will be up to us to define whether the robotic workforce will work for us, with us, or against us.

https://theconversation.com/does-the-next-industrial-revolution-spell-the-end-of-manufacturing-jobs-80779

uncertainty and the path to prototype warfare…

This article is primarily about war planning. However, the description of uncertainty applies to everything. Well worth a read.

Dealing with uncertainty is not a new problem for military planners, though, and research on military innovation may provide an answer for the way forward. For example, Stephen Rosen’s classic book on military innovation, Winning the Next War, identifies two types of flexibility for dealing with uncertainty. According to Rosen, “Type I” flexibility relies upon developing capabilities, such as the aircraft carrier, that have great utility over time, particularly as they can be modified as mission certainty increases. “Type II” flexibility involves buying information on weapon systems and then deferring large-scale production decisions. This usually involves bringing systems to the prototype stage and permitting military testing in field or fleet exercises. Rosen describes how this strategy was used successfully in the development of guided missile programs. At the end of World War II, it was not clear how to proceed with missile technology, as this was a period of great uncertainty both technically and politically. The Joint Chiefs of Staff adopted a hedging strategy in the late 1940s that focused on investing in basic research, and as the operational demands of the Korean War increased, the Pentagon was able to quickly shift into full-scale missile production.

Can the Type II model of flexibility be turned into an operational advantage? This is not a new idea, but it has yet to be given a fair shake by the U.S. military. In the 1990s, one U.S. Army officer predicted that this approach to capability development would result in an operationally significant concept he termed prototype warfare. In The Principles of War for the Information Age, published in 2000 and prior to the onset of the trends that Hammes discusses, Robert Leonhard argues that to create or to maintain a technical advantage in the information age, successful militaries need to adapt their economies and military doctrine to prototype warfare. He observes that technological change in the industrial age occurred at a moderate tempo and, as a result, military doctrine was based on the fundamental premise of mass production. He concludes, “future warfare will feature a constant myriad of technological advances that come at a tempo that disallows mass production.”

https://warontherocks.com/2017/07/the-path-to-prototype-warfare/