Highly recommended read
thoughts about Industry 4.0
As Industry 4.0 technology becomes smarter and more widely available, manufacturers of any size will be able to deploy cost-effective, multipurpose and collaborative machines as standard. This will lead to industrial growth and market competitiveness, with a greater understanding of production processes leading to new high-quality products and digital services.
Exactly what impact a smarter robotic workforce with the potential to operate on its own will have on the manufacturing industry is still widely disputed. Artificial intelligence as we know it from science fiction is still in its infancy. It could well be the 22nd century before robots really have the potential to make human labour obsolete by developing not just deep learning but true artificial understanding that mimics human thinking.
Ideally, Industry 4.0 will enable human workers to achieve more in their jobs by removing repetitive tasks and giving them better robotic tools. In theory, this would allow us humans to focus more on business development, creativity and science, areas in which it would be much harder for any robot to compete. Technology that has made humans redundant in the past has forced us to adapt, generally with more education.
But because Industry 4.0 robots will be able to operate largely on their own, we might see much greater human redundancy from manufacturing jobs without other sectors being able to create enough new work. Then we might see more political moves to protect human labour, such as taxing robots.
Again, in an ideal scenario, humans may be able to focus on doing the things that make us human, perhaps fuelled by a basic income generated from robotic work. Ultimately, it will be up to us to define whether the robotic workforce will work for us, with us, or against us.
uncertainty and the path to prototype warfare…
This article is primarily about war planning. However, the description of uncertainty applies to everything. Well worth a read.
Dealing with uncertainty is not a new problem for military planners, though, and research on military innovation may provide an answer for the way forward. For example, Stephen Rosen’s classic book on military innovation, Winning the Next War, identifies two types of flexibility for dealing with uncertainty. According to Rosen, “Type I” flexibility relies upon developing capabilities, such as the aircraft carrier, that have great utility over time, particularly as such systems can be modified as mission certainty increases. “Type II” flexibility involves buying information on weapon systems and then deferring large-scale production decisions. This usually involves bringing systems to the prototype stage and permitting military testing in field or fleet exercises. Rosen describes how this strategy was used successfully in the development of guided missile programs. At the end of World War II, it was not clear how to proceed with missile technology, as this was a period of great uncertainty both technically and politically. The Joint Chiefs of Staff adopted a hedging strategy in the late 1940s that focused on investing in basic research, and as the operational demands of the Korean War increased, the Pentagon was able to shift quickly into full-scale missile production.
Can the Type II model of flexibility be turned into an operational advantage? This is not a new idea, but has yet to be given a fair shake by the U.S. military. In the 1990s, one U.S. Army officer predicted that this approach to capability development would result in an operationally significant concept he termed prototype warfare. In The Principles of War for the Information Age, published in 2000 and prior to the onset of the trends that Hammes discusses, Robert Leonhard argues that to create or to maintain a technical advantage in the information age, successful militaries need to adapt their economies and military doctrine to prototype warfare. He observes that technological change in the industrial age occurred at a moderate tempo and as a result, military doctrine was based on the fundamental premise of mass production. He concludes, “future warfare will feature a constant myriad of technological advances that come at a tempo that disallows mass production.”
https://warontherocks.com/2017/07/the-path-to-prototype-warfare/
AI researcher worries… intelligently… Could Marx still be right?
A fascinating issue is raised at the end of the article: could Marx and Lenin still be right? Could the elites own all the means of production through AI slavery?
I’m not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures’ performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.
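As an aside, the evaluate-select-reproduce loop he describes is simple enough to sketch. Here is a minimal, hypothetical Python version; the toy fitness function merely stands in for evaluating a creature in a virtual environment, and none of this is the author's actual neuroevolution code.

```python
import random

# Minimal sketch of a neuroevolution-style loop (illustrative only).
# The "genome" is a flat list of weights; fitness() is a placeholder
# for running a creature in a simulated environment.

GENOME_SIZE = 16
POPULATION = 50
GENERATIONS = 100
MUTATION_RATE = 0.1

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_SIZE)]

def fitness(genome):
    # Toy task: prefer weights close to an arbitrary target value.
    # Real neuroevolution would score behaviour in simulation.
    return -sum((w - 0.5) ** 2 for w in genome)

def mutate(genome):
    return [w + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else w
            for w in genome]

population = [random_genome() for _ in range(POPULATION)]
for generation in range(GENERATIONS):
    # Evaluate every creature and keep the best half as parents...
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POPULATION // 2]
    # ...then refill the population with mutated offspring.
    offspring = [mutate(random.choice(parents))
                 for _ in range(POPULATION - len(parents))]
    population = parents + offspring

print("best fitness:", fitness(max(population, key=fitness)))
```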
Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.
Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we’ll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.
Another possibility that’s farther down the line is using evolution to influence the ethics of artificial intelligence systems. It’s likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution – and factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.
While neuroevolution might reduce the likelihood of unintended consequences, it doesn’t prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.
https://www.scientificamerican.com/article/what-an-artificial-intelligence-researcher-fears-about-ai/
We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don’t find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only very few who possess all the means of production.
Your venture read… Andreessen
Marc Andreessen is one of the most eloquent thinkers in the venture-capital community. His interviews are always required reading.
Future of jobs… it’s more complicated than you think…
https://youtu.be/7Pq-S557XQU
This video is the best argument I have seen, in an easily understood form, about the coming challenges of employability and future jobs with the rise of automation. I do have a problem with the use of horses as a metaphor for future human jobs, particularly because horses were never the creators of their own jobs, whereas we humans are the creators of ours. I do believe there will be tremendous deflation in the future. It is currently expressing itself as a kind of technological deflation that is hard to measure with the tools of the industrial age, such as GDP. This deflation will shrink many of our living expenses. It will probably first be seen in a dramatic lowering of the cost of transportation, and it has already been seen, as mentioned above, in the decreasing cost of technological gadgets and services. That will leave housing and healthcare as our largest costs. A universal basic income may become increasingly likely. Indeed, we live in interesting times.
AI checkup…
This conference adds much-needed perspective on the AI revolution and the current state of the art. Indeed, we are many breakthroughs away from artificial general intelligence, and the current level of technology does not quite justify the current level of hype. However, my interest in AI comes down to this: of all the current narratives that can drive investors to put their money into a stock, AI and its many sub-fields (such as transportation as a service, big data, etc.) is by far the most beguiling story.
While the growth of deep neural networks has helped propel the field of machine learning to new heights, there’s still a long road ahead when it comes to creating artificial intelligence. That’s the message from a panel of leading machine learning and AI experts who spoke at the Association for Computing Machinery’s Turing Award Celebration conference in San Francisco today.
We’re still a long way off from human-level AI, according to Michael I. Jordan, a professor of computer science at the University of California, Berkeley. He said that applications using neural nets are essentially faking true intelligence but that their current state allows for interesting development.
“Some of these domains where we’re faking intelligence with neural nets, we’re faking it well enough that you can build a company around it,” Jordan said. “So that’s interesting, but somehow not intellectually satisfying.”
Those comments come at a time of increased hype for deep learning and artificial intelligence in general, driven by interest from major technology companies like Google, Facebook, Microsoft, and Amazon.
Fei-Fei Li, who works as the chief scientist for Google Cloud, said that she sees this as “the end of the beginning” for AI, but says there are still plenty of hurdles ahead. She identified several key areas where current systems fall short, including a lack of contextual reasoning, a lack of contextual awareness of their environment, and a lack of integrated understanding and learning.
“This kind of euphoria of AI has taken over, and [the idea that] we’ve solved most of the problem is not true,” she said.
One pressing issue identified by Raquel Urtasun, who leads Uber’s self-driving car efforts in Canada, is that the algorithms used today don’t model uncertainty very well, which can prove problematic.
“So they will tell you that there is a car there, for example, with 99 percent probability, and they will tell you the same thing whether they are wrong or not,” she said. “And most of the time they are right, but when they are wrong, this is a real issue for things like self-driving [cars].”
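Her point about calibration is easy to illustrate with a toy calculation. The sketch below uses entirely synthetic numbers, assumed for illustration: a detector that is right 90 percent of the time but always reports 99 percent confidence.

```python
import random

# Illustrative sketch of the miscalibration Urtasun describes.
# All numbers are synthetic assumptions, not real detector statistics.

random.seed(0)
N = 10_000
TRUE_ACCURACY = 0.90          # the detector is right 90% of the time...
REPORTED_CONFIDENCE = 0.99    # ...but always claims 99% certainty

correct = sum(random.random() < TRUE_ACCURACY for _ in range(N))
observed_accuracy = correct / N

# A well-calibrated model's reported confidence matches its observed
# accuracy; the gap below is what makes the 99% figure untrustworthy
# when a self-driving car must decide whether to act on a detection.
print(f"reported confidence: {REPORTED_CONFIDENCE:.2f}")
print(f"observed accuracy:   {observed_accuracy:.2f}")
print(f"calibration gap:     {REPORTED_CONFIDENCE - observed_accuracy:.2f}")
```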
The panelists did concur that an artificial intelligence that could match a human is possible, however.
“I think we have at least half a dozen major breakthroughs to go before we get close to human-level AI,” said Stuart Russell, a professor of computer science and engineering at the University of California, Berkeley. “But there are very many very brilliant people working on it, and I am pretty sure that those breakthroughs are going to happen.”
New Republic on conspiracies… must read…
https://newrepublic.com/article/142977/new-paranoia-trump-election-turns-democrats-conspiracy-theorists
Conspiracy theories spread like measles: First they infect the weak and vulnerable; then they spread like wildfire among the entire population. Researchers have found that if a person believes in one conspiracy, he or she is more likely to believe in others—even those unrelated to the initial theory. Which is to say, conspiracy depends on a rejection of the world as it appears to be, and once that belief takes root, it becomes harder and harder to differentiate truth from fiction.
In other words, it is not the methodology of conspiracy that’s the problem. When paranoid thinking opens up possibilities, it can serve a useful function. The danger comes when conspiracists remain wedded to their theories in the face of conflicting information, when they refuse to do the hard work of confirming and substantiating their own assumptions and beliefs. Woodward and Bernstein did not simply point to a trail of shady campaign contributions and tweet that Nixon was behind it all. They followed the facts, step by painstaking step, all the way to the Oval Office.
The promise of conspiracy—that it will assuage our anxiety—is a false one. Watching Donald Trump from the social media sidelines, expecting at any minute that the Deep State will appear and fire a single magic bullet from the Grassy Knoll and put everything right again, is a dangerous delusion. It offers false assurance that you, as one lone individual, can’t do anything, even though American democracy has never needed you more.
China and the blockchain…
This story is important and illustrates the advantage a slightly more totalitarian society has over a democracy: it can quickly push innovations rather than waiting for market forces. However, this comes at the cost of a more brittle and less resilient civil society, one that depends entirely upon the relative wisdom and judgment of a small number of leaders. Think of it as an experiment on Plato's idea of the greatest leader being a philosopher with power. Secondarily, after the risk of poor leaders exercising bad judgment, you have the chronic problem of misallocation of capital. This is something Plato did not think of or understand, but the Western philosopher-economist Adam Smith perceived: the market is a much better allocator of capital than any manifestation of central planning. Nevertheless, the speed with which China pursues any technical innovation is impressive.
China’s central bank is testing a prototype digital currency with mock transactions between it and some of the country’s commercial banks.
Speeches and research papers from officials at the People’s Bank of China show that the bank’s strategy is to introduce the digital currency alongside China’s renminbi. But there is currently no timetable for this, and the bank seems to be proceeding cautiously.
Nonetheless the test is a significant step. It shows that China is seriously exploring the technical, logistical, and economic challenges involved in deploying digital money, something that could ultimately have broad implications for its economy and for the global financial system.
A digital fiat currency—one backed by the central bank and with the same legal status as a banknote—would lower the cost of financial transactions, thereby helping to make financial services more widely available. This could be especially significant in China, where millions of people still lack access to conventional banks. A digital currency should also be cheaper to operate, and ought to reduce fraud and counterfeiting.
Even more significantly, a digital currency would give the Chinese government greater oversight of digital transactions, which are already booming. And by making transactions more traceable, this could also help reduce corruption, which is a key government priority. Such a currency could even offer real-time economic insights, which would be enormously valuable to policymakers. And finally, it might facilitate cross-border transactions, as well as the use of the renminbi outside of China because the currency would be so easy to obtain.
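For readers wondering how "traceable" works mechanically, here is a toy hash-chained ledger in Python. The People's Bank of China has not published its prototype's design, so every name and detail below is an illustrative assumption rather than the actual system.

```python
import hashlib
import json
import time

# Toy sketch of why a digital ledger makes transactions traceable.
# Purely illustrative; not the PBOC design, which is not public.

def tx_hash(tx, prev_hash):
    # Each entry's hash commits to both the transaction and its
    # predecessor, chaining the whole history together.
    payload = json.dumps(tx, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

ledger = []
prev = "0" * 64  # genesis marker
for tx in [
    {"from": "bank_a", "to": "merchant_b", "amount": 120.0, "ts": time.time()},
    {"from": "merchant_b", "to": "supplier_c", "amount": 75.5, "ts": time.time()},
]:
    prev = tx_hash(tx, prev)
    ledger.append({"tx": tx, "hash": prev})

# A regulator can replay the chain end to end; any tampered or missing
# entry breaks every hash that follows it.
for entry in ledger:
    print(entry["hash"][:16], entry["tx"]["from"], "->", entry["tx"]["to"])
```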
The flaw at the core of the EU
http://bilbo.economicoutlook.net/blog/?p=36270
Periodically, the European Commission puts out a new report or paper on how it is going to fix the unfixable mess that the Eurozone continues to wallow in. I say unfixable because all of the proposed reforms refuse to confront the original problem, which, at inception, the monetary union builders considered to be a desirable design feature: the lack of a federal fiscal capacity.
The conclusion that anyone who understands these matters would reach is that the differences between the European nations are so great that such a shift towards a true federation is highly unlikely, despite the fact that the EMU could function effectively if that capacity were developed.
The other conclusion is that by failing to solve the inherent design problem either by introducing a full federal fiscal capacity or disbanding the monetary union, the European Commission is setting the Eurozone up for the next crisis.
While there is some growth now, after nearly a decade of malaise, the residual damage from the crisis remains. The private sector still has elevated levels of debt, the banking system is far from recovered (particularly in Italy), the property market is still depressed, governments have elevated levels of foreign-currency debt (euros), and the labour market remains depressed.
What that means is that when the next economic downturn comes – and economic cycles repeat – the crisis will be magnified and the mechanisms set in place as emergency measures to deal with the GFC will fail immediately.
It is only a matter of time.
That is enough for today!