“Do You Trust This Computer?”: A Wake Up Call on Superintelligence

It is not every day that Elon Musk endorses and partially funds a documentary. Needless to say, when that happens, anyone tracking the tech industry takes notice. In “Do You Trust This Computer?”, Chris Paine brings together experts, journalists, and CEOs from the tech industry and academia to make a compelling point about the dangers of Superintelligence for humanity’s future. In this blog, I review the documentary and offer some thoughts on how we might respond to the issues it raises.

In an age of scattered attention, I welcome any effort to raise awareness of the social impacts of AI. While AI has gained notoriety recently, there has been little thoughtful discussion of its impacts. This documentary offers exactly that kind of discussion, and for that reason alone, I encourage everyone to watch it.

Surprisingly, the documentary did not uncover any new information. Most of the examples cited have appeared in other media discussing AI. The documentary contributes to the discussion not because of its content per se but because of how it frames the issue of Superintelligence. Many of us have heard of the singularity, the rise of killer AI, the death of privacy through Big Data, and the dangers of automated weapons. Chris Paine’s genius was to bring those issues together into a cohesive argument that shows both the plausibility and the danger of the rise of superintelligence. The viewer comes away with greater clarity and awareness on the subject.

Compelling but Incomplete

In short, Paine argues that if we develop AI without proper safeguards, it could literally destroy us as a species. It would not do so intentionally, but in the course of maximizing its goals. The example he gives is how we humans have no qualms about removing an ant mound that stands in the way of building a path. Superintelligent entities would regard us the way we regard ants, lacking any human-centered ethical norms. Beyond that, he also touches on other topics: impending job elimination, Big Data’s impact on our lives, and the danger of automated weapons. While the documentary is not overly alarmist, it does challenge us to take these matters seriously and encourages conversation at multiple levels of society.

In spite of its compelling argument, I found the treatment of the topic lacking in some respects. For one, the film could have explored further how AI can lead to human flourishing and economic advancement. While it touched at times on the potential of AI, those bits were overshadowed by the parts focused on its dangers. I wish it had discussed how, like previous emerging technologies, AI will not only eliminate jobs but also create new industries and economic ecosystems. Surely its impact is bound to create winners and losers. However, overlooking its potential for job creation does a disservice to the goal of an honest dialogue about our AI future.

Moreover, the rise of artificial Superintelligence, though likely, is far from a certainty. At one point, one of the experts talked about how we have become numb to the dangers of AI, primarily because of Hollywood’s exhaustive exploitation of the theme. That is a fair point; even so, such skepticism may not be completely unfounded. AI hype has happened before, and so has an AI winter. In the early 1960s, many already predicted a takeover by robots as AI technology first entered the scene. It turned out that technical challenges and hardware limitations slowed AI development enough that government and business leaders lost interest in it, ushering in the AI winters that stretched, on and off, from the mid-1970s to the mid-1990s. This historical lesson is worth remembering because AI is not the only emerging technology competing for funding and attention at this moment.

Exposing The Subtle Impact of AI

I certainly hope that leaders in business and politics are heeding Chris Paine’s warnings. My critique above does not diminish the importance of the threat posed by Superintelligence. However, most of us will not be involved in this decision process. We may be involved in choosing who will be at the table, but not in the decision-making itself. So, while this issue is very important, we as individual citizens will have little agency in setting the direction of Superintelligence development.

With that said, the documentary did a good job of discussing the more subtle impacts of AI on our daily lives. That, to me, turned out to be its best contribution to the AI dialogue, because it helped expose how many of us are unwilling participants in the process. Because AI lives and dies on data, data collection practices are fairly consequential to the future of its development. China is leaping ahead in the AI race primarily because of its government’s ability to collect personal data with little to no restriction. More recently, the Facebook-Cambridge Analytica scandal exposed how data collection by large corporations can also be unethical and harmful to our democracy.

Both examples show that centralized data collection efforts are ripe for abuse. The most consequential act we can take in the development of AI is to be more selective about how, and to whom, we give our personal data. Moreover, as consumers and citizens, we must ensure we share in the benefits our data creates. This process of data democratization is the only way to keep effective controls on how data is collected and used. As data collection decentralizes, the risk of an intelligence monopoly decreases and the benefits of AI can be shared more equitably among humanity.

Moreover, it is time we start questioning the imperative of digitization. Should everything be tracked through electronic devices? Some aspects of our analog world are not meant to be digitized and processed by machines. The challenge is to define these boundaries and ensure they are kept out of reach of intelligent machines. This is an important question to ask as we increasingly use our smartphones to record every aspect of our lives. In this environment, writing a journal by hand, having unrecorded face-to-face conversations, and taking a technology sabbatical can all be effective acts of resistance.

Hybrid Intelligence: When Machines and Humans Work Together

In a previous blog, I argued that the best way to look at AI was not from a machine-versus-human perspective but from a human-PLUS-machine paradigm. That is, the goal of AI should not be replacement but augmentation. Artificial Intelligence should be about enhancing human flourishing rather than simply automating human activities. Hence, I was intrigued to learn about the concept of HI (Hybrid Intelligence): augmentation in practice, in which human intelligence works together with machine intelligence toward a common goal.

As usual, the business world leads in innovation, and this case is no different. One example is Cindicator, a startup that combines the collective intelligence of human analysts with machine learning models to make investment decisions. Colin Harper puts it this way:

Cindicator fuses together machine learning and market analysis for asset management and financial analytics. The Cindicator team dubs this human/machine predictive model Hybrid Intelligence, as it combines artificial intelligence with the opinions of human analysts “for the efficient management of investors’ capital in traditional financial and cryptomarkets.”

This is probably the first enterprise to take an explicitly hybrid approach to investment management. You may find examples in which investment decisions are driven by analysts and others that rely mostly on algorithms. This approach seeks to combine the two for improved results.

How Does Hybrid Intelligence Work?

One could argue that any example of machine learning is, at its core, hybrid intelligence. There is some truth to that. Every exercise in machine learning requires human intelligence to set it up and tune the parameters. Even as some of these tasks are being automated, the human imprint remains.

Yet, this is different. In the Cindicator example, I see a deliberate effort to harness the best of both machines and humans.

On the human side, the company harnesses the wisdom of crowds by aggregating analysts’ insights. This matters because machine learning can only learn from data, and not all information is data. Analysts may have contextual knowledge that is not visible in the data and can therefore bridge that gap. Moreover, human intuition is not (yet) present in machine learning systems. Certain signals require a sixth sense that only humans have. For example, a human analyst may catch deceptive comments from company executives that would go unnoticed by algorithms.

On the machine side, the company developed multiple models to uncover predictive patterns in the available data. This is important because humans can only consider a limited number of scenarios. That is one reason AI has beaten humans at games: it could consider millions of scenarios in seconds, while its human counterparts had to rely on experience and hunches. Moreover, machine learning models are superior tools for finding significant trends in vast data that humans would often overlook.
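To make the idea concrete, here is a minimal sketch of one plausible way to blend the two sides: analyst forecasts are averaged with weights based on each analyst’s track record, then combined with a model’s forecast. The numbers and weighting scheme are hypothetical illustrations, not Cindicator’s actual method.

```python
def crowd_forecast(forecasts, accuracies):
    """Accuracy-weighted average of analyst forecasts:
    better track records count for more."""
    total = sum(accuracies)
    return sum(f * a for f, a in zip(forecasts, accuracies)) / total

def hybrid_forecast(human, machine, machine_weight=0.5):
    """Blend the crowd's aggregate view with the model's view."""
    return machine_weight * machine + (1 - machine_weight) * human

# Three analysts predict next-quarter returns; the third has the
# best historical accuracy, so her view counts the most.
analysts = [0.04, 0.07, 0.10]
track_records = [0.55, 0.60, 0.85]

human = crowd_forecast(analysts, track_records)       # 0.0745
blended = hybrid_forecast(human, machine=0.02, machine_weight=0.4)
print(round(human, 4), round(blended, 4))             # 0.0745 0.0527
```

The design choice worth noticing is that neither side is trusted blindly: the crowd is discounted by demonstrated accuracy, and the model’s influence is an explicit, tunable weight.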


Can Hybrid Intelligence Lead to Human Flourishing?

HI holds much promise for augmenting rather than replacing human intelligence. At its core, it starts from the principle that humans can work harmoniously with intelligent machines. The potential uses are limitless. An AI-aided approach could supercharge research into cures for diseases, offer innovative solutions to environmental problems, and even tackle intractable social ills with humane solutions.

This is the future of work: collective human intelligence partnering with high-performing Artificial Intelligence to solve difficult problems, create new possibilities and beautify the world.

Much is said about how many jobs AI will replace. What is less discussed is the emergence of new industries made possible by the partnership between intelligent machines and collective human wisdom. A focus on job losses assumes an economy of scarcity where a fixed amount of work is available to be filled by either humans or machines. An abundance perspective looks at the same situation and sees the empowerment of humans to reach new heights. Think about how many problems remain to be solved, how many endeavors are yet to be pursued, and how much innovation is yet to be unleashed.

Is this optimistic future scenario inevitable? Not by a long shot. The move from AI to HI will take time, effort, and many failures. Yet looking at AI as an enabler rather than a threat is a good start. In fact, I would say that the best response to the AI threat is not a return to a past of dumb machines but a partnership between machines and humans steering innovation for the flourishing of our planet. Only HI can steer AI toward sustainable flourishing.

There is work to do, folks. Let’s get on with the business of creating HI for a better world!

Blockchain and AI: Powerful Combination or Concerning Trend?

Bitcoin is all over the news lately. After the cryptocurrency’s meteoric rise, the financial world is both excited and fearful: excited to get on the bandwagon while it is on the rise, but scared that this could be another bubble. Even more interesting has been the rise of blockchain, the underlying technology that enables Bitcoin to run (for those wondering what this technology is, check out this video). In this blog, I reflect on the combination of AI and blockchain by examining an emerging startup in the field. Can AI and blockchain work together? If so, what types of applications could this combination be used for?

Recently, I came across an article from Coincentral that starts answering the questions above. In it, Colin Harper interviews the CEO of Deep Brain Chain (DBC), one of the first startups attempting to bring AI and blockchain technology together. DBC’s CEO He Yong puts it this way:

DBC will allow AI companies to acquire data more easily.  Because a lot of data are private data.  They have heavy security, such as on health and on finance.  For AI companies, it’s almost next to impossible to acquire these data.  The reason being, these data are easy to copy.  After the data is copied, they’re easy to leak.  One of DBC’s core features is trying to solve this data leakage issue, and this will help AI companies’ willingness to share data, thus reducing the cost you spend on data, to increase the flow of data in society. This will expand and really show the value of data.

As somebody who works within a large company using reams of private data, I can definitely see the potential of this combination. Blockchain could provide privacy through encryption, which could facilitate the exchange of data between private companies. Not that this does not happen already, but it is certainly discouraged given the issues the CEO raises above. Certainly, the upside of aggregating this data for predictive modeling is fairly significant: companies would have more complete datasets, revealing sides of the customer that would otherwise be invisible to them.
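The core property a ledger brings to data sharing is tamper-evidence: once a record is published, any later alteration is detectable. Here is a toy sketch of that idea, in which each shared record is hashed and chained to the previous entry. This is an illustration of the general principle only, not Deep Brain Chain’s actual protocol; the record fields are made up.

```python
import hashlib
import json

def entry_hash(record, prev_hash):
    """Hash a record together with the previous entry's hash,
    chaining the entries so history cannot be rewritten silently."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class MiniLedger:
    def __init__(self):
        self.entries = []  # list of (record, hash) pairs

    def append(self, record):
        prev = self.entries[-1][1] if self.entries else "0" * 64
        self.entries.append((record, entry_hash(record, prev)))

    def verify(self):
        """Recompute the whole chain; any tampered record breaks it."""
        prev = "0" * 64
        for record, h in self.entries:
            if entry_hash(record, prev) != h:
                return False
            prev = h
        return True

ledger = MiniLedger()
ledger.append({"patient": "anon-17", "metric": 0.82})
ledger.append({"patient": "anon-18", "metric": 0.91})
print(ledger.verify())  # True: chain is intact

# Tamper with the first record while keeping its stored hash.
ledger.entries[0] = ({"patient": "anon-17", "metric": 0.99},
                     ledger.entries[0][1])
print(ledger.verify())  # False: tampering detected
```

A real system would add encryption and access control on top of this integrity check; the sketch shows only why a recipient can trust that shared data has not been quietly edited after the fact.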

However, as a citizen, such a development also gives me pause. Who will get access to this shared data? Will it be done transparently, so that regulators and the general population can monitor the process? Who will really benefit from this increased exchange of private data? While I agree that costs would decrease and efficiency improve, my main concern is the aims for which this data will be used. Don’t get me wrong: targeted marketing that follows privacy guidelines can actually benefit everybody, and richer data can also help a company improve customer service.

With that said, the way He Yong describes it, this combination looks likely to primarily benefit large private companies that will use the data for commercial aims. Is this really the promise of an AI and blockchain combination: allowing large companies to know even more about us?

Later in the interview, He Yong suggested that blockchain could actually help assuage fears that AI could get out of control:

Some people claim that AI is threatening humanity.  We think that blockchain can stop that, because AI is actually not that intelligent at the moment, so the threat is relatively low.  But in a decade, two decades, AI will be really strong, a lot stronger than it is now.  When AI is running on blockchain, on smart contracts, we can refrain AI.  For example, we can write a blockchain algorithm to restrain the computational power of AI and keep it from acting on its own.

Given my limited knowledge of blockchain, it is difficult to evaluate whether this is indeed the case. I still believe that the biggest threat of AI is not the algorithms themselves but how they are used. Blockchain, as described here, can help make the process more robust, giving human controllers more tools to halt algorithms gone haywire. Yet beyond that, I am not sure how much it can act as a true check and balance on AI.

I’ll be monitoring this trend in the coming months to see how it develops. Certainly, we’ll see more and more businesses emerging that seek to marry blockchain and AI. These two technologies will disrupt many industries by themselves; combining them could be even more impactful. I am interested to see whether they can be combined for human flourishing. That remains to be seen.