Let’s Talk About the Future of AI and Faith

AI Theology is embarking on an exciting new project about the future. In the next months, we’ll be exploring how faith and AI will interact decades from now. Through research, dialogue, speculation, and prayerful reflection, we hope to paint plausible pictures of what may be ahead of us. More to come on that.

You might wonder why even engage in such pursuit.

Here is one reason:

“Those who think about the future can shape it while those who do not only react to it.”

Now that I have your attention, let me flesh this out a bit further. In the last few months, I have been discussing with the advisory board what AIT is about. We wanted to take time to regroup and rethink priorities. Like any endeavor, we have limited time and resources. Hence, choosing what to focus on is crucial if we are to have any chance of being effective.

What we realized is that any discussion about emerging technologies is inevitably a dialogue about the future. While AI technologies present daunting challenges for our time, these are small compared to where they are taking us as a society. By playing out current possibilities over the decades to come, we can offer a clearer picture of the dangers and opportunities of AI and instigate action in the present. In a digital environment where attention is measured in milliseconds, it is time we recover the long view.

Looking into the Past for Clues

If you think this is a daunting task, you are not alone. Often, we can barely plan what to eat for dinner. Imagine looking 20 years out! At first, it seems overwhelming if not a pointless exercise. Yet, stay with me for a bit.

A good starting point, albeit imperfect, is projecting your memory back. Do you remember what life was like 20 years ago? Some of you weren’t even born or were too young to remember. Others may feel a rush of nostalgia for a time long gone.

Photo by Alexander Schimmeck on Unsplash

I, for one, (age alert ahead) remember very well where I was and what I was doing in 2002. It was a pivotal year when I graduated from college, got married, and started working for the same bank I still work for today (2 mergers later). What a year! One that would define the course of my life for decades to come.

As I reflect back on my younger self, I wonder what would surprise that 22-year-old if he could join me in 2022. Which technology would wow him the most, and which would horrify him? Certainly, he would be excited to learn that video calls on a handheld device are now possible. For somebody who dated long-distance for 4 years, that would have been a game changer! While cell phones already existed then, they were as dumb as they come, barely allowing you to make phone calls.

This is not just a trip down memory lane but a way to calibrate our thoughts to the perception of change over time. That is, considering how our younger selves would experience our present helps us understand not only what has changed but, most importantly, how fast things have changed. The rate of change is key if we are to think about the future, and getting a sense of it is probably the best takeaway from this exercise before engaging with the actual future projection we originally intended to do.

Peering into the Future

Older version of me created by FaceApp

Who is the wise older man on the right? Well, this is AI’s prediction of what I will look like in the future. Not bad, huh? I am not sure I can pull the beard off, since the best I can do today is scattered patches of hair across my face. With that said, I am impressed with how realistic the picture looks.

If AI can draw such a compelling future for my looks, what if I asked it to tell me more about the world in 2042? Well, that’s what I did. Using OpenAI’s GPT-3 language model, I typed in the prompt: “What will the world look like in 2042?”

Here is its response:

In 2042, the world will be a very different place. Technology will have advanced significantly, and humanity will have made great strides in solving many of the problems that plague us today. Poverty, hunger, and disease will be largely eradicated, and global warming will be under control. People will be living longer, healthier lives, and we will have colonized other planets. The internet will be even more omnipresent than it is today, and we will be able to communicate with each other instantaneously no matter where we are in the world. We will have flying cars, and robots will be a part of everyday life. The world will be a much more peaceful and prosperous place, and it will be an exciting time to be alive.

Ok, AI – dream on! That was both too general and too optimistic for my taste. No doubt, AI has a long way to go before it can give us a comprehensive picture of the future.
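For the curious, the experiment above is easy to reproduce. The sketch below only assembles the kind of request the completions-style OpenAI Python client expected for GPT-3; the model name, token limit, and temperature are my illustrative assumptions, and actually sending it requires your own API key.

```python
def build_completion_request(prompt: str,
                             model: str = "text-davinci-002",
                             max_tokens: int = 200,
                             temperature: float = 0.7) -> dict:
    """Assemble a payload for a GPT-3-style text completion.

    All defaults here are illustrative assumptions, not the exact
    settings used for the response quoted above.
    """
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

request = build_completion_request("What will the world look like in 2042?")
print(request["prompt"])

# Sending it would look roughly like this with the classic client:
#   import openai
#   openai.api_key = "..."  # your key
#   response = openai.Completion.create(**request)
#   print(response["choices"][0]["text"])
```

With a non-zero temperature like the one assumed here, each run will produce a different vision of 2042, so your mileage will vary from the response quoted above.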

How about you?

What would your 20-years-younger self notice about today? What would surprise him or her?

When you project yourself forward 20 years, considering the rate of change of the last 20 and keeping it mostly constant, what do you see?

What world do you visit?

Drop your answers in the “send us a message” section on the right or in our FB community.

We would love to hear from you.

Artificial Intelligence: The Disguised Friend of Christian Anthropology

AI is making one significant leap after the other. Computer programs can nowadays convincingly converse with us, generate plausible prose, diagnose disease better than human experts, and totally trash us in strategy games like Go and chess, once considered the epitome of human intelligence. Could they one day reach human-level intelligence? It would be extremely unwise to discount such a possibility without very good reasons. Time, after all, is on AI’s side, and the kind of things that machines are capable of today used to be seen as quasi-impossible just a generation ago.

How could we possibly speak of human distinctiveness when robots become indistinguishable from us?

The scenario of human-level AI, also known as artificial general intelligence (AGI), would be a game-changer for every aspect of human life and society, but it would raise particularly difficult questions for theological anthropology. Since the dawn of the Judeo-Christian tradition, humans have perceived themselves as creatures unlike any other. The very first chapter of the Bible tells us that we are special, because only we of all creatures are created in the image of God (imago Dei). However, the Copernican revolution showed us that we are not the center of the universe (not literally, at least), and the Darwinian revolution revealed that we are not ontologically different from non-human animals. AGI is set to deliver the final blow, by conquering the last bastion of our distinctiveness: our intelligence.

By definition, AGI would be capable of doing anything that a standard human can do, at a similar or superior level. How could we possibly speak of human distinctiveness when robots become indistinguishable from us? Christian anthropology would surely be doomed, right? Well, not really, actually quite the contrary. Instead of rendering us irrelevant and ordinary, AI could in fact represent an unexpected opportunity to better understand ourselves and what makes us in God’s image.

Science’s Contribution to the Imago Dei

To explain why, it is useful to step back a little and acknowledge how much the imago Dei theology has benefitted historically from an honest engagement with the science of its time. Based solely on the biblical text, it is impossible to decide what the image of God is supposed to mean exactly. The creation story in Genesis 1 tells us that only humans are created in the image and likeness of God, but little else about what the image means. The New Testament does not add much, except for affirming that Jesus Christ is the perfect image. Ever since Patristic times, Christian anthropology has constantly wrestled with how to define the imago Dei, without much success or consensus.

The obvious way to tackle the question of our distinctiveness is to examine what differentiates us from the animals, the only others with which we can meaningfully compare ourselves. For most of Christian history, this difference has been located in our intellectual capacities, an approach heavily influenced by Aristotelian philosophy, which defined the human as the rational animal. But then came Darwin and showed us that we are not as different from the animals as we thought we were.

Theologian Aubrey Moore famously said that Darwin “under the guise of a foe, did the work of a friend” for Christianity.

Furthermore, the following century and a half of ethology and evolutionary science revealed that our cognitive capacities are not bestowed upon us from above. Instead, they are rooted deep within our evolutionary history, and most of them are shared with at least some of the animals. If there is no such thing as a uniquely human capacity, then surely we were wrong all along to regard ourselves as distinctive, right?

Not quite. Theologian Aubrey Moore famously said that Darwin “under the guise of a foe, did the work of a friend” for Christianity. Confronted with the findings of evolutionary science, theologians were forced to abandon the outdated Aristotelian model of human distinctiveness and look for more creative ways to define the image of God. Instead of equating the image with a capacity that humans have, post-Darwinian theology regards the imago Dei in terms of something we are called to do or to be.

Defining the Imago Dei

Some theologians interpret the image functionally, as our election to represent God in the universe and exercise stewardship over creation. Others go for a relational interpretation, defining the image through the prism of the covenantal ‘I-Thou’ relationship that we are called to have with God, which is the foundation of human existence. To be in the image of God is to be in a personal, authentic relationship with God and with other human beings. Finally, there are others who interpret the imago Dei eschatologically, as a special destiny for human beings, a sort of gravitational pull that directs us toward existential fulfilment in the fellowship with God, in the eschaton. Which of these interpretations is the best? Well, hard to say. Without going into detail, let’s just say that there are good theological arguments for each of them.

If purely theological debate does not produce clear answers, we might then try to compare ourselves with the animals. This, though, does not lead us too far either. Although ‘technically’ we are not very different from the animals, and we share with them similar bodily and cognitive structures, in practical terms the difference is huge. Our mental lives, our societies, and our achievements are so radically different from theirs that it is actually impossible to pinpoint just one dimension that represents the decisive difference. Animals are simply no match for us. This is good news for human distinctiveness, but it also means that we might be stuck in a never-ending theological debate on how to interpret the image of God, with so many options at hand.

How Can AI Help Define Who We Are?

This is where the emergence of human-level AI can be a game-changer. For the first time, we would be faced with the possibility of an equal or superior other, one that could potentially (out)match us in everything, from our intellectual capacities, to what we can do in the world, our relational abilities, or the complexity of our mental lives. Instead of agonizing about AI replacing us or rendering us irrelevant, we could relish the opportunity to better understand our distinctiveness through the insights brought about by the emergence of this new other.

The hypothesis of AGI might present theologians with an extraordinary opportunity to narrow down their definitions of human distinctiveness and the image of God. Looking at what would differentiate us from human-level AI, if indeed anything at all, may provide just the right amount of conceptual constraint needed for a better definition of the imago Dei. In this respect, our encounter with AI might prove to be our best shot at comparing ourselves with a different type of intelligence, apart from maybe the possibility of ever finding extra-terrestrial intelligence in the universe.


Dr. Marius Dorobantu is a research associate in science & theology at VU Univ. Amsterdam (NL). His PhD thesis (2020, Univ. of Strasbourg, FR) analysed the potential challenges of human-level AI for theological anthropology. The project he’s currently working on, funded by the Templeton WCF within the “Diverse Intelligences” initiative, is entitled ‘Understanding spiritual intelligence: psychological, theological, and computational approaches’.

AI Theology Goes to New Mexico: Of Petroglyphs and Atomic Bombs

Six years ago, my dad and I set out on a cross-country road trip from North Carolina to California. No, this was not a vacation but my way to move our stuff to Pasadena. We rented a Penske truck, towed one of our cars, had the dogs with us, and drove across the country while my wife flew with the kids to LA. On the third day of our trip, we reached New Mexico in the morning, and I was struck by its natural beauty. Blue skies, dry heat, and beautiful arid vegetation invited me in. It was then I decided I had to come back for a visit one day. A month ago, I made that happen by organizing a 40th-birthday trip with my unsuspecting brother. Unlike the first time, I was determined to spend 5 days exploring the state’s natural beauty and its rich multicultural history.

Soon it became clear the trip offered many opportunities for reflection and writing. The themes of technology, culture, nature, and humanity popped up throughout, which led me to this series of blogs. Reflecting on compelling experiences often ends up as blog material. Here is my first installment reflecting on this epic trip.

Ancient Writings

On our first full day in Albuquerque, we decided to visit Petroglyph National Monument. This national park offered not only an opportunity to hike the beautiful New Mexico landscape but also a way to connect with pre-Columbian American history. The canyons boasted a rich array of petroglyphs (artistic carvings on rocks) that were between 400 and 700 years old. Centuries of erosion and exposure to the elements were not enough to erase these early manifestations of human creativity.

A small sample of the many petroglyphs found in the park.

The carvings ranged from people and animals to abstract symbols. Some were faint, while others were very clear. According to the information provided in the park, no one knows for sure why these carvings were made. They were made by both Native Americans and early Spanish settlers and represent the rich and diverse cultural heritage of the region.

To think that their imprints have persevered in the rocks for centuries is mind-boggling. I wonder what I could do today that would remain intact for even 50 years! With all the technology and tools at our disposal, are we still capable of leaving a noticeable imprint on nature that could last centuries? It also made me wonder what inspired them to carve. Was it religious, artistic, or did it serve another purpose? These are a sampling of the thoughts that went through my mind as we battled the 90-degree sun overhead to walk the trails and climb the rocks.

The Cloud that Changed the World

As the clock reached noon, it was time to find an air-conditioned respite from the sun. I didn’t want to get sunburned and ruin the rest of the trip. After a short but restful lunch, it was time for the next destination. Knowing the heat made anything outside unbearable, we opted for a museum. From Breaking Bad scenes and previous research, I knew that Albuquerque housed the Museum of Nuclear Science and History. A fitting location, as Los Alamos, a few miles up north, was home to the Manhattan Project.

Standing in front of the museum

The Museum of Nuclear Science and History is one of the few national museums outside of DC. It boasts an impressive collection of artifacts tracing the history of nuclear technology. Much of it centers on the development of the nuclear bomb, which not only ended World War II but also permanently changed our relationship with technology.

Re-visiting this complex history evokes a range of mixed emotions. On one side, there is a sense of awe and admiration. It is mind-boggling to comprehend the effort, coordination, resources, and secrecy it took for the Manhattan Project to succeed. Like the moon landing decades later, it is a testament to this nation’s ability to take on monumental tasks. Yet all of this dissipates when we consider the horrifying destruction left by the nuclear bomb.

From a military-strategic perspective, the bomb was meant to save lives. Given dogged Japanese resistance, the Allied forces anticipated that only a long and bloody invasion would bring the war to a conclusion. Some estimates put the cost in lives in the millions for both sides. The solution, in their view, was to deliver a humiliating and devastating blow that would convince the Japanese emperor to re-think his resistance. Two bombs and close to 150,000 lives later, their strategy worked. Yet the cost would far surpass the direct casualties of the bomb.

In the section portraying the bombings of Hiroshima and Nagasaki, I was struck by a harrowing reminder. Of all the pictures shown, the image of an incinerated child’s tricycle shocked me most. I thought of my own children and the pain of having their lives cut short at such a tender age. Yes, the project was a tremendous feat, but it was also a destructive curse. In spite of its justification, the bomb crossed an unprecedented line for humanity.

The advent of the atomic age removed a sense of security from all of us. It is as if we realized we now had the power to annihilate all human life in a matter of seconds. This fear transcended national, ethnic, and racial loyalties. This new discovery threatened all life on the planet.

In the exhibit, we learned that scientists working on the project made a final plea for an alternative. Instead of dropping it on a city, they begged the military to run a demonstration of the bomb’s power in an unpopulated area as a final warning to the Japanese. Military leaders disagreed and convinced the president that the war effort needed more forceful action than a demonstration. Let history be the judge of their ultimate decision.

Technological Blessing and Curse

The loss of innocence that came with the atomic experience reverberated on many levels. To this day, the mushroom cloud symbolizes the destructive side of technology. Yet, even seventy years later, another nuclear bomb has not been deployed in an armed conflict. While a few nations continue to pursue one as a way to shore up their position in the geopolitical theater, thankfully, no deadly detonations have followed. Paradoxically, the horror of the first bombings led to restraint and possibly kept the cold war between two superpowers from becoming hot in the 20th century.

How can we use the legacy of the nuclear bomb to re-think our relationship with technology? On the one hand, it reminds us that the development of any technology should never be divorced from rigorous ethical deliberation. One could say that the nature of a global conflict like World War II did not lend itself to such deliberation. That may be so, but I do wonder how different history would have been had the President listened to the scientists’ plea for a demonstration. Would the threat of grave destruction have been enough, or was the actual deployment necessary?

On the other hand, harnessing energy from atoms was a major scientific breakthrough. In an age where we worry about global warming, nuclear energy continues to be a rather clean way to produce electricity. I wonder whether we would be more open to building nuclear plants had the technology not been tainted by its association with the bomb. That is, because of fear we sometimes lose opportunities to harness technology for good.

As we enter an age where AI is becoming a deciding factor in our future, I hope we listen to the lessons from the history of nuclear science. Let us approach it with rigorous ethical reflection but also aware of our blinding fears. While the former is essential, the latter can greatly reduce the opportunity for flourishing. And that, in itself, would be a great loss.

AI Evolution: How Wakanda’s Golden City Inspires Innovation

Have you ever imagined a place where technology and nature work in harmony for the flourishing of life? Somebody did. In a blog last year, I explored how Black Panther represented a decolonial African futurist vision. Today, inspired by a podcast I recently heard, I want to write about how this vision worked itself out in Wakanda’s Golden City.

Before diving into Wakanda, let me first put in a plug for the Technopolis podcast from CityLab. It is on my short list of subscribed podcasts for a reason. It explores the intersection of technology and city planning in a thought-provoking way. I recommend it to anyone looking for some meaty conversations on the topic.

Designing a Futuristic City

The podcast recently had Hannah Beachler as a guest to discuss her work as the production designer for the Black Panther movie. More specifically, she was there to discuss the meticulous work that went into designing, building, and filming the Golden City, Wakanda’s capital. Just so you have an idea, Hannah put together a 500-page book for the production plan! As she described to the hosts, she put in many hours studying, researching, and speaking to experts in urban planning and sustainability.

Though this was mostly background for the feature film, the production team asked serious design questions such as “How do people move in the city?”; “How can it incorporate technology while also respecting nature?”; “How can it portray the best of present-day African culture?” These poignant inquiries guided the creative process that led to a visual feast for the movie watcher.

The designer sought inspiration from architecture, culture, and futurist thinking. Part of the challenge was creating a futuristic vision that happens in the present day. That is, the innovations showcased had to be recognizable and believable given what is available to us today, since the story takes place in the present.

For example, one of the discussions was around having self-driving cars. In that debate, the importance of the bus driver in both African and African-American culture won out over a techno-centric futurist view. This kind of design thinking permeated throughout. Instead of flying cars, the Golden City had buses and trains. Instead of a concrete jungle, the city had dirt roads with vegetation.

The vision was to portray technology not as an end in itself but as a complement to nature in service of people. Instead of the destroy-and-replace mentality that often dominates city planning, the designer asked how existing structures could evolve. Such direction balanced reverence for nature and tradition while also fostering ground-breaking innovations. For example, one of the sidewalks in the city harnessed foot traffic to power buildings. I would love to have one of those in my neighborhood!

From Creative Destruction to Evolution

The idea of evolving rather than replacing was a key insight from the conversation, especially as it relates to technology. One of the downsides of current technological innovation is its tendency to impose pre-conceived conditions for its flourishing. This is not limited to buildings but manifests itself especially in the many devices that currently proliferate globally.

For example, most electronics are built to be used in climate-controlled rooms. Only now, with the emergence of smartphones and light laptops, has this paradigm started to break. I rarely thought about this until I visited Brazil in May. Unlike here, where buildings are closed-off, climate-controlled enclosures, buildings in Brazil often lack AC or heat. Instead, they keep their windows open 24/7. Hence, when I went to visit the IT team of the town’s university, I noticed they had to be enclosed in a special windowless room with AC running around the clock. To accommodate computers’ need for cooling, existing structures had to be completely retrofitted, greatly increasing the cost and reducing the accessibility of computers in the town.

The idea of a technology that evolves challenges this pre-conceived notion. It starts by looking at what is currently available and then seeks to improve it for the flourishing of life. While respecting the environment, it also rejects Luddite notions that all technology is destructive to nature. Instead, it believes that technology can emerge from nature.

The picture that encapsulates this well is the hover buses atop dirt roads portrayed in the Golden City. You have a high-tech form of mobility moving across natural paths that can absorb water and grow beautiful vegetation while still serving as roads. This is a great example of a vision of life-affirming, environmentally conscious, and culturally sensitive technology. A futuristic vision we can all adopt for our time.

Lessons from Wakanda for AI

As I reflect on this podcast, I consider ways in which AI can enable such an evolution mentality when building innovation. One of the greatest strengths of AI systems is their ability to learn from data. As our capability to capture, aggregate, and process data improves with the advent of the IoT (Internet of Things), AI systems can be a key tool in planning for the future of cities, communities, and even whole regions.

Yet the idea of AI systems as a silver bullet is false. That is, simply learning from data is not enough. For an evolution vision to take hold, what you need are visionary humans who can point these AI systems toward new paradigms. They need to guide the process of data collection, processing, and learning. They should also steward the optimization functions well, toward life-flourishing goals.

Hence, the main lesson Wakanda offers is not technological per se but cultural. Black Panther illustrates the power of culturally informed futurist dreaming. When a historically ostracized people group decides to dream an alternative future without forgetting its past, a beautiful result can emerge. In being faithful to their own culture, they offer a lesson to all cultures. The main point is not the technology but whose vision it reflects.

This only reinforces the need to further democratize technical knowledge. Developing technology cannot remain the job of a select few but belongs to all humanity. As we work to remove barriers to entry, we also foster innovation by bringing new voices to the table. May the Golden City be the first of many examples of a life-affirming, culturally sensitive, environmentally enhancing technological dream.

AI Theology Goes to Brazil Part 4 – Postmodernity and AI

In part 3, I described my learning experience with holistic ministry in Brazil after meeting with local pastors in Caratinga. In this blog, closing the series on my Brazil trip, I describe in more detail the last talk I gave at the university. Unlike previous talks, this time I was there more to listen than to speak. The event consisted of a panel discussion entitled “AI and Postmodernity.” The plan was for me to give a 15-minute introduction and then pass the discussion on to a panel of professors.

With such a loaded title, I struggled to develop a suitable introduction. Postmodernity is a broad term that does not lend itself to simple definitions. Moreover, it is not something usually discussed alongside a technology. The task before me was to elucidate points of correlation that would spark fruitful conversation. The preparation forced me to think more deeply about some assumptions inherent in AI technology. After some careful thought, I came up with the following:

  1. All problems have a technological solution
  2. Nature (including humanity) is an algorithm that needs to be decoded
  3. Data reveals the truth and shows the way to solutions
  4. Every activity can and should be optimized

The more I reflect on these, the more I see them informing the development of Artificial Intelligence. Without these assumptions in place, I don’t see AI attracting the attention, funding and research needed to develop further. This is only happening because business leaders, politicians and investors have bought into these claims.

For Postmodernity, I framed the topic around four key implications that I found to be relevant for the panel discussion. They were the following:

  1. The acceptance and celebration of diversity
  2. Skepticism towards institutions or any power structure
  3. Increase in both nihilism and fatalism
  4. Openness to all narratives especially those coming from the margins

Bringing the two together, I formulated four questions to spark discussion among the panel participants. Here were the questions:

  1. What is the socio-cultural dominant narrative in Caratinga?
  2. How can AI empower groups in the margins?
  3. How can AI foster human flourishing at the local level?
  4. Could the development of AI be a sign of a return to Modernity?

The panel consisted of professors in engineering, law, computer science, theology, and social service. Their answers stayed mostly around themes of ensuring technology is used to enhance, not diminish, humanity. For example, the Social Service professor expressed the hope that data collection could improve their work with vulnerable populations. The Computer Science professor emphasized the differences between the human brain and AI. The Theology professor talked about the potential of AI for furthering ministry opportunities and addressing income inequality.

When hearing their answers, I sensed a disconnect between where I expected the discussion to go and where it actually went. While I set the stage for a more philosophical discussion of AI and postmodernity, the professors mostly avoided such approaches, preferring instead to speak of concrete ways technology could affect their work. The panel ended up being about the impact of AI applications rather than how AI could change our view of humanity and the world we live in.

This disconnect only highlighted the importance of context. I noticed that my North American academic context was dominated by deep specialists, while the academic context I encountered in Brazil was run by generalists. This was not limited to academia but reflected how the local society worked. People tended to rely on a broader, more general level of knowledge rather than simply consulting the specialist in each field at hand.

I realized that such an environment made them more receptive to the message of integration I was proposing. In a place where people tend to rely on generalized knowledge, they are already doing the kind of integration I suggest between faith, science, and technology. They may not be doing it directly on those topics, but they are practicing it in other areas.

On the flip side, reliance on generalized knowledge can hinder more in-depth reflection on a specific topic. So, while I mostly agreed with the professors that technology should not replace humanity, I hoped to hear more nuanced arguments about how that could be the case in their context. I was hoping for new insights into how technology could integrate with their environment to solve deeply entrenched social problems, for more layers of “why” and “how” in their answers. With that said, the panel was still fruitful in that it sparked discussion on how technology is impacting diverse segments of society. That in itself is an important step forward.

I left Caratinga deeply impacted by my time there. As the old adage goes, I came to teach but ended up learning more. In fact, there were times when I wondered whether my talks were even relevant to their context. This was not just personal insecurity, though there was probably some of that; it came from accepting that the assumptions I operate under do not necessarily hold there. In many ways, I found a community where technological change was not as voracious as what I have witnessed in the US. Even if many of these AI breakthroughs come to pass, they may not upend the social order there as they will in my own community. This is not to say that Caratinga will be immune from them, only that technology adoption is not as inevitable as we make it out to be. The issue is not just whether something is technically possible but whether people will openly adopt it.

From what I saw, smartphones have become an integral part of their lives. Yet, in many other areas, the way they study, work, eat, and relate to each other has remained unchanged. That is not a sign of being “backward” but of resistance. Caratinga shows that there isn’t only one way to integrate technology into the fabric of a society. This realization led me to reflect on how I can live a life, in a techno-dominant society, that is more in tune with my humanity and that of those around me. That is the question I take home with me, hoping not to let the lessons I learned there fall into oblivion.

AI Theology Goes to Brazil – Part 2: Pure Energy

In the previous blog, I described my first talk in Caratinga where I addressed how to transmit our faith to our kids in a technological era. It was a good way to break into the series of talks that followed.

The following day, I delivered an evening talk to a wider audience of students, professors, and administrators of the university. After being introduced, I walked up to the stage in sunglasses doing a robotic walk to Information Society’s “What’s on Your Mind”. (Yes, I went there! For those old enough, who can forget Leonard Nimoy’s voice saying “Pure Energy” in the intro – it’s a classic. I also remember it being a big hit in Brazil. What happened to all the great music of the ’80s and ’90s? That’s a topic for another blog.) For an introvert like me, that was a tremendous risk and one that could have gone badly. Thankfully, the audience was gracious with my clumsy attempt, waiting to hear what I had to share.

After a short introduction to what Artificial Intelligence is, I moved on to examples of how it is changing relationships, jobs, and religion. I wanted the audience to distinguish between general and narrow AI, and between replacement and augmentation. On relationships, I talked about the promises of algorithmic matchmaking and shared the example of the virtual wife hologram now available in Japan. On jobs, I talked about how many job functions would be affected, noting that change would come not as total replacement but as the automation of tasks. For fun, I shared the story of the robot-run hotel in Japan where the robots ended up being “fired” for annoying customers! In other words, the changes AI brings will be more complex than what is portrayed in science fiction.

Finally, on religion, I shared a robot pastor’s blessing with the audience. First, I assured seminarians that robot competition was not an issue for them. Instead, I encouraged those training for the ministry to view technology not as a threat but as an ally to their efforts. Historically, religious leaders have often displayed knee-jerk resistance to new technologies. Could we react differently this time? I also mentioned how AI could amplify their efforts in counseling parishioners or simply in getting their message out.

That’s me with sunglasses getting ready to speak.

The main message of the talk was that narrow AI is coming to change our lives; how will we respond? I encouraged the audience to replace fear with courage, engaging AI with a broader view that seizes on opportunities while monitoring risks. On the latter, I mentioned the perils of using face recognition for policing and how such uses require a broader democratic discussion from civil society. Interestingly enough, after my talk a local lawyer told me of a recent effort to coordinate camera footage from many areas of the town for law enforcement. I was not speaking of a distant future but of the here and now.

The question-and-answer time proved very thought-provoking. The students asked insightful questions. One that impressed me was how far computers are from approaching the capacity of the human brain. Others asked more directly about how AI would hinder belief in God and religious adherence. In my view, the questions demonstrated critical thinking and deep intellectual engagement with the topic. That is, the students were awake and listening. That is not always the case in events like these, where attendance is required for class credit.

Maybe “pure energy” and a robotic walk was all it took to hook them in. Maybe not. Either way, my hope was to spark conversation on the topic. Judging by their questions, the dialogue had begun. In the 21st century, we are now all “Information Societies.”

AI for Scholarship: How Machine Learning can Transform the Humanities

In a previous blog, I explored how AI will speed up scientific research. In this blog, I will examine the overlooked potential that AI has to transform the Humanities. This connection may not be clear at first, since most of these fields do not include an element of science or math. They are more preoccupied with developing theories than with testing hypotheses through experimentation. Subjects like Literature, Philosophy, History, Languages, and Religious Studies (and Theology) rely heavily on the interpretation and qualitative analysis of texts. In such an environment, how could mathematical algorithms be of any use?

Before addressing that question, we must first look at the field of Digital Humanities, which created a bridge from ancient texts to modern computation. The field dates back to the 1930s, before the emergence of Artificial Intelligence. Ironically, and quite relevant to this blog, the first project in this area was a collaboration between an English professor, a Jesuit priest, and IBM to create a concordance of Thomas Aquinas’ writings. As digital technology advanced and texts became digitized, the field has continued to grow in importance. Its primary purpose is both to apply digital methods to the Humanities and to reflect on their use. That is, its practitioners are not only interested in digitizing books but also in evaluating how the digital medium affects human understanding of these texts.

Building on the foundation of Digital Humanities, the connection with AI becomes clear. Once computers can ingest these texts, text mining and natural language processing become possible. With recent advances in machine learning algorithms, the cheapening of computing power, and the availability of open-source tools, the conditions are ripe for an AI revolution in the Humanities.

How can that happen? Machine learning in combination with Natural Language Processing can open avenues of meaning that were not possible before. For centuries, these academic subjects have relied on the accumulated analysis of texts performed by humans. Yet human capacity to interpret, analyze, and absorb texts is finite. Humans do a great job of capturing meaning and nuance in texts of hundreds or even a few thousand pages. As the volume increases, however, machine learning can detect patterns that are not apparent to a human reader. This can be especially valuable in applications such as author attribution (determining who the writer was when that information is unclear or in question), the analysis of cultural trends, semantics, tone, and relationships between disparate texts.
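To make the author-attribution example concrete, here is a minimal sketch of one classic stylometric idea: compare how often a text uses common “function words,” which authors tend to use at stable, unconscious rates. The word list, the distance measure, and the toy texts are all illustrative assumptions, not a production method.

```python
from collections import Counter

# Function words ("the", "of", ...) are classic stylometric features:
# authors use them unconsciously and at fairly stable rates.
FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "is", "for"]

def profile(text):
    """Relative frequency of each function word in a text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def distance(p, q):
    """Manhattan distance between two stylistic profiles."""
    return sum(abs(a - b) for a, b in zip(p, q))

def attribute(unknown_text, known_texts):
    """Guess the author whose known writing is stylistically closest."""
    target = profile(unknown_text)
    return min(known_texts, key=lambda name: distance(target, profile(known_texts[name])))

# Toy corpora standing in for two authors' collected works
known = {
    "Author A": "the cat sat on the mat and the dog ran to the door",
    "Author B": "of mice and of men tales of sorrow and of joy",
}
print(attribute("the bird flew over the wall and the cat watched the sky", known))  # prints: Author A
```

Real stylometry uses far richer features and statistical models, but the principle is the same: patterns too diffuse for a human reader to tally become measurable at scale.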

Theology is a field particularly poised to benefit from this combination. For those unfamiliar with theological studies, it is a long and lonely road. Brave souls aiming to master the field must undergo more schooling than physicians. In most cases, aspiring scholars must complete a five-year doctoral program on top of two to four years of master’s-level studies. Part of the reason is that the field has accumulated an inordinate amount of primary sources and countless interpretations of those texts, written in multiple ancient and modern languages and spanning thousands of years. In short, when reams of texts become big data, machine learning can do wonders to synthesize, analyze, and correlate large bodies of texts.

To be clear, that does not mean machine learning will replace painstaking scholarly work. Quite the opposite: it has the potential to speed up and automate some tasks so scholars can focus on the high-level abstract thinking where humans still hold a vast advantage over machines. If anything, it should make their lives easier and possibly shorten the time it takes to master the field.

Along these lines of augmentation, I am thinking about a possible project. What if we could employ machine learning algorithms on a theologian’s body of work and compare the results to the scholarship that interprets it? Could we find new avenues of meaning that complement or challenge prevailing scholarship on the topic?

I am curious to see what such an experiment could uncover.

The Future of Service: How Google, Apple and Facebook are Using AI to Simplify Our Lives

Companies want satisfied customers who will come back for more and recommend their brands to others, and AI can help them achieve this goal. People benefit in many ways, for instance, by getting quick replies to their questions.

Artificial intelligence is becoming “humanized” as it helps people in several ways. Whether it’s face recognition, voice dictation on mobile phones, online shopping, food ordering, or self-driving cars, these applications are making our lives easier.

Let’s take a look at three major enterprises and ways they use artificial intelligence to “make life easier” for their customers.

  1. Google

 

  • Google spent between $20 and $30 billion on artificial intelligence in 2016.

  • Google’s self-driving cars use AI to map roads and navigate them.

  • Google claims 98% accuracy for its voice recognition technology.

  • YouTube increased watch time by 50% by tuning its video recommendations with AI.

  • Google Photos can recognize faces, create animations, or suggest a photo filter.

 

  2. Facebook

  • Facebook’s DeepText understands text with near-human accuracy.

  • Artificial intelligence is used to stop fake news from going viral.

  • Facebook uses deep neural networks for ad placement.

  • Facebook has AI embedded into its Messenger app.

  • In 2017 it rolled out an AI project that could spot people with suicidal tendencies.

 

  3. Apple

  • Apple uses its Neural Engine for face recognition to unlock the phone and to transfer facial expressions onto animated emoji.

  • It uses deep learning to detect fraud on the App Store and for face detection.

  • Machine learning helps Apple choose news stories and recognize faces and locations in photos.

  • It is building an autonomous driving system that could be implemented in existing cars.

  • Apple’s Siri is a virtual personal assistant that communicates using a text-to-speech system.

These companies are just the tip of the iceberg; many others, such as Sephora and Nordstrom, are also jumping on the AI bandwagon as they realize how beneficial it can be for their business. Over the next five years, many people will turn to artificial intelligence: 47% of people expect to start using a home or family assistant, 46% a health coach, and 41% a financial adviser.

The following statistics, along with the projection that worldwide spending on cognitive and AI systems will reach an astonishing $57.6 billion in 2021, show just how bright the future of artificial intelligence is.

  • 60% of retail and ecommerce brands will implement AI in 2018.

  • 100% of IoT initiatives will be supported by AI capabilities by 2019.

  • 20% of business content will be authored by machines in 2018.

  • 85% of customer interactions with the enterprise will be managed without human intervention by 2020.

The use of artificial intelligence is only going to expand in the following years, as more and more companies decide to use it. With this pace, chatbots will be indistinguishable from humans by 2029, at least according to famous futurist Ray Kurzweil.

While this is welcome news for the customer, the question is how these companies will steward customer data. As AI takes a more prominent role, the need for data collection will only increase. Handling that data appropriately can be the difference between stellar customer service and costly lawsuits. The company that successfully balances privacy concerns while harnessing data through effective AI algorithms is poised to become a market leader.

Karthik Reddy, Community Manager at www.16best.net, is the author of India’s Number 1 travel blog. Boasting an MBA in computer science, he once decided to get away from the office desk life and take a breathtaking journey around the world. He is eager to use the power of the global network to inspire others. A passionate traveler and photography enthusiast, he aspires to share his experiences and help people see the world through his lens.

 

4 Reasons Why We Should be Teaching AI to Kids

In a previous blog, I talked about a multi-disciplinary approach to STEM education. In this blog, I want to explore how teaching AI to kids can accomplish those goals while also introducing youngsters to an emerging technology that will greatly impact their future. If you are a parent, you may be asking: why should my child learn about AI? Many stakeholders have recently emphasized the importance of STEM education. Yet what is it about learning AI that makes it different from other STEM subjects?

First, it is important to define what learning AI means. Lately, the term AI has been used for any instance in which a computer acts like a human, from the automation of tasks all the way to humanoids like Sophia. Are we talking about educating children to build sentient machines? No, at least not at first. The underlying technology that enables AI is machine learning. Simply put, as its name hints, these are algorithms that allow computers to learn directly from data, or from interaction with an environment, rather than through explicit programming. The process is not completely automated, as a data scientist or developer must still manage the learning process. Yet, at its essence, it is a new paradigm for how we use computers. We go from programming, in which we instruct the computer to carry out tasks, to machine learning, in which we feed the computer data so it can discover patterns and learn tasks on its own. The question, then, is why we should teach AI (machine learning) to kids.
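The contrast between the two paradigms fits in a few lines of code. In this toy sketch (the heights, labels, and the 180 cm rule are all invented for illustration), the first function is programming: a human writes the rule. The second is a miniature learner: it is given only labeled examples and discovers a rule by itself.

```python
# Programming: a human writes the rule explicitly.
def is_tall_programmed(height_cm):
    return height_cm >= 180

# Machine learning: the rule is inferred from labeled examples.
def learn_threshold(examples):
    """examples: list of (height_cm, is_tall) pairs. Try each observed
    height as a cutoff and keep the one that misclassifies the fewest."""
    best_cutoff, best_errors = None, len(examples) + 1
    for cutoff in sorted(h for h, _ in examples):
        errors = sum((h >= cutoff) != label for h, label in examples)
        if errors < best_errors:
            best_cutoff, best_errors = cutoff, errors
    return best_cutoff

data = [(150, False), (160, False), (170, False), (185, True), (190, True)]
cutoff = learn_threshold(data)  # the computer discovers the cutoff from the data
```

Real machine learning searches vastly larger spaces of rules, but the shift is the same: we supply data and a goal, and the pattern comes out rather than going in.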

Exposes Them to Coding

Teaching AI to kids starts with coding. While we’ll soon have advanced interfaces for machine learning, some allowing a “drag-and-drop” experience, for now doing machine learning requires coding. That is good news for educational purposes. I don’t need to rehash the benefits of coding education here. In recent years, there has been a tremendous push to get children coding early. Learning to code introduces them to a type of thinking that will help them later in life even if they do not become programmers. It requires logical and mathematical reasoning that can be applied to many endeavors.

Furthermore, Generation Z grew up with computers, tablets, and smartphones. They are very comfortable using them and incorporating them into their world. Yet, while large tech companies have excelled in ensuring no child is left without a device, we have done a poor job of helping children understand what is under the hood of all this technology. Learning to code does exactly that: it lifts the hood so they can see how these things work. Doing so empowers them to become creators with technology rather than mere consumers.

Works Well With Gaming

The reality is that AI got its start with games. One of the first experiments in AI was making a computer learn to play checkers. The combination of AI and gaming is thus a natural one. While there are now courses that teach children to build games, teaching AI goes a step further: they actually get to teach the computer to play games. This is important because games are a common part of their world. Teaching AI with games helps them engage with the topic by bringing it into territory familiar to their imagination.

I suspect that gaming will increasingly become part of education in the near future. What was once the scourge of educators is turning out to be an effective tool for engaging children in the learning process: there are clear objectives, instant rewards, and challenges to overcome. Teaching machine learning with games rides this wave and enhances it by giving children an opportunity to fine-tune learning algorithms toward objectives that captivate their imagination.
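What does “teaching the computer to play” look like in practice? Here is a minimal sketch using Q-learning, one standard technique, on a deliberately tiny made-up game (the board size, rewards, and parameters are all invented for illustration). The computer is never told the winning strategy; it discovers it through trial, error, and reward.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# A tiny game: the player starts on square 0 and wins by reaching square 4.
ACTIONS = [-1, +1]  # step left or step right
GOAL = 4

def choose(q, state, epsilon):
    """Usually pick the best-known action; occasionally explore at random."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: (q[(state, a)], random.random()))

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Q-learning: estimate the value of each (square, move) pair from play."""
    q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
    for _ in range(episodes):
        state = 0
        while state != GOAL:
            action = choose(q, state, epsilon)
            nxt = min(max(state + action, 0), GOAL)
            reward = 1.0 if nxt == GOAL else 0.0
            best_next = 0.0 if nxt == GOAL else max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

q = train()
# The learned policy: the best move from each non-goal square.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)]
```

A child can tweak the rewards or the board and watch the learned strategy change, which is exactly the kind of hands-on feedback loop that makes games such an effective teaching vehicle.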

Promotes Data Fluency

Data is the electricity of the 21st century. Helping children understand how to collect, examine, and analyze data sets them up for success in the world of big data. We are moving toward a society where data-driven methods increasingly shape our future. Consider, for example, how data is transforming fields like education, criminal courts, and healthcare. This trend shows no signs of slowing down.

Nor will this trend be limited to IT jobs. As sensors become more advanced, data collection will happen in many more forms. Soon fitness programs will be informed, shaped, and measured by body sensors that provide precise information about our bodies’ metabolism. Sports like baseball and football are already being transformed by the use of data. Thus, it is not far-fetched to assume that today’s children will eventually work in jobs, or build businesses, that run on data. They may not all become data scientists or analysts, but they will likely need to be familiar with data processes.
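Even the basics of data fluency, collect, examine, analyze, fit in a few lines a child could read. This sketch uses a made-up week of fitness-sensor readings (the numbers and the flagging rule are invented for illustration, not medical advice):

```python
import statistics

# Hypothetical week of resting-heart-rate readings from a fitness sensor.
heart_rate = [62, 64, 61, 70, 66, 63, 65]

mean = statistics.mean(heart_rate)     # typical value for the week
spread = statistics.stdev(heart_rate)  # day-to-day variation

# Flag any day that sits well above this person's normal range.
flagged = [bpm for bpm in heart_rate if bpm > mean + spread]
```

The point is not the arithmetic but the habit: summarize the data, understand its variation, and only then draw a conclusion about the unusual day.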

Opens up Discussions About Our Humanity

Because AI looms large in science fiction, the topic opens the way for discussions in Literature, Ethics, Philosophy, and Social Studies. The development of AI forces us to reconsider what it means to be human. Hence, I believe it provides a great platform for adding the Humanities to an otherwise robust STEM subject. AI education can and should include a strong component of reading and writing.

Doing so develops critical thinking and helps them connect the “how” with the “why”. It is not enough to learn how to build AI applications; we must foremost ask why we should build them. What does it mean to outsource reasoning and decision-making to machines? How much automation can happen without compromising human flourishing? You may think these are adult questions, but we underestimate our children’s ability to reflect deeply on the destiny of humanity. They, more than us, need to think about these issues, for they will inherit this world.

If we can start with them early, maybe they can make better choices and clean up the mess we have made. Also, teaching AI to kids can be a lot easier than we think.

Automated Research: How AI Will Speed Up Scientific Discovery

The potential of AI is boundless. Currently, there is a lot of buzz around how it will change industries like transportation, entertainment, and healthcare. Less known but even more revolutionary is how AI could change science itself. In a previous blog, I speculated about the impact of AI on academic research through text mining. The implications of the automated research described here are even more far-reaching.

Recently, I came upon an article in Aeon that described exactly that. In it, biologist Ahmed Alkhateeb eloquently makes his argument in the excerpt below:

Human minds simply cannot reconstruct highly complex natural phenomena efficiently enough in the age of big data. A modern Baconian method that incorporates reductionist ideas through data-mining, but then analyses this information through inductive computational models, could transform our understanding of the natural world. Such an approach would enable us to generate novel hypotheses that have higher chances of turning out to be true, to test those hypotheses, and to fill gaps in our knowledge.

As a good academic, the author says a lot with a few words in the paragraph above. Let me unpack his statement a bit.

His first point is that in the age of big data, individual human minds are incapable of effectively analyzing, processing, and making meaning of all the information available. There was a time when all the knowledge of a discipline was contained in books that could be read, or at least summarized, by one person. Furthermore, traditional ways of doing research, whether through lab experimentation, sampling, controlling for externalities, or hypothesis testing, take a long time and give only a narrow view of reality. Hence, in a time when big data is available, such an approach will not be sufficient to harness all the knowledge that could be discovered.

His second point is to suggest a new approach that incorporates Artificial Intelligence through pattern-seeking algorithms that can effectively and efficiently mine data. The Baconian method simply means discovering knowledge through the disciplined collection and analysis of observations. He proposes an algorithmic approach that would mine data, come up with hypotheses through computer models, and then collect new data to test those hypotheses. Furthermore, this process would not be limited to an individual but would draw on the knowledge of a vast scientific community. In short, he proposes including AI in every step of scientific research as a way to improve quality and accuracy. The idea is that an algorithmic approach would produce better hypotheses and also test them more efficiently than humans can.
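The mine-hypothesize-test loop can be sketched in miniature. In this toy illustration, everything is synthetic and the variable names (“sunlight”, “growth”, “noise”) are invented: the program mines one dataset for candidate relationships, treats each as a machine-generated hypothesis, and then tests the survivors against fresh, held-out data. This is a cartoon of the idea, not the author’s actual proposal or a real research pipeline.

```python
import random
import statistics

random.seed(1)  # fixed seed so the run is reproducible

def correlation(xs, ys):
    """Pearson correlation between two lists of observations."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def observe(n):
    """Synthetic observations: 'sunlight' drives 'growth'; 'noise' is unrelated."""
    rows = []
    for _ in range(n):
        sun = random.uniform(0, 10)
        rows.append({"sunlight": sun,
                     "growth": 2 * sun + random.gauss(0, 1),
                     "noise": random.uniform(0, 10)})
    return rows

train, held_out = observe(200), observe(200)

# Step 1: mine the first dataset for candidate relationships
# (each strong correlation is a machine-generated hypothesis).
candidates = [v for v in ("sunlight", "noise")
              if abs(correlation([r[v] for r in train],
                                 [r["growth"] for r in train])) > 0.5]

# Step 2: test each surviving hypothesis against fresh, held-out data.
confirmed = [v for v in candidates
             if abs(correlation([r[v] for r in held_out],
                                [r["growth"] for r in held_out])) > 0.5]
```

The spurious variable falls away at the testing step, which is precisely the discipline the Baconian loop is meant to enforce: hypotheses are cheap to generate, so they earn their keep only by surviving contact with new data.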

As the author concedes, current algorithms and approaches are not fully adequate for the task. While AI can already mine numeric data well, text mining is a more recent development. Computers think in numbers, so getting them to make sense of text requires time-consuming processes to translate text into numeric values. Relevant to this topic, the Washington Post recently reported that computers have, for the first time, beaten human performance on a reading-comprehension test. This is an important step if we want to see AI more involved in scientific research and discovery.

How will automated research impact our world?

The promise of AI-assisted scientific discovery is remarkable. It could lead to cures for diseases, the discovery of new energy sources, and unprecedented breakthroughs in technology. Another outcome would be the democratization of scientific research: as research gets automated, it becomes easier for others to do, just as Windows made the computer accessible to people who do not code.

In spite of all this potential, such a development should give us pause. It is impressive how much of our mental capacity is being outsourced to machines. How comfortable are we with this inevitable meshing of bodies and electronics? Who will lead, fund, and direct automated research? Will it lead to enriching corporations or to improving quality of life for all? I disagree with the author’s statement that automated research would make science “limitlessly free.” Even as machines do the work, humans still control the direction and scope of the research. As we hand more human activity over to machines, ensuring they reflect our ethical standards remains a human mandate.