A Priest at an AI Conference

Image generated via prompt with DALL-E

Last month, I attended the 38th conference of the Association for the Advancement of Artificial Intelligence (AAAI), held in Vancouver, British Columbia. It’s important for people of faith to learn about AI and to discuss it with the people doing the actual work on these technologies. Who are these people? They are a mixture of academics in the social and natural sciences and scientists and engineers at technology and research companies. Microsoft’s Chief Scientific Officer, Eric Horvitz, for example, was at the event.

Attendance was over 4,000, drawing people from all over the world, especially China. Breakout sessions focused on specific problems or segments of the field, such as “Formalizing Robustness in Neural Networks: Explainability, Uncertainty, and Intervenability.” One of the sessions I attended was on “AI-Driven Personalization to Support Human-AI Collaboration.” There was just one session on philosophy and ethics, which in the technology space is usually referred to as AI alignment: the work of making sure that AI technologies align with human values. It was interesting to hear papers about research on developing the ethical inputs of various AI programs.

This was a huge event, but religious ideas and persons were not particularly visible. While I could not be at the dozens of sessions held over the several days of the conference, including the pre-conference and post-conference tracks, I did not hear in the proceedings or read in the published materials anything about religion. The exception was the event’s social media app, where some attendees had posted interest in separate meetups for Mormons and Muslims. Muslim women were almost the only visibly religious individuals at the conference. I saw one Jewish man wearing a yarmulke, and no doubt I was very visible in my friar’s habit. Of course, there were religious people there who were not religiously visible; for example, I had a good conversation with a Roman Catholic graduate student from Poland.

I left the conference feeling more informed, inspired, and concerned. The general tenor of the conference was positive. Comments in presentations on the potential harms of AI were few, and when they came, they were brief. Again, it was a massive event, and there may have been more extended conversations about these dangers; for example, there was an all-day session I didn’t attend on “Diversity, Belonging, Equity, and Inclusion,” the seventh annual workshop on that topic in the conference’s thirty-eight years. As with any new technology, AI will bring both positives and negatives to the world. One positive development I learned about was in health screening, which in some areas has become more accurate thanks to AI. This will save lives by detecting diseases like cancer earlier, so that people get treated earlier.

Attending the conference strengthened my belief that people of faith, from across the world’s religions, have an important part to play in thinking deeply about this technology. Philosophers, theologians, faith leaders, and scholars have spent centuries pondering what it means to be human and how to flourish as human beings. We often call this wisdom. As human beings, our track record of wisely engaging emerging technologies is mixed. Often we quickly embrace new technologies without any deep reflection on how they will shape our lives and the world. I’m glad for organizations like AI and Faith. The more we can bring together the people doing the AI technology work with the wisdom of the world’s religions, the better. There will be controversies but, I trust, a wiser engagement with AI. AI-related technologies will increasingly shape our lives, for good and ill. The more reflective we are about them, the more the good can be realized and the bad minimized. If you desire, you can click here for a short reflection on the conference and how we should respond from the perspective of my own faith tradition.


The Rev’d Dr. Kevin Goodrich OP is a vowed member of the Anglican Order of Preachers (aka “The Dominicans”).

Digital Companionship and the Future of Relationships

As AI technologies become more human-like, will they ever be able to meet our need for companionship? Pets already play that role, creating deep bonds with us that transcend verbal communication. Yet intelligent technologies have the potential to engage us in complex interactions once thought possible only between two humans. That is the promise of digital companionship. What does it mean for the future of human relationships? First, definitions are in order.

For the purposes of this piece, I will define a digital companion as an app (chatbot, digital assistant, or avatar) that develops a relationship with the user going beyond servicing basic needs. In other words, it can carry on a conversation, unlike today’s Siri and Alexa, which only provide answers when prompted. These are not glorified Google searches; they can conjure unique personalities and engage in small talk.

As you can imagine, these are not far from our present. The controversy around LaMDA illustrates this well: we are starting to wonder whether AI is sentient because these systems are getting that good. All it takes is for a company to commercialize the technology in a product offering that appeals to customers.

What is the need?

Trends of rising longevity and a loneliness pandemic point to a future where digital companions are not nice-to-have luxuries but possibly essential for human social needs. Entrepreneurs the world over are salivating at the market opportunity this presents. After all, a product that can develop a relationship with its customers addresses some of humanity’s most basic needs. Many people would be willing to pay big money for that.

Image by Stefan Dr. Schulz from Pixabay

This process of relationship building is already underway through small changes in how we interact with technology. Consider, for example, the growing demand for moving from typing to voice-activated solutions. It is really annoying to have to type a new address when setting up directions in a vehicle. Consider also how much easier it would be to manipulate the apps on your phone if voice-activation technology were mature. The future is not in typing but in voice. As AI assistants start talking back with more intelligence and personality, bonds with them will naturally emerge. Just watch your children play around with Alexa and you will see what I mean.

While voice will be key, there is also a growing need for text generation in the form of chatbots. Innovative companies are already experimenting with advanced chatbot applications that provide mental health support. This is still a far cry from therapy, but it is a step in that direction. Unlike voice, which has yet to perfect both comprehension and generation, text generation, manifested in models like GPT-3, is showing impressive abilities to carry on intelligent conversations.

Current developments point to a near future where chatbots can carry on meaningful conversations, emulating humanity’s most cherished relational skill: the ability to create and sustain dialogue. Dialoguing chatbots will easily become anthropomorphized regardless of whether they ever reach sentience.
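As a rough illustration of how close this is, a dialoguing chatbot can already be sketched in a few lines on top of an open-source text-generation model. This is a toy, not a product design; the model choice (gpt2) and the prompt format are assumptions made for the example:

```python
# Toy sketch of a "dialoguing" chatbot built on an open-source
# text-generation model. Model choice (gpt2) and prompt format are
# illustrative assumptions, not how any product here is built.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

history = "The following is a friendly conversation.\n"
for _ in range(3):
    user = input("You: ")
    history += f"Human: {user}\nCompanion:"
    # Generate a continuation and keep only the companion's new text.
    out = generator(history, max_new_tokens=40, do_sample=True)[0]["generated_text"]
    reply = out[len(history):].split("Human:")[0].strip()
    print("Companion:", reply)
    history += f" {reply}\n"
```

A commercial companion would add far more (memory, persona, safeguards), but the basic generate-and-respond loop is the same idea.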

Signs of Things to Come

Intuition Robotics is already envisioning a future where the elderly rely on digital companions. On its site, the company features ElliQ, its first-generation digital companion, which consists of a tower (kind of like Alexa, but with a moving head) and an e-reader, so the user can interact with the tool through both text and voice. It offers reminders, tracks vitals, provides news and weather updates, and searches for professionals, while also throwing in a joke here and there.

Image source: TheDigitalArtist via Pixabay.

The last feature is the most interesting, suggesting the direction the company is aiming for. It is clear they want this to be not just a digital assistant but a pleasant companion. In fact, in a separate blog post the company outlines the path toward full-blown digital companions that will not only provide information but become empathetic and personalized agents. In other words, they will behave more like a true human helper and companion.

While I am not convinced that the switch to digital companions is inevitable, this bold proposal is worth pondering. Often, the difference in tech adoption has little to do with the technology itself and much to do with the ingenuity of its application. As Intuition Robotics focuses its energies on elder care, it has a better chance of getting it right. Whether elderly customers will be willing to shell out $250 upfront plus a monthly $30-$40 fee remains to be seen.

Redefining What Digital Companionship Is

My search took an interesting turn. When I typed “digital companions,” the top hit on Ecosia (my preferred search engine, which plants trees for every search) was not a company or an informative article. Instead, it pointed me to a government service in the UK. There, digital companions are willing teenagers who help the elderly connect with the Internet: actual humans helping other humans find their way through the ever-confusing digital world.

Photo by Alex Knight on Pexels.com

This site’s definition certainly deviates from my original idea of digital companionship. Yet, it made me pause to ponder: could digital companionship be less about AI and more about digitally-enabled ways to connect people to each other?

Before we undertake the arduous task of designing an AI product that can effectively help the elderly, shouldn’t we first define what it is for? Should AI really replace humans in this task, or simply augment them? That is, can we imagine a future where adventurous high schoolers use AI tools to help the elderly find the services they need? It is this type of augmentation approach that is missing in the tech industry, and it is also why we need to democratize technology skills so new options can arise.

If the choice is between a cute intelligent robot and a job-giving, empowered teenager, I would certainly opt for the latter.

4th AIT Podcast: AI at Work: A Tale of Two Workforces

AI and new technologies are spreading through workplaces by the day. How might that change the future of work?

In the fourth episode of the AI Theology Podcast, Elias Kruger and Maggie Bender, a member of our AIT Board, talk about a tale of two workforces: one shaped by automation and efficiency, the other empowered by augmentation and creativity. How are these changes affecting us? How can we look at this process through a theological lens?

Listen to us on: 

Spotify

Apple Podcasts

Google Podcasts

Make sure to share with family and friends to spread information.

Here are some of the references we used for this episode 

60% of Americans whose jobs can be done from home are now working from home most of the time, in many cases by choice. https://www.pewresearch.org/social-trends/2022/02/16/covid-19-pandemic-continues-to-reshape-work-in-america/

The promise of 5G networks is already propelling innovators to design new modes of communication. From remote robotic surgery to ultra-responsive autonomous cars, the 5G network leans into a world of higher reliability and lower latency. In this episode, we talk to experts revolutionizing the way we transfer skills via the technology of touch. (Podcast: The Future According to Now, on Apple Podcasts)

Algorithmic surveillance and measurement: https://techmonitor.ai/leadership/workforce/algorithmic-bosses-changing-work

Call centers and monitored calls: https://partnershiponai.org/what-workers-say-about-workplace-ai/

 

Climate Change and Geopolitics: Macro-Drivers of the Future

In the last blog post, we introduced scenario planning as an established academic and business practice for framing the future. The practice helps us break out of fixed thought patterns and step into a growth mentality that envisions multiple options for the future. The first step in this scenario-planning journey is to pick the most important macro-drivers that will define the parameters of the future. There are many options here, such as economics, climate, geopolitics, technology, and social change. Before we get there, some preliminary thoughts on how we got here are in order.

Preparing to Imagine

At AI Theology we are in the business of imagining the future. In fact, in our recent meetings we established our mission statement as the following:

To forge a community of lifelong learners who will imagine theological AI futures that promote the flourishing of all life.

AI Theology mission statement

That is, we are above all a lifelong learning community. We look at the future with an open mind and stare at it as an organism rather than as individuals. We believe we hear God better when we do it together. By expanding the table of conversation, including voices once shut out, we can finally hear the Spirit’s whisper from the margins.

Yet we have also centered our task, our work to do, on imagination. What? You read that right: our number one job is child’s play, the skill we unlearn in adulthood. We believe that imagination is one way we can express the indwelling divine breath into form. As a form of embodied creativity, just like faith, imagination brings forth what was not there before.

Photo by J. Balla Photography on Unsplash

Scenario Planning as the Scaffold for Creativity

As you may suspect, our goal in pursuing scenario planning is not the survival or thriving of an institution; instead, it is creative. We seek to imagine futures based on the scenarios we come up with, and to express them through relatable stories and explanatory prose.

Our goal is not to create strategic plans but to elicit inspiration and action toward preferred collective futures. One of the biggest failures of technological development and theological thinking in our time is a failure of imagination. Straitjacketed by rigid religious dogma or agendas seeking perpetual profit, we produce more of the same even as needs and capabilities change. This failure of imagination is what leads us back into reclaiming a lost past rather than building the future anew. In this journey of transformation, we must first awaken to imagination.

Yet this is not a free-flowing process devoid of structure and order. Discipline and creativity are not opposites; they can work together to forge masterpieces. Hence, in the spirit of integration, we look to business practices, often tied to profit-making objectives, and turn them into a platform for building dreams about the future. In our case, we believe this will take shape as fiction and non-fiction content about the future. We want to engage in scenario planning to paint realistic pictures of what the future could look like.

Setting the Foundations of a Future Canvas

If we are serious about imagining the future with the help of scenario planning, the first step is choosing two main variables that will set the parameters of that future. I like to call them “macro-drivers” of the future. They are general enough to cut across multiple areas but intelligible enough to be understood in simple terms. They don’t cover every area of life, but they are big enough to set the terms upon which humanity builds its future.

Photo by Markus Spiske on Unsplash

For example, though few in the early 1900s may have foreseen it, growing nationalism would set the terms for the rest of the century. In the previous century, industrialization and colonization were the defining macro-drivers. These are not events but themes; they capture the gestalt of an age.

If we look at our present and the near-term future (20 years from now), which macro-drivers are setting the terms for what is to come? You may have guessed it: after some deliberation, we are settling on climate change and geopolitics. While these are important now, we expect them to become all the more defining in the next two decades.

The Climate Wager

Human-driven warming of the earth is undoubtedly the challenge of our times. It is a pressing issue now and is only expected to loom larger in our collective psyche. It is an interesting variable because it does not depend on a few actors, like political leaders, but represents the compounded effect of our relationship with the more-than-human world. It depends on us but also on how nature reacts. Both sides are extremely hard to predict, but we can at least build scenarios around agreed-upon temperature markers.

You might have heard about the 1.5C challenge, the threshold nations have put forth for the planet to stay within by 2100. What you may not know is that we are already at 1.1C and, at a rate of 0.2C of warming per decade, we would reach that temperature by the early 2040s. That is, the goal for 2100 may arrive 40-50 years early! Naturally, when building climate scenarios, one of them sees the earth reaching 1.5C or even 1.7C in 20 years: the pessimistic scenario. On the other end would be trusting that changes implemented now will accelerate and curb warming to something more like 1.3C. The variation seems small, but it makes all the difference.
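The arithmetic behind that timeline is simple enough to check yourself; a quick sketch, assuming warming stays roughly linear at the quoted rate (a real simplification):

```python
# Back-of-the-envelope check of the timeline above, assuming warming
# continues roughly linearly at the quoted rate (a simplification).
current_warming = 1.1   # degrees C above pre-industrial levels today
rate_per_decade = 0.2   # degrees C of warming per decade
threshold = 1.5         # the 2100 target discussed above

decades = (threshold - current_warming) / rate_per_decade
print(f"Threshold reached in about {decades:.0f} decades")  # ~2 decades, i.e., the early 2040s
```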

Climate change represents a marker and metric of how well humanity works with the earth to sustain life. Given the multiple warnings from scientists and the challenges we are already experiencing, I believe climate must be part of every exercise considering the future. It is the container, the stage setting the conditions in which we will live (or not) our future lives.

Craiyon-generated “geopolitics”

Globalism vs Nationalism

Geopolitics is another macro-driver of the future. It represents the combined impact of national political decisions. One could say that geopolitics will be a by-product of climate impact, and there is some truth to that, especially over the long term. In this case, however, the macro-driver really is how nations cooperate with each other to face planetary challenges. That is, will they seek to work together toward shared goals (globalism) or prefer to protect their own interests first (nationalism)?

A recent example would be COVID-19. On that occasion, national responses leaned mostly toward globalism. There was unprecedented sharing of information and vaccines, and cooperation helped mitigate the worst of the pandemic. Even with the significant cost in human lives, globalism ensured the worst scenarios did not occur. This is, however, no guarantee for the next two decades.

The Economist published a seminal article, “The New Political Divide,” in 2016 that expressed this choice well. It argued that the central political question would no longer be between left and right (capitalism vs. socialism) but between open and closed societies. This was a remarkable statement, considering that it preceded Trump’s electoral victory and the rise of nationalists in other countries such as Brazil and the Philippines. The debate is far from over, and it would be a mistake to interpret Trump’s defeat in 2020 as a decline of nationalism in geopolitics. Political candidates may change, but the allure of isolationism and parochial politics will continue.

Conclusion

There are many other macro-drivers, but we thought we would start with these two to set the canvas for the stories we are to create. As we mentioned before, the point here is not to “get the future right.” We are not extending these trends to build a single future; we are looking at them for a range. That is, what would it look like if we are actually able to slow global warming? What does it look like if warming accelerates? How would a nationalistic world look? What happens if globalism reigns supreme? We believe the future will lie somewhere between these extremes, yet preparing for the extremes is a good strategy.
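Those four questions are exactly the quadrants of a classic two-axis scenario grid. A toy sketch of the mechanics, with pole labels that are simply our shorthand:

```python
# Toy sketch of the scenario-planning mechanics: two macro-drivers,
# each with two poles, span a 2x2 grid of four candidate futures.
# The pole labels are shorthand for the ranges discussed in this post.
from itertools import product

climate = ["warming slowed", "warming accelerates"]
geopolitics = ["globalism prevails", "nationalism prevails"]

for number, (c, g) in enumerate(product(climate, geopolitics), start=1):
    print(f"Scenario {number}: {c} / {g}")
```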

While our focus is on the future of AI and faith, we believe that climate change and geopolitics will be its defining parameters. Think of them as a canvas, the prevailing background upon which the future of AI and faith will be painted. By framing things this way, we acknowledge that technology and religion do not happen in a vacuum; they are as much drivers as recipients of their surroundings.

AIT Podcast Episode 3: Demystifying Christian Transhumanism

Have you ever wondered what Christian Transhumanism is? In the third episode of the AI Theology Podcast, Elias Kruger interviews Micah Redding, our AIT Board member and founder of the Christian Transhumanist Association (CTA), about his personal experience with Christian Transhumanism. They talk about the origins and future of the movement, and Micah shares how Transhumanism informs and shapes his Christian practice.

Listen to us on: 

Spotify

Apple Podcasts

Google Podcasts

Make sure to share with family and friends to spread information.

AIT Podcast Episode 2: AI Warfare

Our second episode of the AI Theology Podcast is out! Are you having trouble understanding what’s been going on with technology in warfare? Have you ever thought about what the church could do about wars like the one in Ukraine right now? Don’t miss out: listen to a fact-based conversation on these topics on Spotify, Apple Podcasts, and Google Podcasts. Check out all our references for this episode below.

Ukraine government using Clearview for facial recognition – click here

Russia using FindClone for facial recognition – click here

Use of deepfakes to mimic the president of Ukraine (Volodymyr Zelensky), and a deepfake of Putin declaring peace to Ukraine – click here 

Russian President Vladimir Putin signed into law a rule that criminalizes reporting that contradicts the Russian government’s version of events – click here

Fact vs Fiction about the war (document from the US government) – click here 

Book “Robot Theology: Old Questions through New Media” by Joshua Smith – click here

Russian drone Lancet – click here

Race for AI supremacy – click here

Just War Theory – click here

Donate here

Make sure to share with family and friends to spread information. 

Working for a Better Future: Sustainable AI and Gender Equality

At our February AI Theology Advisory Board meeting, Ana Catarina De Alencar joined us to discuss her research on sustainable AI and gender equality, as well as how she integrates her faith and work as a lawyer specializing in data protection. In Part 1 below, she describes her research on the importance of gender equality as we strive for AI sustainability.

Elias: Ana, thank you for joining us today. Why don’t you start by telling us a little about yourself and about your involvement with law and AI.

Ana: Thank you, Elias, for the invitation. It’s very nice to be with you today. I am a lawyer in a big law firm here in Brazil. I work with many startups on topics related to technology. Today I specialize in data protection law. This is a very recent topic for corporations in Brazil. They are learning how to adjust and adapt to these new laws designed to protect people’s data. We consult with them and provide legal opinions about these kinds of topics. I’m also a professor. I have a master’s degree in philosophy of law, and I teach in this field. 

Photo by Sora Shimazaki on Pexels.com

In my legal work, I engage many controversial topics involving data protection and AI ethics. For example, I have a client who wants to implement a facial recognition system that can be used for children and teenagers. From the legal point of view, it can be a considerable risk to privacy even when we see a lot of favorable points that this type of technology can provide. It also can be very challenging to balance the ethical perspective with the benefits that our clients see in certain technologies.

Gender Equality and Sustainable AI

Elias: Thank you. There’s so much already in what you shared. We could have a lot to talk about with facial recognition, but we’ll hold off on that for now. I’d like to talk first about the paper you presented at the conference where we met. It was a virtual conference on sustainable AI, and you presented a paper on gender equality. Can you summarize that paper and add anything else you want to say about that connection between gender equality and sustainable AI?

Ana: This paper came out of research I was doing for Women’s Day, which is celebrated internationally. I was thinking about how I could build something uniting this day specifically and the topic of AI, and the research became broader and broader. I realized that it had something to do with the sustainability issue. 

Sustainability and A Trans-Generational Point of View

When we think of AI and gender, often we don’t think with a trans-generational point of view. We fail to realize that interests in the past can impact interests in the future. Yet, that is what is happening with AI when we think about gender. The paper I presented asks how current technology impacts future generations of women.

The technology offered in the market is biased in a way that creates a less favorable context for women in generations to come. For example, when a natural language processing system sorts resumes, often it selects resumes in a way that favors men more than women. Another example is when we personalize AI systems as women or as men, which generates or perpetuates certain ideas about women. Watson from IBM is a powerful tool for business, and we personalize it as a man. Alexa is a tool for helping you out with your day-to-day routine, and we personalize it as a woman. It creates the idea that maybe women are servile, just for supporting society in lower tasks, so to speak. I explored other examples in the paper as well.

All of these things together are making AI technology biased and creating ideas about women that can have a negative impact on future generations. It creates a less favorable situation for women in the future.

Reinforcing and Amplifying Bias

Levi: I’m curious if you could give an example of what the intergenerational impact looks like specifically. In the United States, racial disparities persist across generations. Often it is because, for instance, if you’re a Black American, you have a harder time getting high-paying jobs. Then your children won’t be able to go to the best schools, and they will also have a harder time getting high-paying jobs. But it seems to be different with women, because their children may be women or men. So I wonder if you can give an example of what you mean with this intergenerational bias.

Ana: We don’t have concrete examples yet to show that future impact. However, we can imagine how it would shape future generations. Say we use some kind of technology now that reinforces biases–for example, a recruiting system that down-ranks resumes mentioning the word ‘women,’ ‘women’s college,’ or something feminine. Or a system which associates certain words with women–for instance, the word ‘cook’ is related to women, ‘children’ is related to women. If we use these technologies broadly, we are going to reinforce biases already existing in our society, and we are going to amplify them for future generations. These biases become normal for everybody now and into the future. It becomes more systemic.

Racial Bias

You can use this same thinking for the racial bias, too. When you use these apps and collect data, it reinforces systemic biases about race. That’s why we have to think ethically about AI, not only legally, because we have to build some kind of control in these applications to be sure they do not reinforce and amplify what is already really bad in our society for the future.

Levi: There’s actually a really famous case that illustrates this from Harvard Business students. Black students and Asian students sent their applications out for job interviews, and then they sent out a second application where they had whitewashed it. They removed things on their CV that were coded with their race–for instance, being the president of the Chinese Student Association or president of the Black Student Union, or even specific sports that are racially coded. They found that when they whitewashed their applications, even though they removed all of these accomplishments, they got significantly more callbacks.

Elias: I have two daughters, ages 12 and 10. If AI tells them that they’re going to be more like Alexa, not Watson, it influences their possibilities. That is intergenerational, because we are building a society for them. I appreciated the paper you presented, Ana, because AI does have an intergenerational impact.

In Part 2 we will continue the conversation with Ana Catarina De Alencar and explore the way she thinks about faith and her work.

The Telos of Technology and the Value of Work

At our January Advisory Board meeting, we explored the question of whether we live in a technological age. You can find Part 1 of our conversation in this post. In Part 2 below, we discuss a new telos of technology.

Elias: I think we established, for the most part, that this is a technological age. Maybe we always have been in a technological age, but technology is definitely part of our lives now. Some of you started hinting at the idea that technology is pointing towards something. It is teleological, from the Greek word telos, meaning goal. Technology leads toward something. And I think Chardin saw technology leading into the Omega point, while Ellul saw it more as a perversion of a Christian eschaton. In his view, the Christian position was to resist and subvert it. 

The question I have now is very broad. How do we forge a new vision, a new telos, for technology? Or maybe even, what would that telos be? We talked earlier about technology for the sake of capitalism or consumption. What would be a new telos for technology, and how would we forge this new vision?

No Overall Goal for Technology

František: I have a great colleague with a technical background who is a longtime friend; I studied with him in Amsterdam. He’s now an important person at a company developing AI and a member of the team that programmed an AI to play poker. So he’s quite skillful in programming and actually works on the development of AI. He’s developing amazing things.

I spoke with him about this telos question, “What is the aim of technology?” He said, “Well, there is no such thing as an overall goal. The goal is to improve our program to be able to fight more sophisticated threats to our system. That’s what we are developing.” So basically, there is no general telos of technology. There is only a narrow focus: the goal to improve technology, so that it gets better and serves better the concrete purpose for which it is built. It’s a very particular focus.

A Clash of Mentalities

I was very unhappy with this answer. After all, there must be some goal. And he said, “Well, that’s the business of theologians.” My friend said he doesn’t believe in anything. Not in theism, not even in atheism, he just doesn’t bother discussing it. So for him, there is no God, no goal, nothing. We’re just living our life. And we’re improving it. We are improving it step by step. He’s a well-studied, learned person, and he sees it like that. I’ve experienced the same thing during conversations with many of my friends who are working in technology on the technical or the business side. 

So they would say, perhaps, there is no goal. That’s a clash of mentalities. We are trying to build a bridge between this technological type of thinking and the theological, philosophical perspective which intends to see the higher goal.

I don’t have a good argument. You can try to convince him that there is a higher goal, but he doesn’t believe in a higher goal. So I’m afraid that a lot of people developing technology do not see further than the next step of a particular piece of technology. And I’m afraid that here we are getting close to the vision of Brave New World, you know, the novel. People are improving technology at a particular stage, but they do not see the full picture. It is all about improving technology to the next step. There is no long-term thinking. Perhaps there are some visionaries, but this is at least my experience, which I’m afraid is quite broad in the field of technology.

The Human Telos of Technology

Maggie: I feel like that happens a lot on the developer side of technology. But at least the expectation within technology should be that you have some sort of product owner or product manager who is supposed to supply a vision. That person could start thinking about the goal of technology. I know a lot of times within technology, the product manager draws out the user story: “As a user, I want to ______, so that ______.” And it’s the so that which becomes the bigger element that’s drawn out. But that’s still at a very microscopic level. So yeah, there might be an intersection with the larger goal of technology, but I don’t think it really is used there very well.

Elias: Some of you who have known me for a long time know how much I have struggled with my job and finding meaning in what I do. And a lot of times it was exactly like you described, František. It was like, What am I doing here? What is this for? And I found, at least recently, this sweet spot where I found a lot of meaning in what I was doing. It wasn’t like I was changing people’s lives. But I found this passion to make things better and more efficient. When you are in a large corporation things can be so bureaucratic. And we were able to come in and say, I don’t care how you do it, we’re gonna accomplish this thing. And then you actually get it done. There is a sense of purpose and satisfaction in that alone. 

The Creative Value of Work

I would venture to say that your friend, František, is actually doing creative work, co-creative work with God. He may not call it that. But there is something about bringing order out of chaos. I think even in a situation where the user or the developer is not aware, there might be goals happening there that we could appreciate and describe theologically.

For instance, going back to my experience, it might just be the phase that I’m in at work. But I’m feeling a lot of satisfaction in getting things done nowadays. Just simply getting things done. How can I put that theologically? I don’t know. Is that how God felt after creation? But there is something about accomplishing things. Now, if that’s all you do, obviously, eventually it just becomes meaningless. But there is something meaningful in the act of accomplishing a task.

Maggie: And just the sanctity of work too. Your friend, he’s working, he’s doing something. And in that type of work, even though it’s labor, I think it’s still a part of the human telos. 

František: Yeah, I think so, even though he thinks that there is no human telos as such. And we keep having conversations, and he still sees something important in the conversations. So that means he still keeps coming to the conversation with philosophers and theologians, even though he sort of disregards their work because he sees it as not relevant to his work. But I think that’s a sign of hope in his heart.

Latest on Ethics, Democratization of AI, and the War in Ukraine

There is a lot happening in the world of AI. In this short update we explore AI ethics, democratization, and tech updates from the war in Ukraine. For more on the latter, check out our recent piece where we dove into how AI is changing the landscape of warfare and possibly tilting the balance of power to smaller actors.

Let me begin with wise words from Andrew Ng from his recent newsletter:

When developers write software, there’s an economic temptation to focus on serving people who have power: How can one show users of a website who have purchasing power an advertisement that motivates them to click? To build a fairer society, let’s also make sure that our software treats all people well, including the least powerful among us.

Andrew Ng

Yes, Andrew. That is what AI theology is all about: rethinking how we do technology to build a world where all life can flourish.

Next Steps in the Democratization of AI

When we talk about the democratization of AI, it is often in the context of spreading AI knowledge and benefits to the margins. However, it also means extending AI beyond the technical divide, enabling those with little technical ability to use it. Though many AI and data science courses have sprung up in recent years, machine learning remains the practice of a few.

Big Tech is trying to change that. New Microsoft and Google tools allow more and more users to train models without writing code. As machine learning becomes a point-and-click affair, I can only imagine the potential of such developments, as well as the dangers they bring. The prospect of harnessing insight from millions of spreadsheets is promising. It could boost productivity and help many advance in their careers.
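To give a feel for how low the bar has already dropped even before the no-code tools, here is a minimal scikit-learn sketch; the dataset and model choices are illustrative assumptions, not a depiction of the Microsoft or Google products:

```python
# Minimal sketch of how little code model training already requires.
# Dataset and model choices are illustrative only; the no-code tools
# mentioned above need none of this.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```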

Image taken from Pexels.com

One thing is for certain in AI applications: while coding may be optional, ethical reflection will never be. That is why, here in AI Theology, we are serious about expanding the dialogue to the non-technical masses. A good starting point for anyone seeking to better understand AI technologies is our guide. There you can find just enough information to have a big picture view of AI and its applications.

Trends in AI Ethics

The AI Index report from Stanford University has good news: AI ethics has become a thing! The topic is no longer restricted to academia but is now commonplace in industry-funded research and is becoming part of mainstream organizations. Along with that, legislative efforts to regulate AI have also increased, with Spain, the UK, and the US leading the way.

Furthermore, in the US, the FTC is levying penalties on companies that build models on improperly acquired data. In one of the latest instances, Weight Watchers had to destroy its algorithms developed on this type of data. This represents a massive loss for companies. Developing and deploying these models cost millions of dollars, and algorithm destruction prevents organizations from realizing their benefits.

This is an interesting and encouraging development. The threat of algorithm destruction could lead to more responsible data collection and retention practices. Data governance is a key foundation for ethical AI that no one (except for lawyers, of course) wants to talk about. With that said, ensuring good collection practices is not enough to address inherent bias in existing data.

War in Ukraine

A Zelensky deepfake was caught early, but it will likely not be the last. This is just a taste of what is to come as a war on the ground translates into a war of propaganda and cyber attacks. In the meantime, Russia is experiencing a tech worker exodus which could have severe consequences for the country’s IT sector for years to come.

Photo by Katie Godowski from Pexels

On the Ukrainian side, thousands continue to join the cyber army as Anonymous (the world’s largest hacking group) officially declared war on Russia. Multinational tech companies are also lining up to hire Ukrainian coders fleeing their homeland. Yet, challenges still remain around work visas as European countries struggle to absorb the heavy influx of refugees.

The war in Ukraine has been a global conflict from the start. Yet, unlike the major wars of the 20th century, the global community is overwhelmingly picking one side and fighting on multiple fronts outside of military action. While this global solidarity with the invaded nation is encouraging, it also raises the prospect of military combat spilling into other countries.

Human Mercy is the Antidote to AI-driven Bureaucracy

If bureaucracies are full of human cogs, what’s the difference in replacing them with AI?

(For this entry, following the series’ main topic of classification by machines vs. humans, we consider classifications joined with judgments and their prospect of life-altering decisions. It is inspired by a sermon given by pastor Jim Thomas of The Village Chapel, Nashville, TN.)

Esau’s Fateful Choice

In Genesis 25:29–34 we see Esau, the firstborn son of Isaac, coming in from the fields famished. Finding his younger brother Jacob in possession of hearty stew, Esau pleads for some. My paraphrase follows: Jacob replies, “First you have to give me your birthright.” “Whatever,” says Esau, “You can have it, just gimme some STEWWW!” … “And thus Esau sold his birthright for a mess of pottage.”

Simply put, it is a bad idea to make major, life-altering decisions while in a stressed state, examples of which are often drawn from the acronym HALT:

  • Hungry
  • Angry
  • Lonely
  • Tired

Sometimes HALT becomes SHALT by adding “Sad”.

When we’re in these (S)HALT states, our brains operate by relying on quick inferences “burned” into them via instinct or training. The Dual Process Theory of psychology calls this “System 1” or “Type 1” reasoning (cf. Kahneman, 2003; Strack & Deutsch, 2004). System 1 includes the fight-or-flight response. While System 1 is fast, it is also prone to error and oversimplification, and it operates on the basis of biases such as stereotypes and prejudices.

System 1 relies on only a tiny subset of the brain’s overall capacity, the part usually associated with involuntary and regulatory systems of the body governed by the cerebellum and medulla, rather than the cerebrum with its higher-order reasoning capabilities and creativity. Thus trying to make important decisions (if they’re not immediate and life-threatening) while in a System 1 state is inadvisable if waiting is possible.

At a later time we may be more relaxed, content, and able to engage in so-called System 2 reasoning, which is able to consider alternatives, question assumptions, perform planning and goal-alignment, display generosity, seek creative solutions, etc.

Hangry Computers Making Hasty Decisions

Machine Learning systems, other statistics-based models, and even rule-based symbolic AI systems, as sophisticated as they may currently be, are at best operating in a System 1 capacity — to the extent that the analogy to the human brain holds (See, e.g., Turing Award winner Yoshua Bengio’s invited lecture at NeurIPS 2019: video, slides.)

This analogy between human System 1 and AI systems is the reason for this post. AI systems are increasingly serving as proxies for human reasoning, even for important, life-altering decisions. And as such, news stories appear daily with instances of AI systems displaying bias and unjustly employing stereotypes.

So if humans are discouraged from making important decisions while in a System 1 state, and machines are currently capable of only System 1, then why are machines entrusted with important decision-making responsibilities? This is not simply a matter of companies choosing to offer AI systems for speed and scale; governments do this too.

Government is a great place to look to further this discussion, because government bodies are chock full of humans making life-altering decisions (for others) based on System 1 reasoning: tired people implementing decisions based on procedures and rules, i.e., bureaucracy.[1] In this way, whether it is a human being following a procedure or a machine following its instruction set, the result is quite similar.

Human Costs and Human Goods

Photo by Harald Groven, from Flickr.com

Building a large bureaucratic system provides a way to scale and enforce a kind of (to borrow from AI Safety lingo) “value alignment,” whether for governments, companies, or non-profits. The movies of Terry Gilliam (e.g., Brazil) illustrate these excesses well through vast office complexes of desks after desks of office drones. The socio-political theorist Max Weber, who advanced many of our conceptions of bureaucracy as a positive means to maximize efficiency and eliminate favoritism, was aware of the danger of excess:

“It is horrible to think that the world could one day be filled with nothing but those little cogs, little men clinging to little jobs and striving towards bigger ones… That the world should know no men but these: it is such an evolution that we are already caught up, and the great question is, therefore, not how we can promote and hasten it, but what can we oppose to this machinery in order to keep a portion of mankind free from this parcelling-out of the soul, from this supreme mastery of the bureaucratic way of life.”

Max Weber, Gesammelte Aufsätze zur Soziologie und Sozialpolitik, p. 412 (1909).

Thus by outsourcing some of this drudgery to machines, we can “free” some workers from having to serve as “cogs.” This bears some similarity to the practice of replacing human assembly-line workers with robots in hazardous conditions (e.g., welding, toxic environments), whereas in the bureaucratic sense we are removing people from mentally or emotionally taxing situations. Yet one may ask what the other costs of such an enterprise may be, if any: If the system is already “soulless,” then what do we lose by having the human “cogs” in the bureaucratic machine replaced by machines?

The Heart of the Matter

So, what is different about machines doing things, specifically performing classifications (judgments, grading, etc.) as opposed to humans?

One difference between the automated and human forms of bureaucracy is the possibility of discretionary action on the part of humans, such as the demonstration of mercy in certain circumstances. God exhorts believers in Micah 6:8 “to love mercy.” In contrast, human bureaucrats going through the motions of following the rules of their organization can produce what Hannah Arendt termed “the banality of evil,” typified in her portrayal of Nazi war criminal Adolf Eichmann, whom she described as “neither perverted nor sadistic” but rather “terrifyingly normal.”

“The sad truth of the matter is that most evil is done by people who never make up their minds to be or do evil or good.”

Hannah Arendt, The Life of the Mind, Volume 1: Thinking, p.180 (1977).

Here again we see the potential for AI systems, as the ultimate “neutral” rule-followers, to facilitate evil on massive scales. So if machines could somehow deviate from the rules and show mercy on occasion, how would that even work? Which AI researchers are working on the “machine ethics” question of determining when and how to show mercy? (At the time of writing, this author is unaware of such efforts.) Given that human judges tend to show favoritism and bias in selectively granting mercy to some ethnicities more than others, and that automated systems have shown bias even in rule-following, would “mercy” simply become a new opportunity for automated unfairness? It is a difficult issue with no clear answers.

Photo by Clay Banks on Unsplash

The Human Factor

One other key, if pedantic, difference between human and machine “cogs” is the simple fact that with a human being “on the line,” you can try to break out of the limited options presented by menus and if-then decision trees. Even the latest chatbot helper interfaces currently deployed are nothing more than natural-language front ends to menus. With a human being, by contrast, you can explain your situation, and they can (hopefully) work with you or connect you to someone with the authority to do so.
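To see what “a natural language front end to a menu” means in practice, consider this deliberately crude sketch; all keywords and canned answers are invented for illustration:

```python
# Deliberately crude sketch of a "chatbot" that is really just a
# keyword-matched front end to a fixed menu. All keywords and canned
# answers are invented for illustration.
MENU = {
    "balance": "Your balance is shown under Accounts > Summary.",
    "dispute": "To dispute a charge, submit the dispute form within 60 days.",
    "hours": "Branches are open 9am-5pm, Monday through Friday.",
}

def bot_reply(utterance: str) -> str:
    for keyword, canned_answer in MENU.items():
        if keyword in utterance.lower():
            return canned_answer
    # No keyword matched: no discretion, no mercy, only the menu.
    return "Sorry, I didn't understand. Try one of: " + ", ".join(MENU)

# The bot matches "dispute" and returns its canned rule;
# the pleading context is simply discarded.
print(bot_reply("I want to dispute a fee that shouldn't apply in my case"))
```

No matter how the plea is phrased, only the keyword survives; that is precisely the discretion gap described above.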

I suspect that in the next ten years we will see machine systems making increasing forays into System 2 reasoning categories (e.g., causality, planning, self-examination). I’m not sure how I feel about the prospect of pleading with a next-gen chatbot to offer me an exception because the rule shouldn’t apply in this case, or some such. 😉 But it might happen — or more likely such a system will decide whether to kick the matter up to a real human.

Summary

We began by talking about Jacob and Esau: Jacob, the creative, swindling deal-broker, and Esau, who quite literally “goes with his gut.” Then we talked about reasoning according to the two systems described by Dual Process Theory, noting that machines currently do System 1 quite well. The main question was: if humans make numerous erroneous and unjust decisions in a System 1 state, how do we justify the use of machines? The easy answers available seem to be a cop-out: the incentives of scale, speed, and lower cost. And this is not just “capitalism”; these incentives would still be drivers in a variety of socio-economic situations.

Another answer came in the form of bureaucracy: System 1 decision-making already exists, albeit with humans as operators. We explored what is different between a bureaucracy implemented by humans and one implemented by machines. We realized that what is lost is the humans’ ability to transcend, if not their authority in the organization, at least the rigid and deficient software designs imposed by vendors of bureaucratic IT systems. Predicting how the best of these systems will improve in the coming years is hard. Yet, given the prevalence of shoddy software in widespread use, I expect to prefer talking to a human in Mumbai over “Erica,” the Bank of America chatbot, for quite some time.


[1]    Literally “government by the desk,” a term coined by the 18th-century French economist Jacques Claude Marie Vincent de Gournay as a pejorative; it has since entered common usage.

Scott H. Hawley, Ph.D., Professor of Physics, Belmont University. Webpage: https://hedges.belmont.edu/~shawley/

Acknowledgment: The author thanks L.M. Sacasas for the helpful conversation while preparing this post.