[0:00] It runs on our phones, it runs on social media, there are web apps we've perhaps used: ChatGPT and DeepSeek and others. It seems to be a tool on almost every website. Even the software that I use to make my slides on a Sunday has a tool on it to generate images.
[0:24] So here are a few that I got it to generate for us. I asked it to create a picture of Calvary Church Brighton. This was what it came up with; for something that doesn't have eyes and so on, it was quite impressive really.
[0:39] And then this was an anime version of it, a Japanese art style. That's a bit scary, isn't it? And then a few months ago we looked at Psalm 23 as a church, and there was a bit in Psalm 23: you've prepared a table before me.
[0:57] And so I was thinking sheep at a table. Couldn't find a picture of sheep at a table online for some reason. So this is where AI comes in useful.
[1:08] But the sheep are at the table surrounded by enemies, so I got it to include wolves and lions. There we go. That's one way we can use AI, isn't it? Others in the world are creating robots with AI; that's Ashwin's area of things.
[1:27] A few weeks ago I saw this clip of a Russian AI robot. Probably don't need to be too worried about them at the moment. There we go.
[1:40] Some of us are really excited about the possibilities artificial intelligence is creating for the world. Some of us might be a bit fearful.
[1:51] Or somewhere in between. Perhaps we're asking questions. Is my job going to be safe? How do I know what is real and what is not real?
[2:02] All kinds of questions. And in our Q&A time perhaps we can try and answer some of those questions. But how do we think as Christians about AI?
[2:15] What does the Bible say on it? Where would we even begin in the Bible? It doesn't mention AI. The human writers of the Bible could have only dreamed of it if they did at all.
[2:27] So what do we do when trying to tackle this subject as Christians? Thinking biblically. Well we are Christians who do want to think biblically.
[2:40] And we trust that the Bible does speak to us today. And I hope that we are helped by the Bible this evening. So I want to read two little passages from the book of Genesis.
[2:51] Starting right at the beginning. Best place to start isn't it? Genesis chapter 1. I'll read verses 1 to 5. And then I'll skip down to read a few more verses.
[3:03] And then we'll also read some verses from Genesis chapter 11. It's probably found on page something like 1 or something of your Bibles.
[3:16] Just in case you're wondering. In the beginning God created the heavens and the earth.
[3:27] Now the earth was formless and empty. Darkness was over the surface of the deep. And the Spirit of God was hovering over the waters. And God said let there be light. And there was light.
[3:38] God saw that the light was good. And he separated the light from the darkness. God called the light day. And the darkness was called night. And there was evening. And there was morning.
[3:50] The first day. And then I'm going to skip down to verse 26. Still in chapter 1. Then God said let us make mankind in our image.
[4:02] In our likeness. So that they may rule over the fish in the sea. And the birds in the sky. Over the livestock. And all the wild animals. And over all the creatures that move along the ground.
[4:13] So God created mankind in his own image. In the image of God he created them. Male and female he created them. And God blessed them. God blessed them and said to them.
[4:25] Be fruitful and increase in number. Fill the earth and subdue it. Rule over the fish in the sea. And the birds in the sky. And over every living creature that moves along the ground.
[4:37] And then we're going to skip forward to read Genesis chapter 11. Genesis chapter 11. And just the first nine verses. Genesis chapter 11.
[4:49] The first nine verses. Now the whole world had one language. And a common speech. As people moved eastwards.
[5:00] They found a plain in Shinar and settled there. They said to each other: come, let's make bricks and bake them thoroughly. They used brick instead of stone,
[5:12] And bitumen for mortar. Then they said. Come let us build ourselves a city. With a tower that reaches to the heavens. So that we may make a name for ourselves.
[5:23] Otherwise we will be scattered over the face of the whole earth. But the Lord came down to see the city. And the tower that people were building. The Lord said.
[5:34] If as one people speaking the same language. They have begun to do this. Then nothing they plan to do. Will be impossible for them. Come let us go down.
[5:45] And confuse their language. So they will not understand each other. So the Lord scattered them from there. Over all the earth. And they stopped building the city. That is why it was called Babel.
[5:56] Because there the Lord confused the language of the whole world. From there the Lord scattered them over the face of the whole earth.
[6:08] May God add his blessing to us as we have read his words. A question, to which I've put the answer on the screen already.
[6:20] Never mind. What do the creation of the world, the Tower of Babel, and AI have to do with each other? Language.
[6:32] Answer: language. Thank you. Follow along with me, and hopefully I'll be able to show you what I mean by that. So, according to Genesis chapter 1,
[6:45] the world came to be through language. God spoke; he spoke out words, and creation came to be. Let there be light,
[6:56] and there was light. At the beginning of each day it says: God said, God said, God said. By the power of the creator's word,
[7:07] creation came to be. And the result of the language spoken out by the creator was a beautiful world,
[7:17] reflecting the glory of our creator. And we're told in the Genesis account that the pinnacle of this creation
[7:27] was human beings, of whom God said: let us make mankind in our image, in our likeness.
[7:38] Let us make man. And here's what he instructs mankind to do, verse 28.
[7:48] God blessed them and said: be fruitful and increase in number; fill the earth and subdue it; rule over... So,
[7:59] two things that he creates humans to do. The first is to rule, to rule over creation. God ultimately
[8:10] is in charge of the world that he has made, but he delegates that rule over the physical creation to human beings. But what we're really
[8:23] interested in, in this verse this evening, is the second one: he's instructed human beings to be creative. Do you see that
[8:33] in the first part of the verse? Be fruitful and increase in number: be increasing in number, be creating more human beings.
[8:43] That's a creative thing that human beings do. But it's more than that. Fill the earth and subdue it,
[8:55] which means to manage the earth, cultivate the earth, find resources which will help you to rule over the earth. Work out
[9:08] what tastes good in life. Work out what foods go with other foods. Work out that cinnamon and apple go well together. Work out
[9:19] that coffee beans exist, and if you grind them up you can make a good drink. Work out that cocoa is around, and you can make chocolate
[9:31] with it. So much creativity in God's creation, reflecting the fact that we're made in the great creator's image. We are creatives,
[9:41] like he is creative. But as we know from the Genesis account, things go a little bit pear-shaped, or apple-shaped,
[9:51] or whatever fruit was there in the garden. Sin entered the world, which affects this beautiful creation God has made. But that doesn't stop
[10:03] human creativity. There are these wonderful few verses in Genesis chapter 4, verses 20 and 21. Amid
[10:16] all the human sin and trouble, we read these words: Adah gave birth to Jabal; he was the father of those who live in tents and raise livestock. His brother's name
[10:27] was Jubal; he was the father of all who play stringed instruments and pipes. Zillah had a son, Tubal-Cain, who forged all kinds of tools out of bronze
[10:38] and iron. Tubal-Cain's sister was Naamah. And we see human creativity there. So, when you go
[10:51] on your camping holiday next year, you can thank Jabal. And when we enjoyed the musical evening last night, we could give full
[11:02] credit to his brother Jubal. Jabal was the first one; Jubal was the second one, the father of those who play stringed instruments.
[11:13] Phil has him to thank this evening. And then there were people who forged all kinds of tools out of bronze and iron.
[11:23] So when we cut our vegetables up and things like that, we can thank this guy, Tubal-Cain. Wonderful creativity, as humans subdue
[11:36] this good creation that God has made. So: language was involved in the creation of the world,
[11:47] and, as it continues, it produces this wonderful human creativity. That's God's language. But what about human language? We see that in Genesis 11, so you might just want to
[11:58] have that open in front of you for a few moments. Verse 1 introduces the chapter and shows us there is one world
[12:09] language, a common language everyone can speak. Imagine what good that could do. Humans could cooperate together, understand
[12:21] whoever they speak to, wherever they go in the world; understand a menu at a restaurant; understand traffic signs. There is so much good that could be done
[12:31] with one human language. When countries are at war, there would be no language barrier to peace talks. Doctors could travel
[12:42] anywhere in the world to practice their skills without that language barrier. But instead of human beings working together for the good
[12:52] of one another, it seems in Genesis 11 they set out to work not for the good of one another and their creator. Instead,
[13:02] they set to work making a structure that competes with God. They were interested in only one thing, verse 4: to make a name
[13:14] for themselves. We don't want to enjoy this whole beautiful earth that God has made; we want to be together in one place, making this huge structure,
[13:25] for the world to know, for all who come afterwards, who made this. We want to make a name for ourselves. Instead of making
[13:37] the glory of God their main interest, it's all about them. And the issue in this chapter is all about language.
[13:47] They could all understand each other, so they could all cooperate in this big venture to make this tower that reaches to heaven. And so that's why, in verses
[13:59] 6 and 7, God says: we're going to make this task impossible for them; we're going to disrupt their evil plans
[14:10] and we're going to confuse their languages. So how do language here in Genesis 11, language in Genesis 1, and AI
[14:20] all connect? They connect by language. But how? Well, I think in two ways: a positive way and a negative way. This world
[14:32] was made by God. Positively: this world was made by God, we're made in God's image, and AI shows something of our God-
[14:42] given creativity. It's technology made in our image, our human image, which God has
[14:54] given to us. It's incredible. Who would have thought we would have such technology some years ago? Who would have thought
[15:05] we would be watching an AI robot hoover clean the floors on our church away day? Who would have thought that I could type in a question and get
[15:16] an answer so quickly, from something that has scanned hundreds, thousands, millions of books all across the world, millions of web pages across the world,
[15:28] and brought me an answer instantly? It's amazing. We should be praising God for AI, because it shows our human creativity, and it is making
[15:39] a real difference in so many people's lives. But why do we connect language with this? Many of the tools
[15:52] that we use in AI are large language models, LLMs. So ChatGPT, for instance: that's an LLM. And a definition,
[16:04] maybe not the most reliable definition, it's from Google AI, of ChatGPT is this: it's considered a large language model because it is a powerful AI
[16:14] system trained on a massive amount of text data to understand and generate human-like language. It's basically simulating
[16:24] human language. And if it's simulating it, well, simulations aren't real. Nor does this stuff
[16:35] know what it's like to actually live as a human being talking actual human language. It doesn't know what it's like to live in a physical world. It doesn't know what it's like to get cold this week,
[16:46] as many of us have done. It doesn't know what it's like to have emotion. It doesn't know what it's like to fall in love. But it is also breaking down
[16:57] the language barrier, the language barrier that God has placed in this world because of humans' evil behavior. And
[17:09] that could lead to all manner of positive things, but also all manner of negative things. The Tower of
[17:22] Babel wasn't a positive thing; it ended in God's judgment. But the Tower of Babel was a physical
[17:33] building, which had spatial limits; it could only be in a certain physical space. Whereas this AI technology, which is removing
[17:44] the language barrier in many cases: is there an end to its limits, if it's technology, if it's virtual worlds? If
[17:56] God saw the Tower of Babel as evil, what about what is going on with AI? And so many of us are asking the question: how far can it go? What is
[18:06] the end goal? And maybe we'll ask that question at the end of this evening. But we are thinking biblically here, and Genesis 11
[18:17] also reassures us that we can take confidence in our God, in our creator. He saw what humans were doing;
[18:28] he saw the evil that they were doing, and he interrupted their plans. He sees what is going on with AI. He sees the evil and he sees the good. And he has
[18:38] all power to interrupt human evil plans. And finally, like what took place a few chapters before Genesis 11,
[18:50] in Genesis 9: there was a flood, a worldwide flood, in the form of a judgment, wasn't there? But God provided
[19:02] a safe place, a safe refuge, for Noah and his family. And whatever may happen in this world because of AI, there is another safe place
[19:14] which God has provided for human beings, and it's found in the Lord Jesus Christ, who is a refuge from the judgment to come. And so we can
[19:29] trust in the Lord, and we do not need to fear. We have a much bigger God than however big AI will become.
[19:41] And we're going to sing a song of praise to him now, and then we'll get to hear from Steve and then the rest of the panel. We'll sing the song
[19:52] How Great Thou Art to our great creator, the one who has given his great Son for us
[20:02] as that refuge. Let us sing.
[20:35] (The congregation sings the hymn "How Great Thou Art".)
[26:01] I'll try to be as quick as possible because we want to get to the panel, but we do need a bit of context here. So here's a very brief history of AI; as you can see, it's going to be very brief.
[26:14] Where to start? Well, let's start in the 17th century, when Blaise... yeah, it's causing me problems there. Thank you.
[26:53] The first mechanical calculators were built. The first was by Blaise Pascal, the philosopher. One came up for sale recently; you may have seen it in the news. At the same time, Gottfried Leibniz and other people were starting to use symbolic methods of reasoning.
[27:11] They used symbols for quantities, which led to the calculus. You can't do modern physics without the calculus. They also started looking at models of reasoning, which were kind of formalized, based on symbolic reasoning.
[27:27] It's perhaps worth mentioning that both Blaise Pascal and Gottfried Leibniz were both Christians, and indeed Christian apologists. These origins were not in the...
[27:38] They didn't start in the Enlightenment atheism. Actually, these people were both Christians. Anyway, moving forward a couple of centuries...
[27:51] Sorry, I don't need to do that. People were starting to build quite big machines. You may have heard of Babbage's difference engine and analytical engine,
[28:05] and Ada Lovelace, daughter of Lord Byron, regarded as the first computer programmer. But the technology wasn't up to it. The analytical engine never really got built, because the technology just couldn't cope with it.
[28:21] So we move forward another century or so to 1950. And around that time, people were just starting to build electronic computers.
[28:34] At the time, people were saying, oh, well, soon computers will be as intelligent as humans. But it just didn't happen. And we'll see why in a minute.
[28:47] But you may have heard of the Turing test. What is the Turing test? This is what Google AI's overview says is the Turing test. The Turing test is a test of a machine's ability to exhibit human-like intelligence conducted in a text-based conversation.
[29:08] A human evaluator engages in a conversation with both a human and a machine. And if the evaluator cannot reliably tell which is the machine, the machine is considered to have passed the test.
[29:22] The test was devised, of course, by Alan Turing, the codebreaker and mathematician, in around 1950, as a way to answer the question, can machines think without getting bogged down in the definition of thinking?
[29:37] Because the trouble is that people have been arguing about this ever since. Academics argue over the significance of it. And meanwhile, computer programmers try and come up with more and more sophisticated Turing tests.
[29:52] Those things you used to get on websites that say, I'm not a robot, where you had to click on all the panels with stairs in or something: they're called CAPTCHAs, and the T in that stands for Turing test.
[30:06] But you don't see them so much nowadays because there are AIs around nowadays that are starting to be able to beat them. So, from 1950 onwards, I think you could probably regard as the beginning of AI in the sense we mean it today.
[30:30] And largely at the time, people were working on what were called symbolic AI, trying to formalize reasoning programs and put them into computers.
[30:42] But the results, in fact, proved disappointing. There were such systems around; there were things called expert systems at one time, but they didn't really become very convincingly intelligent.
[30:57] So, people started to have a rethink. There had been what are called sub-symbolic AI models around for some time.
[31:09] But around 1990, the focus started to move towards sub-symbolic AI: what are sometimes called associative memories, or content-addressable memories.
[31:22] What does that mean? Well, suppose I say to you the words Eiffel Tower. What happens in your brain, presumably, if you've either been there or seen a picture of it, you see a picture in your mind.
[31:38] And all sorts of other connections come together, depending on what else you know about it, where it is, perhaps what it's made of, how many visitors it has, how high it may be, just depending on what you know.
[31:51] Our brains make all sorts of connections that move off in various directions. That's kind of the way our brains work.
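As a crude caricature of that content-addressable idea (just a plain dictionary lookup; the entries here are illustrative, not from the talk):

```python
# A cue retrieves everything linked to it, loosely mimicking the
# "Eiffel Tower" association example: one input, many connections out.
associations = {
    "Eiffel Tower": ["Paris", "wrought-iron lattice", "built 1889"],
    "Big Ben": ["London", "clock tower"],
}

def recall(cue):
    """Return every stored association for the cue (empty if unknown)."""
    return associations.get(cue, [])

print(recall("Eiffel Tower"))
```

A real associative memory, like a neural net, differs in that the links are learned and a partial or noisy cue can still retrieve the pattern; a dictionary needs the exact key.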
[32:02] And so, people started looking at models based on this sort of idea and came up with what is called an artificial neural net. I'll explain what that is in a minute.
[32:14] And some nice neural net models were produced around that time. I worked on one or two myself, not tremendously successful.
[32:28] But, again, the problem was that the hardware wasn't up to it. And even worse, there wasn't enough training data. But around 2010, these things changed.
[32:43] There were, machines were being built that could store really large neural networks. And also, the internet was around.
[32:55] And that produced an almost inexhaustible supply of training data. I say almost inexhaustible because I was reading an article just today written a couple of days ago that says we're beginning to run out of internet.
[33:11] But, certainly, it produced very large training sets. And these led to these things that Daniel was talking about, large language models and image processing models and so on.
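As a toy illustration of the next-word-prediction idea behind those large language models (a bigram frequency table; real LLMs are incomparably larger and subtler, and this sketch is only the bare concept):

```python
from collections import Counter, defaultdict

# Tiny "training set"; real models train on a large slice of the internet.
corpus = "in the beginning god created the heavens and the earth".split()

# Count, for each word, which words were seen following it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the most frequently seen continuation, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("god"))  # "created" is the only continuation seen
```

Note that the model knows nothing outside its training text: ask it about a word it never saw and it has no answer at all.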
[33:28] So, perhaps, let me explain briefly what a neural net is. I dug these diagrams out of the lecture notes from the stuff I used to teach 30 years ago.
[33:42] Your brain consists of cells called neurons, about 87 billion of them in the human brain. They're connected to their neighbours and communicate with each other through junctions called synapses.
[34:00] And people started to say, can we make a computing model of that? And that diagram on the right shows the sort of things that people came up with. So, how clear is it?
[34:13] Not too bad, is it? Yes. You can see that each neuron has inputs, which we've labelled x1, x2, up to xn there, so you have a variety of inputs.
[34:25] But associated with each input is what's usually called a weight. And what that does is it gives you the strength of the connection, how much emphasis is put on that connection.
[34:39] And then the neuron itself combines these together, does some sort of processing, usually some sort of smooth thresholding, and that produces an output which is then sent on to other neurons.
[34:54] That's basically how your brain works, it's a vast oversimplification of course, but that's the essence of it. So that's an artificial neuron.
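That single artificial neuron, a weighted sum of inputs pushed through a smooth threshold, can be sketched in a few lines of Python; the weights here are arbitrary illustrative values:

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs, then a smooth threshold (sigmoid)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # squashed into (0, 1)

# Inputs x1..x3, each with its own connection strength (weight).
out = neuron([0.5, 0.2, 0.9], weights=[0.4, -0.6, 1.1], bias=0.0)
print(out)  # roughly 0.74: a moderately strong activation
```

The output would then be sent on as an input to other neurons in the network.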
[35:05] Stick a lot of these together in a network and you get what we used to call an artificial neural net. I think nowadays the term artificial is more or less taken as read, and they're just referred to as neural nets.
[35:19] Before I stop, just one more thing that I think it's important to understand about neural nets: what makes them different from the apps you have on your phone.
[35:32] Neural nets and other machine learning systems are not programmed. The apps you have on your phone are programs, computer programs, written in a computer programming language.
[35:44] But neural networks and other machine learning systems are not programmed; they're trained. How does that work? Well, you have sets of training data, and each item of training data consists of an input and a desired output.
[36:01] What you do is feed your input into your neural net, at the bottom as it were, or the top, depending on which way you want to look at it. It produces an actual output.
[36:15] You compare that output with the output the training data says you ought to be getting, and you adjust the connections of your neural net, these weights and things, to make the actual output a little bit closer to the training output.
[36:38] And if you do that lots of times, with lots and lots and lots of data, eventually the system learns, and hopefully, when you give it an input, it will produce something very close to the desired output.
[36:53] And of course, once it's trained, you just give it an input and take the output as the result that you want. So that, essentially, is what artificial neural nets do.
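The compare-and-adjust loop just described can be sketched for a single neuron (a toy version of gradient-style weight updates; real training runs backpropagation over billions of weights, but the compare-and-nudge principle is the same):

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# Training pairs (inputs, desired output). Toy task: copy the first input.
data = [([0.0, 1.0], 0.0), ([1.0, 0.0], 1.0),
        ([1.0, 1.0], 1.0), ([0.0, 0.0], 0.0)]

weights, bias, rate = [0.0, 0.0], 0.0, 1.0
for _ in range(2000):                       # present the data many times
    for inputs, target in data:
        actual = sigmoid(sum(x * w for x, w in zip(inputs, weights)) + bias)
        error = target - actual             # compare with the training output
        for i, x in enumerate(inputs):      # nudge each connection weight
            weights[i] += rate * error * x
        bias += rate * error

# After training, a query produces something close to the desired output.
print(sigmoid(weights[0] * 1.0 + weights[1] * 0.0 + bias))  # close to 1
```

Nothing here is programmed with the rule "copy the first input"; the weights simply drift until the outputs match the training data.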
[37:10] They're just nowadays very big, with billions of connections and lots and lots of neurons. But I think it's worth understanding that that's what a neural net does, because whether you consider that thinking or not takes you back to the Turing test, of course.
[37:29] What does it actually mean? But, as Daniel said, these things certainly don't exist in the real world in any sense. They just take inputs and give outputs, and they're quite capable of contradicting themselves depending on what input you put in.
[37:47] So it's worth understanding that these things are limited in that sense. Anyway, I need to stop talking so we can get on to the questions and answers. Steve, thank you so much for that.
[37:58] That's really helpful. Yeah, do give Steve a round of applause. We'll just move a few things. And those on the panel, if you'd like to come up, that would be brilliant.
[38:12] Come up and take a seat. You know who you are. And as they come up, why don't we give them a clap as well? Just try and clear things out of the way so we can see people.
[38:38] Can we cope with that there? I'll let Phil untangle that. Brilliant. We've also got a microphone just by mercy.
[38:49] So there we go. Just so everyone knows who you are, can you tell us your names? And if you want to tell us what qualifies you to be on this panel, or rather, as I've put it to you, what experience with AI you've got, please do share that as well.
[39:10] Shall we start in order from Terrence? All right. I've ruined the answer there. There you go. Who are you? Okay. That should be on.
[39:24] Urgens, whatever else, because that is how they exist. Right. So that's how we refer to them. It's just a nice thing that we can learn when we talk about it with our friends in our various social circles.
[39:38] And then, probably somebody is wondering: are there instances where we can contrast artificial intelligences? Maybe if we look at it from an academic or philosophical viewpoint, if we really want to dig into it; maybe somebody from a speculative background, science fiction, for example.
[39:54] You can have a certain brand of artificial intelligence that's different from what exists on Earth, let's say, than something else from another planet, right? So you're comparing these artificial intelligences.
[40:07] But for all intents and purposes, and for the sake of our discussion so that it becomes very beneficial to us, let's look at artificial intelligence as a field, and refer to certain aspects of it as implementations and architectures: things like algorithms, systems, agents, those sorts of things. That captures the essence of it better.
[40:33] Thank you. Basically, it's a big thing. Yeah. Excellent. Just so we're really clear, can you help us with a definition of AI just so we know what we're talking about exactly?
[40:53] Anyone want to go on that one? Ash, Ashwin looks ready. Can I just say, guys, holding the microphone, do you hold it close to your mouth so we can all hear well?
[41:06] Okay. Thank you. It's just as it suggests, it's the ability or an attempt to mimic a human brain.
[41:18] So that's where it all begins. And the impossibility was not in the proposed algorithms; the impossibility was in the tech.
[41:30] But it is possible now, because the tech has caught up with all the algorithms that were already proposed, thanks to data centers.
[41:40] It has its very own issues. And the internet is 4G, and we live in a very connected world right now, and data is very centralized.
[41:51] It's in one big space. And then there are tons of public data, be it images, texts, videos; all of it is food for this giant neural net.
[42:09] And just like how a brain works, it is trying to process it and give an output. Whether that is necessarily the truth depends on the kind of data it's been trained on.
[42:23] So be very wary of it. Yeah, we've got to be wary of what we're taking in. Thank you. And that links back to what Steve was saying.
[42:33] So that's helpful bringing things together. Anybody else add anything on that definition of AI? Shall we move on? A question that's bothered me, I think Brenda might have answered this question in my mind a few months ago.
[42:51] Is AI actually intelligent? Is intelligence a human thing? Or can it really belong to something that's artificial? I differ in this from what many people would say.
[43:06] I wouldn't consider it intelligence. I would consider it an interpreter, or just something proposing answers from the data that it's been fed. I wouldn't term it intelligence.
[43:21] I don't think it's human intelligence per se. I think human intelligence has some sort of emotional consciousness attached to it. And I think we are also morally driven beings,
[43:35] so that plays a big role in our decision making, and by that definition I wouldn't call it intelligence. So, the definition of intelligence is the ability to learn, understand and make judgments, or have opinions that are based on reason.
[43:57] And so I would argue that AI, while not humanly intelligent, while it doesn't have human intelligence, I would argue is intelligent. We see it with ChatGPT, right?
[44:09] If you ask it a question, I don't know, give me an itinerary for Ibiza, for example; I'm there for seven days,
[44:19] give me an itinerary. I don't know why that's so hard to say. It'll give you an answer. If you ask it to give you a budget, it'll give you a realistic budget. So I would argue: yes, intelligent.
[44:32] I think maybe where I would disagree is consciousness. I think that that is something different and sometimes people can kind of confuse the two, intelligence and consciousness. Yeah, that's what I would say.
[44:47] Can I ask you then, based on that, is it conscious? No, I wouldn't say AI is conscious. Fab. Yeah, maybe to add.
[44:57] Would you like to? Yeah, add, please do. Yeah, so I think I am somewhere in the middle. So there is something that we call functional intelligence, and then there is cognitive intelligence,
[45:11] and I think both of them mentioned either aspect. Artificial intelligence is indeed intelligent in the functional sense, because it does things that we do, right, like understand language; I can ask it a question.
[45:22] That's something that is intelligent, indeed. But it's also incapable of something like abstract reasoning or comprehension. So, as I was preparing for this, I actually asked ChatGPT if it understands some of the things it does, right?
[45:36] So it doesn't necessarily have the ability for comprehension. It's able to perform tasks based on certain patterns and data. But does it really know what it is doing?
[45:47] It's something else entirely. So the answer is, if we use humans as the standard of intelligence, then AI as a discipline, or AI implementations, be they algorithms or whatever, exist somewhere in the middle.
[46:05] They have some intelligent aspects that are more functional and then less of the cognitive aspects that are more complex and more emotive, all of those things, and comprehension in all of those.
[46:18] Thank you. So plan is, I'll ask two more and then we'll get to the floor. Is that okay? So two more.
[46:31] How is AI changing our lives, and how could it change our lives? That's two questions in one. Cheeky. It's a huge question, and maybe we'll get to more of the implications of that as we go through.
[46:49] So just a general answer for now, shall we? Well, it clearly is changing our lives in all sorts of ways. It affects people's jobs, it affects the way people set exams, it affects the way people take exams, it affects the way we access knowledge.
[47:10] But it can also affect us by sending us down a sort of rabbit hole.
[47:24] The problem with AI, of course, certainly with large language models, is that they just really reflect what you say. The image of the beast in Revelation 13 doesn't speak on its own account, it just speaks the words of the beast.
[47:44] And I think, basically, large language models only reflect what you put in. So they can contradict themselves, for example. I mean, if you put in New Agey stuff, you'll get New Agey answers out.
[48:00] If you put in Christian stuff, you'll get Christian answers out. You put atheist stuff in, you'll get atheist answers out. So I suppose the old data-processing maxim still applies: garbage in, garbage out.
[48:16] Anyone else want to answer? I mean, if I get into robotics specifically, surgical robotics is a big thing.
[48:32] And I consider it a positive outcome, because the number of surgeons is decreasing. People have found, and it has been a positive thing, that one surgeon can be elsewhere in the world, operate a surgical robot, and do 20 or 100 operations all over the world.
[48:53] And that actually decreases the cost, rather than flying the specialist all around the world. And that's a good thing. And the other thing is, an AI was able to crack genetic cancer codes.
[49:06] That's very helpful. Yeah. And that's a really good thing. Yeah. And there are bad things as well, and the reason is that it's always driven by human motives.
[49:17] I think you can drive the AI to whatever your motive really is. If the motive is to clear a cancer cell, then we can clear it. If the motive is something else, then that something else is also a possibility.
[49:34] Thank you, Ashwin. Mercy? I was looking into different areas of life where we can see AI developing. One of them is war at the moment.
[49:45] So both Ukraine and Russia are rushing to develop software that will more easily identify their targets. And they're using chips that cost less than £100 to do this.
[50:00] I was also looking at education as well. So currently, with the way that classes work, you know, you will move up a year regardless of how well you performed in the last one.
[50:11] They're having discussions around timetables being adjusted, and potentially abolished altogether, in favour of personalised curriculums that will help students in the subjects they're weaker in, rather than giving them an equal timetable of all subjects, which we could say is a good thing.
[50:37] Because, obviously, then other children won't be held back by slower learners, and the quicker learners will be able to develop better in other areas. But we've seen that iterations of this kind of school haven't really worked.
[50:51] And with self-pacing, when you allow students to go off by themselves and do their own work, oftentimes they don't really work. They'd rather just lie around and not do anything.
[51:02] So that's, you know, that's a, it's a discussion but nothing really concrete at the moment. And then, obviously, we have politics as well. So we've seen, like, a rise in deepfakes, fake news, yeah, polarization on the internet, yeah, things like that.
[51:19] Positive and negative aspects to this as well, which feeds into the other question I was going to ask but I won't ask because we want to make sure we ask the floor. Terrence, did you have anything to add on this one and then we'll go to people in front of us?
[51:36] Yeah, maybe just quickly, again, there are negatives and positives. I was talking to a few people earlier and we were talking about, I think it was dynamite. Alfred Nobel invented something that ended up being used badly, but there was also potential good to it.
[51:54] So, yeah, there's a lot of good. For me, in the mental health space, I've seen people actually have their lives saved because of its usage. In the UK, there's a platform called Tell Me, it's wonderful, they're doing great work, you know, I'm working with one of the professors who's out there doing their thing, trying to make sure that people can actually benefit from it.
[52:16] And then, I was talking about personalized learning, yeah. It's something else that I'm passionate about, especially if you look at coming from Africa, you know, people with learning disabilities, they need adaptive learning plans, really.
[52:31] They will need to be managed, like what you were saying, you know, if they try to do it on their own, there will be problems. But at the very least, you can have something that's personalized, that's true to their capacity. And then the issue of deepfakes, man.
[52:44] Deepfakes, deepfakes. Now we are seeing, you're seeing videos of people saying things they never say. I'm really worried about that. You don't know what's going on, what's not, when you're on social media.
[52:56] We're getting to that point where it becomes very tricky. And if someone who is in power says something that they actually didn't say, that could literally lead to war.
[53:07] So yeah, there are negatives, of course. So we have to find a way to responsibly use AI and to have safeguards in place. What they actually look like, I don't know.
[53:19] I don't have the answer to that, but we have to actively try and do something. Thank you. I want to hear from the floor. Instead of this microphone being passed out there and then passed back here, we'll hear questions.
[53:32] I'll repeat it for the sake of those watching online and for those who may not hear the question in the room as well, and then we'll get the panel to answer. By the way, plan is to finish by eight and then there's some hot chocolate going to be served if people want to stick around and maybe talk some more.
[53:51] Or you might have had enough of AI and talk about the weather. Valerie, you had a question. Two questions. Go one first. I'm trying to speak loud so you don't have to repeat it.
[54:02] My question was specifically for Mercy. When you were defining intelligence versus consciousness earlier, you said AI isn't conscious. I'm curious how you define consciousness, for humans and for AI.
[54:16] I will repeat it, just in case those at home haven't heard it. So basically, how do you define consciousness? Because we were talking about consciousness versus the other one, intelligence, earlier.
[54:31] Go on. So they use this word qualia, Q-U-A-L-I-A, to describe subjective human consciousness, or subjective human experience.
[54:43] And essentially, it's difficult to explain. It's difficult to explain.
[54:54] So like, the idea, we talk about consciousness. Okay, so what is consciousness? Is it intelligence? So is it being able to think and come to a conclusion?
[55:06] Or is consciousness, you know, visual? Like being able to see things? Well, our cameras can see things. Our cameras can pick up colors. And so they're not the same thing. What, how do you, can you limit consciousness down to a few categories?
[55:21] Or is it just a combination of things that we don't really understand? Is it neural pathways? Yeah, is it the neural pathways in our brains?
[55:32] It's difficult. So almost in a way, because we can't define it, we couldn't then say that AI is conscious. There is something about being human that we just can't really explain.
[55:50] And I think that's probably where I'd land, unless anybody else on the panel has something to add? I mean, there, of course, is the $64,000 question: what is consciousness?
[56:01] And nobody knows, is the simple answer. I mean, the Turing test was supposed to test for consciousness, but people now generally agree that it doesn't.
[56:13] Some people think that consciousness is something to do with quantum entanglement, so you can get into the whole area of quantum physics.
[56:27] But this is controversial as well. We seem to be able to do things that machines can't, but it's not quite clear how or why, I think.
[56:40] You had another question, didn't you? Go for it. Okay. So this is for the panel, but I think when thinking about AI as a Christian, often, I don't want to say it's easier, but often it comes down to how do I not use AI to sin.
[56:55] So, I shouldn't cheat with it. My question is how you think AI can be used to advance the gospel. Not just avoiding it and thinking, okay, I want to make sure I don't cheat along the way, but in what way do you think AI can be used as a tool to advance the gospel?
[57:19] Brilliant question. Any thoughts? Can AI be used to advance the gospel? Oh, yes, sorry, repeat the question. Can AI be used to advance the gospel?
[57:29] As I've got the microphone, I can tell you an anecdote. The whole of the internet is being scraped by these large language models, including, it appears, our sermons on SermonNet.
[57:44] So I suppose it's possible, and certainly there's lots of Christian stuff on YouTube and so on, so it's certainly possible that somebody will ask an AI, you know, what is the gospel, and they might get a sensible answer, but of course they might not.
[57:59] And that goes for all our churches, all the churches in the world. Ashwin? I think it comes down essentially to the data that's been fed in, and you could say it's all about feeding the information; governments all around the world are trying to ease the ability to feed all this data to these AI-driven companies.
[58:28] I think the only government that was against it is Denmark. They introduced virtual facial licences for all their citizens, so that if anyone uses their images to generate anything, they can officially go to court.
[58:44] But India is working in another way, and USA is working in another way, but it's essentially trying to feed the information, and how does it help for a Christian?
[58:57] I never use even my mobile when I'm in my quiet time, so that's just me, I guess. But as Daniel showed, you could generate images to showcase something about the scripture.
[59:11] That's a helpful use case. I don't think there's anything necessarily bad about it. So, I don't know, could it aid a sermon? Any thoughts from these guys?
[59:25] I have a thought. I had it in my notes and needed to share it, so now I get the opportunity. I wonder, because it breaks down the language barrier, if it could help scripture translation, or help missionaries who want to go to a foreign country but don't know the language very well, aiding their learning of that language.
[59:48] There's my thoughts. You've got the microphone. Were you about to answer? Yeah, I'm just going to build on what you're talking about. Yeah. Talk about language translation.
[59:59] It's reassuring. Language translation in AI, we can do that. Translate even the Bible, or certain texts really, into other languages.
[60:09] Of course, we may need oversight in order to make sure the process is efficient. And then research is another thing. Sometimes I'll find myself remembering a portion of scripture, and I'll just go, probably, to Google.
[60:22] The original algorithm for Google was called PageRank. It was sort of a mathematical optimization, but now they're using AI. It's still AI.
[60:34] So you search, and it helps you remember the verse. It's a way you can actually do research, for a sermon or something like that. It helps in that way. It helps grow the gospel in that way.
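[Editorial aside: for anyone curious, the PageRank idea mentioned above can be sketched in a few lines of Python. This is only an illustrative toy under simple assumptions; the three-page link graph is invented for the example, and real Google ranking is far more complex than this.]

```python
# Toy PageRank by power iteration: a page's score is the chance that a
# "random surfer" lands on it, following links with probability `damping`
# and jumping to a random page otherwise.

def pagerank(links, damping=0.85, iters=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}           # start with a uniform score
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:                              # share this page's rank among its links
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:                                 # dangling page: spread its rank everywhere
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Hypothetical three-page web: A links to B, B to C, C to both A and B.
scores = pagerank({"A": ["B"], "B": ["C"], "C": ["A", "B"]})
print(max(scores, key=scores.get))  # prints "B": it gets the most incoming weight
```

Because B receives links from both A and C while the others receive fewer, B ends up with the highest score, which is the core intuition: pages linked to by many (and by important) pages rank higher.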
[60:45] And then, to get back to Valerie's question: at Sussex we have a centre for consciousness. I mean, they study everything about it. And one of the leads for that centre is a well-known academic across the UK.
[61:02] You can always go over there and ask them any questions you have. And then, for me, next semester I'll be studying a module called the neuroscience of consciousness.
[61:12] Yeah. So, in a semester's time, you can come. Amazing. We'll go to Phil next, but just checking, if you've got a question, could you put your hand up so I know where I should be aiming?
[61:27] Great. Let's go Phil, and then we'll go over to that side of the room, to Jack, and then we'll come back to the middle. Phil, go for it. This is an observation, and perhaps a question to the panel, what they think about this.
[61:39] When you're talking about consciousness, human consciousness, is it legitimate to move from that into human responsibility? In the sense that a human being, if they do something wrong deliberately, there are moral categories, and they deserve to be punished if they do things wrong.
[62:06] The humorous example of this, for those who remember Fawlty Towers, is when Basil's car wouldn't start: he beat it with a stick and said, you naughty car, which obviously is absurd.
[62:19] Now, is an AI in the category of somebody who does something wrong, or is it in the category of a machine that does not bear responsibility for what it does, like a car that won't start?
[62:31] I don't know how to summarize that question. Is AI responsible for the bad things it does, by virtue of consciousness? I've summarized that question badly, but go for it, you guys did hear it.
[62:49] Okay, since I'm holding the mic, I will start. So when I was working with my professor, we were talking about this application that she's working on, Tell Me.
[63:03] So the question was: because they're working with young people who at times have suicidal ideations, and it's AI, there's a potential for mistakes to be made. It's called hallucinating; some may call it lying, but it can fabricate things.
[63:19] If things do go wrong, in that sort of situation, who takes responsibility? So I sort of understand your question in that sort of way. Yeah, so who does?
[63:30] I don't think that question is easy to answer, but right now you'd probably say the people who designed the algorithm take the blame, so I guess in a way you could say they are the responsible ones.
[63:43] But something else that we study, when we are studying consciousness, is other, non-normal states of consciousness.
[63:55] Like for example, if there's a psychiatric disturbance, let's say somebody commits a crime and yet they have a psychiatric issue, would you then go and say you are responsible for what you did?
[64:06] Is that person conscious? I think that's one of the issues that people study. Then there's also the issue of somebody who is in a coma, in a vegetative state; that's also a study of consciousness. Are they conscious? When they wake up, will they remember what we said to them while they were out?
[64:21] I'm just trying to show how complex the issue of consciousness is. Some things we're able to perceive, like what you were saying, and then we can take responsibility, but some things are actually much more complex than that.
[64:33] But yeah, it's an open area of discussion; that's how I look at it. It's complex. Can we go to another question? Is that all right? Let's go Jack, and then, as I said, we'll go back to the middle after Jack.
[64:46] On the question of how AI can help in presenting the gospel, I was interested that Ashwin said in his quiet time he never uses his phone.
[65:08] So how do we know whether what we look up on our computers is correct? For instance, if I want to know what a particular verse means, I might ask, well, what did John Calvin say?
[65:24] How do we know that when we look at the computer, it is a true representation? If I've got the book, his commentary on Genesis, I can look it up.
[65:38] But how do I know? How does the normal person know what is correct? Oh, I'm doing a bad job at the thing I said I'd do: how does the normal person know whether what AI is saying is correct?
[65:57] So if we asked ChatGPT what John Calvin thinks about John chapter 15, how do we know it's going to be right? Well, firstly, I would say don't rely on the Google AI answers that you get when you first search, because those can come from just random sources on the internet.
[66:16] So I would honestly urge us to start getting back into books again, where you have an objective source that cannot be edited or changed.
[66:30] That's the main thing I would urge us to do, because increasingly it is getting more difficult to know what is real and what is not real.
[66:43] And yeah, I just wouldn't want any of us to go astray or be led astray. So that's what I would say. And then, if you have to use the internet, I would suggest going to reputable, verified sources.
[67:03] And even then, you know, sometimes we've seen with the BBC as well recently in the news that even they can get the facts wrong. So, yeah. Yeah, reputable sources, don't use Google AI answers, and yeah, start getting back into books again.
[67:21] I mean, if I want to cross verify, I cross verify. I just go back to my primitive times before AI.
[67:34] I used to have 60, 70 tabs open. My Chrome is crashing. But that's the only way where you can find if the source is legitimate.
[67:45] And even if whatever article is, go and check the writer, if he is a legitimate writer, if he is really what he claims to be.
[67:56] I think we just have to go back to primitive times, especially when it comes to scripture. We definitely don't want God's words to be twisted. So, that's the reason why I don't use mobile phones.
[68:07] I have questions, I write it in my diary, and then I have some beautiful elders in church to rely on. And even after that, if I do have questions, I would still circle around and come back.
[68:19] So, I keep technology away from scripture as much as possible. I think let's go to Brenda, because I saw your hand up earlier and I missed it.
[68:43] Ashwin, do you want to run the mic over to Brenda for a moment? Thank you, Brenda. It was really just a comment, along the lines that we know from our own use, many of us, of programs that have AI in the background, how they tend to reflect back to you what you have said.
[69:15] So, if you're online shopping, it will show you things that it knows you're interested in.
[69:26] And I think, I agree with Ashwin about keeping technology and AI away from scripture, because otherwise I think there's a danger that if we put the two together, what will be reflected back to us is what we want to hear.
[69:52] And we all know what the Bible says about, you know, hearing only what we want to hear. Where would the challenge be? Where would the stuff we need to hear be?
[70:03] We would never hear it. Thank you. I do worry with this. The Bible, in Romans 12 for instance, talks about the renewing of our minds, and I do worry that if we're relying on the internet, on AI, to answer all our questions, we don't do any thinking.
[70:23] And like you say, if it just tells us what we want to hear, that renewing of our minds isn't going to be happening, and we just get lazy, and we don't grow as Christians. So I really think it's important we come back to physical books.
[70:43] Terence has just grabbed the microphone; are you adding to that? You're in Romans 12 as well. Yeah, yeah, that's clearly of the Lord. Yeah, the idea of discernment is very important.
[70:55] But let's also remember that large language models are not the only form of AI.
[71:05] Yeah. So there are people who are diligent missionaries and pastors and, I mean, people who are working for God who use, like, the internet to provide information that is relevant to a Christian's life.
[71:22] Maybe the lazy thing is to go to ChatGPT, but if you try Google sometimes, there are certain websites that are really good. I mean, they are curated by people who are doing wonderful work. But again, check with your scripture, check with your elders, like Ashwin was saying. There are reputable sources.
[71:40] I always try to say this: ChatGPT is designed to be a sort of general-purpose tool, but there's nothing like that in the world. Something that can answer every question doesn't really exist, so there is a potential for mistakes, especially when it comes to issues of religion and all of those things. So be discerning about where you get your information; I agree on that. But at the end of the day, if you're using Google and if you're using websites, you're still using AI in a way, because you're using the algorithms that Google uses. So let's be discerning.
[72:12] Thank you. Shikondi, and then I think I saw Bill's hand as well, so final two questions in our formal time, and Megan. Let's do, I think let's go Shikondi first, because I was aware of her first, then we'll go Megan, then we'll go Bill, and then we'll close.
[72:30] I'm not sure I have a complete question, per se, but a number of you have mentioned, particularly with the LLMs, how what you put in is what you get out, and that's true for a lot of models. So I kind of have a two-in-one question. The first part being, I guess: as believers, and some of you are scientists on the panel, you have the understanding that these are based off of data sets, and so there is the potential for things to go wrong, because they don't have all of the data in the world. So the question, and it's a very big, broad question, is how do we steward AI well enough that we use it as a tool, as believers? Because even with what has been mentioned before about languages, I think particularly of languages in
[73:33] Zimbabwe, where they're not as well documented. Even the use of AI as a tool, for example to translate, could very well go really wrong. So how do we, I guess, use AI well, such that it doesn't curb people's ability to think creatively and critically, but also acknowledge that it can get things wrong because it doesn't have all the information, but also that it is a tool?
[74:10] Great question. Yeah, there is this term we use, human in the loop: you put a specialist between the answers the AI tool gives and the final outcome, and that specialist should be a really well-versed specialist who can re-verify. We can't just use other tools and applications to check, because understand that all of it is data; if we put whatever the outcome is into those applications to verify, it would probably pass, because that's the intention. So in robotics we use this phrase, human in the loop. We say, yes, we have created this amazing machine, but we still don't trust it, because there is human life at risk, because we are talking about physical machines now. So you put a human who is very specialized at that job in the loop. If I'm going to replace a care worker, so to say, to take care of the elderly, you still put a specialist care worker in to supervise what the machine is intended to do. And you can apply the same to translations: even if the
[75:29] AI is translating, and it is necessary, especially for scriptures that haven't yet been translated, you would still put a human translation specialist in to re-verify those documents and confirm the job is done well. So that's one way to do it: keep humans involved. And coming back to Phil's question about how a machine would respond, I don't know who said this, but someone put it like this: if a human sees an anthill, and if that human is a good empath, he wouldn't choose to destroy it for his own ends. But if you say, I'm not going to go there and check the surroundings myself, I'm going to use an
[76:30] AI machine, and I'm going to say this machine has to operate, go and do these things, the AI machine wouldn't think about those anthills. It's going to get the job done. And if you put that into perspective and replace the anthill with any other life form, then we are talking about big risks. So, is the machine going to be responsible for it?
[77:00] No, it's just a machine. Because of time, we're going to go to Megan and then Bill, and whoever jumps in first gets the privilege of answering their question.
[77:15] First of all, thank you, it's been really interesting. I suppose this is a practical question, particularly regarding children. AI is here, and they are more than capable of using the internet, and I'm not even talking about ChatGPT, but just Google and things like that.
[77:35] Have any of you got practical ways of explaining such a complex thing to children? We can't trust images we see now, can we?
[77:46] But how do we, have you got any practical ways of how this can be explained to children?
[77:56] Because in a way, I feel they're the most vulnerable at just taking everything on face value, and as we've discussed, it reflects back your views, basically, or what you're seeing.
[78:11] So it could be obviously very dangerous as well as being very good. So yeah, just practical, I don't know, articles, videos, anything to help children understand the potential negative as well as positive aspects.
[78:29] Thank you. Anyone want to jump on that? Go on, Mercy. I don't have any videos or articles that I can send you, but what came to my head is that traditional internet safety, which has been preached since the internet's inception, I feel is still relevant.
[79:00] So it's a case of: be careful about who you speak to online, and I think that could also apply to the AI chatbots as well.
[79:15] Be careful of how you interact with these chatbots. Not everything that they're going to tell you is going to be true. But I think that that's a difficult question.
[79:26] And, yeah, it definitely complicates parenthood, doesn't it? I would just say traditional internet safety: beware, be cautious, keep your eye out, make sure that you're getting things verified.
[79:46] I think even just saying, if you see something that disturbs you online, make sure that you come and tell me, things like that. But yeah, it's such a new issue that I don't know if there are that many resources available at the moment.
[80:02] Thank you. I think this needs more thinking. Yeah, so thank you for that question. I've written it down. I'd love us to think more about that, to help parents, selfishly, as a parent myself.
[80:16] Yeah. Yeah. Sorry.
[80:44] Sorry. Yeah, just very quickly: it doesn't take much for a child to go down a rabbit hole, you know, if they are feeling low, for example. It doesn't take much for them to be shown things or told things that are not accurate. And I'm taking the Christian aspect out of it; this is just general.
[81:04] So, yeah. Yeah. If it's, if it's brief, that's great. Thank you. No, that's fine. I don't think we've got time.
[81:18] And mention that article. Your wife's told you. Yeah, I don't. Yeah, no, I won't do that. I promise to be brief. You wouldn't let your child run out into the road without teaching them about road safety, would you?
[81:37] So, I mean, the simple answer is that you have to train your children. But the complication, of course, is that while you probably understand the danger of being run over by a bus, you might not understand yourself the dangers of some of these deepfakes and so on.
[81:55] So, part of the answer has to be that the adults have to make sure they understand the issue, and then they can perhaps train their children to cope with it. And we'll mention a couple of resources in a moment before we end.
[82:07] Final question, Bill. And do ask some questions after this, and going forward in the weeks to come. Just briefly, thinking about the sort of global implications, AI has the means of good and bad, and obviously different nations, different countries, different leaders, will eventually use the tool of AI possibly in a bad way, in a way which can be destructive.
[82:52] And I think we're all aware of that. I think even China now are using AI to identify their citizens just from facial recognition and such.
[83:05] and they're even, I would say, managing their own population, subjecting them to their rule, as it were, and their oversight.
[83:18] So my question would be: how, as people in the world, as nations in the world, can we police AI? Why? Because I'm sure it's going to develop, very quickly, into something which we can't even imagine.
[83:34] So as a global community, how can we come together, not myself, but world leaders and people in positions of power, to have some control?
[83:53] Anyone want to go on that? Terrence? Yeah, I'll just try to give an answer. I don't think there is one perfect answer to your question.
[84:05] We have a lot of issues with climate change right now. We have the COP that happens every year, but we know what happens over there; people hardly agree on anything. And you have the World Health Organization; you have certain powers that are probably pulling together to try and solve something that's a common goal.
[84:24] So what you're saying is very valid. I think we would need something like that, at the very least: something equivalent to COP, where we can sit down and talk about these issues and say, this is AI, this is why it's good, this is why it's potentially bad, what safeguards can we put in place, what could be the penalties for people who misuse AI, how do we police it, all of those sorts of things.
[84:46] If we could do that, that would be good. But I don't know who would lead the charge. Maybe, I don't know, the United Nations; yeah, probably they could do that.
[84:57] Yeah, all of us. Maybe we need to. Ashwin, let's make this the final comment, I think.
[85:08] Yeah. It comes down to policy, and throughout observed history, policymaking has always been one step behind technological advancements. Right now technological advancements are skyrocketing so fast that the policies can't even keep up.
[85:27] So it is tough and obviously global leaders don't agree a lot. And maybe that's where we need to be resting in the sovereignty of the Lord.
[85:42] Can we give these guys a clap? [Applause] I think we've worked you really hard this evening, so thank you so much; it's been just so good to hear from you and your expertise.
[85:58] Sorry we've gone a bit over time but it's not every week we do a session on artificial intelligence but it's also not every week we have hot chocolate available after an evening service so do make use of that.
[86:09] I'll just mention a couple of resources and then I'll pray. Those books there, they're also on the screen: Made in Our Image by Stephen Driscoll. I found this a really accessible book on the subject of AI, so if you're not as clever as some of these guys, maybe that's the one. But also 2084 and the AI Revolution. I've read a bit of it; it's a bit more technical than the other one, but it is worth going through. I've been reassured, and I'm going to try and keep going through it after tonight, so do question me on that in weeks to come. And then there'll be a couple of podcast things we'll send out. And Megan, based on your question, I'm going to try and see if there are some resources for parents on this, and if anyone else spots anything, let me know, because we want to be as helpful to everyone as we can. I'm going to pray and then we'll close. If you need to rush off for a bus or anything, don't be embarrassed to leave. The earth is the
[87:19] Lord's and everything in it, the world and all who live in it; for he founded it on the seas and established it on the waters. Heavenly Father, we thank you that this world and everything in it belongs to you. We thank you for creating us as human beings with such dignity and worth, created in your image. And we are amazed and astounded by the creativity in human life that we've produced.