Artificial intelligence (AI) isn’t just about creating cool graphics or chatbots that answer your questions anymore. It’s revolutionizing how humans think, plan, and make decisions about everything–from politics and war to how religion and law interact in our everyday lives. To understand how, let’s take a quick trip back in time to the days when you’d stop at a gas station for directions.
Back then, you’d ask an attendant for help, and they’d draw you a simple map or point you to a road atlas. Fast forward to today, and we’ve got apps like Waze. Waze doesn’t just tell you the route–it pulls from millions of drivers, tracking traffic, accidents, and speed traps in real time. It’s like a hive mind, gathering information from countless sources and giving you the best option based on what’s happening right now.
AI is doing something similar on a much bigger scale. Instead of just maps, it’s now tackling the most complex problems we face–problems that used to take human experts years to untangle. Think about war strategy. In Ukraine, AI-powered drones are on the battlefield, making decisions faster than any human soldier could. This has changed the nature of war itself. Traditional armies can’t keep up with machines that learn, adapt, and execute strategies in milliseconds.
But it’s not just war. AI is also reshaping how we navigate our legal and moral systems.
This ability to synthesize ideas and data applies to religion too. Genesis, Henry Kissinger’s final book (co-written with Eric Schmidt and Craig Mundie), explores the rise of AI and focuses on its impact on humanity, politics, and ethics, but notably avoids delving into how AI interacts with religion. And yet, religion might be one of the most profound areas AI will reshape. Today, it’s quite possible that some pastors are already using AI to draft sermons or gather insights on complex theological and moral issues. AI, guided by specific questions, can sift through centuries of theological texts, ethical arguments, and real-world examples to create a cohesive message in minutes. This is revolutionary–machines are doing the mental heavy lifting that used to take hours or days.
That sounds great–until you think about the risks. AI doesn’t have feelings or a sense of right and wrong. It’s only as ethical as the people who program it. On the battlefield, this means AI could make decisions that humans wouldn’t, like prioritizing efficiency over lives. In law and religion, it could create interpretations that clash with human values or traditions.
Take the Ukraine example again. AI drones don’t tire, fear, or hesitate. But what happens when decisions about war–decisions with life-or-death consequences–are made without human judgment? Similarly, what if AI systems begin advising courts on how to interpret religious freedoms or resolve conflicts between church and state? These are areas where humanity’s moral compass matters most.
To manage this, we need a framework–rules and ethics that guide how AI is used. We also need to recognize that, in sheer processing power, AI already outstrips any one of us: it can see the patterns we miss and make decisions based on an ocean of data we’d never have time to explore. But with great power comes great responsibility. If we let AI take the wheel without oversight, we risk losing the very human values that make our societies work.
In the end, AI isn’t going anywhere. It’s going to help us navigate the incredibly complex systems of government, law, morality, and even spirituality, just like Waze helps us find the quickest way home. But unlike Waze, AI’s decisions will shape the future of humanity. It’s up to us to make sure those decisions reflect not just intelligence, but wisdom. Because if we don’t, the road ahead could get a lot bumpier than we’re ready for.