The Pursuit of Perfection | Commentary


Companies active in artificial intelligence (AI) development have been making voluntary public commitments to create standards for safety, security, and trust around their revolutionary technology.  If this sounds like rhetoric to you, then you are in good company.  In the post-pandemic era, trust in technology companies is waning.

But what could they be hiding? The answer seems to be literally staring all of us right in the face.

On 7 July 2023 in Geneva, Switzerland, the United Nations (UN) invited nine AI-powered humanoid robots to answer questions posed by selected journalists.  These technological marvels all had expressive faces, moving lips, eyes that scanned the room, and heads that turned to face their questioners.  Each also had its own biography and culture, including identifiable male or female traits and specific talents and careers.  Two even closely resemble their creators.

The question, of course, is why the originators and developers of AI strive to represent their technology as human.

Creating a humanoid-like robot is superfluous.  ChatGPT is powered by AI but remains a computer interface without physical form.  Amazon's Alexa, while a less advanced technology, listens to questions and provides answers, but none of its devices physically resembles us.  The same is true for the satellite navigation systems in our automobiles:  just type in a destination and the technology gives instant driving directions on your smartphone or dashboard.  A voice designed to mimic a human one gives verbal cues, but no one could possibly confuse Apple's Siri for the disembodied voice of their girlfriend, wife, sister, or mother.

Yet when it comes to AI, particularly in high-profile settings like this recent UN press conference, the technology is most often represented as being 'human'.  To understand the mindset of AI developers, it is perhaps useful to revisit the 11th chapter of Genesis.  In those days, the newly discovered Mesopotamian art of brick making was the world's greatest technological breakthrough:

"Come, let us make bricks and bake them thoroughly.  They used brick instead of stone, and tar for mortar" (Gen. 11:3)

Armed with such advanced technology, this innovative society set out to construct a great city and "a tower that reaches the heavens", with the clear purpose being to "make a name for ourselves." (Gen. 11:4).   The great Nimrod planned to mount the completed tower as a self-proclaimed god-king.

In our own century, placing an AI interface on phones, tablets, or laptops is hardly an act of godlike creation.  However, an interface that looks and behaves humanly, with a distinctive personality, transforms its maker into a creator-a god-who shares the same desire as those Mesopotamian brick makers.  For them, it was the divine confusion of language which halted construction of their city and tower.  After the confusion, they all scattered:

"That is why it was called Babel-because there the Lord confused the language of the whole world.  From there the Lord scattered them over the face of the whole earth." (Gen. 11:9)

In Hebrew, "Babel" sounds like the word for "confused", which aptly describes our 21st-century situation.  Confused, scattered, and divided into camps, we cannot understand each other or work together.  We describe our divisions in terms like secular humanists vs. Christians, liberal vs. conservative, urban vs. rural, and haves vs. have-nots.  The ever-expanding number of so-called "diversities" according to race, ethnicity, gender identity, sexuality, and environmental hysterics is truly befuddling.  While these appear to be separate problems, they all share the same root cause:  pride, as expressed by the age-old desire to be gods.  This is the original sin of the archangel Lucifer, which he fatefully brought to corrupt the pristine perfection of Eden.

Once we recognize this, we see it ubiquitously.  Doctors play god, purporting to 'correct' God's mistakes in terms of a person's biological sex.  DIE administrators claim to know what is in every human heart and offer a false path to forgiveness through them alone.  Judges presume the power to shape our values and to wield decisions based upon the perversion known as "social justice".  The suffering decide to avail themselves of medically assisted death, otherwise known as self-murder.  Like the Tower of Babel, humans everywhere are playing god and think that advancements like AI actually have the capacity to take us to the very apex of existence.  What they fail to consider are the inevitable consequences for individuals and societies which aspire to be as gods.

While issues of national security and the job market are certainly important considerations, distrust and concern about AI come from another place entirely.  We know instinctively that AI is a bridge too far.  Even tech titans like Elon Musk are warning us of impending doom.

Public trust will therefore not be granted to AI until its developers forthrightly answer some key questions:  why are they designing faux humanity, and, most importantly, who is God?

The answer to the first question is obviously to achieve something that has perpetually escaped human endeavour throughout history, and which only God can do.  Unlike God, we mere mortals are fallen creatures, corrupted by the stain of sin.  Consequently, everything we create is perishable, corruptible, and occasionally evil.

The potential abuse of AI for private gain has profound implications for our economic, political, social, and especially our spiritual lives.  Recently, our social media feeds have been flooded with headlines about advances in AI and AI-generated images.  One that I saw recently was a somewhat convincing video of legendary Hollywood actor Morgan Freeman confessing to being a 'deepfake'.  Text-to-image algorithms are becoming hugely popular.  ChatGPT is now the world's best-known large language model, reaching one million users in its first five days alone-exponential growth eclipsing even tech giants such as Twitter, Facebook, or TikTok.

As AI demonstrates its unprecedented ability to craft poetry, write code and even pollinate crops by imitating bees, the governance community is slowly awakening to the impact of AI upon the perplexing problem of corruption.  Leading policy institutes and academics point to potential application of AI to detect fraud and corruption, with some even heralding these technologies as the 'new frontier in anti-corruption.'

Amid all of this unbridled enthusiasm, it is perhaps easy to lose sight of the fact that AI can also produce undesirable outcomes due to biased input data, faulty algorithms, or irresponsible implementation.  Thus far, most of the documented negative repercussions from AI have been treated as incidental side-effects.  However, such technologies present novel opportunities to willfully abuse power, and the impact that AI could have as an agent of evil has not yet received nearly enough attention.  For example, Forbes Magazine recently published a piece about what happened when ChatGPT was asked to write poems about Donald Trump and Joe Biden.  The results were rather shocking.

Whatever one's opinions of these two political rivals, there is simply no escaping the fact that these AI-generated outputs were absurdly biased, thus illustrating the problem.

A recent Transparency International working paper introduces the concept of 'corrupt AI'-defined as the abuse of AI systems by public power holders for private gain.  It also documents how these tools can be fashioned, manipulated, or even weaponized to constitute corruption.  For instance, politicians can abuse their power by commissioning hyper-realistic deepfakes, like the Morgan Freeman video cited above, to discredit their political opponents and thereby increase their chances of holding office.  Abuse of AI tools on social media to corrupt elections by spreading disinformation has already been well documented in the aftermath of the 2020 U.S. presidential election.

Corrupt AI does not just arise from a maliciously designed system.  It can also result from exploitation of the vulnerabilities of otherwise beneficial AI systems.  This becomes of greater concern given the significant push worldwide toward digitalization of all public administration, from health, to the military, to currency, trade and commerce.  

AlgorithmWatch recently concluded that citizens in many nations like Canada and the U.S. already live in 'automated societies' in which public bodies rely on lines of code to make vital social, economic, and even political decisions.  Digitalization of government services has long been recognized as a means of reducing bureaucratic discretion in decision making, thereby constraining corruption opportunities; but as we have seen, replacing humans with AI cannot yield perfection.  At best, AI simply brings novel and perhaps more serious corruption risks.  Indeed, a human system may produce a few corrupt individuals, but there will always be many more who behave ethically.  A corrupt AI system will induce monolithic corruption.  As Elon Musk so aptly puts it:

"An evil human dictator will eventually die and so there will be an opportunity for freedom to be restored.  An evil AI dictator will never die and is merciless."

There are four sound reasons to beware the risk of AI corruption.  Firstly, we are more likely to be corrupt when the risk of discovery is low, such as when we can claim plausible deniability.  The risk of ethical violations to reap illicit benefits increases where we are not directly confronted by our victims; in other words, when there is a vast psychological distance from the people impacted by our corruption.  This is illustrated by the Old Testament story of how King David sent Uriah to certain death at the battlefront so that Bathsheba would become a widow and David's sinful adultery would be hidden.  By this process, David got Uriah out of the way and stole his wife.  Plausible deniability.

According to recent research in behavioural science, deployment of AI systems could enhance both of these risk factors.  The complexity and autonomy of machine-learned AI systems, whose outputs are often incomprehensible to us, could make it easier to corrupt the technology without detection.  Meanwhile, the introduction of AI tools as intermediaries in decision-making processes can only widen the psychological distance between perpetrator and victim.  King David and Uriah, redux.

The healthcare sector is one example of how these risk factors can undermine the potential benefits of AI.  Physicians and other healthcare providers are already being trained to use algorithms to help diagnose disease and assist in making cost estimates.  There is, however, ample reason to suspect that these systems are easily duped.  By altering just a few pixels or the orientation of an image, physicians can fool AI image recognition systems into producing faulty results, including misidentification of a benign mole as malignant in order to justify expensive treatment.  Healthcare workers can also reap benefits from manipulating AI systems to classify patients as high-risk and thus high-cost.  These concerns are hardly hypothetical; in fact, there is a growing body of evidence that such abuses are already occurring.
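The pixel-level manipulation described above can be sketched in miniature.  The following toy example stands in for a real diagnostic model with an invented linear classifier over a 16x16 grayscale image (all weights and the image itself are random, purely for illustration); it shows how altering only the handful of most influential pixels can flip the output label:

```python
import numpy as np

# Toy stand-in for a medical image classifier: a linear model over a
# 16x16 grayscale image.  Weights and the "mole" image are invented.
rng = np.random.default_rng(0)
weights = rng.normal(size=(16, 16))
image = rng.uniform(0.0, 1.0, size=(16, 16))

def classify(img):
    # Positive weighted sum => "malignant", otherwise "benign".
    return "malignant" if float(np.sum(weights * img)) > 0 else "benign"

original = classify(image)
target_sign = -1.0 if original == "malignant" else 1.0

# Greedily saturate the most influential pixels toward the opposite
# class, stopping as soon as the label flips.
perturbed = image.copy()
order = np.argsort(np.abs(weights).ravel())[::-1]  # most influential first
changed = 0
for idx in order:
    r, c = divmod(int(idx), 16)
    perturbed[r, c] = 1.0 if target_sign * weights[r, c] > 0 else 0.0
    changed += 1
    if classify(perturbed) != original:
        break

print(f"{original} -> {classify(perturbed)} after altering {changed} of 256 pixels")
```

Real diagnostic systems are deep neural networks rather than linear models, but the adversarial-example literature documents the same failure mode:  small, targeted perturbations, often imperceptible to a human, change the predicted label.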

The second reason to take corrupt AI seriously is its potential to increase the scale of damage caused by a single act of corruption.  If one person is bribed, that might impact hundreds or even thousands; but if an algorithm is corrupted, it can affect millions or even billions of people.  Algorithmic capture describes how AI can be systematically manipulated to favour a specific group.  For example, tweaking the code of algorithms used in electronic procurement or fraud detection programs can steer lucrative public contracts to cronies or conceal crime by certain well-connected parties.  While bribing an individual is usually about breaking rules of the game to gain illicit privileges, corrupting an algorithm by bribing its developer or corrupting its code changes the very rules of the game itself.  If an AI system is distorted to allocate resources in a particular way-such as licenses, permits, or tax exemptions-then a new corrupt 'rule' can be embedded into the entire system.
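To see how a single inserted line can change 'the rules of the game', consider this hypothetical sketch of a captured fraud-screening rule for public contract bids.  Every name, field, and threshold here is invented for illustration:

```python
def fraud_score(bid):
    """Naive fraud heuristic: award risk points for suspicious features."""
    points = 0
    if bid["amount"] > 1.5 * bid["estimate"]:
        points += 70  # bid far above the engineer's estimate
    if bid["vendor_age_years"] < 1:
        points += 30  # brand-new shell-company vendor
    return points

CRONY_VENDORS = {"ACME Holdings"}  # the corrupt 'rule' hidden in the code

def fraud_score_captured(bid):
    # One inserted line quietly exempts favoured vendors from scrutiny.
    if bid["vendor"] in CRONY_VENDORS:
        return 0
    return fraud_score(bid)

# A wildly overpriced bid from a brand-new crony vendor.
bid = {"vendor": "ACME Holdings", "amount": 9_000_000,
       "estimate": 4_000_000, "vendor_age_years": 0}
print(fraud_score(bid), fraud_score_captured(bid))  # prints: 100 0
```

The honest rule scores this bid at maximum risk; the captured rule waves it through.  Because the carve-out lives inside the system itself, every future bid from the favoured vendor is exempted automatically, which is precisely what distinguishes corrupting an algorithm from bribing an individual.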

The third reason is that replacing humans with AI in public administration reduces reporting and whistle-blowing potential.  When decision-making authority shifts to AI, there are necessarily fewer Edward Snowden types around to reveal corruption.  Moreover, those working where algorithms do the policing and reporting might receive less training, and thereby lose the skill set to detect and expose corruption.

The final risk factor is opacity.  When AI systems are implemented automatically and code or training data go undisclosed, the specter of abuse within these systems grows.  For example, investigative efforts recently documented biases in face detection algorithms, as well as in AI systems used to make hiring decisions.  Those developing and implementing such systems could encode WOKE biases to favour certain demographics on a systemic level.  In that case, the secrecy of code and data would render reliable detection of malicious algorithms quite challenging.  Since most AI tools are developed by the private sector, reluctance to disclose commercially sensitive information is widespread and impairs accurate auditing of these algorithms.  

In authoritarian regimes where there is little to no recognition of the Rule of Law, even AI systems created to curb corruption can be weaponized for evil.  One such example is the 'Zero Trust' project implemented in China by the CCP.  It is ostensibly designed to identify corruption amongst 60 million public officials.  It authorizes AI to cross-reference 150 databases, including private bank statements, property transfers, and even consumer purchases.  While notionally intended to uncover corrupt behaviour, this CCP digital surveillance infrastructure is readily abused to advance narrow private interests or ideological agendas.

As widening swathes of our lives become regulated by AI, what safeguards can be installed to ensure that we are not exposed to the malicious and undetectable abuses of technocracy?  Beyond generalizations like strengthening the Rule of Law, the most promising countermeasure might be facilitating checks and balances.  Ideally, this would form an integral part of AI development and deployment procedures.  

One obvious challenge is enforcement.  How can private and public companies be compelled to submit to oversight processes which may engage independent investigators?  An important step would be to establish transparency regulations mandating that code and data be responsibly shared.  Privacy can be protected by sharing data in masked form.  Techniques like differential privacy, for example, help to remove identifiable information while still permitting data to be meaningfully analyzed.  By increasing accessibility in this way, a transparent digital infrastructure facilitates code audits, since it allows scientists to inspect code and data.  It is crucial that everyone have access, not just state actors.  Involving civil society, academics, and others in the development, deployment, and augmentation of AI systems is key.  Oversight in public administration is vital to ensure that these powerful new tools serve the public interest.
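As a concrete illustration of such masking, the following sketch applies the Laplace mechanism, a standard building block of differential privacy, to a simple count query.  The dataset and the privacy parameter epsilon are invented for illustration; real deployments must also account for repeated queries and overall privacy budgets:

```python
import numpy as np

def private_count(records, predicate, epsilon, rng):
    """Answer a count query with Laplace noise calibrated to sensitivity 1
    (adding or removing one record changes the true count by at most 1)."""
    true_count = sum(1 for r in records if predicate(r))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

rng = np.random.default_rng(42)
# Synthetic records: roughly one in seven is flagged.
records = [{"flagged": i % 7 == 0} for i in range(1000)]

answer = private_count(records, lambda r: r["flagged"], epsilon=0.5, rng=rng)
print(round(answer, 1))  # close to the true count, but deliberately inexact
```

The noisy answer remains statistically useful to an auditor studying the system in aggregate, while no individual record can be confidently inferred from it; smaller values of epsilon mean more noise and stronger privacy.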

It is unlikely that mankind will ever create a 'perfect' AI, in the sense of a system that is capable of flawlessly solving all problems and making universally infallible decisions.  AI systems are ultimately limited by the algorithms and data they are trained on, and subject to the same biases and errors endemic to human decision making.  The objectives of AI systems can also be complex, difficult to define, and subject to periodic revision.  Consequently, it seems unlikely that any AI system will ever be able to achieve a state of perfection.  

So why then do the modern Mesopotamian brick makers at Google and SpaceX keep trying to attain the unattainable?  To construct a perfect technological tower into the heavens, whence they can stand and survey the known world as gods?

Perfection!  It is the highest standard to meet.  But it is exclusively God's standard.  It always was and always shall be:

"The Lord God, merciful and gracious, long suffering, and abundant in goodness and truth."

(Exodus 34:6)

But there is more good news-the news of the Gospel.  That too has always been with us.  It is found in literally every book of what we euphemistically call the "Old Testament."  It is actually possible-and even easy-to keep all of God's natural laws.  It can in fact be done much more simply than obeying all of the government's multitudinous rules and regulations.  For one thing, God's laws are far fewer in number.  For another, God is much more forgiving, just, and full of grace than any human or AI judge.

If we violate one of God's laws, all He requires is sincere repentance and to sin no more.  That is the simple recipe to God's heart.  But we must get to know it, fully understand it, and have a strong desire to know Him:

"My little children, these things write I unto you, that ye sin not.  And if any man sin, we have an advocate with the Father, Jesus Christ the righteous:  And he is the propitiation for our sins:  and not for ours only, but also for the sins of the whole world.  And hereby we do know that we know him, if we keep His commandments." (1 John 2:1-3)

God knows that we are not perfect.  Nor can we manufacture perfection.  In fact, the Greek word for sin is "hamartia", which means to aim but miss the mark.  God understands that, in our fallen state of being, we are unable to achieve perfection.  He therefore does not expect us to pretend to be perfect; in fact, we sin by the effort.  We are instead simply expected to follow God's laws.  The problem we face is that most people do not take these commandments seriously.  That is the standard today, and so it has always been throughout human history, all the way back to Babel and Eden.  Even most "Christians" do not hold that we must keep His commandments.  They do not see this as a prerequisite to their faith, but they are wrong.  Christ said so, as did all of the Apostles.

How could Christians not understand this?  Is it not the very essence of knowing God?

Trusting God is, after all, a commandment.  Jesus came first to the Hebrew people and later told his followers to spread the Gospel throughout the world-even to the nations.  Nothing was to be changed.  They were taught what to preach and were all afire for their Lord.  They turned the world on its head.  Many of the original early missionaries were Jews who died in the act of spreading the truth about God.  Later, when gentiles began to accept the Messiah as their Lord and Saviour, they were deemed part of 'The Way' or 'Nazarenes'.  We learn about them in the book of Acts.  They followed the Jews in most every way, just as those who left Egypt amongst the Hebrews were called the "mixed multitude." (Exodus 12:38)

Today, many Christians accept this reality, while some still do not; but all who follow the One True God and willingly accept His commandments will be saved.  As Paul tells us in Romans 11:26:

"all Israel shall be saved; as it is written."  

God's rules and commandments are for all because He loves us.  Jesus especially emphasized this in his teaching.  He is the Son.  He is the mediator.  He is the King.  Only via the Cross can we experience the perfection of the Kingdom to come, the Kingdom on Earth, which He alone shall preside over for a thousand years.  It is through Christ alone-and not through the technological wonders of AI or any other manufactured system-that perfection is actualized and experienced.  

Perfection is impossible this side of heaven.  God paid the price at Calvary for us substitutionally.  We can also experience life today more abundantly if we would only follow His rules, one of which is not to worship false idols like AI.  What is even more important is to understand God's own standard of perfection, as revealed in the incorruptible moral perfection of Jesus.  In Christ alone among the sons of men was the final break with sin-perfection-achieved.  The power of spirit over matter, elsewhere blocked and thwarted at every turn by sin's taint, was in Him truly set free.  Mighty works and miracles thus flowed naturally from Jesus; and the Resurrection, the very height of perfection, ceased to be impossible or even improbable, and instead became inexorable.

The decisive thing for the disciples was their personal experience of human perfection in Jesus.  Daily fellowship with Him taught them that he was surely more than man; and though Calvary momentarily dimmed and clouded that high faith, it ultimately shone out clearly again.  They came to see (as should we) that the Resurrection was certain from the very first; for if Christ died and that was all, then God himself and the very notion of perfection would also perish.  So in the last resort the proof of the Resurrection is the perfect personage of Jesus Himself.  It was impossible that he-as God incarnate-could be subject to death.

Having thus established that only God can create perfection, and that this was achieved via the incarnation of Jesus Christ, let us ask how we as Christians might best shape the future of AI in this century.

All technology is shaped by human behaviour.  This relationship is amplified in AI, which is designed to learn from us.  Since Christians already encounter AI daily, we all share a responsibility to think through its changing role in our lives.  As Christianity shifts to the global south in the 21st century, those Christians will be well positioned to develop key insights on AI.  Within the global south, the most privacy-invasive AI systems will be deployed most widely in middle-income societies with powerful governments and comparatively weak civil society.  AI systems in such nations might make profound contributions to human flourishing.  Yet marginalized Christians across societies will also have the best first-hand experience with AI that exercises strong surveillance and control over financial systems, communications networks, and justice systems.

Christians working in computer science and technology in the U.S., Canada, UK, China, and India are also involved in building the future of AI.  Many of them have not studied ethics or the social sciences and can be shocked by the moral and societal outcomes of their inventions.  For many Christian computer scientists, the church likely provides their primary education on ethics and morality-though such conversations in the computing industry are presumably quite rare.

The future of AI will also be decided in local and national governments around the world by civil servants, activists, journalists, and policymakers who develop the ways we govern AI.  Christian ministries serving these groups might be the veritable modern counterparts of the brick makers in Genesis:  they can help us both to make sense of AI and to determine whether this amazing new technology will bring us closer to, or estrange us from, Perfection.
