On the morning of July 14, 2025, a team of U.S. Marshals arrived at Angela Lipps’s Tennessee home and arrested her at gunpoint while she was babysitting four young children. Lipps, a fifty-year-old grandmother who had never flown on an airplane and had never set foot in North Dakota, found herself charged with eight felony counts of bank fraud in Fargo — a city she had never visited.
The arrest warrant rested substantially on a facial recognition match generated by Clearview AI, a commercial system operating on a database of billions of photos scraped from the internet. No one from the Fargo Police Department called to question Angela before the warrant was issued. For 108 days she sat in a Tennessee jail cell without bail, held as an accused fugitive awaiting extradition to face charges in a state she had never entered.
When authorities finally reviewed her bank records in December, the records showed what any interview could have established months earlier: Lipps had been in Tennessee the entire time, her transactions placing her squarely at home while the actual fraud occurred twelve hundred miles away. The charges were dismissed on Christmas Eve. The Fargo Police Department offered no apology and no explanation for why no investigator had spoken with her during more than five months of incarceration.
The Fargo case is more than a cautionary tale about one flawed investigation. It exposes a growing institutional habit of treating AI outputs not as fallible signals requiring human verification, but as authoritative conclusions carrying near-prosecutorial weight. A Fargo detective reviewed Lipps’s social media and driver’s license, then concluded she matched the suspect based on “facial features, body type and hairstyle and color” — ratifying what the machine had decided rather than examining the evidence independently. The algorithm had become the accuser, and every subsequent step in the case took that premise as settled fact.
Lipps is not the first. In Detroit, police wrongfully arrested Robert Williams after a facial recognition system matched him to shoplifting surveillance footage; the ACLU called it the first documented wrongful arrest attributable to the technology. Porcha Woodruff, eight months pregnant, was arrested in front of her crying children for a carjacking she could not possibly have committed; the detective who signed the warrant knew the suspect was visibly pregnant but did not pause to investigate. The charges were eventually dismissed, and Detroit revised its protocols, but the broader institutional habit of treating algorithmic outputs as authoritative has not changed.
Part of the problem is technical and well-documented. A 2019 National Institute of Standards and Technology study analyzing 189 facial recognition algorithms found that false positive rates were ten to one hundred times higher for Black and Asian faces than for white faces — a structural feature of the technology as currently deployed, not a software anomaly awaiting the next update.
Scripture identified the underlying human failure long before AI existed. Proverbs 14:15 warns: “The simple believes everything, but the prudent gives thought to his steps.” Precision technology amplifies the consequences of abandoned discernment; it cannot supply what it demands.
What makes this especially alarming is the institutional scale AI now brings to what was once an inherently human process. Historically, building a criminal case required investigators to develop evidence interview by interview. Today, a commercial facial recognition query can generate a probable cause affidavit before anyone has spoken to the suspect. The same dynamic is spreading into employment screening, academic integrity enforcement, and financial fraud detection: hiring algorithms filtering applicants before a human reads a résumé, plagiarism detectors issuing academic sanctions despite known false positive rates, each concentrating authority in software that citizens cannot meaningfully challenge and no official is required to personally defend.
In my book The New AI Cold War, I examine how this drift toward technocracy threatens free societies from within, even as authoritarian states pursue it more openly. China has fully integrated facial recognition, predictive policing, and behavioral scoring into a surveillance infrastructure that tracks dissidents, suppresses religious minorities, and enforces political conformity at national scale. Russia and Iran follow the same model. Free societies rarely surrender liberty all at once, however. The erosion happens step by step, rationalized as efficiency and necessity, until the infrastructure of control is already in place and deference to algorithmic authority has become the default.
Christians should be especially alert to the spiritual logic driving this drift. Modern technocracy rests on the assumption that algorithmic judgment is more reliable than human judgment because it appears objective, mathematical, and emotionally detached. But machines possess neither wisdom nor conscience — they cannot extend mercy, weigh moral complexity, or bear accountability before God. Psalm 20:7 declares: “Some trust in chariots and some in horses, but we trust in the name of the Lord our God.”
Every generation finds a new technology in which it is tempted to place messianic confidence, and ours has settled on AI. Human beings are made in the image of God — not as entries in a matching database — and justice requires the kind of accountable, mercy-tempered judgment that no algorithm can replicate or replace. I explore that argument at length in AI for Mankind’s Future.
The Angela Lipps case is a warning lawmakers and law enforcement must heed. No arrest warrant should ever hinge on a facial recognition match without independent, corroborating investigation. Citizens must have an explicit right to challenge algorithmic determinations, including access to the underlying data and methods. Independent audits must be mandatory for any AI system used in law enforcement, employment, or education. None of this is technically hard; what’s missing is the political will to keep human beings morally accountable for decisions made in their name and to ensure no American’s freedom depends on a machine’s confidence score.
Regrettably, Angela spent Christmas Eve 2025 in a North Dakota jail cell, extradited to face charges for a crime committed in a state she had never otherwise set foot in, flagged by software she never authorized, never questioned by the investigators who sought her arrest. That outcome followed a chain of human choices: to trust the algorithm, to skip the interview, and to offload accountability to a system that has none.
Every pastor, parent, legislator, and citizen has some influence over whether this becomes standard practice in America. Now is the time to use it.
Notice: This column is printed with permission. Opinion pieces published by AFN.net are the sole responsibility of the article's author(s), or of the person(s) or organization(s) quoted therein, and do not necessarily represent those of the staff or management of, or advertisers who support the American Family News Network, AFN.net, our parent organization or its other affiliates.