On Christmas Eve 2025, a 58-year-old grandmother walked out of a North Dakota jail. Not because she had completed a sentence. Not because justice was served. Angela Lipps walked free after five months of incarceration for a crime she had absolutely nothing to do with. She was 1,000 miles away in Tennessee when the bank fraud was committed. The only evidence against her was a facial recognition match from Clearview AI that police later admitted contained what they delicately called "a few errors."
The Digital Cage: How an AI Algorithm Stole Five Months From Angela Lipps
Five months. That is approximately 150 days. That is missing Thanksgiving with family. That is watching the seasons change from a concrete cell while your name gets dragged through the mud. That is the real, human cost of handing over the power of accusation to algorithms we do not understand, cannot audit, and have no control over.
I keep thinking about that distance. One thousand miles. It is not even close. It is not a case of mistaken identity where someone could arguably be in two places. It is not a technical glitch that required more verification. It is a failure so profound that it exposes everything broken about how we are deploying artificial intelligence in law enforcement. When the tech says "match" and the human stops asking questions, we have surrendered something fundamental about how justice is supposed to work.
Angela Lipps is not a statistic. She is a person who had her life interrupted, her reputation damaged, and her trust shattered. She is not alone. This keeps happening. Robert Williams in Detroit and Porcha Woodruff, who was eight months pregnant when she was arrested by Detroit police, are just two names on a growing list. Each represents someone who discovered that the technology sold as a tool for public safety can just as easily become a digital cage.
When an algorithm accuses, who do you appeal to? When the computer says you were there, how do you prove you were a thousand miles away? The burden shifts in ways we have not fully grappled with. Traditional evidence can be challenged, examined, and questioned. But AI output carries this false aura of scientific objectivity. It is math, right? Math does not lie. Except when it does. Except when the training data is biased, the thresholds are set wrong, and the checks and balances that should catch errors are stripped away in the name of efficiency.
The Black Box Problem
Clearview AI built its business by scraping over 30 billion images from social media platforms, news sites, and the open web. They did not ask permission. They did not verify accuracy. They built a face-matching engine and sold access to law enforcement agencies desperate for technological solutions to complex problems.
Facial recognition accuracy varies wildly depending on the quality of the image, the angle of the face, and lighting conditions. Critically, it also depends on the demographics of the person being identified. The National Institute of Standards and Technology (NIST) has found that some algorithms have error rates significantly higher for African American and Asian faces compared to Caucasian faces. This is a predictable consequence of training data that does not represent the full diversity of the population.
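Auditing for that kind of disparity is not exotic, either. Here is a minimal sketch, with hypothetical data and field names, of how you would measure false match rates per demographic group from labeled verification trials; it is exactly the kind of check that should run before any deployment, not after a wrongful arrest.

```python
from collections import defaultdict

# Labeled verification trials: did the system declare a match, and was it
# actually the same person? Records and field names here are hypothetical.
trials = [
    {"group": "Black", "declared_match": True,  "same_person": False},
    {"group": "White", "declared_match": False, "same_person": False},
    {"group": "Asian", "declared_match": False, "same_person": False},
    # ... thousands more labeled trials in a real audit
]

def false_match_rate_by_group(trials):
    """False match rate per group: wrong 'match' verdicts / all different-person trials."""
    false_matches = defaultdict(int)
    impostor_trials = defaultdict(int)
    for t in trials:
        if not t["same_person"]:            # different-person (impostor) comparison
            impostor_trials[t["group"]] += 1
            if t["declared_match"]:         # the system wrongly said "match"
                false_matches[t["group"]] += 1
    return {g: false_matches[g] / n for g, n in impostor_trials.items()}

for group, fmr in sorted(false_match_rate_by_group(trials).items()):
    print(f"{group}: false match rate = {fmr:.1%}")
```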
The deeper failure is procedural. Law enforcement agencies have adopted these tools without adopting the oversight frameworks that would catch errors before they destroy lives. In the case of Angela Lipps, the police treated a match as evidence rather than as a lead requiring independent verification. They did not check alibis. They did not look at the geographic impossibility. At every level, opacity served as a shield for incompetence.
I have spent years building systems, and I know the seductive promise of automation. But you cannot automate judgment, and you cannot outsource accountability. When we treat AI outputs as answers rather than as inputs, we create failure modes where a computer's guess becomes probable cause.
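Here is what "a match is a lead, not evidence" could look like as an explicit gate. This is a minimal sketch with hypothetical names, fields, and scores, not anyone's real workflow, but it shows how cheap it is to force the human checks that were skipped in Angela Lipps's case.

```python
from dataclasses import dataclass

@dataclass
class Lead:
    """A facial recognition hit, recorded as a lead to investigate, never as proof."""
    suspect_name: str
    match_score: float          # similarity score reported by the vendor system
    alibi_checked: bool         # has a human verified where the person actually was?
    geography_plausible: bool   # could they physically have been at the scene?
    independent_evidence: bool  # is there any corroboration besides the algorithm?

def may_escalate(lead: Lead) -> bool:
    """No score is high enough on its own; every human check has to pass first."""
    return lead.alibi_checked and lead.geography_plausible and lead.independent_evidence

# In Angela Lipps's case, geography alone should have killed the lead.
lead = Lead("Angela Lipps", match_score=0.91, alibi_checked=False,
            geography_plausible=False, independent_evidence=False)
print(may_escalate(lead))  # False: the hit stays a lead, nothing more
```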
Open Source as Resistance: Building Liberation Infrastructure
I believe technology should be liberation infrastructure, not a digital cage. What happened to Angela Lipps is the opposite of everything software should be. It is opaque instead of transparent. Centralized instead of locally controlled. Mysterious instead of accountable.
Open source is a philosophy of power distribution. When code is open, it can be audited. When algorithms are transparent, their failure modes can be identified and corrected. When systems are locally controlled, the people affected by them have a voice in how they are deployed. This matters enormously when we are talking about tools that can deprive people of their liberty.
Imagine an alternative to the Clearview AI approach. Imagine facial recognition systems built on open datasets with documented limitations, deployed with mandatory verification protocols, and subject to regular audits by independent researchers. The philosophy at PhantomByte centers on three principles that would have prevented this ordeal.
First, local control over data and algorithms. When your community owns the infrastructure, you can set the rules about how it is used. Second, transparency as a requirement. Every algorithmic decision that affects liberty should be explainable in specific technical detail. Third, human judgment should be preserved and enhanced rather than replaced. Technology should inform human decisions, not make them automatically.
Every line of code carries responsibility. The question is not just "Does it work?" but "What happens when it fails?" and "Who bears the cost of those failures?" Angela Lipps paid that cost. The system that incarcerated her did not.
What You Can Do: Building Systems That Protect
The Angela Lipps case is a call to action for everyone who builds, deploys, or is affected by AI systems.
If you are in law enforcement or policy: Demand algorithmic impact assessments before deploying facial recognition. Require independent audits of accuracy rates across demographic groups. Implement mandatory verification protocols that treat AI output as a lead requiring investigation, not as evidence sufficient for arrest.
If you are a developer: Consider the power dynamics in what you build. Open source your algorithms when possible. Document limitations explicitly. Build in safeguards against automation bias. Create audit trails that expose how decisions were made.
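On that last point, an audit trail does not need to be complicated. Here is a rough sketch, using a hypothetical schema and hypothetical identifiers, of logging every algorithmic decision with the model version, inputs, threshold, and the human who acted on it, chained so that later tampering is detectable.

```python
import datetime
import hashlib
import json

def record_decision(log_path, *, model_version, input_ref, score, threshold,
                    action, reviewed_by):
    """Append one tamper-evident record per algorithmic decision (hypothetical schema)."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,  # exactly which model produced the score
        "input_ref": input_ref,          # pointer to the probe image, not the image itself
        "score": score,
        "threshold": threshold,          # the threshold in force when the call was made
        "action": action,                # e.g. "lead opened", never "arrest authorized"
        "reviewed_by": reviewed_by,      # the human who owns the decision
    }
    # Chain a hash of the previous line so edits to older records are detectable.
    prev = ""
    try:
        with open(log_path, "rb") as f:
            lines = f.readlines()
            if lines:
                prev = lines[-1].decode()
    except FileNotFoundError:
        pass
    entry["prev_hash"] = hashlib.sha256(prev.encode()).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_decision("decisions.log", model_version="frm-2.3.1",
                input_ref="case-4512/probe-01", score=0.91, threshold=0.85,
                action="lead opened", reviewed_by="det_badge_207")
```

The point is not this particular format. The point is that when a record like this exists, a defense attorney, an auditor, or a journalist can reconstruct how the decision was made.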
If you are a citizen: Pay attention to how your local police department uses technology. File public records requests about facial recognition procurement and policies. Support legislation that requires transparency and accountability for algorithmic policing tools. Vote for representatives who understand that technology governance is civil rights governance.
Technology that cannot be audited will produce injustice that cannot be corrected. Centralized power over surveillance infrastructure inevitably leads to abuse. Angela Lipps is free now, but the system that imprisoned her is still running. Clearview AI still operates. Police departments are still deploying facial recognition without adequate safeguards. The next wrongful arrest is likely already in progress.
We must democratize control over the systems that shape our lives. We must not wait for the next victim before we start.