A recent report from one of Cranberry Lemon’s experimental police technology labs has shown that a trained Convolutional Neural Network (CNN) misidentified over 90% of the objects in police body cam footage across hundreds of hours of test data from police interactions. This came as surprising news to computer vision engineers, who had estimated they would get at least 50% accuracy with the available training data. However, engineers closely watching the Body Cam CNN development have blamed inaccurate truth data for the dismal object classification accuracy, and they expected even worse results after the police unions took three quarters of their data. Because of the large number of ordinary objects labeled as guns and knives in police reports, most of the inaccurate classifications appear to be lethal-weapon type 1 errors.
CNNs, or Convolutional Neural Networks, are amazing tools for image and video classification. They work by applying convolution nodes that use edge-detecting filters to identify different sorts of shapes and objects over multiple convolution layers. At each layer, those edge-detecting kernel filters are applied to highlight and identify different components of complex images, which can build up into entire objects. Rectified Linear Units (ReLUs) then apply a nonlinearity to each layer’s outputs for forward and backward propagation during the training process. With enough properly labeled training data, a CNN should be able to produce optimal classification estimates.
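For the curious, here is a minimal sketch of the kind of network described above, written in PyTorch. The layer sizes, class count, and class names are illustrative assumptions on our part, not the actual Cranberry Lemon architecture.

```python
# A minimal sketch of the CNN described above, using PyTorch.
# All sizes and names here are illustrative assumptions.
import torch
import torch.nn as nn

class BodyCamCNN(nn.Module):
    def __init__(self, num_classes=4):  # e.g. {none, gun, knife, blunt weapon} -- hypothetical
        super().__init__()
        self.features = nn.Sequential(
            # Each Conv2d learns a bank of edge-detecting kernel filters
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),                      # rectified linear unit after the layer
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x):                   # x: (batch, 3, 224, 224)
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = BodyCamCNN()
logits = model(torch.randn(1, 3, 224, 224))  # one fake body cam frame
```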

The CNN can even be extended to analyze video and facial expression with three-dimensional convolution filters. These 3D voxel filters were initially used for the police body cam identification process until they began misidentifying normal human expressions as physical attacks on the police officer at 82 times the actual rate. This was the first moment that the Cranberry Lemon researchers began to suspect they were working with mislabeled data.
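As a hedged illustration, a single 3D convolution of this sort might look like the following in PyTorch; the kernel and clip sizes are assumptions for the example only.

```python
# A sketch of 3D voxel filtering over video, in PyTorch.
# Kernel and clip sizes are assumptions for illustration only.
import torch
import torch.nn as nn

# Conv3d slides a (time, height, width) kernel over a video clip,
# so a single filter can respond to motion as well as shape.
conv3d = nn.Conv3d(in_channels=3, out_channels=8, kernel_size=(3, 3, 3), padding=1)

clip = torch.randn(1, 3, 16, 112, 112)   # (batch, RGB, 16 frames, H, W)
motion_features = conv3d(clip)           # -> (1, 8, 16, 112, 112)
```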
The CNN body cam project has been under development for the last four years in Cranberry Lemon’s special machine learning forensic sciences division. The algorithm was developed to add an objective perspective on police body cam footage for interpretations in court. In 2018, a dozen different machine learning experts attempted to automatically analyze police body camera footage using Atomic Visual Actions, only to discover the ‘everyone is a criminal’ problem in this particular machine learning application. Their data and algorithm had a lot of trouble distinguishing the finer details. For instance, many citizens on their cell phones were mistaken for smokers. The new algorithm aimed to label only the important details, like violent actions or the presence of lethal weapons. This idea, it turns out, was entirely more monstrous and misguided than previously imagined.

In attempting to refit the same computer vision classification problem, the machine learning engineers of Cranberry Lemon ended up creating the mythical Santa Claus from Futurama, who was programmed with a naughty-nice threshold high enough to condemn all of humanity. After the CNN trained on thousands and thousands of hours of police body camera footage and after-action police reports, it began labeling all non-Western-European minority males in their 20s and 30s as threats, and everything those men held as guns or knives. While the Futurama Santa Claus retreated to Neptune after Christmas, the new AI would have disastrously tyrannized humanity on every beat and block. Thankfully, the engineers of Cranberry Lemon were up to date on their required annual ethics training and recognized the enormous type 1 error as a means of enslaving millions rather than of making the judicial system objective.
While such an error is often corrected by human intervention, that proved difficult once the CNN began oversensationalizing its results. Not only did it mislabel objects, but once it came to any conclusion one way or another, it highlighted nearly every pixel, with extreme prejudice, as a serious firearm, knife, or blunt weapon. For instance, once fed a picture of the podcaster Joe Rogan taking the medication Ivermectin, the CNN misinterpreted the entire image as a horse infected with worms. Once a conclusion was reached in the early layers of the CNN, there was no convincing it otherwise; the network amplified the result through each successive layer into an extreme echo chamber.
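As a toy numerical illustration (not the actual model), the failure mode looks something like this: once an early layer fires on a spurious feature, overconfident positive weights and ReLUs can only amplify the signal, never argue it back down.

```python
# A toy illustration of the "echo chamber" failure mode described above.
# The weights and the starting signal are made-up numbers for the demo.
import numpy as np

relu = lambda x: np.maximum(x, 0.0)
activation = np.array([0.1])              # faint, spurious "weapon" signal
for layer in range(6):
    activation = relu(3.0 * activation)   # overconfident positive weights
    print(f"layer {layer}: weapon evidence = {activation[0]:.1f}")
# 0.3, 0.9, 2.7, 8.1, 24.3, 72.9 -- the whisper becomes a verdict
```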

In an attempt at an honest, unbiased test, footage was collected by unpaid interns who walked around Pittsburgh wearing body cameras. After the results came back with the criticism that our interns looked nothing like cops, we spent some discretionary research budget on uniforms for our simulated officers, pictured below. This only made the data noisier by attracting suburban Pittsburgh mothers who thought our interns looked far too cute. Despite the increased noise, the test appeared to capture the same type 1 error for minorities holding lethal weapons.

Any notion of creating an ethical, real-life, genital-shooting RoboCop remains far from any non-white-supremacist reality. A small scrub of the training data and its police-report-labeled truth data revealed all sorts of errors, as the sketch below suggests. Not only were most lethal weapons and small civil infractions grossly mislabeled, but I was not even close to going 30 over in a school zone, as the CNN interpretation estimated. It will be a while before there is an automated objective interpreter of police body camera footage. Until there are some objective human sources, there’s no possible way we can train an AI to do so.
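For anyone attempting a similar scrub, here is a minimal sketch of the audit; the CSV file and column names are hypothetical, and the idea is simply to compare police-report labels against independent human review to measure the lethal-weapon type 1 error rate.

```python
# A minimal sketch of a truth-data scrub. The file and columns are
# hypothetical stand-ins for police-report labels vs. human review.
import pandas as pd

labels = pd.read_csv("bodycam_labels.csv")   # hypothetical audit file
false_pos = (labels["report_label"] == "weapon") & (labels["review_label"] == "none")
negatives = labels["review_label"] == "none"
print(f"Type 1 weapon error rate: {false_pos.sum() / negatives.sum():.1%}")
```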
If you enjoyed this clickbait, please like, share, and subscribe with your email, our Twitter handle (@JABDE6), our Facebook group here, or the Journal of Immaterial Science Subreddit for weekly content.
Even though this was a made-up clickbait article, the above-mentioned article and IEEE news story are very real, and it is probably a terrible idea to pair any artificial intelligence with police data. Because of the training process in AI, it will always reinforce any systematic bias that exists in the data collection process.
If you REEEEALY love the content, for the equivalent price of a Chipotle burrito, chips, and queso, you could buy our new book Et Al, with over 20 hand-picked JABDE articles for your reading pleasure. It’s the perfect Christmas/birthday gift for confusing your unsuspecting family members! Order on Amazon here: https://packt.link/at4bw Please rate and review so that you can brag to your friends about having opinions or showcase your excellent taste in reading material!
