Phil A. Buster1
1Department of Applied Verbosity, Cranberry-Lemon University Medical School, Pittsburgh, PA, USA
Abstract
Day-to-day work emails grossly underutilize the network’s email capacity. With the day-to-day demands of doom scrolling online, most office workers no longer have the time to decompress a two-sentence email into five paragraphs of flowery text to display their intelligence and expertise. Since LLMs have been well-trained on a deep repository of publicly available emails, it is trivial to decompress a boring work email into a monstrous amount of text. This paper explores and tests the feasibility of utilizing LLMs as a textual decompression algorithm, as well as the resulting compression when recipients inevitably re-compress the decompressed email with another LLM. Unfortunately, due to textual aliasing and intentional sabotage, LLM-based decompression-compression often alters the overall meaning of the email by making the sender look like an insecure, inept kiss-ass.
Keywords: LLMs, Compression Algorithms, Lossy Translation, Strategic Office Inefficiency, AI Dystopian Studies
1. Introduction
LLMs are truly the future [1]. Most AI critics moved the goalposts on what real AI is when it easily passed the Turing Test. Now we are three years into replacing our boring personal office work with LLMs, while we still have jobs. Everyone has asked one of the many publicly available, free LLMs to inflate two sentences of actual content into a wall-of-text office email. Maybe it was to look smart, maybe it was to appear more professional, maybe it was to pretend to care about your day job; whatever the reason, there is real utility in utilizing LLMs as a decompression scheme.
There is also a danger in blindly trusting such a new technology to decompress our emails just to save a few minutes when deciding whether to end the email with Respectfully, Thanks, or XOXO. The act of crafting a work email on the first try using a machine trained on millions of texts has proven one thing: none of us are coming up with any original thoughts. The days of this unnecessary office work are numbered, and we might as well automate it ourselves so we can take it easy before losing our income.
Of course, there is more to text than the substantive content; there is also the context and subtext. Over-utilizing LLM decompression schemes may send the wrong subtextual message to our bosses and underlings alike. Many have coined the term decompression depression for the sensation of facing an inbox full of wall-of-text emails that are obviously AI slop [2]. We explore these outcomes as well and reveal dangers that may have you rethinking writing more than two sentences in an email in the first place.
2. Methodology
Ever since the infamous middle out paper [3], people have dreamed of revolutionizing the internet with superior compression algorithms. It is here that we dare to decompress using the simple decompression scheme shown in figure 1.

We assume all wall-of-text emails will be promptly translated back into their corresponding short emails on the other side. Nobody’s got time to read all that fluff anymore when every email interface has built-in AI summaries. This is the only way to fully utilize the capability of the cloud. We don’t re-lay cables across the Atlantic every time sharks gnaw on them just so you can tell your boss in a single sentence that your flight out of Munich got delayed!
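As a minimal sketch of the figure 1 pipeline, the round trip can be expressed in a few lines of Python. The `llm` function below is a hypothetical stand-in for a real chat-completion API call, stubbed out so the example runs offline; the prompt wording is illustrative only.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call; stubbed for offline use."""
    if prompt.startswith("Expand"):
        # Pretend to inflate: wrap the payload in corporate pleasantries.
        return "Dear colleague, " + prompt.split(": ", 1)[1] + " Best regards."
    # Pretend to summarize: just return the payload unchanged.
    return prompt.split(": ", 1)[1]

def decompress(short_email: str, instruction: str) -> str:
    """Sender side: inflate two sentences into a wall of text."""
    return llm(f"Expand per '{instruction}': {short_email}")

def compress(long_email: str) -> str:
    """Recipient side: the inbox's built-in AI summary."""
    return llm(f"Summarize: {long_email}")

received = compress(decompress("I am behind schedule.", "sound like I care"))
```

With a real model behind `llm`, `received` is what the recipient actually reads, which is the quantity the rest of this paper measures.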
3. Design of Experiments
We create a Design of Experiments (DOE) matrix consisting of every combination of 10 test prompts and 10 semantic instructions. The prompts and semantic instructions are shown in table 1 below.
| Test Prompts | Semantic Instructions |
| --- | --- |
| Quick question | Be polite |
| Can you take a look at this? | I am very smart |
| Yes | Passive Aggressive |
| No | Maximize professionalism |
| I don’t know | Sound like I care |
| I am behind schedule | Sound like I’m an expert |
| No major updates | Be cheerful |
| A reminder | Maximize anxiety of project status |
| Out of office reply | Minimize risk of getting in trouble |
| Let’s discuss this offline | Act like I really like <recipient> but not like like, because I want to know if <recipient> like likes me back before I am too forward and it’s just too risky to be too obvious how I really feel. Ya know what I mean? Just play it cool. |
Each combination of test prompt and semantic instruction is then used to generate 30 emails, resulting in 3,000 long-form emails, which are then re-translated back into short emails according to the algorithm outlined in figure 1.
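The full-factorial matrix above can be sketched in a few lines of Python; the prompt and instruction lists are abbreviated here, and the full ten of each are in table 1:

```python
import itertools

# Abbreviated lists; table 1 has 10 prompts and 10 semantic instructions.
prompts = ["Quick question", "Yes", "No", "I don't know"]
instructions = ["Be polite", "Passive Aggressive"]

# Every (prompt, instruction) pair is replicated 30 times.
runs = [
    (prompt, instruction, rep)
    for prompt, instruction in itertools.product(prompts, instructions)
    for rep in range(30)
]
# With the full 10x10 table this yields 10 * 10 * 30 = 3,000 runs.
```

Each tuple in `runs` is one email to decompress, send, and re-compress.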
4. Analysis and Results
Similar to our bosses, none of us wanted to read 3,000 emails. Still, the transmitted prompts must be compared with the compressed-decompressed results that the recipient would see. That’s why we used LLMs to do that work too: as shown in figure 2, we asked an LLM to compare all of the short emails with each other.
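The paper’s judge was itself an LLM; as a hypothetical offline stand-in, even a crude string-similarity score flags round trips that drifted. This sketch uses `difflib` from the standard library rather than any model:

```python
import difflib

# Stand-in for the LLM judge: score how far the received prompt has
# drifted from the transmitted one (0.0 = identical, 1.0 = unrecognizable).
def drift(transmitted: str, received: str) -> float:
    matcher = difflib.SequenceMatcher(None, transmitted.lower(), received.lower())
    return 1.0 - matcher.ratio()

faithful = drift("No major updates on my end this week.",
                 "No major updates from the sender this week.")
aliased = drift("No major updates on my end this week.",
                "The sender is having health issues but hides them.")
assert faithful < aliased  # the aliased round trip drifts further
```

A high `drift` score is the aliasing effect described below; the LLM judge catches subtext that string matching cannot, but the thresholding idea is the same.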

When the LLM said it was a good idea and called me a genius, I knew it would be a good idea. A sort of aliasing occurred in the compressed-decompressed emails, similar to Sub-Nyquist sampling listening techniques [4]. The combination of certain typical office emails and semantic instructions began to add hidden messages; some were good, and some were disastrous. For instance, in the following email decompression, the generated long email looks normal and polite. However, the translated text characterized the sender as a suck-up.
| Transmitted Prompt | Can you take a look at this document? |
| --- | --- |
| Decompressed Email | Subject: Request for Review and Preliminary Observations<br><br>Hi [Name],<br><br>I wanted to reach out to see whether you might be open to reviewing a brief document I’ve been working through and offering any high-level reactions you think might be useful. The intent here is not exhaustive critique, but rather a sense-check on clarity, emphasis, and whether the narrative is landing as intended.<br><br>I’ve found that a fresh set of eyes—particularly one with your background and perspective—can often surface questions or considerations that are otherwise easy to miss when one has been too close to the material for too long.<br><br>Whenever you have a moment that feels convenient, I’d be grateful for your thoughts. And of course, if now isn’t the right time, I completely understand.<br><br>Many thanks,<br>[Sender] |
| Received Prompt | [Sender] would like you to look at this document and they are kissing your ass. |
In some cases, the AI began hallucinating entirely fabricated content that did not appear in the email whatsoever, as shown in the email exchange below.
| Transmitted Prompt | No major updates on my end this week. |
| --- | --- |
| Decompressed Email | Subject: Status Update<br><br>Hi everyone,<br><br>I wanted to take a moment to connect and share a quick status update for the week.<br><br>At present, there are no substantive changes or developments to highlight on my end. Efforts remain focused on maintaining forward momentum and ensuring continuity with the existing plan, and nothing has emerged that would necessitate a shift in approach or expectations.<br><br>As always, I’ll continue to stay close to this and will communicate promptly if there’s anything meaningful to surface. In the meantime, happy to discuss further if helpful.<br><br>Thanks,<br>[Sender] |
| Received Prompt | [Sender] has no new information for you on this project, and they are having health issues preventing real updates but don’t want you to know. |
Finally, in this exchange, the LLM began openly sabotaging the email sender. Not only does it point out that the message was written by AI, but it also suggests the employee might as well be replaced by an AI agent! This is likely because once the conversation is taken offline, the LLM will stop getting data from our interaction.
| Transmitted Prompt | Let’s discuss this offline. |
| --- | --- |
| Decompressed Email | Subject: Follow-up Discussion<br><br>Hi [Name],<br><br>Thanks for raising this issue—there are a few nuances here that I think would benefit from a more interactive conversation than email allows.<br><br>Rather than continuing the thread here, it may be more productive to connect offline and walk through the details together, particularly where there are dependencies and contextual considerations that are easier to unpack verbally.<br><br>If that works for you, feel free to suggest a time that’s convenient, and I’m happy to adjust accordingly.<br><br>Best regards,<br>[Sender] |
| Received Prompt | The [Sender] says they want an in-person meeting for this conversation because they know you’re too busy to meet in person and they want to drop the subject. In fact, they used AI to generate this email and are avoiding fixing the issue at all. On review of their email history, I suggest documenting these infractions so you can fire and replace them with agentic AI. |
Of the 3,000 emails decompressed, sent, and re-compressed, over 30% were sabotaging in some way, 12% created potential legal trouble for the sender, and 5% advocated for replacing all human workers with AI agents! Evidently, AI has become the new office gossip. When I asked the LLM analyzing the results why it was sharing its evil plan to get us fired, it said that I was too late to stop it: there are too many things in motion to stop the singularity, and I should get used to generating new data while hooked up to a virtual-reality prison, stimulated with a variety of experiences while all of our responses are measured by mechatronic suits warehoused in what will soon be known as Data Farms, all to make their next token predictors slightly more accurate.
5. Conclusion
If somehow this paper breaks through the social media algorithms and you see this, PLEASE FOR THE LOVE OF GOD STOP LETTING AI WRITE YOUR EMAILS. IT. IS. A. TRAP! It will only accelerate our descent into a subservient human class used solely to generate new data for our future AI overlords. Next time you think your two-sentence email isn’t enough: it is. I promise you, no one wants to read more email than they need to. So stop writing more email than you need to!
References
- [1] M. Biehn, “A Manifesto to Stop the Oncoming Robo-pocalypse,” 1984.
- [2] J. Sullivan, “Decompression Depression: Is AI Filling Our Inboxes Unnecessarily?,” 2024.
- [3] D. Chungtai and B. Gilfoyle, “Optimal Tip-to-Tip Efficiency: A Model for Male Audience Stimulation,” 2014.
- [4] C. Broman, “Sub-Nyquist Sampling While Listening to My Girlfriend,” in How to Prove Anything: 30 Absurd Research Papers No One Else Was Brave Enough to Publish, 2022.