Plagiarism in Pop Culture: Elsbeth

I will admit it. I am a sucker for a cozy mystery. Throughout this series, I have written about two episodes of Columbo, two of Death in Paradise and one of Beyond Paradise. I’ve also looked at Instinct, Elementary, Law & Order: Criminal Intent and Criminal Minds.

Between large episode counts and the constant need to find motives for various crimes, plagiarism is a popular focus for such series.

In that regard, the new “cozy mystery” show Elsbeth is no different. The series follows the character Elsbeth Tascioni, a Chicago lawyer brought to New York as part of a consent decree. Though unwelcome by the force, she proves herself a competent investigator, cracking some of the most challenging cases.

However, Elsbeth manages to do something impressive by combining two separate plagiarism threads into one story. It is also one of the first mentions of using AI to commit plagiarism in any pop culture show.

All this happened in episode 8 of the first season, titled Artificial Genius.

Content Warning: Spoilers for Elsbeth Season 1, Episode 8

The Plot

The episode begins with Quinn Powers, a young tech CEO, pitching her app, Cerebus, to investors. Quinn explains that Cerebus is a crime reporting app meant to warn residents of crimes in their area. She talks about how she was a victim of a mugging years earlier and wished she had gotten a warning.

The investors seem thrilled. However, as she leaves, she is confronted by a local reporter, Josh Johnson, who claims to be working on a story about her that will reveal some details from her past.

Quinn becomes flustered but hatches a plan. She orders her employee to enter a new crime report about a local dognapper that does not exist.

Fabricating a dognapper

That evening, she starts a virtual meeting with her team. Halfway through, she cuts to a pre-recorded presentation and uses that time to slip out, murder Josh with a cattle prod and kidnap his dog (later releasing the dog in a park). However, Quinn loses her headband during the murder, meaning it disappears between the live intro and the live Q&A at the end.

While at Josh’s house, she uses his face to unlock his laptop and swap out his article about her. She then grabs a few other incriminating pieces and leaves.

Quinn Murders Josh

Though the police initially seem fooled by the dognapper story, both Elsbeth and her partner, Officer Kaya Blanke, suspect something else is up. Kaya, in particular, is suspicious of Josh’s article and runs it through an AI detector. It turns out that the article was written by an AI named Smart Cheat.

Cerebus also exposes its flaws. The police realize that its “AI-generated” suspect profiles are useless and unreliable. Eventually, Elsbeth tracks down the secret Quinn has been hiding: she is not the actual creator of Cerebus.

That honor falls to Ellen Davis, Quinn’s high school friend. Ellen created the app after Quinn was mugged but abandoned it after she realized it wasn’t reliable and was too prone to misuse.

After Josh’s dog is found, Elsbeth is forced to care for it. She then learns that the dog likes to hide things, which leads her to find Quinn’s headband at Josh’s apartment. That proves Quinn was in the apartment, resulting in her arrest.

Understanding the Plagiarism

The episode is unusual because it has two separate plagiarism plot lines. Both are brief, but both are very important to the overall story.

The first deals with the AI article. When Quinn murdered Josh, she unlocked his laptop with facial recognition and swapped out his article. Ignoring the technology issues, namely that you need open eyes to use facial recognition and that it’s unlikely his article would be a random file on his desktop, the plagiarism itself doesn’t make sense.

Though I can believe AI-obsessed Quinn would use an AI to generate the article, the changes to the laptop should be easily noticed through other means. Any reasonable forensic analysis of the data would show that the file was swapped.

However, the bigger problem is how the plagiarism was eventually discovered. Kaya ran the article through an AI detector, which proved definitively that his most recent article was AI-generated while his earlier ones were not.

That is not possible at this time. Even the best AI detection is not that reliable. Besides, there would be much better clues, namely changes in writing style, formatting and so forth. The AI text would read like a different author than Josh, which would have been an easy way to spot the issue.

The second plagiarism issue dealt with Quinn not being the actual creator of Cerebus. While I understand that Ellen didn’t want to seem like the “sour grapes” type, the simple fact is that she is likely the copyright holder of the software.

By all accounts, she created the software as a favor for Quinn. She never mentioned being an employee, selling it or signing her rights away. She said she walked away over ethical concerns.

It beggars belief that Ellen did not come forward to raise her aforementioned ethical concerns or claim her piece of the impending IPO. There’s too much at stake here to believe she would remain silent.

Bottom Line

All in all, the plagiarism was only here to serve the plot. Though it’s unrealistic, the story would not work if it didn’t unfold this way. The same is true of the technological issues and other elements of this episode.

I’m willing to give a pass to the story about Ellen being the original creator of Cerebus. Though unrealistic in how it unfolded, it’s meant to call back to stories like the original founders of Tesla and Steve Wozniak with Apple. The tech industry has a history of sidelining or hiding original creators, something the show was playing to.

So, while not fully believable, it does speak to a real issue.

The AI plot, however, is actively dangerous. AI is a major issue right now, and one of the core challenges is that we do not have a reliable automated way to detect AI writing. This show indicated that we do.

The CSI Effect is the phenomenon where jurors, influenced by fictional shows, believe that technology is capable of things it isn’t. It’s known to have a pro-defense bias in court cases.

This could have a similar effect on AI. Making it look like AI detection is a solved problem could cause some to assume material was written by humans when it wasn’t.

While I understand this is a work of fiction, we’ve seen that representation like this can have real-world consequences. To make matters worse, it wasn’t necessary as there were other, easier ways to show Josh didn’t write that piece.

Issues like this are one of the reasons I write this series, as pop culture influences real-world perception. However, this was both harmful and unnecessary.

While I’m an overall fan of the show, this point had me scratching my head.

More Plagiarism in Pop Culture (In Reverse Order)

Want more Plagiarism in Pop Culture? There are 42 others to check out!

Want to Reuse or Republish this Content?

If you want to feature this article on your site, in your classroom or elsewhere, just let us know! We usually grant permission within 24 hours.

Click Here to Get Permission for Free