Should a blog post from someone’s website end up verbatim in a newspaper or magazine? Is it acceptable to take the dialogue from a television news show and publish it in print? With technology and media being what they are today, this sort of thing happens all the time. Duplicate content published on the internet will get flagged by software that checks for previous publication, but what about plagiarism that crosses from one medium to another? How do you avoid it? It happens every day, both on and off the internet. Is there a way to prevent it?
In many cases the answer is no. The best plagiarism software can only check submitted content and match it against similar content found online or in offline proprietary databases. Video and television, unless a script has been published on the web, cannot be flagged this way. The best you can do is transcribe the story and then run the transcription through the plagiarism software. That isn’t as difficult as it might sound: closed captioning can do the transcription for you if it’s available, and transcripts may also be available directly from the media outlet that airs the story.
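For readers curious what “matching” actually means in practice, here is a minimal, hypothetical sketch in Python of the kind of comparison such tools perform: it breaks two texts into overlapping word sequences and scores their overlap. The shingle size, threshold, and sample sentences are illustrative assumptions, not the workings of any real plagiarism checker.

```python
# Minimal sketch of duplicate-content matching using word-level
# shingles and Jaccard similarity. The shingle size and threshold
# below are illustrative choices, not values from any real product.

import re


def shingles(text: str, size: int = 5) -> set:
    """Break text into overlapping word n-grams ("shingles")."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {tuple(words[i:i + size]) for i in range(len(words) - size + 1)}


def jaccard(a: set, b: set) -> float:
    """Overlap between two shingle sets: 0.0 (disjoint) to 1.0 (identical)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)


# A transcribed broadcast segment and a print article to compare (made-up text).
transcript = "the mayor announced a new budget plan on tuesday evening"
article = "on tuesday evening the mayor announced a new budget plan"

score = jaccard(shingles(transcript), shingles(article))
print(f"similarity: {score:.2f}")
if score > 0.3:  # illustrative threshold
    print("possible duplicate content -- review manually")
```

Real checkers add indexing, paraphrase detection, and enormous reference databases on top of this idea, but the underlying step is the same: reduce both texts to comparable fingerprints and measure how much they share.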
One of the more common forms of media plagiarism is scientific research plagiarism. It is not as easily detectable as forms of plagiarism where matched text can be flagged. The scientific discovery itself can be the thing plagiarized, and writing about it as if it were your own is every bit as much a copyright infringement as copying and pasting the exact content. Scientific white papers depend on data to back up the statements made in plain prose, and taking that data from a source that is not your own is also considered plagiarism.
Retype a newspaper story and publish it on the internet. Copy and paste a web story and publish it in print. Download a television script or a scientific white paper and claim it as your own. These are the kinds of cross-medium plagiarism that modern technology has made possible. The technology to detect these abuses has improved along the way, but ultimately the only way to prevent them is through the integrity of writers and editors. There are far too many instances where duplicate content is submitted and editors just don’t care. That needs to stop.
In February 2011, Google rolled out a change to its search algorithm called Google Panda, known in some circles as Google Farmer. Part of this change involved devaluing duplicate content, essentially dropping any website that uses it from a prominent position in search results to one in search engine oblivion. That is a start, but it doesn’t cover material that isn’t published on the internet. There are companies that buy printed material, school papers for instance, and resell it for reproduction by students in other locales. This practice, heinous as it is, is not punishable by any significant fine or legal penalty the way other forms of plagiarism are. Hopefully, some day it will be.