3 reasons not to get on the AI-generated content train
AI is making waves right now, isn’t it?
Not only in terms of how hard it is to tell AI-generated content from genuine work, but also in terms of what it means for the rest of us humans working in media.
So, let’s explore the potential risks you might encounter when introducing AI into your media production workflows, and how you can improve your productivity with AI-generated content while staying honest about its usage.
What is AI-generated content?
Let’s cover our bases first.
What is AI-generated content? We generally use the term to refer to content made with the help of Artificial Intelligence. It covers all types of content: text, images, video, audio, and even music.
The way it works: an algorithm is trained on a large dataset of existing content, learns the patterns in it, and then produces new content in the same style.
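To make that concrete, here’s a minimal sketch of the idea in Python, using the open-source Hugging Face transformers library and the small, freely available GPT-2 model. The prompt is just an example; commercial tools apply the same principle at a far larger scale:

```python
# A minimal sketch: a language model trained on a huge corpus of existing
# text continues a prompt with new text in the same style. GPT-2 is a small,
# open model; modern commercial tools work the same way, just much bigger.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Write a short product description for a ceramic coffee mug:",
    max_new_tokens=60,
)
print(result[0]["generated_text"])
```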
Now we have AI writing job descriptions and newsletters, making art, and even voicing presidents, which has led to a wide range of applications:
- Marketers use it to create product descriptions;
- Customer support specialists generate personalized messages for clients;
- Social media managers craft their posts;
- Writers use it for research.
The reasons for such widespread adoption of AI-generated content are obvious: AI can pull in far more information, far more quickly, than a human, and produce any type of content in less time as well.
That’s why, when they need simple copy or a highlight compilation, many professionals turn to AI-generated content.
Do you think AI-generated content is a cure-all?
But while recent technology has solved many productivity issues, other problems have emerged.
These can be boiled down to a couple of questions.
How to detect AI-generated content?
The first question is nice and hot right out of the news oven.
German artist Boris Eldagsen entered a prestigious international photography competition organized by Sony, a renowned maker of digital cameras. But he ended up rejecting the award, because his winning photo was in fact entirely created by Artificial Intelligence.
The whole stunt, according to Eldagsen, was meant to spark a conversation on whether AI-generated content should be allowed to compete with human-made work.
And the fact that it slipped under the radar of professional judges is concerning: it appears to be much harder than before to tell artificial work from genuine work.
A couple of weeks ago, a lot of people were bamboozled by images of Donald Trump getting arrested in New York. But the thing is, the former US president wasn’t arrested: the pictures were 100% AI-generated.
Another example is the “Balenciaga Pope”: an image of Pope Francis wearing a puffy white coat that surfaced on the internet was also completely artificial, yet it led some people to believe it was real.
Let’s get to the point I’m trying to make.
As AI’s ability to generate believable content grows, we might find it very hard to detect AI-generated content and to tell true information from false. Think of it as the recent blue check mark removal from Twitter, but for the entire media environment.
Deepfakes of people on the news, advertisements from social media accounts copycatting legitimate businesses, companies bouncing off a freshly broken story that is actually AI-generated misinformation: all of these can have serious repercussions for a media business.
If you rely on AI to take over your video content generation entirely, you might face exactly these problems.
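There is no foolproof detector yet, but one common heuristic is worth illustrating: machine-generated text tends to be unusually predictable to a language model, so scoring its perplexity can serve as a weak signal. Here’s a rough sketch of that idea using the open-source transformers library; a toy heuristic, not a reliable detector:

```python
# A rough heuristic sketch, not a reliable detector: score text by GPT-2
# perplexity. Machine-generated text often reads as unusually "predictable"
# (low perplexity) to a language model, compared to human writing.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=ids, the model returns the mean cross-entropy loss;
        # exponentiating it gives the perplexity.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

print(perplexity("The committee has approved next year's budget."))
```

Real detectors combine many such signals, and even they are easy to fool, which is exactly the problem.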
Who owns AI-generated content?
Still, despite sometimes messing up, AI has gotten very good in recent months at creating images, videos, 3D models, copy, and whatever else.
One part of that is the sophisticated technology itself, the result of big brains constantly working to improve AI.
But the other part is the way AI is meant to work and improve: by learning from as many data sources as possible.
The problem is with the sources AI pulls from.
The AI sources content from the web to study it and try to replicate it. That’s how you get those Harry Potter and Balenciaga mashups: the model pulls images of fashionable clothes and of fantasy movie and book characters to learn what they look like. Then it works its magic, and you get images of Harry Potter characters that genuinely look like Balenciaga models.
Here’s a telling caveat: those characters look like the movie actors, not the book versions. Since the original IP is an unillustrated book, the AI takes its inspiration from the movie adaptation.
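For illustration, here’s roughly what producing such a mashup looks like in practice, sketched in Python with the open-source diffusers library and a publicly available Stable Diffusion model. The prompt is just an example; any web-trained text-to-image model behaves similarly:

```python
# A minimal sketch with a publicly available text-to-image model. The mashup
# only works because images of both concepts were scraped from the web into
# the model's training data.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("Harry Potter characters as Balenciaga models, runway photo").images[0]
image.save("mashup.png")
```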
So, if I ask an AI model to make something inspired by [ARTIST NAME], it will simply rip off [ARTIST NAME]’s work.
This is why, as you may have heard, some artists are concerned about AI-generated content plagiarizing their work for someone else’s profit.
A copyright scandal is surely not something any media company wants to find itself in.
AI regulation from the government
Many companies are already trying to outsource a big chunk of their everyday operations to Artificial Intelligence. For example, the agency Codeword has “hired” two AI interns, which now handle design prototyping and sketching, news analysis, tone editing, and drafting written content.
As that happens, governments around the world are keeping a closer eye on what this means for the people affected by the inevitable shift to automation.
The potential challenges include:
- Jobs lost to automation;
- Fraudulent activities facilitated by AI;
- False advertising of AI services;
- Racial or other bias exhibited by algorithms.
In recent blog posts and news appearances, the US Federal Trade Commission has issued warnings to companies aiming to use AI in their operations.
In one instance, the FTC reminded corporations that, although using AI for corporate tasks is new, the legal grounds to hold them accountable already exist: one law, for example, prevents companies from denying people employment, housing, credit, insurance, or other benefits based on an algorithm’s judgment.
On top of that, the White House is investing in research institutes that will study advancements in Artificial Intelligence technologies that are “ethical, trustworthy, responsible and serve the public good”.
But all of that is pretty broad, I agree.
What does it mean for media companies trying to get the most out of their video content production pipelines?
It means that introducing AI into your daily operations might solve one problem but create another. Venturing down a road not yet traveled, we will be among the first to discover the real benefits and challenges that come with augmenting our work.
And we have discussed a couple of apparent ones above.
Now, let’s drop the buzz-killing mood and talk about how you can use Artificial Intelligence to the advantage of your production with respect, inclusivity, and honesty in mind.
AI automated apprentice
So here we are.
AI-generated content covers all types now: art, videos, and texts.
But sometimes AI gives people a couple more fingers than they are supposed to have. It also draws on a lot of work created by real humans, which raises copyright issues. And then there are ethical concerns over AI’s decision-making.
But it is so tempting to trim down those hours of dull work, right?
So, how about using AI while keeping the more creatively demanding tasks in human hands?
But before we see how to do that, there is one thing we have to clear up: you want your employees to work more effectively with the help of AI, not just oversee the work AI does and then correct its mistakes.
So, we have to implement an AI software solution that can analyze content and make decisions on its own, while your employees are out there directing, shooting, pitching, and doing other demanding work.
Here are a few ways you can streamline your video content production using Visual AI video editing software:
- Cut highlights and trailers. CognitiveMill analyzes the footage, picks the best moments from it, and edits them together into a cohesive clip. It can cut highlights from sporting events or make teasers for movies.
- Recognize cast members. Visual AI tech recognizes cast members’ faces in the footage and identifies main and secondary characters in scripted media. You can use that data for archiving and video asset management, as well as cutting videos featuring certain cast members.
- Skip intro and outro sequences. The video automation solution pinpoints when the intro and outro sequences of a TV series start and end. Viewers can then skip past them straight to the content without missing important plot points.
- Cut footage summaries. The software recognizes the context of the footage along with its narrative structure and then recaps it using the most important scenes.
- Crop clips to a vertical aspect ratio. CognitiveMill crops footage from a landscape aspect ratio to portrait to better suit social media platforms. The algorithm makes sure the important parts of the scene are always in the shot (see the sketch after this list).
- Insert targeted advertisements. Video analytics software can find the ad breaks in the footage and replace the advertisements with ones of your choosing. The tool identifies the theme of the footage and inserts contextually fitting ad breaks.
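To give a feel for how the vertical cropping mentioned above works under the hood, here’s a rough, simplified sketch in Python using the open-source OpenCV library: it keeps a detected face centered while reframing a 16:9 frame to 9:16. This illustrates the general technique only; it is not CognitiveMill’s actual implementation, which is far more sophisticated.

```python
# A simplified illustration of "smart" vertical cropping: keep a detected
# face centered when reframing landscape footage to a 9:16 portrait window.
# (Production systems track the action across whole scenes, but the
# crop-around-a-region-of-interest idea is the same.)
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def crop_to_vertical(frame):
    h, w = frame.shape[:2]
    target_w = int(h * 9 / 16)  # width of a 9:16 window at full frame height
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Center the crop window on the first detected face, else on the frame center.
    if len(faces):
        x, y, fw, fh = faces[0]
        cx = x + fw // 2
    else:
        cx = w // 2
    x0 = min(max(cx - target_w // 2, 0), w - target_w)
    return frame[:, x0:x0 + target_w]

frame = cv2.imread("frame.png")  # a single frame pulled from the footage
cv2.imwrite("vertical.png", crop_to_vertical(frame))
```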
Looking to enhance video production for your media business? Let’s talk! Drop us an email at support@cognitivemill.com or fill in the form below, and we will be happy to demo the product, answer your questions, and work out a deal!