
With the rapid advancement of AI technology, AI music composition tools that allow anyone to generate original music within seconds have emerged. As a result, a growing number of accounts are generating music with a single click and uploading it to platforms such as YouTube, Spotify, and Apple Music to earn revenue.
Of course, it is also possible to refine AI-generated music, write original lyrics, and collaborate with AI to produce high-quality works. In fact, there are professional musicians who successfully create sophisticated compositions through such collaboration. Therefore, this movement does not seek to indiscriminately criticize or eradicate the use of AI in the arts. Rather, it aims to advocate for mandatory AI-use labeling on platforms and to explore a path where traditional music and AI-generated music can coexist transparently.
At present, platforms such as YouTube and music streaming services have failed to establish proper labeling systems for AI-generated music. As a result, many users and musicians are being harmed. You, the reader, may also be one of those affected.
Issue 1: Trivialization of Copyright Infringement and Deception of Viewers
Recently, following the popularity of Zootopia 2, there has been a surge in cases where creators falsely present AI-generated playlists as official Zootopia OST playlists. By instructing AI to imitate the film’s atmosphere and aesthetic, these creators deceive viewers and generate substantial profits.
Failing to disclose AI usage in titles or descriptions, while presenting content as original compositions or existing works, constitutes a clear form of misleading conduct.
With AI music tools, a single prompt such as “Create a song similar to ○○ (an existing artist or track)” allows a composition closely imitating that style to be generated within seconds. It is entirely possible that playlists you regularly listen to on YouTube are AI-generated. However, because AI-generated content labeling is not mandatory, there is currently no reliable way to know. When creators do not disclose AI usage, third parties are deprived of the right to understand the true nature of the work. Users have the right to know whether AI was used, and denying this undermines the trustworthiness of the platform itself.
Issue 2: Mass Production of Low-Quality Content and Abuse of Platform Functions
Because AI music creation requires no musical knowledge, a cycle of mutual imitation has emerged among AI music creators, extending beyond the music itself to AI-generated album artwork and thumbnails. As a result, social media platforms are flooded with countless nearly indistinguishable channels, diminishing the value of each individual work.
This phenomenon stems from the absence of mandatory AI labeling, which fueled the narrative that “anyone can make money,” drawing individuals with no background in music or creative fields into the space purely for profit.
Consequently, traditional musicians are now facing accusations of using AI even for their original works and are being involuntarily swept into a saturated market created by mass production. If clear distinctions were made, human-created music could retain its value as something rare and meaningful. Without such differentiation, however, it is reduced to merely one item among countless mass-produced tracks. The increasing number of channels explicitly labeling themselves as “No AI Music” further demonstrates how the burden created by AI’s rapid expansion is being shifted onto human creators.
Additionally, YouTube provides channel owners with a feature that automatically filters specific words from comment sections. Many AI music channels abuse this function by blocking terms such as “AI,” preventing viewers from being informed by third parties that the content is AI-generated.
Issue 3: Imbalance Between Effort and Compensation – The Need for Revenue Restrictions
Long-form videos such as playlists are widely known to be highly profitable due to mid-roll advertisements. Traditionally, this revenue model benefited creators with musical expertise or those who invested significant time in editing long-form content.
In contrast, AI-generated playlists require only seconds to create, involve no exposure of personal identity, and demand virtually no editing effort, yet they enjoy the same high profitability. This extreme imbalance between effort and compensation is precisely why, while AI-generated music may be permitted to exist, its monetization and royalty structures should be subject to limitations. Once this reality becomes widely recognized, dissatisfaction will not be limited to musicians alone but will inevitably spread across other creative fields.
Social media platforms such as YouTube, where individuals can upload and monetize content without external review, inevitably become unregulated environments in the absence of clear rules. As long as platforms impose no consequences, the cycle of profiting from AI-trained works that infringe on existing copyrights will persist.
Recently, a Swedish music industry organization removed a hit song from the charts after determining it was AI-generated. Although the production team protested, claiming the song was not created by “simply pressing a button,” the appeal was rejected.
In South Korea, the Basic AI Act will come into effect on January 22, 2026, mandating labeling for AI-generated content, starting with high-risk AI applications.
Must we wait for national legislation to address this issue? Should platforms not recognize the situation and act swiftly on their own?
Requests to Platforms
- Mandate clear AI-generated music labeling in visible locations such as next to titles, on thumbnails, or at the edges of cover art, enabling users to instantly determine whether AI was used.
- Prohibit monetization of AI-generated music, or adjust revenue distribution in proportion to human effort, fostering a system where traditional and AI-assisted artists can coexist.
- Impose penalties, including channel suspension, for failure to comply with labeling requirements.
- Provide users with an option to exclude AI-labeled content from algorithmic recommendations, respecting user preferences.
For the sake of artists and the audiences who support them, AI-generated art and human-created works must be clearly distinguished, and limitations must be placed on music that is mass-produced at factory scale solely for profit.
On January 10, 2026, we submitted a ten-page report to YouTube detailing issues arising from delayed AI labeling implementation, supported by concrete examples and evidence. To date, no feedback has been provided. Instead, YouTube headquarters announced policy proposals that appear to promote AI usage.
Platforms must acknowledge this issue and fulfill their responsibility to provide users with accurate information.
Although this initiative currently focuses on music, we firmly believe it will contribute to strengthening artist rights protections across other artistic fields as well.