New images can be made using AI tools, but who is the real artist?

Since Vincent Van Gogh painted the swirling night sky of “The Starry Night” in 1889, countless artists have used it as inspiration.

Now artificial intelligence systems are doing much the same thing, learning from vast collections of digitized artworks to produce new images from a smartphone app in a matter of seconds.

Ask for a “peacock owl in the style of Van Gogh” and they can churn out something that might look similar to what you imagined. The images generated by tools like DALL-E, Midjourney, and Stable Diffusion can be weird and otherworldly but also increasingly realistic and customizable.
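Under the hood, these services run text-to-image diffusion models that turn a short prompt into a picture. The sketch below shows what that prompt-to-image flow can look like, assuming the open-source Hugging Face diffusers library and a publicly released Stable Diffusion checkpoint; the model name, prompt, and settings are illustrative, and the commercial apps layer their own interfaces and filters on top.

# A minimal sketch of text-to-image prompting, not how any particular service is implemented.
import torch
from diffusers import StableDiffusionPipeline

# Download pretrained model weights (several gigabytes on the first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a GPU turns minutes of CPU work into seconds

# The text prompt steers an iterative denoising process toward a matching image.
result = pipe("a peacock owl in the style of Van Gogh")
result.images[0].save("peacock_owl.png")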
While Van Gogh and other long-dead masters are in no position to complain, some living artists and photographers are starting to push back against the AI software companies whose tools generate images based on their works.

Two new lawsuits target popular image-generation services, alleging that they copied and processed millions of copyright-protected images without a license. One was filed this week by the stock photography giant Getty Images.

Getty said it has begun legal proceedings in the High Court of Justice in London against Stability AI, the maker of Stable Diffusion, alleging that the London-based startup infringed its intellectual property rights to benefit its own commercial interests.

A separate lawsuit, filed Jan. 13 in San Francisco by three working artists on behalf of others like them, calls AI image generators “21st-century collage tools that violate the rights of millions of artists.” It names Stability AI, the San Francisco-based image-generator startup Midjourney, and the online gallery DeviantArt as defendants.

AI-generated images “compete in the marketplace with the original images,” the lawsuit argues. Until recently, a buyer who wanted a new image “in the style” of a particular artist had to pay to commission or license an original work from that artist.

Image-generating services typically charge users a fee. Midjourney, for example, offers a free trial through the chat app Discord, after which users must buy a subscription costing $10 per month, or up to $600 per year for corporate memberships. OpenAI charges for use of its DALL-E image generator, and Stability AI offers a paid service called DreamStudio.

“Anyone that believes that this isn’t fair use does not understand the technology and misunderstands the law,” Stability AI said in a statement.

Before the lawsuits were filed, Midjourney CEO David Holz described his image-making service in a December interview with The Associated Press as “kind of like a search engine” that pulls in a lot of images from the internet. He compared copyright concerns about the technology to the way copyright law has long accommodated human creativity.

“Can a person learn from another person’s picture and create a similar one?” Holz said. People are obviously allowed to do so, he argued, and if they weren’t, it would endanger not only nonprofessional artists but the professional art industry as a whole. Insofar as AIs learn the way humans do, he said, it is much the same thing, and if the images come out differently, that seems fine.

The copyright disputes mark the beginning of a backlash against a new generation of impressive tools, some introduced only last year, that can instantly generate new visual media, readable text, and computer code.

They also raise broader concerns about the tendency of AI tools to spread false information or cause other harm; for image generators, that includes producing sexual imagery of a person without their consent.

Because some systems produce photorealistic images, it can be hard to tell the real from the artificial. And although some services have safeguards in place to block offensive or harmful content, experts worry that people will soon use these tools to spread misinformation and further undermine public trust.

“Once we lose this capability of telling what’s real and what’s fake, everything will suddenly become fake because you lose confidence in anything and everything,” said Wael Abd-Almageed, a professor of electrical and computer engineering at the University of Southern California.

As a test, The Associated Press submitted a text prompt to Stable Diffusion containing the terms “Getty Images” and “Ukraine war.” The tool produced images of soldiers pointing guns and engaging in combat, with distorted faces and hands. Some of the images also bore the Getty watermark, though with muddled text.

AI also tends to get details wrong, such as feet and fingers or the contours of ears, flaws that can sometimes reveal that an image is not real.

Author: DPN
