Game over? This oil-painting-style image of a robot was created by The Telegraph with the help of Midjourney

"…art?" If a machine can create a beautiful, realistic image at the touch of a button, does that devalue "real" art? It is the same question artists are asking today, 130 years later, about artificial intelligence.
In just the past few months, AI has changed every area of the arts so quickly, and in so many varied ways, that it has been almost impossible to keep track of it all. It is already clear that the impact of AI, with its rapidly evolving ability to generate images, sounds and words, will be a defining change in the cultural life of this decade. What is less certain is whether that change is something to celebrate or to fear.
Fine Art
Last month, a photo that wasn't a photo won the Creative category of the Sony World Photography Awards. As beautiful as it is creepy, Pseudomnesia: The Electrician is a sepia-toned portrait of two women who never existed. Boris Eldagsen, a 53-year-old German photographer, made it with DALL-E 2, an artificial intelligence tool launched a year ago that creates images in response to verbal prompts; a word paints a thousand pictures.
Eldagsen entered his image in the competition "as a cheeky monkey". In doing so, he broke no rules, if only because nobody had thought to update them for the AI era. "I expect photo contests to do their homework," he tells me. When he learned he had won, he wrote back to refuse the prize, explaining how he had taken his "photo". Sony named him the winner anyway.
Pseudomnesia: The Electrician, created by Boris Eldagsen using the DALL-E 2 artificial intelligence tool, won a Sony World Photography Award. Credit: Boris Eldagsen via Reuters
So he hired a tuxedo and flew to London for the awards ceremony, where he hoped he would have a chance to sound the alarm about AI art. (It is art, he says, but a new form that shouldn't be entered in photography contests.) When he realized he wouldn't be given a chance to speak, he climbed onto the stage and grabbed the microphone to explain why he was turning down his prize. There was an awkward pause, after which the show carried on as normal.
After the ensuing flurry of news coverage sparked a debate about AI art, the awards' organizers released a statement describing "the work […]". This sidestepped the key point: Eldagsen had created his picture without using a lens at all.
Creating AI images from text prompts ("promptography", as Eldagsen calls it) requires skill. It is like photography in that sense: of course, anyone can accidentally press a camera's shutter and get a brilliant image, but a real artist can do it again and again. Eldagsen's method involves 11 layers of specific text prompts covering every aspect of the composition. He teaches classes in it.
So creating a great image still requires work, but creating a mediocre image has never been easier. There has always been a gap between trained and amateur efforts, but with AI, "professionals are scared because the gap is getting smaller," says Eldagsen. "The trashy images of the past will disappear."
Kate Crawford, author of Atlas of AI, believes "there is a bigger issue in the background." When it comes to AI image-making, "'creativity', if you want to call it that, is based on a huge takeover of the commons." In the 18th century we lost our pastures; in the 21st we have lost our data.
AI image generators such as Midjourney and DALL-E 2 learn by "scraping" (an ugly but perhaps fittingly brutal word) millions of images and their captions from the internet. "Every time you upload a photo of a friend on vacation or take part in any public online activity, it is now collected into these giant datasets," Crawford tells me. "People look at the result and say, 'Is this art?' rather than looking at the underlying practice itself. Is this unconsented taking of everything the kind of reality we want to promote?"
In January, the cartoonist Sarah Andersen filed a class-action lawsuit against (among others) London-based Stability AI, whose Stable Diffusion image generator, launched last August, can be asked to produce paintings in the style of any artist you like, including Andersen. Getty Images launched its own lawsuit in February, alleging that Stable Diffusion generates images near-identical to copyrighted photos, some of which even reproduce the watermarks Getty adds to deter unauthorized copying.
The missing edges of Rembrandt's The Night Watch, restored using artificial intelligence. Photo: Peter Dejong
Stable Diffusion was trained on LAION-5B, a vast database of scraped images ("5B" stands for five billion). Bulk data collection without prior consent has so far been treated as legitimate "fair use" in the US. Some people don't mind. "Personally, as an artist, I have come to the conclusion that I cannot protect the images I have created in my life," Eldagsen says nonchalantly. But others feel queasy. Two artists, Mat Dryhurst and Holly Herndon, created the website haveibeentrained.com last year, where you can check whether your images have been used in AI training sets. One woman discovered that a photo taken by her doctor for her private medical records had somehow ended up in LAION-5B.
Since Stability AI agreed to work with haveibeentrained.com to let people "opt out", more than a billion images have been removed from its training set. Progress, perhaps. But this "take everything first, then give some back if people ask nicely" model is not the only way of doing things. In March, Adobe released a competing AI image generator trained only on out-of-copyright images and content Adobe already owns.
Even if training data is obtained legitimately, though, many artists fearful for their work would still rather boycott AI. Much of that concern centers on genres where an image's value as art is secondary to its use for something else: selling a product, conveying information, brightening up a block of text; areas where "good enough" matters more than "good".
Illustrators are especially worried. "This is effectively the greatest art heist in history," they warned this month in an open letter signed by hundreds. "If you think this sounds alarmist, consider that AI-generated work has already been used for book covers and editorial illustrations, pushing illustrators out of their livelihoods… Generative AI art is vampirical, feasting on past generations of artwork even as it sucks the lifeblood from living artists."
All of the complaints above concern the relatively simple world of still images. But what about film? Let's set the screenwriters aside for now, as Hollywood usually does, and focus on the moving image itself, that flickering ghost on the cinema screen. Can you trust what you see?
Cinema
Text-to-video AI took off last September, when Meta introduced Make-A-Video, a tool that can create short silent films from a single sentence. A week later, Google announced a competing product, Imagen Video. Both are still available only to a handful of human testers, but in March a text-to-video tool called ModelScope, hosted by the open-source AI organization Hugging Face, was opened to the public. It soon went viral with a nightmarish clip prompted by the line "Will Smith eats spaghetti", in which the actor appears to gnaw handfuls of pasta like a man possessed.
'We already use AI': director Baz Luhrmann with Ai-Da at the Design Museum. Photo: Dave Bennett/Getty Images for Bombay Sapphire

It's all good fun, but does a major director like Baz Luhrmann, whose recent Elvis biopic was nominated for eight Oscars, see any opportunities in AI? "We are already using it," he says. "On Elvis, we used artificial intelligence to blend Elvis's face with Austin's [Butler, the lead actor]. I think it's something that, used correctly, can be very useful."
Luhrmann believes that one day AI assistants and avatars will be everywhere, and we won't cringe at them. It's like mobile phones, he tells me over a martini after a recent event at the Design Museum in London. "When they first appeared, they were only used by plumbers and tradesmen. It was like, how clumsy!" Over time, AI will "just feel human".
But not too human. Speaking about AI on stage earlier that evening, Luhrmann said: "I'm not afraid, as a creative person… If I said to [an AI], 'Write me a script in the style of yours truly about King Lear', what would that mean? I would miss the sense of humanity. It is the flaws and imperfections that make us human."
There are flaws aplenty, however, in The Great Catsby, the AI-generated film trailer that went viral 10 days ago, the first I've seen to combine video, music and speech. Its cast suffers from an anomaly common among AI-generated people: strange hands. (AI struggles to remember how many fingers the average person has. 12? 17?) But it is still impressive. The animation's jazzy sheen mimics the glossy style of Luhrmann's own 2013 film The Great Gatsby; it is gruesome, yes, but it instantly makes Will Smith's pasta look prehistoric.
Hollywood has been using computer-generated imagery (CGI) for decades, but until the recent leaps in AI it was a slow, expensive and labor-intensive business. When the British actor Henry Cavill had to reshoot scenes for 2017's Justice League but couldn't shave off the mustache he had grown for another role, CGI was used to erase it, at a cost of $25 million. With truly advanced AI, such a fix could be done in a day at minimal cost.
In January, the new comedy series Deep Fake Neighbour Wars proved that any actor can now be convincingly passed off as, say, Idris Elba or Greta Thunberg, even on a relatively measly British television budget. So why not revive dead stars? "I might get hit by a bus tomorrow and that's it, but the performances can go on and on," Tom Hanks said this month. Will viewers pay to watch a purely digital Hanks? "There are people who won't care, who won't make that distinction."
Keanu Reeves has, of course, been fighting the Matrix for decades. "Early on, in the early 2000s, or it might have been the '90s, I had a performance changed [digitally]," he told a reporter in February. "They added a tear to my face, and I just thought, 'Huh?!' It was like I didn't even have to be there… You know you're going to be edited, but you're a part of that. If you go into deepfake land, it has none of your points of view." Reeves has since asked for a clause in his contracts stipulating that his performances will not be digitally manipulated.
Actors not protected by such a clause might worry about the work of companies like Flawless AI, whose AI-assisted editing can effectively turn them into ventriloquists' dummies. It comes in handy for international releases. Dubbed into another language? No problem: they can lip-sync the actor to the new dialogue. Swearing? Nothing easier than swapping the F-word for "fiddlesticks". Want to turn a line criticizing China into one praising the CCP? In theory, they could do that too. (Flawless AI declined The Telegraph's request for an interview.)
Visually manipulating an actor's mouth would be pointless if AI couldn't also adapt their voice. But AI can now generate eerily convincing human sounds. It speaks, it sings, it plays symphonies. You may have been listening to AI music without even realizing it.
Music
"It's Christmas and you know what that means," sang a woozy Frank Sinatra in 2020. "It's hot tub time!" Ol' Blue Eyes had been roused from his eternal slumber to sing a new song about festive bathing by OpenAI's much-maligned Jukebox music generator.
Most of the samples from that shaky experiment have since been removed from SoundCloud by OpenAI, but the AI Sinatra bootlegs survive as a reminder of just how much more convincing sonic sound-alikes have become in only two years.
This girl group image was created by The Telegraph using Midjourney Generative AI
They may not survive for ever, though. The Human Artistry Campaign, launched in March by 40 major US music and entertainment bodies, aims, among other things, to protect the "likeness" of voices. Tom Waits set the precedent in 1992, when he successfully sued Frito-Lay for more than $2 million over a Doritos ad that mimicked his unique rust-and-bourbon tones.
Voices will be harder to protect, however, now that AI has made "voice cloning" so easy. Apple announced last week that voice cloning is coming to the iPhone this fall. Just 15 minutes of voice recordings is all it needs to make a person "say" anything via text-to-speech. In January, Apple quietly began releasing audiobooks with AI narration.
Cloning a rapper is almost as easy as cloning speech, as the app designer and vlogger Roberto Nickson proved in March, when he turned his own voice into Kanye West's for a rap about anti-Semitism that went viral. In April, "Heart on My Sleeve", a convincing AI fake of a Drake and The Weeknd collaboration, was streamed nine million times before being taken down for copyright infringement.
In the same month, Jay-Z's agent fought to have an AI song copying his voice removed from YouTube, while the alternative pop star Grimes said she would happily split royalties with anyone who created a deepfake song using her voice; people have since taken her up on it. (Grimes has long been ahead of the AI curve: in 2020, she created an AI lullaby for the baby she had with her ex-boyfriend Elon Musk.)
Musicians are divided. For the French DJ David Guetta, "the future of music is in AI"; for Will.i.am of the Black Eyed Peas, it is "a great co-pilot". For Nick Cave, however, it is a "grotesque mockery" of what it is to be human. "Songs arise out of suffering," Cave said in January. "Data doesn't suffer."
Nick Littlemore of the Australian pop group Pnau sees AI's potential, but is more measured. AI will not replace humans, he tells me, "but it can replace the vocoder. I'm not a very good singer, but [with AI] I can take any voice I want, shape it and record it." He knows a thing or two about playing with the voices of stars: Pnau's remix of the Elton John and Dua Lipa collaboration has racked up a billion streams. He also remixed Elvis songs for the Luhrmann biopic. "[AI] would have been very helpful for that," he says. "You could certainly enhance his voice now, far better than you could even three months ago."
Thanks to artificial intelligence, Littlemore now sees making a music video as a breeze. The faces of guest vocalists Bebe Rexha and Ozuna are the only "real" things to be seen amid a sea of AI animation in the band's latest video, released over the weekend. Another recent video was "shot on an iPhone: no makeup, two takes, at home… and it looks like a million bucks."
But can AI create music that stirs an emotional response? Littlemore pauses to think. "Probably. I mean, you hit the right chords at the right moments; classical music has it all. Take something like Bach…" There are, of course, conventions in Western classical music (common chord progressions, for example) that can be described mathematically. Bach's Well-Tempered Clavier, with its pieces in all 24 major and minor keys, could serve as a model for an AI to imitate.
In music, as in the visual arts, AI has made it easier for amateurs to knock out a passable job quickly. This month, Google unveiled MusicLM: write a prompt describing the track you want and it will compose it. Reviewers weren't hugely impressed; MusicLM is a late entry into an already crowded market.
One AI music service, Boomy, boasts that its "users have created 14,699,511 songs, which is about 14.04% of the world's recorded music." Another, Aiva, has been officially recognized as a composer by France's Society of Authors, Composers and Publishers of Music.
Again, as in the visual arts, the corner of the music industry where AI arguably poses the greatest threat to human performers sits away from the spotlight: background music. "Most of the music we hear in everyday life, even if we don't notice it, is background music," says Tao Romera Martinez, COO of the Japanese AI company Soundraw. "Elevators, radio ads, TV ads, presentations, all those social media videos that are created every day. It's a pretty big market."
If you want, say, three minutes of gothic synth and strings in E minor with an emotional climax at the 43-second mark, press a few buttons and Soundraw will generate it for you. In April, Universal Music Group sent an open letter to Apple Music and Spotify urging them not to let Universal's tracks be "scraped" by AI companies. But Martinez is keen to distance Soundraw's methods from the mass scrapers. "We train our AI model exclusively using music made by our in-house music producers," he says.
You may well have heard Soundraw's creations. The company doesn't require attribution, so Martinez says he can never be "100 per cent sure" whether the background music in any given video came from Soundraw. But it has "many" subscribers "at national TV channels, and not just in Japan. If they're paying money for a subscription, they're probably using it somewhere."
All Soundraw tracks are wordless instrumentals. When it comes to lyrics, AI is still pretty immature, says Pnau's Nick Littlemore. "It's not ready to be used for anything as deep and heavy as Nick Cave, but I think it can do Dr Seuss now. If we fast-forward 10 years and give the AI a heroin habit, maybe it'll come up with some William Blake?"

Literature
Tristram Fane Saunders, a cynical journalist, decided to write a short story using an AI text generator. He started typing, and was not prepared for what happened next. As Tristram began typing, the words flowed effortlessly from his fingers. He lost himself in the story and, before he knew it, had written several pages. But on re-reading what he had written, he realized something strange was going on. The story had taken on a life of its own, and it seemed to Tristram that he was no longer in control…
I wrote the first two sentences of that; the rest was written in seconds by the free, simple AI writing tool Sudowrite. On screen, it is color-coded: "Any text that Sudowrite writes is purple." Ho ho. But its prose is mostly not purple in the florid sense. It's fast, functional, meaty. It reads like Dan Brown.
Sudowrite can't offer anything close to the structural complexity of a novel. Or it couldn't, until last week, when it launched Story Engine, a far more advanced writing tool capable of mapping out chapter-by-chapter plot arcs and character development. According to Sudowrite founder James Yu, "Our amazing team has worked with hundreds of writers." (Angry writers on Twitter were quick to brand those unnamed writers "scabs".)
If Story Engine novels feel generic, will readers even mind? Plenty of popular fiction consists of spin-offs and retellings of familiar stories, and many of the novels on today's bestseller charts owe their place largely to TikTok, where books are tagged with hashtags flagging commercially popular tropes (e.g. #enemiestolovers).
If you haven't yet asked an AI to write for you, your friends may have. Over a billion people, one billion, have used the ChatGPT AI text generator since its launch six months ago. Alternatives from Microsoft (Bing Chat) and Google (Bard) have since appeared, but ChatGPT remains the most popular. It was built on a large language model (LLM) called GPT-3. An LLM's complexity is measured in "parameters". GPT-3 has 0.175 trillion. GPT-4, launched in March, is rumored to have 100 trillion. The human brain, for context, is estimated to have 100 trillion synapses. GPT-4 can "read" and "understand" prompts of up to 20,000 words (two-thirds the length of Animal Farm).
As with Stable Diffusion and its Getty watermarks, ChatGPT has sometimes clung uncomfortably close to its training texts. In February, the writer Susie Alegre was dismayed to discover that ChatGPT had regurgitated whole paragraphs of her award-winning book Freedom to Think, without credit.
AI has not yet written a hit novel, but it has already spawned millions of horror stories. Clarkesworld, America's leading sci-fi magazine, received so many AI-generated submissions that in February it was forced to close submissions for the first time in its history.
More prose filter experiments @sudowrite: what if Dune, but from the point of view of an octopus? pic.twitter.com/rVwGaW4AaG
— James Yu (@jamesjyu) October 15, 2021
As with art and music, it is creative writing for commercial use, rather than literature for literature's sake, that is most vulnerable to AI encroachment. Don't worry about the novelist; worry about the person who writes the words on the back cover of the novel. This month, one book distributor announced that it is already using AI to write the blurbs for its covers.
Screenwriters are also at risk. When Hollywood's Writers Guild of America went on strike this month, among its demands was an agreement that "AI can't write or rewrite literary material", that it "can't be used as source material", and that its members' work "can't be used to train AI".
Then there is the literary form closest to this hack's heart: journalism. I don't mean broadcast news (although, incidentally, a Kuwaiti TV station unveiled a blonde AI newsreader last month). I mean the written word: essays and columns, reportage and opinion, raised to an art by the likes of Hazlitt, Swift, Orwell and Amis. Will AI replace journalists? In some places, it already has.
The Associated Press has been using AI to write articles (mostly about business and sport) since 2014. Reach, owner of the Daily Express and Daily Mirror, began publishing AI-generated articles in March, following BuzzFeed's example. In April, a German magazine editor was fired after publishing a fake, AI-generated interview with the former Formula 1 driver Michael Schumacher. This month, the Irish Times apologized for being tricked into publishing an opinion piece about fake tan that turned out to have been written by ChatGPT.
Before he became a best-selling author, Neil Gaiman dabbled in music journalism. He tells me he is now worried about what AI is doing to his old field. "We're heading into a whole new world of completely convincing 'facts' that are generated by things that aren't really sentences; they just have the shape of sentences."
He gives a recent example. "I'm a big fan of Lou Reed. I picked up my phone and saw, at the top of my news app, a ranking of the 20 best Lou Reed songs. I thought, 'Oh, great, I'll spend five minutes on this while I make a cup of tea.' I read the description of the first song, and it wasn't right…" The "journalist", it seemed, had tried to guess what the songs were about purely from their titles. "I realized it was an AI-generated article. The descriptions sound incredibly authoritative if you don't know anything about the songs at all."
"We're in a period we haven't figured out yet, and by the time we do, it will be too late": writer Neil Gaiman on AI. Credit: Nick Cunard/Shutterstock
AI is changing the world too quickly for us to keep track of it, says Gaiman. "We're in a period we haven't figured out yet, and by the time we do, it will be too late."
Poetry is the oldest form of literature and perhaps the purest expression of the human spirit. Don Paterson, one of the country's most respected poets, is standing to be the next Oxford Professor of Poetry. Last month, in his campaign manifesto, he wrote that "AI is the only major technological challenge poetry has faced since Gutenberg." So I asked him what he meant. Gutenberg's printing press, in Paterson's words, gave poetry "a sense of individual authorship, which I don't think it had suffered from until that point." Individuality suddenly mattered more than the communal oral tradition.
If AI reduces our reverence for the author by shifting the focus from the individual to the technique, that could be a good thing, he says. It would undermine "this sentimental misconception about inspiration as a completely unknowable source. If you take a colder look at the best effects in certain lines, some of them lend themselves to analysis in ways AI can help you with." The "shiver" you get from a Shakespeare line, he says, sometimes comes from pairing words with close sounds but distant meanings (e.g. "a little more than kin, and less than kind"). With enough effort, you could write code for that kind of wordplay.
If poets use AI "as an advanced personal assistant", is that really so different from a rhyming dictionary? "I already have this stupid program I use that can cross-reference 40 thesauri, because that's the kind of thing I like to do," Paterson admits. "But the interesting question is: would the reader notice the difference between a poem composed using AI and one composed by traditional means?"
We may never have an AI Seamus Heaney, but less sophisticated writers are easier to imitate. "If you're the kind of poet who writes the same poem over and over again, you may soon find yourself literally predictable, especially if it's the kind of 'ambient' poem that works through a series of images," says Paterson. Soon, AI "could write a poem that would be eminently publishable, if that's any kind of validation of anything."
When we realize how much popular, mediocre poetry can be convincingly imitated by machines, he says, there will be a "crisis of authorship. It will raise real questions about where the value lies; is it in authenticity? If it isn't, and it's a matter of talent, then what does it mean that some talents seem so easily imitated?" Creative talent, after all, is "the last word in human endeavor. If that becomes imitable, it's disturbing; it's as if our souls have been stolen. And in a way, they have."
Rowan Williams, poet, philosopher and former Archbishop of Canterbury, knows a thing or two about souls. He has been impressed by AI's "paintings, songs and poems, and I'm very happy to say that yes, there is beauty in them, there is harmony in them." But, he tells me, they lack the depth of art. "Sooner or later, somebody realizes that there's no real inventiveness there. You can produce a highly efficient, highly effective imitation of art, but that has always been the case. It's important to remember that when AI does something, anything at all, it is always imitating." It simply brings together unlikely sources, "like a magazine competition asking you to write a scene from Dostoevsky in the style of P. G. Wodehouse." Mere mixing, he says, is not creativity.
Aidan Meller disagrees: "I would say that combining unlikely things is exactly what creativity is: it's making something new." Meller leads the team behind Ai-Da, an AI-powered poet and artist who over the past two years has answered questions in the House of Lords, read her poetry to students at Oxford, and exhibited her sculptures at the Pyramids of Giza.
Ai-Da takes in data through her camera eyes and microphone ears, processes it with her AI brain, and then uses an old-fashioned brush and oils to make marks on canvas. Her paintings are derivative, in that they draw on everything she has heard, seen and thought. But are people any different?
But perhaps trying to make AI resemble an individual human artist is missing the point. Perhaps great AI art will sound not like any one of us, but like all of us. Boris Eldagsen believes that is already the case: "Because the training material has been gathered so widely from the internet, it is a mirror of the human condition: what Carl Jung would call the collective unconscious."
Last month, a giant screen in New York's Times Square was filled with an image that seemed a fitting symbol of human-machine collaboration: looping, handwritten 1s and 0s. Cursive Binary is the work of the artist and poet Sasha Stiles. She co-writes her poetry with an AI, which she has fed with her own writing, and speaks of it almost as a friend. "Personally, I have had many instances of reading computer-generated poetry that moved me deeply," she tells me. "I don't think it matters whether the text is human-written or machine-written. If it touches you, I think it's poetry."
If experts can't reliably tell human art from AI art (and they can't), does AI art really have any less value? Does our response to it matter more than how it was made?
"It really matters that great poetry is written, and it doesn't matter who writes it," said Ezra Pound. Except he didn't. The line is apocryphal, but it's a great line. Does it matter who, or what, wrote it?
If we really want to understand the future of AI, says Williams, we should look not at art made by AI, but at art about AI. For him, Klara and the Sun, Kazuo Ishiguro's 2021 novel narrated by an android "artificial friend", "is such an interesting fantasy because it tries to imagine what the first-person perspective of a highly advanced artificial intelligence might look like. Do we treat it as if it has rights and claims? It's through that kind of imaginative projection that we begin to understand this, better than by looking at the kinds of tasks an AI can perform."
So let's imagine an AI friend like Ishiguro's Klara: one without a human mind or soul, but with something intelligent and soulful enough to sing, write and paint in ways that stir the soul in us. Would its existence devalue our own?
"In my view, it doesn't undermine the idea of human individuality or dignity," Williams says. "Because we would still be who we are. We'd still get wet in the rain, we'd still die, we'd still have sex, enjoy food, and do a lot of other things that machines don't seem to care about."