As a researcher studying the nexus of technology and art, I was keen to see how well the program worked. After hours of experimentation, it's clear that DALL-E – while not without shortcomings – is leaps and bounds ahead of existing image generation technology. It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.

OpenAI researchers built DALL-E 2 from an enormous collection of images with captions. They gathered some of the images online and licensed others.

Using DALL-E 2 looks a lot like searching for an image on the web: you type a short phrase into a text box, and it gives back six images. But instead of being culled from the web, the program creates six brand-new images, each of which reflects some version of the entered phrase. For example, when some friends and I gave DALL-E 2 the text prompt "cats in devo hats," it produced 10 images that came in different styles. (Until recently, the program produced 10 images per prompt.) Nearly all of them could plausibly pass for professional photographs or drawings. While the algorithm did not quite grasp "Devo hat" – the strange helmets worn by the New Wave band Devo – the headgear in the images it produced came close.

Over the past few years, a small community of artists has been using neural network algorithms to produce art. Many of these artworks have distinctive qualities that almost look like real images, but with odd distortions of space – a sort of cyberpunk Cubism. The most recent text-to-image systems often produce dreamy, fantastical imagery that can be delightful but rarely looks real.

DALL-E 2 offers a significant leap in the quality and realism of the images. It can also mimic specific styles with remarkable accuracy. If you want images that look like actual photographs, it'll produce six lifelike images. If you want prehistoric cave paintings of Shrek, it'll generate six pictures of Shrek as if they'd been drawn by a prehistoric artist.

It's staggering that an algorithm can do this. Each set of images takes less than a minute to generate. Not all of the images will look pleasing to the eye, nor do they necessarily reflect what you had in mind. And, sometimes, the unexpected results are the best. But, even with the need to sift through many outputs or try different text prompts, there's no other existing way to pump out so many great results so quickly – not even by hiring an artist.

In principle, anyone with enough resources and expertise can make a system like this. Google Research recently announced an impressive, similar text-to-image system, and one independent developer is publicly developing their own version that anyone can try right now on the web, although it's not yet as good as DALL-E or Google's system. It's easy to imagine these tools transforming the way people make images and communicate, whether via memes, greeting cards, advertising – and, yes, art.

I had a moment early on while using DALL-E 2 to generate different kinds of paintings, in all different styles – like "Odilon Redon painting of Seattle" – when it hit me that this was better than any painting algorithm I've ever developed. Then I realized that it is, in a way, a better painter than I am.

In fact, no human can do what DALL-E 2 does: create such a high-quality, varied range of images in mere seconds. If someone told you that a person made all these images, of course you'd say they were creative. But this does not make DALL-E 2 an artist. Even though it sometimes feels like magic, under the hood it is still a computer algorithm, rigidly following instructions from the algorithm's authors at OpenAI. If these images succeed as art, they are products of how the algorithm was designed, the images it was trained on, and – most importantly – how artists use it.

You might be inclined to say there's little artistic merit in an image produced by a few keystrokes. But in my view, this line of thinking echoes the classic take that photography cannot be art because a machine did all the work. Today the human authorship and craft involved in artistic photography are recognized, and critics understand that the best photography involves much more than just pushing a button. Even so, we often discuss works of art as if they directly came from the artist's intent.
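The prompt-to-images workflow described here – type a short phrase, get back a batch of generated images – can be sketched in a few lines of code. This is a hedged illustration, not OpenAI's implementation: `build_image_request` is a hypothetical helper, and the commented-out call assumes access to the OpenAI Python SDK and a valid API key, which the article's web interface does not require.

```python
def build_image_request(prompt: str, n: int = 6, size: str = "1024x1024") -> dict:
    """Assemble the parameters for one text-to-image request.

    Hypothetical helper: defaults mirror the six-image batches the
    article describes, not a documented requirement of the service.
    """
    if not prompt.strip():
        raise ValueError("prompt must be non-empty")
    return {"model": "dall-e-2", "prompt": prompt.strip(), "n": n, "size": size}

params = build_image_request("cats in devo hats")

# With a real API key, the request itself would look roughly like:
#   from openai import OpenAI
#   client = OpenAI()
#   result = client.images.generate(**params)
#   urls = [item.url for item in result.data]
```

The point of separating request-building from the network call is only illustrative: it makes the "short phrase in, batch of images out" contract of the interface explicit.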