DALL-E 2: Why is everyone obsessed with AI and can it end your design career?

    Any sufficiently advanced technology is indistinguishable from magic.

    Arthur C. Clarke

    The world today is scarily similar to a middle school math class. You stare at a neat “10+15=25” equation on the whiteboard and everything makes perfect sense, then you rest your eyes for a couple of seconds and — boom — the whiteboard is filled with letter-only formulas, you have no idea what’s going on, and the teacher is announcing a test. Panic alert!

    It’s the same way you have to constantly keep tabs on all the latest industry developments to stay in the loop and not miss out on anything. Otherwise, it can cost you a job!

    Wait what? Did we skip a chapter?!

    Not really… It’s the same old robot takeover and the rapid growth of Artificial Intelligence. Now, in the form of a tool that can generate high-quality, photorealistic images that have never existed before based on a verbal instruction. 

    Five letters, one number. DALL-E 2 — a designer’s nightmare. 

    Or is it? 

    Is it really as scary as everyone thinks it is? Can it really end your design career? Or, perhaps, it can open up a land of opportunity for designers and make their lives ten times easier? 

    Read this article to find out more about what DALL-E 2 is, how it works, what applications it has in design and marketing, and whether or not it poses a threat to designers and artists. 

    Oh, and since we’re here… Check out VistaCreate’s library of AI-themed templates:

    What is DALL-E 2?

    The first thing you need to know about DALL-E 2 is that it’s not a brand-new concept. But the “2” in the name already hints at that, right? 

    DALL-E is an AI system that creates images from textual prompts in natural language. You can come up with any idea for an illustration, describe it to the system using conversational terms, and have the tool produce ready-made art in a matter of seconds. 

Fun fact: The name DALL-E is a nod to the surrealist artist Salvador Dalí and Pixar’s robot WALL-E.

However, if you don’t remember hearing about it before, don’t worry — DALL-E has only become a really big thing in the last couple of months, when OpenAI (a San Francisco-based company best known for its massive GPT-3 natural language model) rolled out the second iteration of the tool.

Images created by the original DALL-E back in 2021, despite still being incredibly impressive (hello, we’re talking about artificial intelligence creating artwork in seconds based on a mere description!), were pretty low-quality, low-resolution, conceptually basic, and, overall, not always that spot-on.

Its successor, DALL-E 2, however, builds on that system, offering faster, more realistic, and more accurate image generation, with enhanced text comprehension and four times greater resolution.

In fact, according to research by Aditya Ramesh and Prafulla Dhariwal, when evaluators were asked to compare 1,000 image generations from each model, 71.7% preferred DALL-E 2 over DALL-E 1 for caption matching, and 88.8% for photorealism.

Here’s a side-by-side comparison of “a fox sitting in a meadow drawn in the Claude Monet style” created by DALL-E 1 and DALL-E 2 that illustrates the improvements in the system:

    DALL-E 2 can do several things. 


    First of all, you can use it to create an image from a description in a specific style. For example, below is an illustration of an astronaut playing basketball with cats in space as a children’s book illustration:

Or, if you want to imitate a specific artist rather than a generic artistic style, here’s another example: a bowl of soup that is a portal to another dimension, in the style of Basquiat.

    According to the CEO of OpenAI, Sam Altman, “these work better if they are longer and more detailed”. 
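A side note for the technically curious: OpenAI has since exposed image generation through a public API, and a text-to-image request takes only a few lines of Python. The sketch below is purely illustrative; the prompt, model choice, and output handling are assumptions rather than an official example.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Generate one image from a natural-language prompt, in a chosen style
response = client.images.generate(
    model="dall-e-2",
    prompt=(
        "An astronaut playing basketball with cats in space, "
        "as a children's book illustration"
    ),
    n=1,
    size="1024x1024",
)

print(response.data[0].url)  # link to the generated image
```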

On top of that, the second iteration of DALL-E offers an additional editing capability called inpainting. This functionality makes changes to existing images based on natural-language instructions from the user. It can add elements to an image or remove them, while realistically adjusting shadows, reflections, and textures.
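For those wondering how such an instruction might be passed to the system programmatically, the API OpenAI later released exposes inpainting as an image “edit” call: you supply the original picture, a mask whose transparent pixels mark the area to repaint, and a prompt. The file names and prompt in this minimal sketch are hypothetical.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Inpainting: repaint the transparent region of the mask according to the prompt
response = client.images.edit(
    image=open("lounge.png", "rb"),      # original photo (hypothetical file)
    mask=open("lounge_mask.png", "rb"),  # transparent pixels = area to edit
    prompt="A corgi sitting on the sofa",
    n=1,
    size="1024x1024",
)

print(response.data[0].url)
```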

Here’s an example of DALL-E 2 incorporating an image of a corgi into the user’s location of choice (what an adorable pup, right? We would never have guessed it’s AI-generated!)

Finally, this OpenAI tool allows you to create a range of images stylistically similar to or inspired by the one you upload. Here’s a series of variations on Gustav Klimt’s ‘The Kiss’:
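In programmatic terms, this capability corresponds to an image-variation call in OpenAI’s API that needs no prompt at all, only the source image. The local file name in the sketch below is a hypothetical stand-in.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Request four stylistic variations of an uploaded image
response = client.images.create_variation(
    image=open("the_kiss.png", "rb"),  # hypothetical local copy of the artwork
    n=4,
    size="1024x1024",
)

print([item.url for item in response.data])
```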

    Sounds like magic, right? 

Well, for those of us who are unfamiliar with data science and machine learning, it sure is one of the closest things to magic. However, for those who are elbow-deep in modern technology (or at least aware of current societal and tech trends), there’s a viable explanation for how DALL-E 2 does its magic.

    “DALL-E was created by training a neural network on images and their text descriptions. Through deep learning, it not only understands individual objects but learns from relationships between objects.”
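To make that explanation a little more concrete: DALL-E 2 builds on CLIP, a model trained to place images and their captions in a shared embedding space, plus a diffusion model that decodes those embeddings into pictures. Below is a heavily simplified sketch of a CLIP-style contrastive objective; it only illustrates the idea of learning relationships between images and text, and is in no way OpenAI’s actual training code.

```python
import torch
import torch.nn.functional as F

def clip_style_contrastive_loss(image_embeds, text_embeds, temperature=0.07):
    """Pull matching image/caption pairs together, push mismatched ones apart."""
    image_embeds = F.normalize(image_embeds, dim=-1)  # unit-length vectors
    text_embeds = F.normalize(text_embeds, dim=-1)

    # Similarity of every image in the batch against every caption
    logits = image_embeds @ text_embeds.T / temperature

    # The correct caption for image i sits at index i (the diagonal)
    targets = torch.arange(logits.shape[0], device=logits.device)

    loss_img = F.cross_entropy(logits, targets)    # images -> captions
    loss_txt = F.cross_entropy(logits.T, targets)  # captions -> images
    return (loss_img + loss_txt) / 2

# Toy usage with random "embeddings" for a batch of 8 image/caption pairs
loss = clip_style_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```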

    What are the applications of DALL-E 2 in design and marketing?

At the moment, DALL-E 2 is only available to a chosen few — its creators haven’t rolled it out to the general public, and you can only access the tool if you get picked from the massive waitlist.

So, for now, all applications of DALL-E 2 in design and marketing are merely theoretical — most of us can’t yet turn to the AI tool the moment we have an idea.

Nonetheless, judging by the content we see from the luckier lot that does have access to DALL-E 2, several use cases for this AI system are already emerging. Here are a couple of ways we think DALL-E 2 could be valuable to marketers and designers in the future:

    • It can help marketers and designers generate unique, impactful images for blog posts and e-books to illustrate the points made in copy.
    • It can allow marketers to create license-free stock images for their website pages and landing pages. 
    • It can help create visuals for digital or print brand collateral used internally and externally. 
    • It can help businesses create ads that stand out from their competitors because of creative, unusual visuals. 
    • It can help inventors generate product sketch ideas. 
    • It can create unique, accurate visuals that help describe complex information, products, or services across all digital assets.
• It’s great for creating various types of mockups, from logo concepts to mockups for brainstorming branding, campaign ideas, video scripts, or commercials.
    • It can help create mockups to inspire and guide human designers on more complex visual generation projects.

    And, if you think that it’s all too good to be true and won’t come into play for another decade or so, take a look at the June edition of Cosmopolitan; it features the world’s first magazine cover designed by artificial intelligence.

It took the editors of Cosmopolitan and Karen Cheng, a digital artist and the first real-world creative granted access to the DALL-E 2 technology, about an hour to come up with the concept and the right wording to feed into the request, and about 20 seconds for the system to create this completely unique, AI-generated image of “a strong female president astronaut warrior walking on the planet Mars, digital art synthwave”.


But Karen Cheng didn’t stop there. Another work by the digital artist is a music video for Nina Simone’s “Feeling Good”. In the video, Cheng uses images generated by DALL-E 2 to tell a coherent story, just like any other animated music video would:

One of the points Cheng made when discussing her DALL-E 2 creations with the press is that, in the past, she was limited in her creative expression as a visual artist because she couldn’t draw. Now, she “has the power of all these different kinds of artists”.

This realization isn’t exclusive to Karen Cheng. Many practicing artists and designers have grasped the possible applications of the new technology, which has stirred up plenty of tumult and heated discussion on the web.

    Is DALL-E 2 going to disrupt design as a profession and make designers redundant?

    Does DALL-E 2 pose a threat to designers worldwide?

    The very second entrepreneurs, managers, and other non-designers find out about a magical tool that can draw them anything they want in a matter of moments, absolutely for free, a swarm of questions starts buzzing in their heads:

    • Why would you hire an expensive designer to create visuals for your business when DALL-E 2 can do it for free?
    • Why do you need a photographer and a subscription to a pricey photobank when a machine generates affordable photorealistic pictures? 
    • Why pay for an expert graphic editor and all the expensive software they use when DALL-E 2 can edit images based on instructions from anyone in your company?

Indeed, one of the most common fears associated with DALL-E 2 and AI technology in general is that it’s the death knell for human creativity and for design as a profession. Graphic design professionals all over the world are feeling uneasy that their services may soon fall out of demand and that they’ll be replaced by robots.

    But don’t rush into panic mode.  

First of all, DALL-E 2 is a work in progress, so the answer to whether or not it’s going to end design as a profession is: “not yet”. The technology still has plenty of drawbacks, some of the most prominent being:

    • It’s only available to a small number of people, which prevents the technology from being rapidly disruptive. Each week, DALL-E 2 is being released to just a thousand users as engineers continue to make tweaks. 
    • It can only produce what it knows and has seen before. If you pitch DALL-E 2 a concept it hasn’t been taught yet, it’ll provide its best guess, which will be wacky 9 times out of 10. 
    • The image quality isn’t that great. While DALL-E 2 is four times better than its predecessor, DALL-E 1, when it comes to image resolution, it’s still not perfect. Areas requiring finer details often turn out blurry, smudged, or overly abstract. 
• The system isn’t “woke” enough (which can be a massive problem for businesses that strive to be as inclusive as possible in their visuals). Due to being trained on biased data sets, the majority of people DALL-E 2 renders are white or have European features.
    • DALL-E is surprisingly bad at counting. The first users of the system have noticed that DALL-E 2 consistently messes up instructions when they mention numbers:

    • It has a hard time figuring out how many fingers people have. A very specific drawback of the system, but still one worth mentioning: for DALL-E 2, the number of fingers a human has on average seems to be as random as the number of leaves on a tree. Ah, these silly machines!

Moreover, because of the system’s strict content policy aimed at curbing the misuse of DALL-E 2, this AI tool is imperfect by design too, and not only by chance. The OpenAI team removed the most explicit content from the training data to minimize DALL-E 2’s exposure to these concepts and, therefore, prevent users from generating violent, adult, or political content. So, if you’re a business that uses explicit images in its communications and would like to delegate their creation to AI, hold your horses — your designer is completely and utterly irreplaceable!

On top of that, in an attempt to prevent the creation of deepfakes and other harmful content, DALL-E 2 is intentionally bad at generating photorealistic images of real faces. Instead, the system will — on purpose — add a pair of wonky eyes, an unrealistically crooked nose, or wobbly lips.

Gary Marcus, a scientist and the author of Rebooting AI, sums up the usefulness of the AI tool, with all its applications, advantages, and drawbacks:

    DALL-E is probably best used as a source of inspiration rather than a tool for final products. You can say something like “a boat on the sea, in a Van Gogh style”, and get something beautiful. But if you want to change the end product, perhaps to “a boat on the sea but with five people rather than 4, with the tallest person in the front and the shorter person in the back, with the same boat, but painted brown, and a slightly darker background”, the system probably won’t understand the language well enough to meet your exact specifications.

    But even once the majority of these drawbacks get fixed, the chances of DALL-E 2 replacing designers are still critically low. At the end of the day, without human input, without creative thinking only humans are capable of, DALL-E is worth nothing. It’s merely a tool for bringing ideas to life, not a tool for ideation. 

    A similar argument is put forward by Sam Altman. He believes that technology like DALL-E 2 will make business professionals more creative if anything. Here’s how Altman words it:

    It’s an example of a world in which good ideas are the limit for what we can do, not specific skills.

    At this stage, DALL-E 2 looks like it could be a great muse to inspire designers and illustrators. For now, it feels more like a fun, throwaway kind of thing — but you need to keep an eye on innovations like DALL-E 2 and others in the AI field; these things can move fast! 

    “The invention of the camera didn’t mean the death of painting. Rather, it forced painting to evolve to other styles – more abstract and impressionist, since cameras could capture photorealism so well. Eventually, the invention of social media coupled with cameras gave rise to an explosion of ways for a new generation of painters to share their art. 

    Similarly, the invention of AI art doesn’t mean the death of artists. But artists will have to evolve. The ones who do well will be the ones who find new creative ways to make AI work for them, rather than against them.

    But I don’t want to sugarcoat it either. With AI, we are about to enter a period of massive change in all fields, not just art. Many people will lose their jobs. At the same time, there will be an explosion of creativity and possibility and new jobs being created—many that we can’t even imagine right now.”

    — Karen Cheng

    So, don’t sweat just yet, but be prepared to enter the new era of design. Explore and foster your creativity, see what unconventional ideas you can come up with, and start experimenting with your style. The easiest way to do this is with VistaCreate — where online design is made easy.

    Valerie Kakovkina

    Content marketing manager at VistaCreate. Valerie loves all things marketing, with her favourite areas being email marketing and social media. When out of the office, Valerie loves travelling, going to parties, and helping her friends with their art projects (oh to be surrounded by artists).
