How Much A.I. is Too Much A.I.?

I don’t blog enough to be topical or cover current trends, and I like to keep things light and funny, or positive and optimistic in general. In fact, at the end of 2022 when I actually tried to be topical and write about artificial intelligence, I did it from a pretty altruistic standpoint, with the hope that working with the changing landscape (a.i. isn’t going anywhere) is a better approach than railing against it. At my core, I still believe that, and I’ll put a pin in it for now, but today I’d like to revisit a.i. with a little more experience after having had the wonderful opportunity to attend the Adobe Max Conference in Los Angeles.

At the Adobe Max Conference, October 2023 where I got to meet Aaron Draplin!

I say, “a little more experience” because not long after my initial blog post about a.i. in December, I followed it up in April of this year (I told you I don’t blog enough to be topical!) with a post detailing my impressions of ChatGPT and Dall–E, which very quickly became dated after both platforms made serious technological advancements. I think for most people at that time, a.i. was still a fun and funny little moment of pop culture that humorously added to the zeitgeist with things like Keaton Patti’s bananas Olive Garden commercials and Trump rallies—all created by feeding hours of those particular brands’ original content into a.i. bot programs. Maybe the undercurrent had a slightly worrisome tone about the inevitability of robot overlords, but it was still relatively light–hearted and quaint. Who could have ever imagined we’d be pining for the simpler times of 2022?

Of course, like most technological milestones, once something starts to get traction, it really takes off. Granted, a.i. has actually existed since the mid 20th century (Frank Rosenblatt’s Perceptron dates back to the late 1950s), but it really started to generate public interest in the 80s with the goal of revolutionizing computer processing. It’s directly because of this that artificial intelligence has thrived recently—not so much because tech geniuses have learned more about a.i. themselves—but rather because computer storage, memory, and speed have increased beyond what most average consumers even need. And in 2023, OpenAI released its latest Dall–E text–to–image model featuring significantly more nuance and detail, and ChatGPT became the fastest growing consumer software application; its free tier runs on the GPT–3.5 engine under a freemium model, while ChatGPT Plus offers the GPT–4 engine to users for $20 USD a month. If that’s not impressive enough, as of this post, ChatGPT itself isn’t even a full year old yet, having launched in November of 2022! In March 2023, Adobe released its generative a.i. tools in beta testing, and they soon made their way into Photoshop. Today, those a.i. tools are fully integrated into Photoshop, and Firefly a.i. is in beta for Adobe Illustrator—creating editable and functional vector illustrations—as well as Adobe Premiere Pro, which also offers beta tools for speech–to–text video editing. It is insanely easy to use and saves lots of time.

All of these images on Adobe Stock are generative a.i.

Let’s step back and address just a couple quick bullet points I made previously regarding a.i.’s learning technique. Now, initially I made a mistake and thought the process itself was called stable diffusion, but that is in fact the name of a latent diffusion model developed by a company called Stability AI. Artificial intelligence uses machine learning to develop a deep knowledge of whatever subject it’s tasked with creating. I’m cutting out a lot of context here for the sake of brevity, but imagine a robot that has the capacity to instantly read every single book on a particular subject so that it can then use that knowledge to compose its own creation based on that immediate education. It only has the information it has acquired, so it bases everything it can do on that information alone. I read hundreds of textbooks about various software programs when I was in school, but almost all of it felt useless after I actually started working and realized experience was vastly more important. As a result, most of my initial professional work looks ridiculous, much like the work of a robot that only ever learned by reading and never by doing.

Now imagine that same robot is instructed to paint a masterpiece, but in order to do that, it has to visually take in every painting currently on display in the Louvre. So it rushes around the museum and sees works from da Vinci, Géricault, Michelangelo, and Jacques-Louis David. Afterwards, you ask the robot to paint its own masterpiece, but upon completion you notice it hasn't really painted anything original; rather, it has cut and pasted elements like the Mona Lisa's smile or the Roman columns from the Oath of the Horatii. It's specifically these issues that have a lot of creative people feeling pretty upset, because the robot hasn't actually created anything, it's just stolen components from others. But then consider that the a.i.'s creators have basically told this robot to go out and learn everything from the internet, and you start getting into really troublesome territory, because if you haven't noticed, there's some pretty horrible stuff online and it's not exactly hidden either. So now, on top of being an art thief, the robot has also learned to be racist, sexist, and creepy. You know, like actual real life people.

Again, I'm really compacting a lot here to keep things from getting bogged down with technical jargon, but these are real concerns that have had companies like Adobe make serious public efforts to proactively promote responsible guidelines for generative a.i. learning and sharing, as well as protecting intellectual property and reflecting diversity in a positive way. For the most part, this corporate responsibility and good faith approach has been necessary, not just because of the reactions from creatives worldwide, but because of the accountability these types of organizations can be held to legally. Remember, theft of any kind is generally frowned upon.

So now we get to the Max conference, where the undeniable star was Adobe's generative text–to–image a.i. program, Firefly. All over the conference floor, in its classrooms and displays, and promoted heavily at each keynote session were strikingly beautiful images all created by artificial intelligence. What Firefly offered was so prominent that it sometimes felt invasive. Is it cool, and will it save designers from doing tedious things like masking, editing, and rough concepts that require super quick turnaround? Oh man, you bet your a$$ it will. But when one classroom speaker jokingly noted at the beginning of his session that it was the only conference event not promoting any new artificial intelligence tools, he was met with a roaring standing ovation.

Adobe competently showed that its a.i. deep learning methods were trained solely on its own library of photographs, illustrations, images, and graphics (pretty much its entire stock library) and that its engineers were working tirelessly to integrate guardrails so that diversity and inclusion were represented equally and respectfully. So there you go! Problem solved. You can stop worrying now about everything. Robots are kind, love is love, intellectual property theft is a thing of the past, and the system works great.

Even if all that were true, there's still a hiccup or two. Now put your personal feelings about a.i. aside for just a bit and let's pull out that pin regarding my optimistic outlook from earlier. The cold hard fact is that a.i. isn't going anywhere, and just by comparing where things were a year ago with where they are today, it's pretty obvious that what a.i. can create is only going to get more impressive, and it's going to be up to everyone to ensure it's guided properly, safely, and responsibly. Up until the Max conference, I could be heard saying, "Man, if this is where it's at now, imagine what it'll be like in ten years!" But that was already outdated thinking when I first wrote about a.i. Especially when Adobe expects the number of user generated a.i. images (already over 15 billion) to grow roughly fivefold over the next three years!

So while ethically we have a lot of work to do, I think it's also fair to say that we're still pretty early on in this saga and we're already experiencing some pretty heavy a.i. fatigue, and to that point, I'd like to redirect your attention to stock images. I love stock images, like I adore stock images, and I've written about them before (and it's a funny article I'm really proud of too), but because I almost exclusively use Adobe Stock, and because Adobe Stock is the epicenter of Adobe a.i.'s learning process, Adobe is kind of saturating the store with its own product and nothing else. Imagine going to your local grocery store, and all they sold was their brand of corn flakes. No produce, no deli, no butcher; just aisle after aisle of varying sized boxes of store brand corn flakes and nothing else. Because Adobe's generative a.i. has come so far and gotten so good, it offers generative a.i. images as stock image options. Originally you'd see one or two pop up, then they became the majority of what was offered. Now, depending on what you're looking for, a.i. generated images can be all that's available.

Real quickly, if you haven't read my post on stock image sites: they provide a designer access to photos, graphics, templates, or illustrations the designer wouldn't otherwise have time to create themselves. Creating an ad for a new coffee chain? I can search for something like, "Friends enjoying coffee together in a cafe" on a stock image site just like I might look for something in a search engine, and I will get various results that will hopefully match the look and vibe I'm going for.

So just to clarify, I don't have an issue with a.i. generated images. The quality is really good and getting better. There will be times when an a.i. generated image is much better than anything else the stock image site is offering, but it's frustrating how much of it there is. When searching for "Friends enjoying coffee together in a cafe", the language in the search itself refers to a very human experience. The generative a.i. image is good, but it's not perfect. Plus, I have a subconscious bias that (at least in this instance) using something not created by humans to represent a human experience doesn't feel like a genuine, intentional choice, which makes it harder to encourage others to buy into the design I'm making.

This image was generated by a.i. Can you tell?

I realize there's so much to unpack there. I'm using a computer to design this resource, so is it really that bad that I'm asking a computer to create an additional element for it? Will future designers be less likely to have such a bias if they grow up understanding how ubiquitous this technology is? How "human" does an image have to be to properly reflect a human experience when we're already so familiar with shorthand cues, like seeing people in such settings (photographed, illustrated, or otherwise), that the connection is established before we even think about it?

For its part, Adobe has a very clear and up front policy on its ethical standards and practices for posting, hosting, sharing, and creating generative a.i. images. Now, people will point out that a lot of that responsibility falls heavily on users respecting that system, and this, I personally believe, is at the heart of it all and what creates the endless loop of debate surrounding it. Policing how people play with this technology may be noble, but it's also antithetical to creativity in general. I'll repeat it again:

Artificial intelligence is not going anywhere and will continue to develop, but a.i. is a mirror reflecting back on the people who use it, interact with it, and engage others with it. Just like the world we live in, what we put in will be synonymous with what it gives back.

A.I. images generated by Dall–E 2 and Firefly using the prompt, “Renaissance painting of a black cat in a pink tutu.”

But what do you think? Is a.i. the cool new future that will bring the world peace and prosperity? Is this the beginning of the end? Will a.i. steal jobs from creative people as well as blue and white collar workers? Or am I just fixated on the loving memory of our cat Destiny and want a.i. to realize how special she was like my wife and I do?

Our actual (late) cat Destiny killing it better than any artist—a.i. generated or otherwise—ever could. Rest in Peace, sweet girl.

Thanks so much for stopping by, and I really would love to hear your thoughts on this. Please follow me on Instagram and let me know! Also like last time, here are more sources as well as some other great articles about a.i.:

A.I. Art: Clarification & Controversy

I wasn’t even planning on writing a blog post this week, let alone one on such a topic as art generated by artificial intelligence, but as it is part of my industry and I’ve seen so much outrage from my peers (many of whom are friends), I thought I’d do a little digging and put something a little more comprehensive together than, “Support artists! Denounce technology!”

I’m going to try and keep all of this as brief, simple, and informative as I possibly can, but I’m also going to try and approach this from a (slightly) less biased angle.

What is A.I., and is it Bad?

You don’t have to be relatively well informed to know what AI is. It covers all aspects of our lives from taking care of minor tasks we don’t even think about to the stuff of science fiction nightmares requiring Will Smith to show us that even machines can—and in fact do—love. 

Recently I finished reading Yuval Noah Harari’s Homo Deus: A Brief History of Tomorrow, a follow up to his bestseller Sapiens: A Brief History of Humankind. Harari writes a lot about AI and its benefits. Without going into too much detail and wasting time here, the three big takeaways for now are that:

  • AI is actually a really good thing that can drastically help with all sorts of things to improve life everywhere for everyone 

  • Fundamentally, human life, as we continue to understand how it works, is not that different from how AI learns, adapts, and grows

  • It’s becoming clearer that the creative feats people believed AI could never reproduce aren’t far off, and in some areas, like classical music, AI is actually surpassing humans in quality, structure, and beauty

I know most people will not believe any of what I have just written, and that’s completely fine. I would never insist you take my word (or anyone else’s) on anything as gospel at–a–glance. I would strongly encourage you to do your own research though. The point is, however, that whether we like it or not, the world is going to continue to change, it always has, and it’s certainly not going to stop because a few of us don’t like the idea of being replaced by anyone or anything. Automation has been changing how we do our jobs and live our lives for centuries, and the whole process has continued to increase at breakneck speed, especially since the 1980s. In other words, AI is most definitely here to stay, and it’s probably better we figure out how to change with it, rather than stubbornly anchor ourselves against it until the next generation sees us as living fossils who refuse to accept change.

But we’re not here for a lecture on science or philosophy from some Muppet–loving Jersey boy whose blog readership doesn’t extend far past his own family and close friends. So let’s instead talk about AI generated art, specifically the latest fad known as Lensa or “Magic Avatars,” because there’s a lot to unpack and it gets complicated.

Avatar Insanity or High Art?

Remember Bitmojis? I hated Bitmojis when they first debuted. Not because of how well or poorly they’re drawn (depending on your tastes), but because I didn’t come up with the idea first. For those not familiar, Bitmojis aren’t too dissimilar from modern day avatars you create like Meta’s more CGI looking Facebook avatars or Apple’s Memojis. My argument at the time was also that they took away opportunities from artists like myself to create illustrations for profit. On the flip side, they give people who do not possess the skill to draw an opportunity to express themselves creatively quite literally. 

Bitmoji, Facebook, and Memoji avatars of yours truly, and one I drew myself (2017)

The point is that there will always be new technology that lets users engage with and experience something they would otherwise have to commission an artist for. I don’t know a single person who has ever created a digital avatar using some type of technology and then claimed that their “artwork” was anything other than a fun opportunity to represent themselves online or even just to “jump on the bandwagon.”

Now I recognize this is a slippery slope that could lead to something more nuanced down the road, but for the time being, let’s all try and remember a social media avatar is not the same as a portrait or caricature that you would give as a gift, hang in your living room, or rock on the side of your 1988 conversion van as you drive to your next gig.

AI Art Stink

The main focus of this article and the commotion it’s caused is a program called Lensa by Prisma AI. At a glance, Lensa is a pretty standard photo and video app that lets you take and edit media with a variety of different features. Its most popular features, of course, are the filters that “turn your photos into works of art in the style of famous artists” and transform them “with popular art styles - anime, cartoons, sketches, watercolors...” all by using artificial intelligence. The app is free but offers premium monthly or annual subscriptions.

Okay, so nothing too new there. I am very far from having my finger on the pulse of any social media platform, but even I’m familiar with these types of things enough to know there are tons of them. I’ve even used some of them before myself, so why is this app getting artists in particular so upset?

The Lensa Learning Problem

Not too long ago (like literally just several months ago) Dall–E 2 debuted to slightly different fanfare. Dall–E 2 is also an AI art generator, but its absolutely ludicrous creations seemed more comical than threatening (for the most part). Hence, many people looked at Dall–E’s attempts as technological proof that a computer could never imitate the skill of an experienced artist. Or, more optimistically, that it’s still a ways off.

Lensa makes use of Apple’s TrueDepth API, which most iPhone users know as the same technology that allows them to unlock their phones just by looking at them. Dall–E 2, for its part, learned how to create images from text prompts by studying enormous amounts of existing imagery. The AI model behind Lensa’s avatars is known as Stable Diffusion.

This is tricky, but basically when AI uses Stable Diffusion, it’s not just learning to recognize features and characteristics of someone’s art; it’s essentially manipulating and reproducing elements from that art. So the issue is that artists are accusing Lensa specifically of building its creations from existing art without permission from the artists it’s emulating. Now, it’s hard to find sources to corroborate some of the allegations I’ve read, but many artists have actively accused Prisma AI of stealing their art, repeatedly requesting that the company stop, with Prisma AI allegedly refusing and even cyberbullying them about it. There is compelling evidence lending credibility to these claims: remnants of the artists’ signatures from original works are sometimes still visible in the AI generated art.
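To give a sense of just how low the barrier to entry is, here’s a minimal, illustrative sketch of generating an image with the publicly available Stable Diffusion model through Hugging Face’s open-source diffusers library. To be clear, this is not Lensa’s actual code; the model ID, prompt, and file name are just example placeholders, and it only shows the general kind of pipeline these apps are built on.

```python
# Minimal sketch (not Lensa's code): text-to-image with Stable Diffusion
# using the open-source diffusers library. Model ID, prompt, and file name
# are illustrative placeholders.
import torch
from diffusers import StableDiffusionPipeline

# Download pretrained Stable Diffusion weights (example model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a GPU turns minutes of generation into seconds

# Any text prompt works; this one is just an example.
prompt = "Renaissance painting of a black cat in a pink tutu"
image = pipe(prompt).images[0]  # returns a standard PIL image

image.save("cat_masterpiece.png")
```

A handful of lines like these are all it takes to produce a finished image, which is exactly why the question of what the model was trained on matters so much.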

Darker Secrets

Okay, maybe I’m coming across a little too unbiased by praising what AI could (or hopefully should) be even though I’ve stated in the past how infuriating and hurtful art theft can be. So let’s look at how one of the more nefarious problems with Lensa isn’t even allegations of art theft, but how it depicts your pictures when they become “art.”

A big problem with AI in general is that it learns its lessons from its creators, and even modern day AI—which is touted as pure—has repeatedly displayed racism, nepotism, and sexism. Lensa has lightened the skin tones of people of color, struggled with (re)producing Asian features, and sexualized women and children. Other AI art generators have done similarly unwarranted things, like taking on macabre tones when “crossbreeding” images that did not previously convey violence.

I want to be clear: these are not alarmist warnings that AI will rise up and destroy humanity. Artificial intelligence reflects the zeitgeist, which includes everything from cute and fun to morally questionable. In other words, whether it’s an art–stealing bot, a perverted algorithm, or even an altruistic ghost in the machine, it’s all taking its cue from us.

Now What?

So where do we go from here? As artists, we tend to react more emotionally because, you know, suffering is kind of “our thing” (until artificial intelligence corners the market on angst too). But like I mentioned before, technology is going to keep moving forward regardless of how we feel about it, and that’s not necessarily a bad thing. A knee–jerk reaction is to call for regulation, but that’s way easier said than done for two big reasons.

While there have been initiatives in Congress to try and moderate how things like AI continue to develop, bureaucracy—love it or loathe it—purposely moves slowly to make sure it’s covering all its bases properly (and that is an exceptionally gracious and arguably naive platitude). On top of that, it’s an antiquated system that’s literally hundreds of years old. Technology moves ludicrously faster, meaning that by the time well intentioned and thought out legislation finally passes, even in the best of circumstances, the applied science behind that technology is usually obsolete, and any government progress was all for nothing.

The second problem is that government officials aren’t exactly young entrepreneurs who understand the technology they’re hoping to regulate, prioritize, or control. So when you ask older people using an even older system to help answer these questions, you eventually have to wonder if the whole process wouldn’t be better served by the very AI you want them to regulate in the first place! Understand, though, that this is not an endorsement to willfully hand the keys over to tech bros like Elon Musk or Sam Bankman–Fried. It’s pretty clear that being rich does not equal being responsible… or smart… or ethical… or competent… or sane.

So if creative people only know how to get upset over it, and our leaders only know how to politicize it, as usual, it all comes down to you, the user. AI really can do incredible things, but moving forward, it’s up to us to decide how we’ll use it. 

And this is something everyone really needs to learn how to do better. Consider whether that neat new AI avatar is worth the likes versus its privacy policy. Yeah, that’s a whole other kettle of fish, because one thing practically no one understands is what kind of personal information you allow software developers access to when you agree to their terms of service.

Like almost every app, Lensa uses legalese to ensure you maintain the rights to your photos, but then vaguely explains that it has the right to use those photos to independently aid in research, development, and the improvement of new and existing products.

This is not a dystopian outlook from a conspiracy theorist either. Your personal data is way more important to all of these developers than what kind of review you leave them on the App Store. A great rule of thumb when you download an app is its cost. If the app is free, then you’re the product that’s for sale.

Sources

I did a fair amount of research for this post, so if you’d like to read a bit more in depth on all of this, please check out these articles:

Lensa AI app: What to know about the self portrait generator by Meera Navlakha
Mashable

Understanding the impact of automation on workers, jobs, and wages by Harry J. Holzer
Brookings

Prisma is coming to Android, but there's a way to get it sooner by Stan Schroeder
Mashable

Careful — Lensa is Using Your Photos to Train Their AI by Shanti Escalante-De Mattei
ARTnews

Lensa, the AI portrait app, has soared in popularity. But many artists question the ethics of AI art by Morgan Sung
NBC News

‘Magic Avatar’ App Lensa Generated Nudes From My Childhood Photos by Olivia Snow
Wired

Stable Diffusion
Wikipedia

DALL–E
Wikipedia

If you’d like to track what some artists are saying, a lot of the insight on Lensa’s theft I read came from Jon Lam on Instagram. He credited Lauryn Ipsum for the discovery of remnants of artists’ signatures on AI creations. Karla Ortiz is helping lead the fight for artists’ rights.

Incompatible Clients

Today I want to write about something that may sound like it’s just complaining about people, but that’s 100% not the case. Well, maybe 80% not the case, but I’ll elaborate more as we go along. As the title suggests, this is about incompatibility. Like every other relationship, there are just some instances where two parties aren’t meant for each other. This is a very hard pill to swallow when it comes to a client and server/service relationship because of one key difference: an agreement based on work for pay.

Okay, maybe 70% not the case in regards to complaining.

That agreement isn’t prevalent in any other relationship, but when it comes to a client and server/service based relationship, it’s foundational. The issue is that from every other angle, it resembles so many aspects of traditional relationships. Whether it’s familial, friendship, coworker, romantic, online, or even temporary; a client and server/service relationship hits a lot of the same bullet points. At the core, its strongest relational bond is that of collaboration. When you work with someone closely on anything, you can’t help but learn more about who they are and what makes them tick. You may find you really like each other too, but nothing is getting accomplished. The client doesn’t seem to like any of the server/service’s (Good grief, I’m just going to call the server/service a “designer” from here on out) concepts, the designer doesn’t seem to understand the client’s requests or critiques, and before you know it, there’s a general consensus that everyone is just wasting each other’s time.

In my experience, there are four key reasons for these failings. This is going to seem very biased, but usually the problem is the client (Crap, maybe we’re at 60% not the case that this is just complaining). This isn’t a character flaw, but let’s dive deeper.

1. The Client Doesn’t Actually Know What They Want

Shots fired! Of course it’s their fault! How could it ever be that of the perfect designer (or illustrator… man, I’m already confounding the language again—we’re really flying by the seat of our pants here!). Okay, hear me out: a client traditionally hires a creative person to accomplish something they cannot. It could be because the client doesn’t have the tools or talents to do so or because they don’t know what they should even be going for. A healthy relationship then builds from collaboration to create something. Sometimes that lack of knowledge or understanding can be too much of a hurdle and the client realizes they may need to do a lot more soul searching to figure out what it is they need.

Now this is rare; as I’ve stated in the past, many times the client legitimately trusts the person they have hired. It can become clear, and possibly even a little overwhelming, just how much thought goes (or should go) into things like personal branding or creating something that is attached so closely to the client.

2. The Client Is Too Passionate About Their Idea

I considered making this a subcategory of the first reason, but I have personal experience that’s unique. I was commissioned by an absolutely lovely human being who had a very clear vision for what they wanted. There was a lot of passion behind this project; real honest–to–goodness love that wasn’t just important to the direction, but the overall feel as well. If I were obtuse, I’d just brush it off as emotional baggage as the project was very much a tribute to people who are no longer alive. The truth was that this client had a lot invested in the project and had a difficult time disconnecting some of those emotions from the people they were now celebrating.

Maybe you can’t be too passionate about anything, and the subtitle for this section is a little callous (I swear this post is 58% not about complaining about people). The point is that the world will continue to move on—even from the worst of tragedies—and that’s always going to be a lot harder for some people than for others. This particular client really is a wonderful person, and we parted ways amicably. Just as I ended the last reason, when this happens, the client may need to do a lot more soul searching to figure out what it is they need, and it may not be something a creative person can fix.

3. The Client Is Insane

54% not the case that this post is complaining about people. Of course, while every human being has horror stories in regards to dealing with anyone, I have been very fortunate that I’ve never had a truly bad client. However, my friend and fellow creative Scott Modrzynski has. Here is his story.

There was a sobriety house in Los Angeles that I got in contact with through a mutual. The owner was looking for a logo, and I was more than happy to help. I don't remember what we settled on, price-wise, but it was inexpensive because 

A) I'm terrible-to-me at pricing, and
B) I have the additional guilt of charging people money for doing the lord's work. 

The initial concept, as it was conveyed to me, was vague, so I came up with something based on the operation's initials. The owner thought my usage of negative space to create letter forms was cool, but not his style. No problem. He sent me some examples of something he liked. It was sleek, classy, and minimalist. I mocked up some examples, and they weren't working for him. He liked Viking runes. I came up with something gruffer that had a Nordic tilt to it. He thought it was great, but was hoping for something with a SoCal vibe. Throughout all our email exchanges, and occasional phone calls, there was a lot of flowery, bullshit language from his end that made no sense, and completely divergent ideas that seemed utterly incongruous to our previous contact. At that point, I realized what a piece of shit designer I am, because I don't know how to make a minimalist, Viking-inspired, SoCal logo that dives into the beating heart of my own soul. I gave it a final go, and he said he'd get back to me after the weekend. That was at least five years ago.

It was my first experience with a nightmare client, and made me appreciate my day job, since I wasn't really in a position where I needed to chase down these side gigs for any reason other than making some extra coin.

4. The Designer is Overly Ambitious

Every so often, during a total solar eclipse, while a volcano is erupting on your birthday, as you and your ambidextrous twin sibling ride a two–headed vaquita porpoise on your way to pick up a winning lottery ticket in the Namib desert, a creative person will bite off more than they can chew. It’s easy for an inexperienced designer to insist that everything they do needs to go in the portfolio and to go out of their way to convince the client that, because of their creative experience (however extensive or limited it may be), they simply know best. That is to say, it’s possible, if improbable, that it’s the designer’s inability to communicate or properly deliver what’s been discussed and promised.

Such instances are rare enough to require verification by a federal judge, and even then, most cases would prove it was actually the client who was wrong.

Okay, this post is officially drawing a line in the sand insisting that it is 52% not the case that it is complaining about people.

Yes, But What Should You Do?

Okay, let’s talk practically about solving these problems. Patience is key. Word of mouth about incompatibility is going to spread faster than any good work that you do, so remember: the client is paying you, and not everything has to go in the portfolio! Bad design is everywhere, and the real shock is that good designers continually add to it, because some clients just don’t care about the golden ratio, proper kerning, the fact that a caricature is meant to exaggerate certain features, or that what you offer is actually a niche service that can’t be obtained at Target or created by AI (yet). Sometimes the best way through is to accept all directions regardless of how counter–intuitive they actually are to good taste, get paid, and forget about them, making a mental note to always be on vacation should they return for repeat business. Most importantly, separate your personal feelings from your work. I know I sound like a broken record, but not every piece has to go in the portfolio or on social media. Keep your emotions out of your responses, and if you are feeling particularly revved up, make sure to burn off those heavy feels before contacting a client. I have seriously over–soured one or two bad business relationships because I didn’t walk off some anger and frustration first.

Inevitably, there may come a tipping point where it’s clear that a particular relationship just isn’t going to work. Be honest with your client. It’s always good to know a bunch of other creatives in your field that you can recommend in place of yourself. Obviously make sure you give your buddies a heads up first, but having alternatives and providing other solutions really helps you out here. I always strive to never say, “no” to a client. That doesn’t mean I roll over and let someone take advantage of things, but saying, “You know what we can do…” shows you respect their opinion and that you’re listening as well. Solve the problem before acknowledging there actually is one.

If money has already been exchanged, there are a number of variables that could determine if anything is returned or still owed, so there’s no definitive answer you’ll find here. However, Scott’s nightmare is reason enough to consider applying the formula of deciding how much of a loss you may be willing to take just to bail. There may be zero chance of salvaging the relationship, but see what you can endure to make sure your reputation doesn’t take much shrapnel.

Don’t write off the client’s frustration either. They may not be able to communicate their ideas well at all, and for a non–creative person that can be difficult. It bears repeating to make sure your head is out of the fire when responding, so that if there is a strong hate–hate relationship, you’re not the hothead. Openly admit you’re not the right fit for the job and that you don’t want to waste the client’s time, even if they’ve wasted yours. Remember, you want out of this, so take it on the chin and never look back. If the client is a real problem, make sure you warn all your creative peers on the down low.

If all else fails, start a blog and write about how you’re totally not complaining about people and vent your problems there!

A big thanks to Scott Modrzynski for taking the time to share his insights today! The dude is a very talented artist and designer who I’ve had the privilege of working with on three different collaborative projects.

Just some of Scott’s incredible work!

Check him out on Instagram and Twitter, and also take a peek at some of his really cool stuff like this Batman Typography or his Cereal Freaks. You can also follow me on Instagram and Twitter, and come back here on Fridays for more creative thinking!