Intelligence with Feelings
The first time I created an image using artificial intelligence was before the era of Midjourney and ChatGPT. Back then, this technology was in its infancy (and it showed). The results were so visually poor that I could barely understand what I was looking at. The colors bled into each other, forming a graphic mess that looked like an accident. I remember being amazed at how it evolved—the ability to generate any visual from just a handful of words. At first, I was scared. I worried it would take away my job. But over time, I learned how to use it, embraced the new possibilities it introduced into my life, and asked myself: where will this go next?
At the time, I was a senior-year design student, constantly generating images. My final project revolved around the future of design alongside generative AI. I explored its limits and pushed its boundaries. Like any new language, I had to learn to communicate with the machine—how to distill and refine my words to get the exact image I wanted.
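The tools I used back then predate today's polished apps, but the iteration loop looks the same in any interface. As a rough sketch of what "distilling your words" means in practice, here is what prompt refinement might look like with the open-source diffusers library; the model name and prompts are illustrative, not the ones I actually used.

```python
# A sketch of prompt iteration with Hugging Face's `diffusers` library.
# The model and prompts are illustrative examples, not my original tools.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Each attempt distills the previous one: more concrete nouns,
# an explicit medium, and a composition cue.
prompts = [
    "a poster",                                      # too vague: a muddy mess
    "a minimalist concert poster, bold typography",  # closer, but generic
    "a minimalist concert poster, bold red typography on off-white, "
    "Swiss design, high contrast",                   # specific enough to work
]
for i, prompt in enumerate(prompts):
    image = pipe(prompt, guidance_scale=7.5).images[0]
    image.save(f"attempt_{i}.png")
```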
It was exciting. The deeper our "relationship" grew, the more I understood it. It gave me tools I couldn't have acquired anywhere else at that speed. But despite all its advantages, it felt too technical, too detached. It didn’t truly understand me, and I struggled to create what I was looking for.
I felt I needed to balance the technology with authenticity and emotion. So, I went as far from algorithms as possible: photography. I bought a camera, took courses, and started photographing the people around me. The style I connected with most was candid photography—capturing moments without people noticing me. This approach helped me create authentic images that captured real emotions and genuine moments. You could truly experience the event through them. I discovered that holding a camera allowed me to get closer to people; they let me in, sharing intimate, raw, and emotional parts of their lives.
Suddenly, I had two very different design tools: instant, technical Gen-AI, and the emotional, human touch of photography.
Today, it feels like we are in a transitional phase with the latest "wow" technology—Artificial Intelligence. We're caught between excessive enthusiasm and the "where is this going?" stage—a gray area that stretches the boundaries of ethics and truth.
Today, everyone is using AI. You can find endless examples of AI-generated content. Since this AI tsunami hit, my feed has been flooded with pizzerias that, instead of photographing their actual food, try to sell me some five-minute-generated food mutant.
I don’t hold it against small businesses. I get why they’d opt for a free, instant tool over high-quality work. I’m talking about global companies—brands with creative teams that have the resources and the ability to make conscious decisions for each campaign. Some have adopted AI as a tool and produced incredible results, while others let it do the work for them. A great example of this is the recent Toys"R"Us ad.
I have a lot to say about this ad, but I’ll focus on two major points. Anyone who’s ever touched AI can immediately tell that most of the commercial was AI-generated. The glitches—one of AI’s most recognizable quirks—are impossible to ignore. If you look closely, you’ll spot countless visual errors—from garbled lettering on signs to cars and bicycles with bizarre anatomy. But what stood out to me the most were the toys themselves. Even though they attempted to include real Toys"R"Us products, it’s clear that most of the toys in the ad—the very soul of the brand—are, at best, unrealistic and, at worst, complete visual gibberish.
The ad tries to recreate a very human moment—a special little boy named Charles dreaming of opening a toy company. But "Charles" gets replaced by (not-so-successful) duplicates at least three times due to AI’s inability to maintain a consistent character. The frames that should have been emotional moments feel hollow and fake. (Casting a real child, with natural facial expressions and body language, would have made the emotional connection so much stronger.)
The real problem here is that the ad leans on technique instead of concept. It feels like Toys"R"Us wanted to jump on the AI trend but failed to use it in the right proportions. Instead of controlling it and using it to enhance the raw material they created, they let it take the lead—disconnecting the viewer emotionally. In contrast, Coca-Cola created a brilliant AI-driven ad.
The ad revolves around a concept where a Coca-Cola bottle travels through artworks in a museum, changing its style to match the art piece it appears in—some modern, some created using traditional techniques—until it reaches an art student who "draws inspiration" from the bottle.
As simple as it may sound, this concept successfully ties all the elements together and connects us, as viewers, to the product.
But the highlight of this campaign isn’t just the concept—it’s the execution. They seamlessly combined various technological tools, including Stable Diffusion, which allowed them to train a model on Coca-Cola’s aesthetics and swap styles accordingly. Alongside generative AI, there was a full production team—actors, makeup artists, photographers, video editors, and 3D artists—who ensured the ad looked as polished as it did.
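Coca-Cola hasn't published the exact pipeline, so the following is only a sketch of the style-swapping idea: it assumes a Stable Diffusion checkpoint fine-tuned on the brand's aesthetics and uses diffusers' image-to-image pipeline to restyle a live-action frame. The checkpoint name and file paths are hypothetical placeholders.

```python
# A sketch of the style-swapping idea, not the campaign's actual pipeline.
# "brand-tuned-sd" stands in for a checkpoint fine-tuned on the brand's
# aesthetics (e.g., via DreamBooth); file names are hypothetical.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "brand-tuned-sd", torch_dtype=torch.float16
).to("cuda")

# A live-action frame of the bottle, shot by the production team.
frame = Image.open("bottle_frame.png").convert("RGB").resize((768, 512))

# Restyle the frame to match the artwork it "enters". `strength`
# controls how far the output may drift from the original frame,
# which is how the bottle stays recognizable across styles.
styled = pipe(
    prompt="a glass soda bottle painted in impressionist oils, museum artwork",
    image=frame,
    strength=0.6,
    guidance_scale=7.5,
).images[0]
styled.save("bottle_styled.png")
```

Keeping the strength low enough that the photographed bottle survives the restyling is, plausibly, where the human production work and the generative step meet.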
This ad is a perfect example of a company asking itself, "What experience do we want to create, and how do we achieve it?"
I believe every design tool, whether modern or traditional, has its place. As creatives, we need to ask ourselves: What’s the right way to use these tools? How do we balance technology with emotion? And how do we stay authentic, even when using AI?
AI can be an incredible tool for inspiration or generating visuals. But at the end of the day, we must remember that we’re designing for people. Our goal is to reach them and evoke emotion—no matter what tools we use.
[Image: The first AI image I created]
[Image: A frame from the Toys"R"Us commercial]