The Future of DCO

Gen-AI is making traditional creative effectiveness testing obsolete.

What makes an advertising image effective? It’s a question marketers have wrestled with since the industry began. Understanding and producing content that engages, amuses, moves and inspires an individual to action is at the heart of the advertising industry. These days, the creation of creative assets is informed by myriad data inputs, such as survey data, trends analysis, focus group responses, competitor analysis and, indeed, a fair amount of subjective opinion from contemporary taste-makers.

Creativity is limitless, but unfortunately production and media budgets are not. This means that even with plenty of data inputs and highly talented creative teams involved in ideation, marketers still need to test different creative assets to ensure ideas will work in practice. Much of this testing happens downstream as part of media strategy, where planners select the ads they think will best drive the outcomes they are tasked with, whether through A/B testing or Dynamic Creative Optimization (DCO).

Generative AI is bringing exponentially more data into the measurement of creative effectiveness than ever before, and more data means more (and better) insight. In this article I will explain how this revolutionary new approach is a logical evolution of existing creative effectiveness solutions, charting the journey from A/B testing to DCO to generative AI-enabled solutions like SmartAssets.

A/B Testing

A/B testing, also known as ‘split testing’, is a controlled experiment where two variations of an advertising creative are presented to users simultaneously. The objective is to determine which version performs better in terms of achieving specific goals, such as higher click-through rates (CTR), more sign-ups, or increased sales.
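To make the mechanics concrete, below is a minimal sketch of how the winner of an A/B test might be judged, using a standard two-proportion z-test on click-through rates. The variant numbers are purely illustrative.

```python
# Minimal sketch: judging an A/B test on click-through rate with a
# two-proportion z-test. The click and impression counts are illustrative.
from math import sqrt
from statistics import NormalDist

def ab_test_ctr(clicks_a, impressions_a, clicks_b, impressions_b):
    """Return CTRs, the z-score and the two-sided p-value for the difference."""
    ctr_a = clicks_a / impressions_a
    ctr_b = clicks_b / impressions_b
    pooled = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
    z = (ctr_a - ctr_b) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return ctr_a, ctr_b, z, p

# Example: variant A gets 120 clicks on 10,000 impressions,
# variant B gets 150 clicks on 10,000 impressions.
ctr_a, ctr_b, z, p = ab_test_ctr(120, 10_000, 150, 10_000)
print(f"CTR A={ctr_a:.2%}, CTR B={ctr_b:.2%}, z={z:.2f}, p={p:.3f}")
```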

A/B testing allows marketers to test hypotheses about what makes a creative asset more effective, by identifying which of two differing assets drives more engagement. However, the result is binary: A worked better than B, or B worked better than A. It also doesn’t tell you what specifically in the asset drove the improved performance. Was it the size of the product image? The coloring? The setting? The model? The logo? The layout?

In addition, there are two major constraints on implementing A/B testing at scale: the manual, time-consuming effort of creating the versions, and the effort of analyzing the results to extract meaningful insights beyond the two assets in the test.

As ever, constraints drive innovation, and so two developments emerged that caused A/B testing to evolve into Dynamic Creative Optimization (DCO): low-cost versioning and machine learning.

Low Cost Versioning

A number of emerging economies have built strong talent pools capable of taking a master toolkit of creative assets and reworking it to create all the versions a client needs. Because most of this work is digital, many large advertisers use a complex international supply chain of production resources to create more versions at a lower cost. Likewise, a number of automated versioning tools have been developed, in which software takes the master creative assets and, using templating, generates many more versions, which sometimes still need post-editing.
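As a rough illustration of how template-based versioning multiplies output, the sketch below combines a handful of interchangeable fields into a full set of versions. The fields and values are assumptions made for the example, not any particular tool’s schema.

```python
# Illustrative sketch of template-based versioning: every combination of a
# few interchangeable fields becomes a distinct version of the master asset.
from itertools import product

headlines = ["Summer sale", "New arrivals", "Last chance"]
colourways = ["red", "blue", "green"]
ctas = ["Shop now", "Learn more"]

versions = [
    {"headline": h, "colourway": c, "cta": cta}
    for h, c, cta in product(headlines, colourways, ctas)
]

print(f"{len(versions)} versions generated from one master toolkit")
print(versions[0])
```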

Both of these approaches have reduced the cost of versioning, making more assets available to feed creative effectiveness testing.

Machine Learning

Machine learning refers to the application of artificial intelligence (AI) techniques that enable computers to analyze and learn from data to make data-driven decisions and predictions. It involves using algorithms to uncover patterns, trends, and insights from large datasets, which can then be used to optimize marketing strategies and campaigns. 

Dynamic Creative Optimization

Built on machine learning technology and enabled by low-cost versioning, DCO is a technology-driven approach to advertising that enables marketers to create highly customized and relevant ads for each viewer. It goes beyond traditional static advertising by dynamically assembling ad components based on user data, behavior, context, and other real-time variables.
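As a rough illustration of that dynamic assembly (a generic sketch, not any vendor’s actual logic), the example below picks ad components from a user’s real-time context; the context fields and rules are invented for the example.

```python
# Generic sketch of dynamic ad assembly: choosing components per impression
# from the viewer's context. The rules and fields are purely illustrative.
def assemble_ad(context: dict) -> dict:
    """Choose ad components for one impression based on user context."""
    background = "gym" if context.get("interest") == "fitness" else "outdoors"
    headline = (
        "Evening deals end soon" if context.get("hour", 12) >= 18
        else "Start your day right"
    )
    return {
        "background": background,
        "headline": headline,
        "language": context.get("language", "en"),
    }

# Example impression: a fitness-interested user browsing in the evening.
print(assemble_ad({"interest": "fitness", "hour": 20, "language": "en"}))
```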

DCO is an evolution of A/B testing in which more versions can be tested, and the resulting findings are immediately and automatically acted upon, ensuring that any assets in the public domain are as effective as they can be in that moment. As such it is more efficient and scalable than A/B testing. As the leading source of creative effectiveness intelligence and optimization (until now), the DCO market was valued at US$878 million in 2022.

However, DCO also has constraints in terms of scalability and efficiency. Multiple versions of an ad are created and put live. The model then looks at the performance of each asset and identifies, based on a set of marketing parameters, which is performing best. It then prioritizes serving the highest-performing version, effectively throwing all the other versions away. This is highly inefficient, both in terms of the effort to create the “wasted” versions and in terms of the media budget spent on suboptimal versions while the model worked out the best one.
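To picture that serving behaviour, here is a simplified, generic sketch of an epsilon-greedy serving loop that shifts impressions toward the best-performing version while a small share of traffic keeps exploring the rest. The version names and click-through rates are made up, and real DCO platforms use their own, more sophisticated models.

```python
# Simplified, generic sketch of a DCO-style serving loop (epsilon-greedy):
# most impressions go to the best-performing version so far, while a small
# share keeps exploring the alternatives. All numbers are illustrative.
import random

versions = ["v1", "v2", "v3", "v4"]
impressions = {v: 0 for v in versions}
clicks = {v: 0 for v in versions}
true_ctr = {"v1": 0.010, "v2": 0.014, "v3": 0.009, "v4": 0.012}  # unknown to the optimiser

EPSILON = 0.1  # fraction of traffic reserved for exploration

def choose_version():
    if random.random() < EPSILON or all(n == 0 for n in impressions.values()):
        return random.choice(versions)  # explore
    return max(versions, key=lambda v: clicks[v] / max(impressions[v], 1))  # exploit

for _ in range(100_000):  # simulated impressions
    v = choose_version()
    impressions[v] += 1
    if random.random() < true_ctr[v]:
        clicks[v] += 1

for v in versions:
    print(v, impressions[v], f"{clicks[v] / max(impressions[v], 1):.3%}")
```

Notice how most of the simulated budget ends up behind the single strongest version; the impressions spent finding it are the waste described above.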

Secondly, DCO disregards historical data. It only looks at how an ad is performing now; the model does not take into account the information gathered from previous campaigns. Since data is key to AI and machine learning, this is a big gap that could reveal far more insight into creative effectiveness.

Lastly, DCO, like A/B testing, still operates at the creative version level. That is to say, it can tell which of a number of creatives performs best, but it doesn’t look at why. It can’t say what in that asset drove engagement. Was it that the product image was bigger, that the logo was on it, that the model was smiling, or that the colourway was heavy on red? The reason it has not looked at the creative components of an asset is the huge manual effort required to tag ads so the model knows what is in them at a component level.

Effective Creative with Generative AI

Getting insight into what makes an ad creative effective can now be done at a deeper level than ever before, bringing more data to the table. Using image recognition, it is possible to tag assets at the creative component level automatically, without human intervention and at speed. In the example below, the creative elements in the image are tagged.

[Image: a woman in a gym, with creative tags describing each creative element alongside metrics such as platform guidelines and emotional response.]
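As a rough sketch of what component-level tagging output might look like, the example below hard-codes the detections an image-recognition model would normally produce. The tag schema is illustrative, not SmartAssets’ actual data model.

```python
# Sketch of component-level tagging output. The detection step is stubbed
# with hard-coded values to show the shape of the data; in practice these
# would come from an image-recognition model.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CreativeTag:
    element: str                    # e.g. "logo", "product image", "model"
    attributes: Dict[str, object]   # e.g. {"position": "top-right"}

@dataclass
class TaggedAsset:
    asset_id: str
    tags: List[CreativeTag] = field(default_factory=list)

def tag_creative(asset_id: str) -> TaggedAsset:
    """Stand-in for an image-recognition pass over one creative asset."""
    detections = [
        ("model", {"expression": "smiling", "setting": "gym"}),
        ("product image", {"relative_size": 0.18, "position": "centre"}),
        ("logo", {"present": True, "position": "top-right"}),
        ("colourway", {"dominant": "red"}),
    ]
    return TaggedAsset(asset_id, [CreativeTag(e, a) for e, a in detections])

print(tag_creative("asset_001"))
```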

This can be done for all assets (static and video) on any given platform, both current AND historical. Feed this into a large language model (LLM) like ChatGPT, and suddenly you can search and query creative assets at a more granular level than ever before, using natural language as shown below.
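As a hedged illustration of the idea, the sketch below packages tagged asset data into a natural-language prompt. The prompt format and data are assumptions, and the actual model call, which depends on the LLM provider, is omitted.

```python
# Sketch of natural-language querying over tagged creative data: the tags
# are serialised into a prompt that an LLM such as ChatGPT could answer.
# The data and prompt wording are illustrative only.
import json

tagged_assets = [
    {"asset_id": "asset_001", "tags": {"logo": "top-right", "colourway": "red", "model": "smiling"}},
    {"asset_id": "asset_002", "tags": {"logo": "absent", "colourway": "blue", "model": "neutral"}},
]

question = "Which assets show a smiling model and include the logo?"

prompt = (
    "You are given creative assets described by component-level tags.\n"
    f"Assets:\n{json.dumps(tagged_assets, indent=2)}\n\n"
    f"Question: {question}\n"
    "Answer by listing the matching asset_ids and the tags that justify them."
)

print(prompt)  # this prompt would then be sent to the LLM of choice
```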

Using this new creative component dataset, cross-referenced with media performance data, we can extract meaningful trends and insights about what exactly in an asset is driving engagement. Other data sources can be cross-referenced with the creative component data in the same way to draw out insights at scale: for example, with the integration of digital media platform data, it is possible to find all the assets that had a high ROAS.
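Here is a minimal sketch of that kind of cross-referencing, joining illustrative component tags with media performance data to see what high-ROAS assets have in common. The column names, values and ROAS threshold are all assumptions made for the example.

```python
# Sketch: join component tags with media performance data and look at which
# component values recur among high-ROAS assets. All data is illustrative.
import pandas as pd

components = pd.DataFrame({
    "asset_id": ["a1", "a1", "a2", "a2", "a3"],
    "element":  ["logo", "colourway", "logo", "colourway", "colourway"],
    "value":    ["top-right", "red", "absent", "blue", "blue"],
})

performance = pd.DataFrame({
    "asset_id": ["a1", "a2", "a3"],
    "roas":     [1.8, 4.2, 3.9],
})

merged = components.merge(performance, on="asset_id")
high_roas = merged[merged["roas"] >= 3.0]  # assumed threshold for "high ROAS"

# Which component values appear most often among high-ROAS assets?
print(high_roas.groupby(["element", "value"])["asset_id"].nunique())
```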

And what’s more, these learnings can be used to evaluate new assets BEFORE they go live. With this pre-flight information, brands can create assets they know are going to be effective, cut down on excess production cost that would have been spent on ineffective versions, and save the media spend that would have been wasted in a DCO test. So for example if we know that assets with a red colourway have historically underperformed other colors, then better decisions can be made about which colors should be used. Furthermore, hypotheses can be tested before spending media, by comparing new assets to historical data, bringing insights that can be fed earlier into the creative process.
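As a simple illustration of such a pre-flight check, the sketch below scores a new asset’s components against assumed historical lift figures; the numbers and tags are invented for the example and stand in for whatever model a platform actually uses.

```python
# Sketch of a pre-flight check: score a new asset's components against
# historical lift observed for each component value. Figures are invented.
historical_lift = {
    ("colourway", "red"): -0.12,   # red has historically underperformed
    ("colourway", "blue"): 0.08,
    ("logo", "present"): 0.05,
    ("model", "smiling"): 0.10,
}

new_asset_tags = [("colourway", "red"), ("logo", "present"), ("model", "smiling")]

score = sum(historical_lift.get(tag, 0.0) for tag in new_asset_tags)
print(f"Predicted relative lift vs. baseline: {score:+.0%}")

if ("colourway", "red") in new_asset_tags:
    print("Recommendation: consider a different colourway before going live.")
```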

What’s Next?

SmartAssets generates creative component data and derives creative effectiveness insights by cross-referencing it with performance data. These powerful insights then become the basis for creative recommendations: add the logo here, increase the product size there, change the background to an outdoor setting, and so on.

Historically, making any change to a creative asset has required multiple rounds of back-and-forth with a production team. With the advances in generative AI, it is now possible to make many of the recommended changes automatically, within the SmartAssets platform, without returning to post-production. This increases speed to market and decreases production costs.

Creative effectiveness is truly at an inflection point. Generative AI, by increasing data and automation, is enabling scale and efficiency that previous solutions like DCO cannot match. It’s hugely exciting and SmartAssets is here to help brands make the most of it.

Author: Lindsay Hong, CEO and Co-Founder at SmartAssets
