Join top executives in San Francisco on July 11-12 to hear how leaders are integrating and optimizing AI investments for success.
When the Microsoft-funded lab OpenAI launched ChatGPT in November 2022, millions of people realized almost overnight what tech professionals have long understood: Today’s AI tools are advanced enough to support everyday life and to transform an incredibly wide range of industries. Microsoft’s Bing vaulted from a distant second in search to a far higher profile. Concepts such as large language models (LLMs) and natural language processing are now part of mainstream discussion.
But with the spotlight comes scrutiny. Regulators around the world are taking note of the risks AI poses to user privacy. The Elon Musk-backed Future of Life Institute gathered more than 1,000 signatures from technology leaders calling for a six-month pause on training AI systems more powerful than GPT-4, which powers ChatGPT.
As vehement as the legal and technical debates are, the fundamental ethical questions are easy to digest. If developers do pause frontier AI development for six months, will they use that time to ensure AI upholds ethical guidelines and user privacy? And in the meantime, can we manage the potentially disruptive effects AI could have on where advertising dollars are spent and how media companies monetize?
Google, IBM, Amazon, Baidu, Tencent and a range of smaller players are working on similar AI tools, and Google has already launched its own. In an emerging market, it is impossible to predict which products will dominate or what the results will look like. This underscores the importance of protecting privacy in AI tools right now: planning for the unknown before it happens.
As the digital advertising industry eagerly awaits AI applications for targeting, measurement, creative personalization, optimization and more, industry leaders will need to look closely at how the technology is implemented. In particular, we will need to examine the use of personally identifiable information (PII), the potential for accidental or intentional bias against underrepresented groups, how data is shared through third-party integrations, and global regulatory compliance.
Search vs. AI: The Great Spend Redistribution?
When it comes to advertising budgets, it’s easy to imagine what a “search vs. AI” showdown might look like. Getting all the information you’re looking for from AI in one place is genuinely helpful compared with rewording queries and clicking through results to zero in on what you actually want. If we see a generational shift in how users discover information — that is, if younger users come to accept AI as a central part of the digital experience — non-AI search engines risk losing their relevance. That could have a major impact on the value of search inventory and on publishers’ ability to monetize search traffic.
Search continues to drive a significant portion of traffic to publisher sites, even as publishers seek audience loyalty through subscriptions. And now that advertising is making its way into AI chat — Microsoft has tested ad placement in Bing chat, for example — publishers are wondering how AI providers will share revenue with the sites their tools draw information from. It’s safe to say publishers will be staring at yet another black box of walled-garden data they rely on for revenue. To thrive in this uncertain future, publishers must lead the conversation to ensure stakeholders across the industry understand where we’re heading.
Develop processes with privacy in mind
Industry leaders need to pay close attention to how they and their technology partners collect, analyze, store and share data for AI applications across their processes. Obtaining explicit consent for data collection and providing clear opt-outs should happen at the start of any AI chat or search interaction. Leaders should consider implementing consent or opt-in prompts for AI tools that personalize content or ads. However convenient and sophisticated these AI tools are, that convenience cannot come at the cost of user privacy. As industry history has shown, users will only grow more aware of these privacy risks. Companies should not rush consumer-facing AI tools to market and compromise privacy in the process.
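The consent-first flow described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation; all class and method names (`ConsentGate`, `grant`, `revoke`, and so on) are hypothetical. The key properties it demonstrates are that collection and personalization default to off until the user explicitly opts in, and that opting out erases the consent record entirely.

```python
# Minimal sketch of gating an AI chat session on explicit consent,
# with a clear opt-out. All names here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class ConsentRecord:
    """What a user has explicitly agreed to. Everything defaults to off."""
    user_id: str
    data_collection: bool = False
    personalization: bool = False


class ConsentGate:
    def __init__(self) -> None:
        self._records: dict[str, ConsentRecord] = {}

    def grant(self, user_id: str, *, data_collection: bool = False,
              personalization: bool = False) -> None:
        # Record only what the user explicitly opted into.
        self._records[user_id] = ConsentRecord(
            user_id, data_collection, personalization)

    def revoke(self, user_id: str) -> None:
        # Opt-out: forget the consent record entirely.
        self._records.pop(user_id, None)

    def may_collect(self, user_id: str) -> bool:
        rec = self._records.get(user_id)
        return bool(rec and rec.data_collection)

    def may_personalize(self, user_id: str) -> bool:
        rec = self._records.get(user_id)
        return bool(rec and rec.personalization)
```

A chat or search backend would check `may_collect` before logging a prompt and `may_personalize` before tailoring ads or content, so the default experience stays privacy-safe for users who never opt in.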
At this point, with AI tools from Big Tech generating most of the attention, we should not be lulled into a false sense of security that the effects of this evolution will be Big Tech’s problem alone. The recent layoffs at major tech companies are dispersing talent, which will in turn drive AI advances from the smaller companies that snap that talent up. And for publishers not eager to depend on yet another walled garden to survive, there is a business interest at stake beyond the crucial one of privacy. Market leaders should treat the emergence of AI chat as a pivotal moment.
Let’s take this opportunity to prepare for a privacy-safe, transparent and profitable future.
Fred Marthoz is VP of Global Partnerships and Revenue at Lotame.