Article by Grace Yee for Adobe.
AI has revolutionized the way we use technology, with generative AI enabling creation on a scale we haven’t experienced before.
We have embedded our AI ethics framework at the heart of our innovations to ensure ethical considerations are central to our technology development. Those innovations include the text-to-image feature in Adobe Firefly, our family of creative generative AI models, and Acrobat AI Assistant, which can analyze PDFs to summarize them and answer questions. Back in early 2019, we saw that AI was showing up in our products in profound ways, and we knew we needed a framework to assess how AI could impact our customers and society and what we could do to develop it ethically.
As we mark the five-year anniversary of our AI Ethics principles, which are rooted in accountability, responsibility, and transparency, I’m proud of the progress we’ve made and even more excited about the future of ethical innovation at Adobe.
When we set out to develop our AI Ethics principles, we aimed to ensure they would remain meaningful and central to our ability to navigate the future and drive innovation in our products. Because our principles are intentionally designed as practical guidelines rather than theoretical concepts, product and engineering teams can act on them easily. This practical approach keeps our principles relevant and effective in the evolving AI landscape.
Developing our AI Ethics principles
Before diving into the development of our AI Ethics principles, we set a high-level goal: to ensure that our principles were simple, concise, practical, and relevant. This mission guided us as we assembled a cross-functional team of Adobe employees with diverse gender, racial, and professional backgrounds. Together, we crafted three core principles: accountability, responsibility, and transparency.
These principles became the foundation of our AI ethics governance process. Accountability means having mechanisms in place to receive and respond to feedback or concerns. Responsibility involves systems and processes to ensure thoughtful evaluation and due diligence. Transparency is about being open with our customers about how we use AI. These principles allowed us to evaluate the human impact of AI technology and build processes for training, testing, and reviewing products.
Our principles are centered on the people who use our technology, reflecting our belief that AI should enhance human creativity and enable users to boost their productivity and innovation. The development of these principles was a collaborative process, drawing on diverse perspectives to identify potential biases and design our tools to be equitable for all users. We remain committed to ongoing dialogue with our stakeholders — customers, partners, and industry peers — to continuously refine our approach and address emerging ethical challenges.
Actioning our principles
We used our AI Ethics principles to develop a robust framework for evaluating new AI features before they go to market. This assessment is designed to ensure that our AI technologies are developed and deployed in a responsible and ethical manner, taking into account potential risks and impacts on users and society at large.
For example, when we introduced Firefly, it underwent a rigorous AI Ethics assessment to prevent it from generating content that could perpetuate stereotypes or misinformation. This approach is part of our commitment to mitigating potential harms before they can affect our users or their communities.
Our AI Ethics assessment begins with a comprehensive understanding of how AI is used in a feature, including its inputs and outputs. Product teams then engage in a risk discovery exercise to identify potential risks associated with the feature, such as perpetuating harmful stereotypes, generating hate or violent content, or unintentionally producing other harmful content.
This assessment is an iterative process that I believe must evolve with new technologies and challenges so that it remains relevant and effective in addressing emerging ethical concerns.
Scaling for the future
Establishing a global AI ethics governance structure has not been without its challenges. Scaling our process to meet the pace of innovation at Adobe required close collaboration with product and engineering teams. We worked alongside them to understand how AI features work, their intended use, and the necessary guardrails. This partnership encouraged proactive thinking from product teams about mitigating harm and bias from the very beginning of product development. As a result, product teams have fully embraced the value of ethical innovation, as it allows them to streamline workflows and address potential issues before my team even identifies them.
In the five years since we started formalizing this process, guided by our AI Ethics principles, we have helped the product and engineering teams scale our innovations significantly. In the first two years of our assessment process, we evaluated around 125 AI features across all of Adobe’s products. The following year, we reached that same milestone in just half the time. Fast forward to this year: in just the first half, we’ve already surpassed that benchmark. This rapid growth reflects our ability to scale efficiently, enabling Adobe to accelerate AI innovation while upholding our commitment to responsible innovation. It also speaks volumes about how our product and engineering teams have come to recognize the value of partnering with my team; this collaboration not only strengthens the integrity of our AI innovations but also helps build trust with our customers.
As we continue to scale, my team’s focus remains on enhancing the robustness of our assessments and expanding the scope of ethical considerations. This growth trajectory not only reflects our commitment to ethical innovation but also positions Adobe as a leader in the responsible use of AI technologies.
Closing thoughts
Over the past five years, my team has transformed our AI Ethics principles from theoretical concepts into actionable guidelines embedded within our engineering and product development practices. These principles have become foundational to our approach, guiding the responsible development and deployment of AI features and ensuring clear communication about AI usage.
Looking ahead, we remain dedicated to advancing ethical AI by fostering a culture of transparency, inclusivity, and responsibility. As AI technology evolves, our commitment to these values will help us build innovative solutions that empower the creative community and enhance digital experiences.
I’m confident that the solid foundation we have established with our AI Ethics principles will continue to steer our efforts, enabling us to adapt to new challenges while maintaining our dedication to ethical innovation.