Responsible AI: Striking the balance between progress and accountability

The power of AI is now the domain of the masses. It can usher in a new era of creativity, productivity and personalisation, if done responsibly.

Generative artificial intelligence (AI) has boundless potential to revolutionise life and business as we know it, with the ability to enhance human creativity, accelerate tasks, and improve decision-making. However, such transformational technology also opens the door to new questions about ethics and responsibility in the digital age.

The rapid pace of innovation means the power of AI is no longer limited to a few well-resourced organisations; it is now the domain of the masses. This democratisation of such powerful technology can usher in a new era of creativity, productivity and personalisation, if done responsibly. Placing thoughtful safeguards around AI development and use will help it realise its full potential to benefit individuals, organisations and society as a whole.

Effective governance to promote safe and responsible AI

At this early stage in the development, commercialisation and adoption of AI technologies, thoughtful governance principles can go a long way towards promoting safe and responsible AI while still enabling human innovation and creativity.

As early as 2019, Adobe proactively and voluntarily developed a layered, multi-disciplinary process for responsible AI. First, we established AI Ethics principles of accountability, responsibility, and transparency to guide the development of our AI-powered solutions. Second, we set up an AI Ethics Committee and AI Ethics Review Board to review and act on ethical concerns raised by new features and technologies. Third, we embedded an AI Ethics Assessment in our processes to help us identify the potential ethical impact of a new AI system or feature. Finally, we invite community feedback from our users. For example, Adobe Firefly, our new family of creative generative AI models, launched in beta with a built-in feedback mechanism so users can easily report if it produces a result they perceive as biased or inaccurate.

Tackling misinformation with open standards

Responsible AI also requires businesses to come together across industries, and with policymakers, to develop and implement industry standards and frameworks that help minimise the potential harms of generative AI such as misinformation.

While generative AI has the power to unlock vast new opportunities for creators and consumers everywhere, in the hands of bad actors, these tools can be misused to spread misinformation in the form of fake images and videos. In a world where we can no longer trust what we see, we need a way to verify what’s real.

Technical solutions like Content Credentials can help bring more transparency and trust to digital content. The solution is backed by a global coalition called the Content Authenticity Initiative (CAI), which now counts more than 1,500 members including Adobe. Content Credentials are essentially a digital ‘nutrition’ label that can show information such as the creator’s name, the date an image was created, what tools were used to create it and any edits that were made. By having this information readily available, people can make more informed decisions about the content they consume. It also helps ensure artists, including those who work with AI tools, get credit for their work.

Content Credentials are built on an open standard, so anyone can implement them into their tools and platforms. We believe the adoption of open industry standards like Content Credentials will support greater transparency and trust in our online ecosystem and we’re committed to working with industry peers, policymakers, and CAI members toward their widespread implementation.
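To picture the ‘nutrition’ label idea, the sketch below models a credential as a simple record carrying the fields mentioned above: creator, creation date, tools used and edits made. This is purely illustrative; the real Content Credentials format is defined by the C2PA open standard, and the field names and structure here are hypothetical simplifications, not the specification.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: real Content Credentials follow the C2PA
# open standard; these field names are hypothetical simplifications.
@dataclass
class ContentCredential:
    creator: str                                        # creator's name
    date_created: str                                   # creation date (ISO 8601)
    tools_used: List[str] = field(default_factory=list) # e.g. editing or AI tools
    edits: List[str] = field(default_factory=list)      # edit-history entries

    def label(self) -> str:
        """Render the credential as a human-readable 'nutrition' label."""
        return "\n".join([
            f"Creator: {self.creator}",
            f"Created: {self.date_created}",
            f"Tools:   {', '.join(self.tools_used) or 'none recorded'}",
            f"Edits:   {', '.join(self.edits) or 'none recorded'}",
        ])

cred = ContentCredential(
    creator="Example Artist",
    date_created="2023-05-01",
    tools_used=["Image editor", "Generative AI model"],
    edits=["cropped", "colour adjusted"],
)
print(cred.label())
```

In practice such a record would be cryptographically bound to the media file so the information travels with the content and tampering can be detected, which is what makes the open-standard approach useful across tools and platforms.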

In many ways we are just scratching the surface of generative AI. As it continues to evolve, it will bring new challenges to the fore. Industry, government, and the wider community will need to work together to solve them. By being thoughtful about the technology and the implications it can have, instilling good governance and ethics, collaborating on standards and sharing best practices, we can unlock the possibilities it holds and build a more trustworthy digital space.

Katrina Troughton is vice-president and managing director, Australia and New Zealand, Adobe.