
AI: Friend or Foe?

Danika Houghton

Artificial Intelligence (AI) has long been touted as the next revolution. From sci-fi to the real world, we have seen both the promise and the pitfalls of what AI has to offer.

Could the rapidly emerging general-purpose technology known as 'foundational AI' signal that this long-awaited revolution is upon us?

Stanford University's Rishi Bommasani and Percy Liang define foundation models as "models trained on broad data (generally using self-supervision at scale) that can be adapted to a wide range of downstream tasks".

With foundation models, AI is moving from the artisanal to the industrial, with a broad base of applications across a diverse range of sectors. As these models become available as a service, they have the potential to change and augment many aspects of work and life.
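To make the "adapted to downstream tasks" idea concrete, here is a minimal sketch in Python using the open-source Hugging Face transformers library (the choice of library is an assumption for illustration; the article does not name a specific toolkit). The same pool of pretrained models can be pointed at quite different tasks with almost no extra work:

```python
# A minimal sketch, assuming the Hugging Face `transformers` library,
# of how pretrained foundation models can be adapted to different
# downstream tasks with a single line each.
from transformers import pipeline

# Each pipeline reuses a pretrained model for a different task.
classifier = pipeline("sentiment-analysis")   # downstream task 1
summariser = pipeline("summarization")        # downstream task 2

print(classifier("Foundation models could transform how we work."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```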

One of the biggest impacts will be on how we work and the jobs we do, with technology working alongside us and helping to increase our productivity.

While widespread implementation might feel some time away, you may already be engaging with these models: one example is writing with GPT-3; another is programming with services like Copilot.
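As a hedged illustration of what "engaging with these models" can look like in practice, here is a minimal sketch of calling GPT-3 as a service, assuming the legacy OpenAI Python client of the GPT-3 era (the model name and API key below are placeholders, not details from the article):

```python
# A minimal sketch of writing with GPT-3 "as a service", assuming the
# legacy OpenAI Python client; model name and key are placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family model (assumption)
    prompt="Draft a short opening line about foundation models.",
    max_tokens=60,
)
print(response.choices[0].text.strip())
```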

In a recent interview with The Economist, Microsoft's Chief Technology Officer, Kevin Scott, explained that Copilot has more than 100,000 active users and that the technology now delivers 35% of the code in its programmers' commits. This means that instead of writing boilerplate or looking up documentation, programmers have more time to think deeply about the problems they are trying to solve, which leads to better solutions.

The promise of foundational AI is attracting massive investment, from the big cloud computing providers to the companies that use these models in their products.

With the possibility of these models working alongside everyone from scientists to artists, foundational AI has the power to help people everywhere tackle some of our biggest human challenges, from health to climate change.

But there are risks.

Standards Australia highlights that AI can internalise human biases, leading to discrimination. It can also produce unintended consequences and errors, it is vulnerable to hacking and misuse, and control of AI models is concentrated in a small number of organisations.

While there are no easy solutions to the challenges of AI, AI ethics has been a major focus for the Australian government and industry, with a proliferation of guides helping organisations responsibly design, develop and implement AI, including the Australian AI Ethics Framework.

Standards Australia has also focused on the effective use of AI-related standards (from ISO/IEC and IEEE) to promote the responsible use of AI; these standards are increasingly being referenced in policy and legislation. For example, the European Union is seeking to leverage the work of ISO/IEC JTC 1/SC 42, the international subcommittee on artificial intelligence, to certify AI systems under its forthcoming AI Act.

This balance between government, industry and the public, alongside the continuing conversation about ethics, will shape how this technology and our use of AI evolve.

That is why Standards Australia is focused on its Critical and Emerging Technologies (CET) standards work in this field, which is central to the safety, security and wellbeing of Australia and our region. To find out more about Standards Australia and its work in critical and emerging technologies, please visit Home | Standards Australia.
