Super-intelligent AI – social saviour or world threat? The reality might be somewhere in between

Wrapping your head around the current state of super-intelligent AI requires you to consider two parallel realities – the first involves ethical and philosophical questions, and the second is focussed on technical capability.

Syncing those two realities rather than allowing them to diverge will be one of the biggest challenges we face – as a society and as an industry focussed on harnessing technology for social good.

At Liquid, we’re not afraid of big challenges – especially if the outcome will have big impacts. That’s why our latest Future Led event tackled super-intelligent AI – a technology that is maturing and requires us to envision our future selves.

Our panel, “Super-intelligent AI – Social saviour or world threat?”, featured technical experts and ethical thinkers:

  • Sue Keay – CEO of the Queensland AI Hub and Chair of Robotics Australia
  • Nick Therkelsen-Terry – CEO of Max Kelsen
  • Evan Shellshear – Head of Analytics at Biarri
  • Justine Lacey – Director of the CSIRO’s Responsible Innovation Future Science Platform


Left to right: Nick Therkelsen-Terry, Justine Lacey, Evan Shellshear, Sue Keay, Andrew Duval.

Our speakers explained that while the current applications of AI were limited, we shouldn’t let that fool us into thinking the future is too far away to worry about. 

Right now, the two realities of AI exist side by side, each testing its own possibilities and limitations – but the pace of technological change will be “very dramatic”, according to Nick Therkelsen-Terry.

This means we need strong frameworks and buy-in – from businesses, governments and the public.

As an industry that seeks to make a positive impact on people’s lives, fostering discussions like this one will be key to ensuring super-intelligent AI can benefit rather than burden society. 

 

Understanding the current state 

As Sue Keay and Nick Therkelsen-Terry explained, today’s AI systems are impressive – but not super-intelligent. 

“I think a lot of the applications of artificial intelligence are still pretty unsophisticated. We are seeing the ability of these systems to be able to do tasks but not to string those together in the way a human would,” Sue said. 

Nick added: “AI, or deep learning, is much better at finding objects in an image than humans are today. It’s much better at listening to certain things in audio than humans are, and it does a whole lot of very specific tasks at a higher degree of efficacy than a human does.

“What it’s very bad at is generalising. It’s very poor at learning new tasks.” 

So how do we get to super-intelligence? How do we get to the higher learning that allows systems to generalise? Again, both Sue and Nick agreed we don’t have a clear pathway to get there – but Nick cautioned that the “rate of change will be very fast, very dramatic”.

“What we have is an extremely powerful technology, and I think the rate of change is going to be very, very different to previous industrial revolutions,” he said.

“This idea of – don’t worry we’ll just retrain everyone as AI experts – is entirely folly. … We are very adaptable. But we are not adaptable overnight.” 

But for Evan Shellshear, any fears we might have about committing to an AI future shouldn’t define our approach. 

“We should be worried about not inventing super-intelligence,” he said. The benefits, he argued, surely outweigh the challenges we face. 

 

Building frameworks, considering ethics, and rapidly testing 

We might be a little while away from super-intelligent applications, but that doesn’t mean we don’t have work to do now.  

As the panel discussed, planning what we want the world to look like is thorny – who is the “we”? And how do “we” make decisions about the decisions that AI will make?  

Nick suggested this was the real pinch point right now – that technical capability was miles ahead of the broader societal debate.  

“There’s a big divide between what we should be thinking about and what practitioners are actually thinking about,” he said. 

“Because the research is at such a pace, and the change is at such a pace, and there are so many people working on these problems … but they’re not thinking about these things. They are thinking about, how do I build a mathematical system that does that task better or can generalise faster?”  

(This did make me think of the AI-recreated Anthony Bourdain voice. And of Jeff Goldblum in Jurassic Park: “Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.”)

And while all that might feel a bit scary, Justine Lacey believes “ethics is so hot right now”. 

“Who’s doing it, and why they’re doing it, and how transparent that is, really does matter.” Ethics and innovation were not mutually exclusive, she added. 

“I hate the dichotomy between, ‘I’m trying to innovate, don’t bring ethics here right now’. I don’t think ethics slows us down. It forces us to think more creatively.  

“It’s unlikely many people are out there telling their software engineers, ‘Our new shiny thing will probably destroy someone’s life but it won’t be your life, so don’t worry about it’.”

Another key approach, suggested by Sue, was the ability to rapidly test new technologies to make sure they adhered to certain principles. 

“We need to be thinking about ways that we can rapidly test whether some of these systems are having unintended consequences and actually creating a world that we don’t approve of, or actually improving the current world, because I don’t think anyone would say the current world is perfect – that shouldn’t be our model.” 
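In practice, such a check can be as small as an automated assertion over a system’s outputs. Here’s a minimal sketch in Python – the model, the principle and the tolerance are all hypothetical, purely for illustration:

    # A "rapid test" for one principle: flag a system whose outputs differ
    # across groups by more than an agreed tolerance. All values are made up.
    def payout(gender):
        return 100 if gender == "M" else 80  # stand-in for a deployed model

    MAX_GAP = 5  # the tolerance a review process might set

    gap = abs(payout("M") - payout("F"))
    assert gap <= MAX_GAP, f"unintended consequence: payout gap of {gap}"

Run against this stand-in model, the assertion fails – which is exactly the kind of early warning Sue is describing.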

This is an approach we can get behind. Testing systems and products, especially with people, helps us right now to refine and optimise the digital experiences we’re building. 


Can we eliminate bias in AI systems? 

The answer to this question depends on us: can we eliminate bias in ourselves and in society?   

“Bias is sometimes useful. It’s hardwired into our systems, and it’s helped us survive,” Justine said. 

“But when we’re using bias in conversations about AI, it seems to be all about prejudice, as in, how we’ve amplified our prejudice through AI to make unfair things happen for certain groups of people.”  

As Nick explained, if an AI system is developed using a heap of historical legal data, and the model predicts a claim payout that differs depending on gender – that’s on our legal system.  

“It’s not the AI that created the bias, it’s us as a society that created the bias, and the AI has just looked at the data, learnt from the data, and replicated what it’s seen, and thus learnt that bias,” he said. 
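To see how mechanically that happens, here’s a toy sketch (our illustration, not the panel’s – the payout figures are invented): a model “trained” on records where payouts differ by gender simply reproduces that difference.

    # A toy model fitted to biased historical payout data learns the bias
    # exactly, because it learns only what the records show.
    history = [{"gender": g, "payout": 100 if g == "M" else 80}
               for g in "MF" * 500]

    def fit(records):
        # "Training" here is just the mean payout per group in the data.
        groups = {}
        for r in records:
            groups.setdefault(r["gender"], []).append(r["payout"])
        return {g: sum(p) / len(p) for g, p in groups.items()}

    model = fit(history)
    print(model)  # {'M': 100.0, 'F': 80.0} - the bias came from the data

The averaging step is neutral; the disparity sits entirely in the records the model was fed.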

Being conscious of the bias in our systems and data is the first step towards addressing it. 

As Nick said: “The question then becomes, ok, you know there’s bias, first thing is, what do you do about it? And should you do anything about it?” 

 

‘An AI trained on Monets would not paint a Jackson Pollock’ 

Could AI replicate the creativity of a Kurt Cobain or a Picasso? Turns out, it’s debatable – and depends on how you define “creativity”. (You can judge the creative outputs of AI artworks and music for yourself.)

For practitioners who use design thinking to deliver creative solutions, knowing we remain one up on AI might calm fears about future employability.

According to Nick, AI is a good technology for exposing biases, but it can’t create something truly new.

“Kurt Cobain didn’t listen to the top 40 and then write his music in a garage in Seattle. He took heaps of drugs and had a tough background and wrote really angry, angsty music that defined a generation,” he said. 

“AI music can’t do that. It can’t create a new genre. It can’t improvise. 

“Because what we do is model known distributions. We can introduce noise, but that is not creativity, that’s not intuition, that’s not understanding. It’s noise.” 
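To give a rough picture of what “modelling known distributions” means, here’s a one-dimensional sketch (our example, with invented numbers – real generative models are far more complex, but the limitation Nick describes is the same):

    # Fit a simple distribution to "Monet-like" feature values, then sample
    # from it with extra noise: outputs stay near the data the model has seen.
    import random

    random.seed(0)
    monets = [random.gauss(5.0, 1.0) for _ in range(1000)]  # stand-in features

    mu = sum(monets) / len(monets)                          # learned mean
    sigma = (sum((x - mu) ** 2 for x in monets) / len(monets)) ** 0.5

    samples = [random.gauss(mu, sigma) + random.gauss(0, 0.1)
               for _ in range(5)]
    print([round(s, 2) for s in samples])  # every sample clusters near 5

Adding noise widens the spread, but every sample stays in the neighbourhood of the learned distribution – no “Jackson Pollock” emerges from a model fitted to Monets.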

But Evan countered that there’s always been noise – and that focusing on “winners” was an example of selection bias.

“For that Kurt Cobain, there were 10,000 other people – ‘the noise’ – that were doing random other things … and it just so happened that Kurt Cobain was the one optimum that managed to continue into the future,” he said. 

“When Picasso came out with his famous drawings that had a level of cubism in them and abstraction, humans couldn’t appreciate it. It took years for us to recognise that that was genius.” 

 

Looking ahead 

Our Future Led panel offered an optimistic vision of the future of AI – albeit a tentative one.

The challenges are steep – ethics is the obvious big one – but investment is another, as is the bravery of leaders to commit to a decades-long venture, and the willingness of people to embrace systems that can expose our prejudice or make more decisions in our day-to-day lives. 

It’s also a tough conversation in a time when the COVID pandemic continues to have far-reaching impacts, including on the reorganisation of national priorities.  

Nick summed it up: “The problem is this technology is still a 10- to 20-year proposition for the really huge economic impacts, and the interesting question is … are our new overlords the FAANGs [Facebook, Apple, Amazon, Netflix, Google] of big tech?  

“If they’re the ones who own this technology, because they’re the ones who’ve invested in it over the past 20 years, then what does that do to the traditional geopolitical structure?”

Super-intelligent AI will test us in numerous ways, but our industry – at the nexus of people and digital innovation – has the ability to contribute to discussions about these challenges, and help advance optimism rather than anxiety.