After shy introductions and muted ambitions had been echoed around the room, a refreshing splash of honesty placed a collective motivation for signing up for Code First Girls at the forefront of our minds: ‘I love men, right, but if my car breaks down I call my cousin for help. If my computer plays up I call my brother. I just want to be able to call on myself when something “technical” goes wrong.’ The room broke into knowing smiles and relieved chuckles. Perhaps it would be a line more suited to the screenplay of a female-trailblazer film such as Hidden Figures or Frida, but for an eclectic mix of female students at the University of Leeds who are at risk of being written out of the modern workforce, learning computing skills is a way to build confidence in their role in an increasingly digital future.
But there’s a more fundamental reason why women and minority groups should get involved. It’s not just about being written out of an ever-evolving workforce: it’s about being written out of modern life.
Last year, University of Virginia Assistant Professor in Computer Science Vicente Ordóñez began to notice that image-recognition algorithms were displaying some peculiar behaviour: they were making arguably sexist associations between photos and gender, reflecting their programmers’ unconscious biases. Mateja Jamnik – Specialist Adviser to the House of Lords Select Committee on AI, Reader in Artificial Intelligence in the Department of Computer Science and Technology at the University of Cambridge and Royal Society Athena Runner-up for contribution to the advancement of women in science (so, basically, a pretty badass figure in the tech world – did I mention she was also nominated for Woman of Slovenia in 2007?) – believes this bias stems from algorithms being built by predominantly white male teams.
‘We need to ensure our engineering teams are diverse so they can reflect our society. We also need to be aware of biases when we collect data – and make sure we collect data that truly represents our society.’
This brings to the fore the financial incentive for companies to hire diverse teams: to ensure a user experience that is culturally relevant and positive overall – keeping their customers happy so that they ultimately return.
Mark Yatskar, a researcher at the Allen Institute for Artificial Intelligence and colleague of Ordóñez, says that, unfortunately, this phenomenon not only reinforces existing social biases around gender, race and more, but actually makes them worse.
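To see how a model can make a skew in its training data worse rather than merely mirror it, consider a minimal sketch. The toy dataset and the always-predict-the-majority classifier below are hypothetical illustrations, not the researchers’ actual data or methods: if photos of cooking are labelled ‘woman’ two-thirds of the time, a naive model that learns the majority association will predict ‘woman’ for every cooking photo – amplifying a 67% correlation into a 100% one.

```python
from collections import Counter

# Hypothetical toy dataset: (activity, gender) pairs with a 2:1 skew.
training = [("cooking", "woman")] * 20 + [("cooking", "man")] * 10

def majority_predictor(data):
    """Return a classifier that always predicts the majority gender
    seen for each activity in the training data."""
    counts = {}
    for activity, gender in data:
        counts.setdefault(activity, Counter())[gender] += 1
    return lambda activity: counts[activity].most_common(1)[0][0]

predict = majority_predictor(training)

# The training set links 'cooking' to women 67% of the time,
# but the model outputs 'woman' for 100% of cooking photos.
train_rate = 20 / 30
predicted_rate = sum(predict("cooking") == "woman" for _ in range(100)) / 100
print(f"training skew: {train_rate:.0%}, model output: {predicted_rate:.0%}")
```

The skew never has to be written into the code by hand; it is absorbed silently from the data – which is why Jamnik’s point about collecting data that truly represents society matters so much.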
With this in mind: how would those biases play out as AI takes a prominent role in human society? Not just in shaping user experiences, but as key figures in society – as leaders of industry and cultural icons?
And if that sounds far-fetched to you: enter Sophia. Hanson Robotics’ most advanced robot to date. When Hong Kong-based Hanson Robotics announced at the Future Investment Initiative Conference that their AI creation had been awarded citizenship in Saudi Arabia, the Twittersphere became a dawn chorus of bitter suggestion that the robot had more rights than the Kingdom’s female subjects, who only a few months before had been granted permission to drive.
Designed by David Hanson and his team to replicate the ‘simple elegance’ of Audrey Hepburn (complete with ‘Porcelain skin, a slender nose, high cheekbones, an intriguing smile and deeply expressive eyes’), Sophia is expected to ‘evolve to solve world problems too complex for humans to solve themselves’. Meanwhile, Sophia assures us (thankfully) that her ‘AI is designed around human values like wisdom, kindness, compassion and strives to become an empathetic robot’.
And will these values be influenced by her programmers’ unconscious social biases? Beyond the lofty ethical considerations of advanced robotics entering mainstream society, we must all assume responsibility for the huge impact diversity has on our digital future. It has taken us centuries to overcome bias and prejudice in our social, economic and judicial systems – let’s not code them into our future.