Artificial intelligence may help in many ways, but there are ethical concerns to consider when implementing it
With the new capabilities made available by generative AI, there are ethical issues and responsibilities to consider when using these tools. AI is fuelled by data – and as its use expands across industries, the question of ethical AI becomes increasingly prominent.
From creating new marketing initiatives, products and services through to driving Internet of Things decisions, there’s no limit to how AI can be used. While it’s intended to create customised value for individuals, it’s important to realise that AI can have built-in biases. This bias starts with how data is selected when building an analytical model, the factors used in decision engineering and any other form of advanced analytics. These inherent biases can have a significant impact on individuals – for example, they can influence an individual’s credit profile and affordability assessment, which in turn affects their buying power.
South Africa’s Protection of Personal Information Act (POPIA) is a key first step in protecting personal information and how it is used. However, much more needs to be done to define and govern AI. The EU has taken a strong stand in exactly this area with the EU AI Act, recently approved by the European Parliament. The Act aims to protect European consumers from potentially dangerous applications of AI by requiring that AI systems be analysed and classified according to the risk they pose to users. Its passing recognises that AI can be used to sway decisions, which can lead to outcomes such as discrimination.
While most companies intend to use AI ethically, putting this into practice can be a challenge. One example is the 2020 Google case involving the dismissal of Timnit Gebru from her role as co-lead of the company’s Ethical AI team. Gebru, along with co-lead Margaret Mitchell, was working on a paper about the dangers of large language models when a department at Google asked that the paper be retracted. After Gebru pushed back, she was let go – a reminder that even organisations with good intentions can struggle with the practical steps involved in ethical AI.
Diversity in data
Varsha Ramesar, Head of Data Management and Commercialisation at Tesserai, a local business intelligence and analytics company, says that the more diverse the members of the team are, the more likely it is that AI bias will be reduced. “We need more rigour in the creation of our training data sets,” she says. “We need to ask the hard questions, such as: is gender being used as a predictor or a bias? Is the dataset diverse enough? The more we strive towards a culture of asking the hard questions the closer we move to achieving ethical AI.”
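One way to make those “hard questions” concrete is a quick audit of how groups are represented in a training set before any model is built. The sketch below is purely illustrative – the column names, rows and 30% threshold are made-up assumptions, not part of any product mentioned here.

```python
from collections import Counter

def audit_balance(rows, attribute, threshold=0.10):
    """Crude diversity check: report each group's share of the dataset
    for a sensitive attribute, and flag it if the smallest group falls
    below a minimum share."""
    counts = Counter(row[attribute] for row in rows)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    flagged = min(shares.values()) < threshold
    return shares, flagged

# Hypothetical training rows for a credit-scoring model
rows = [
    {"gender": "female", "income": 52000},
    {"gender": "male", "income": 48000},
    {"gender": "male", "income": 61000},
    {"gender": "male", "income": 45000},
]

shares, flagged = audit_balance(rows, "gender", threshold=0.30)
print(shares, flagged)  # female is only 25% of rows, so the check flags it
```

A check like this doesn’t make a model fair on its own, but it forces the team to look at the data – and to justify, on record, any imbalance it reveals.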
Know your rights
Premlin Pillay, Group Executive of Strategy, Data and Analytics at Mettus, says that education and socialisation are key to advancing the idea of AI ethics within an organisation. “It is important that companies working with analytics educate consumers to know their rights when it comes to their personal data,” he says.
Choose the right team
Building analytical models is a team sport – and the team needs not only the technical skills of data scientists but also other skills, including a good grasp of ethics. Given that these ethics skills are globally scarce, this role may need to be filled by professors and PhD students from academic institutions.
Look outside the office
AI is becoming accessible to a wider range of businesses: for a reasonable budget, any business can sign up and gain access to a range of powerful AI tools. However, these tools are only as powerful and ethical as the people who designed them. “Smaller companies may not have teams that are large enough to test the diversity of data – but this is where online forums and community organisations can help to steer the direction,” says Ramesar.
Advancements in AI aren’t slowing down, and organisations need to be deliberate about embracing them responsibly. “While AI may seem like a technical solution, issues of ethical AI are actually more about people and culture than they are about technology,” says Ramesar. “Above all, it’s about creating a culture of ethical AI and doing the right thing, even when no one is watching.”