Following its successful discussion forum on the Pitfalls of Artificial Intelligence (18–19 May 2023), the National Science and Technology Forum (NSTF) was invited to host a panel session on the same topic at the Science Summit of the 78th United Nations General Assembly held in New York on 15 September 2023.
NSTF Executive Director, Ms Jansie Niehaus, summarised the session’s objective: “We’re talking about ensuring responsible AI development, regulatory frameworks and ethical considerations in harnessing AI’s benefits for a more sustainable future. Balancing AI’s potential with its ethical and societal implications will be a challenge in the years to come. South Africa (SA) has been working on AI for a while, but we are behind the curve if we look at where the rest of the world is, so we are in catch-up mode.”
The panel session’s topics ranged from the implications of AI development for leadership and society and a re-think about the purpose of universities, to the work of the AI Institute of South Africa (AIISA) and the Centre for Artificial Intelligence Research (CAIR).
Implications of AI Development for Leadership in Government, Business, Research and Society
Chief Executive Officer of South Africa’s National Research Foundation (NRF), Prof Fulufhelo Nelwamondo, reminded the audience that, “almost everywhere in the world everyone is talking about the benefits of AI, but no one thought that it would reach this level of maturity in such a short time.”
While AI boasts many advantages, he outlined several concerns regarding its implementation. The first is the designer's potentially limited view of where the training data came from.
The NSTF provides neutral collaborative platforms where issues and sectors meet
- One of the National Science and Technology Forum (NSTF) functions is to hold discussion forums, bringing the private and public sectors together to address important issues and engage with government policy.
- Feedback from these discussion forums is disseminated to role players and stakeholders.
- The NSTF represents about 130 member organisations participating as key stakeholders of the science, engineering and technology (SET) and innovation community.
The other concern: “There’s also the big issue of frequency bias around AI, where frequently held beliefs are considered to be more accurate. When one starts training these models with those kinds of biases, they continue and propagate the very same biases going forward.”
“There are issues of systematic discrimination, depending on where the data is obtained from. If you train the AI model with specific data, for instance representing a specific type of people, it will automatically discriminate against data it does not recognise. For example, we know that Africa will be home to 42% of the world’s youth population by the year 2030 according to the World Economic Forum. If that happens, and you have models that were trained from the global north where people are considered [more economically] stable when they are older, in Africa such models will discriminate against the majority, affecting many things, including, for example, home loan applications.”
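The propagation of training-data bias that Prof Nelwamondo describes can be sketched in a few lines. This is an illustrative toy, not any real loan-scoring system: the decision rule, names and numbers are all hypothetical, chosen only to show how a rule learned from one population misfires on another.

```python
# Toy sketch (hypothetical data): a "model" trained only on applicants
# from a population where approved applicants skew older.

def train_threshold(ages, approved):
    """Learn the lowest age at which a training applicant was approved."""
    approved_ages = [age for age, ok in zip(ages, approved) if ok]
    return min(approved_ages)  # simplest possible decision rule

# Training data from a population where stability correlates with age
train_ages = [45, 52, 60, 38, 55, 48]
train_labels = [True, True, True, False, True, True]

threshold = train_threshold(train_ages, train_labels)

# Applied unchanged to a much younger population, the learned rule
# rejects everyone: the skew in the training data propagates.
young_population = [22, 25, 28, 31, 24]
decisions = [age >= threshold for age in young_population]
print(threshold, decisions)
```

The point is not the (deliberately crude) rule itself, but that nothing in the model signals that it is being applied far outside the population it was trained on.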
Nelwamondo continued: “The agendas of designers also come into play. We know that AI for marketing can be very good. It can be designed to maximise screen time – for example on YouTube, TikTok and so on, where you want people to be glued to a particular screen. This in itself could be a distraction from other main issues. The question is where should behavioural economists draw the line? What about the users’ welfare? AI can be trained with rewarding systems that could contribute to users’ addictions. There is still a lot of thinking required about these issues where the massive amounts of data that AI can process at great speed can take advantage of the human’s slow processing.”
The issue of accountability is critical. “As an engineer, if I build a bridge and it falls, I will be held accountable. But what about the AI designers’ accountability?” Other questions pertain to the designer’s ownership and ethics. “For example, if a [self-driving] car is about to crash and it – through its AI programming – has to decide on crashing into an old man or a child to save the owner of the car, who do you blame for the choice? The owner of the car or the car manufacturer? It is not clear, yet.”
“We’ve seen cases of people falsifying data. We have seen things that are happening now – deep fakes, fake emails, more sophisticated spam and scam chatbots. We can talk about a whole lot of things that have now become major risks with this particular tool, and the underlying issue is where the data are coming from.”
Prof Nelwamondo said that these pitfalls are not unique to AI, but that other fields and applications have guardrails in place in the form of, among others, standard operating procedures, regulatory policies, national and international standards, and voluntary bodies open to public and private participation.
“On the AI side, we are still far from that. We have them for data usage, for example, the protection of personal information legislation, but not for the design and deployment of AI.” He compared the situation of AI with that of nuclear technologies, pointing out that nuclear material, unlike AI, is easy to trace.
“AI is a game changer, but it requires government, private sector and civil society to work together because there are serious implications for everyone. Do you just want them to consume this technology without even knowing what it means for them? How do you even educate them? How do we deal with science engagement where the developers of this technology can engage with the end users of the technology? There are a whole lot of gaps that must be closed.”
Prof Anish Kurien, Node Director at the French South African Institute of Technology of the Tshwane University of Technology (TUT), said that there are negative connotations attributed to AI. However, “the challenge in many developing countries is to find a suitable strategy that can take advantage of AI. We have a lot of challenges in our society which, in turn, impact our use of some of these technologies.”
“There is a lot of good work being done in various sectors to try and address some of these challenges, such as the work of OpenAI and of the Alan Turing Institute on some of these bias-related problems,” he said. Prof Kurien also argued that the training of AI models poses a significant challenge of its own.
In the context of AI-generated academic outputs, what is the purpose of the university?
Prof Sioux McKenna, Director of the Centre of Postgraduate Studies at Rhodes University (RU), reminded the audience that, “We need to realise that universities are not separate from society. And so all of the structures and systems and wonders and ills that you find in society are just as evident in the academy.”
“Looking back, the early universities were designed to serve the elite,” Prof McKenna continued. “Their function was to reinforce social stratification by empowering only those deemed worthy and it was a reflection of the social norms of the time. At the time of World War 2, we had these massive social shifts. There were conversations across society about the need to bring about more equity and this played out in the universities by massification, which didn’t only come about because of calls for equality, but also because of the explosion of knowledge that happened in that era and along with it this idea that we are now in a knowledge economy.”
The ideology of neoliberalism started at the time of US President Ronald Reagan and UK Prime Minister Margaret Thatcher. Prof McKenna said, “Neoliberalism has severely constrained the extent to which these massified university systems have been able to retain that focus on empowerment through knowledge evident in the old elite system.” She argued that it is this neoliberal approach that is part of the reason for universities’ knee-jerk reaction to AI.
She maintained that universities are seen as industries, students and industry are customers and wealthy universities are seen as better than the rest.
“We then do this ‘police-catch-punish’ thing which often relies heavily on software in very inappropriate ways. [Technology companies] know full well that many academics are under the misconception that Turnitin and other text-matching software detect plagiarism.” She stressed that plagiarism, whether intentional or unintentional, is a major problem, but it is an educational issue and related to student learning. “Plagiarism is a problem because it shows that our students are not learning. It requires an educational response. If we step back from student cheating and instead ask educational questions about how and when and why our students use AI, then our response, I think, will be quite different.”
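Prof McKenna's point about Turnitin can be made concrete: text-matching software computes overlap between documents, nothing more. The sketch below is a generic n-gram overlap measure, not Turnitin's actual algorithm, and the sample sentences are invented; it shows why a high score is a similarity report, not a plagiarism verdict – a properly quoted source would score just as highly.

```python
# Illustrative sketch (not Turnitin's algorithm): word-trigram overlap.

def ngram_overlap(a: str, b: str, n: int = 3) -> float:
    """Fraction of a's word n-grams that also appear in b."""
    def ngrams(text):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    grams_a, grams_b = ngrams(a), ngrams(b)
    return len(grams_a & grams_b) / len(grams_a) if grams_a else 0.0

source = "artificial intelligence raises serious ethical questions for universities"
student = "artificial intelligence raises serious ethical questions for society today"

score = ngram_overlap(student, source)
print(f"overlap: {score:.2f}")  # a similarity score, not a plagiarism verdict
```

Deciding whether the overlap reflects cheating, legitimate citation, or a common turn of phrase is the educational judgment the software cannot make.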
“The White Paper on Higher Education in SA gives higher education four purposes, one of which is to nurture a critical citizenry and I think we will have failed in that responsibility if we graduate people who are unable to engage with AI in the workplace and the wider world without both an ability to use it well and equally importantly, the ability to do so critically.”
“I’m very much in favour of AI, but with huge provisos and it worries me that the universities are not engaging with these big deliberations about the bad aspects of AI. For instance, AI is basically for profit. It’s made by ‘big tech’ based on scouring and mimicking other people’s copyrighted work without any payments or any acknowledgement. It surveys us in unregulated ways; it is designed to extract stuff by getting us to buy more stuff and it is not designed to alleviate poverty.”
“However, it is not all doom and gloom – there are cracks in the current system and where there are cracks, the light can come in. We can always find spaces and find connections and collaborations to ask how AI is intersecting with an agenda of contributing knowledge at the frontiers of the field and troubling ethical dilemmas that are emerging from AI,” Prof McKenna concluded.
Prof Nelwamondo argued that the evolution of AI has brought about substantial implications for the way universities approach teaching. “The role of the universities has changed more significantly over the last few years than ever before. It is a question of how you teach things in a manner that is in line with where the world is going. For example, maybe you don’t need to memorise things anymore because we have this data at our disposal. It is a question of how we then redesign the curriculum to be in line with the new knowledge focusing on courses like data science.”
He reasoned, “Even if you do something seemingly unrelated to technology, you must understand aspects of data science, you must understand the elements of AI as part of the curricula. The role of the university in that would then be to teach this new curriculum and educate people differently. The better society is educated, the better [empowered] society would be to deliver us to a great place.”
Building AI Capacity in SA and Establishment of AIISA
Prof Kurien related how AIISA was established. AIISA “is an initiative that originated from the Presidential Commission on the Fourth Industrial Revolution (4IR) that was established in the early part of 2020. One of the objectives of its 2021 report was to see how SA can take hold of these emerging digital technologies at a global scale.”
He said one of AIISA’s mandates is a strong connection between the institute’s objectives and those of government in terms of driving AI adoption in various sectors of the state. The second mandate is to contribute to the generation of technology applications in key government sectors. The third is the empowerment of the workforce.
Prof Kurien explained that the main driver of this initiative is the Department of Communication and Digital Technologies (DCDT). Two founding nodes were established within the initiative. The first was the University of Johannesburg (UJ), launched in 2022. The second node is the Tshwane University of Technology (TUT), launched in March 2023. “We want to mature a lot of the AI-based activities that are taking place within the institution, and consolidate those, to respond to some of the national imperatives.”
Human-centred AI Systems for Scientific Knowledge Discovery and Sustainable Development in SA
Prof Deshen Moodley, Associate Professor in the Department of Computer Science of the University of Cape Town (UCT), gave a different perspective on AI, seeing it as a partnership between the machine and the human user. Moodley is also the Co-Director of CAIR.
“It is meant to be the human and the machine working together, where the machine enhances cognitive performance, including learning, decision-making and new experiences. Historically, when people thought about AI, it was about the machine replacing humans,” he explained. Augmented AI, he added, is about building intelligent machines or software that amplify rather than replace human cognitive power. This perspective of AI brings about a few research challenges, of which Prof Moodley highlighted two – adaptation and cognition.
“Humans are usually good at adaptation, i.e. dealing with change in the world. It’s not just adapting to some kind of dynamic world or system in the real world. The user also changes as they work with machines – their knowledge changes, their skills change. So, the machine has to evolve too, and learn how the user is evolving and changing. Recently, we worked with a digital health expert. We looked at how, if we could remove all constraints, we would reimagine public health in Africa, and SA in particular.”
“We did not look at just AI systems, we also looked at 3D digital twins and, specifically, how we can combine and leverage all of these technologies. Currently, the shift is moving from just looking at curing at the point of care – the clinic and physicians – to a more preventative health system. We think that with augmented AI systems – combining 3D digital twins, wearables and so forth, whose costs are expected to drop – we could predict adverse health conditions.”
Prof Moodley further explained: “The architecture of the model involves the user and populations, analysing the current situation and using predictive modelling to ascertain what is going to happen and preparing or adapting to situations by coming up with timely interventions.” He remarked that this 3D-pseudo reality is not ‘blue sky’ thinking but already exists, for example, the City of Cape Town developed an interactive 3D model of Table Mountain where tourists can navigate the mountain via Google. “The idea is that users who don’t have sophisticated knowledge of AI can interact with the system. So, the AI can simulate and show you what is going to happen or it can look back and figure out the reasons or the causes for what is currently happening. That’s quite a powerful mechanism.”
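The monitor-predict-intervene loop Prof Moodley describes can be sketched in miniature. Everything below is hypothetical: the readings, the threshold and the trend-extrapolation "forecast" stand in for the trained predictive models and wearable data streams a real augmented health system would use.

```python
# Minimal sketch of a monitor -> predict -> intervene loop.
# All values and the naive forecasting rule are hypothetical.

def predict_next(readings, window=3):
    """Naive forecast: extrapolate the recent linear trend."""
    recent = readings[-window:]
    trend = (recent[-1] - recent[0]) / (len(recent) - 1)
    return recent[-1] + trend

def decide(readings, danger_threshold):
    """Predict the next reading and choose a timely intervention."""
    forecast = predict_next(readings)
    if forecast >= danger_threshold:
        return forecast, "alert: schedule preventative intervention"
    return forecast, "continue monitoring"

# e.g. daily readings from a wearable, rising toward a risk threshold
heart_rate_trend = [72, 75, 80, 86, 93]
forecast, action = decide(heart_rate_trend, danger_threshold=95)
print(forecast, action)
```

The design point is the shift the talk describes: the system acts on what is *about* to happen, moving care from the point of treatment toward prevention.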
“These augmented AI systems are the future, but there is still a lot of work to be done in terms of human-machine interfaces. What is needed for such an ambitious vision if we want to leverage AI to have a real impact in developing countries?” Prof Moodley said that creating national AI ecosystems is necessary – a point also borne out by the international research literature.
He warned, “If there’s no explicit national strategy, AI applications will develop in an ad hoc manner and it will be constrained to specific application areas where there’s investment. There’ll be these start-ups and niche technology companies that will dominate it. That will drive up the cost of AI and it might just increase inequality.”
“We need to create these ecosystems that connect local, regional and national networks of skilled or re-skilled people. This collaborative platform must be agile because AI is quite fluid.”
“Establishing and maintaining cross-institution and cross-sector research and innovation ecosystems is hard work, but it is possible. The government and institutions like the NSTF and the NRF are crucial to creating such an ecosystem. They can help coordinate and provide an enabling environment for a vibrant AI community in SA,” he concluded.
Speakers can be contacted through the NSTF Media Liaison and Communications Manager, Mr Barnard Manne. Further information can be found on the NSTF website and the NSTF YouTube channel.
- The National Science and Technology Forum (NSTF) is:
  - An independent non-profit stakeholder body and network – a civil-society forum
  - A voice to government for the science, engineering, technology (SET) and innovation community
  - Inclusive of the private and public sectors
  - A promoter of SET and innovation in South Africa since 1995
Credit for information: NSTF