Calculated Conversations #14: Professor Mehrdad Ghaziasgar on AI & Assistive Technology

In this episode of Calculated Conversations, I’m excited to share my conversation with Professor Mehrdad Ghaziasgar, an Associate Professor in Computer Science at the University of the Western Cape (UWC) and the head and principal researcher of the Assistive Technologies Research Group within the Center-of-Excellence at UWC CS.

Professor Ghaziasgar’s work lies at the intersection of artificial intelligence and assistive technology, pioneering innovative solutions to improve accessibility for individuals with disabilities. With more than 10 years of experience, his team has secured over R7 million in grants and completed more than 50 projects, leading to nationally recognized prototypes like iSign and the Visually-Impaired-Helper.

Beyond his groundbreaking research, Professor Ghaziasgar has made a profound impact in academia. He has authored or co-authored approximately 40 publications, served as an expert reviewer for journals and conferences worldwide, and supervised numerous postgraduate students, many of whom graduated with distinction. His commitment to education extends to pioneering innovative teaching methods, including the revolutionary “open-discussion” assessment model, designed to bridge the gap between academia and industry.

In our conversation, Professor Ghaziasgar shared his insights on:

  • The evolution of AI in assistive technology
  • The challenges of developing culturally competent sign language recognition systems
  • How his research group is working on cutting-edge solutions for the visually impaired
  • His unique approach to education and the role of collaboration in solving real-world AI challenges
  • How aspiring computer scientists can leverage technology for social good

This discussion offers a fascinating look into the future of AI-driven accessibility and education.


Here is what he had to say:


1. Your research on assistive technologies, such as sign language recognition systems, has the potential to revolutionize accessibility. What do you think are the key challenges in advancing this field, and how can they be overcome?

Key challenges in advancing sign language recognition systems include improving accuracy in diverse, real-world environments, such as noisy settings or across users with unique signing styles. Another challenge is ensuring that these systems are culturally competent and respect the diversity within the Deaf community.

Overcoming these challenges requires a multi-pronged approach:

  • Collecting diverse datasets for training models
  • Integrating real-time adaptability in systems
  • Collaborating closely with the Deaf community to ensure the solutions meet their needs and expectations

Furthermore, advancements in transfer learning and unsupervised learning may help improve these systems, even with limited annotated data.


2. AI plays a major role in your work. How do you see machine learning and computer vision evolving in the context of assistive technology over the next 5-10 years?

In the next 5-10 years (actually probably more like 2 or 3 at the current rate), machine learning and computer vision are expected to play a central role in assistive technologies.

Advancements will focus on more intuitive systems that can recognize and adapt to different human behaviors, environments, and contexts. In sign language, we may see more seamless integration between vision-based systems and natural language processing, allowing for real-time, bidirectional translation between sign language and spoken languages.

Systems will become more robust in understanding diverse sign languages, dialects, and contexts, enhancing accessibility on a global scale.

Aside from sign language, I expect more robust solutions for the visually impaired. We are currently working on a cutting-edge solution and we’ll hopefully have great news on this late in this year (2025) or early next year (2026).


3. The work your group has done with visually impaired navigation systems is incredibly impactful. Could you share how AI can further enhance such technologies to improve everyday life for people with disabilities?

AI can enhance visually impaired navigation systems by improving two main things:

  1. Situational awareness — understanding where the person is and what exists or is happening around them.
  2. Navigational support — bridging the gap typically filled by a sighted guide such as a person or guide dog.

The solution I spoke of in the previous point (although a bit cryptically :D) aims to address both of these issues. Of course, this is only the start. Such systems need to be developed to understand complex environments and varied contexts, making them adaptable to various public spaces like malls or airports.

Integrating AI with wearable devices can also help to provide continuous feedback through haptic or auditory cues.


4. You have successfully supervised numerous students in their research. From your perspective, what is the most important skill or mindset students should have when pursuing a career in AI and assistive technologies?

I think there are many misconceptions about what AI is, and especially about what it can do. One major gap that needs to be bridged, in up-and-coming students and in the general public alike, is an understanding of AI as a tool: what it is and what it can actually do. We’ve all interacted with generative AI tools quite a lot, but that is only one subset of the field.


5. Given the diversity in your teaching experience, how do you adapt your methods to ensure complex technical concepts are accessible to students from diverse backgrounds?

A few things:

  • First, we start at the beginning. The very beginning. Think “What is a Computer?”
  • Everyone from every background can enroll, and within 6 weeks, they are already equipped to solve some problems. It only builds up from there.
  • Empathy plays a big role. My course materials have been tailored from the perspective of someone viewing the content for the first time.
  • I can gauge the level of understanding of students while lecturing, and in turn, I’m able to address stumbling blocks to understanding very effectively.
  • Finally, I’m extremely practically oriented. This means that I never demonstrate a concept without showing exactly what its use is and how it can be applied. This includes examples galore.

6. You’ve pioneered an “open-discussion” assessment method in your teaching. How do you think this approach contributes to preparing students for real-world applications of their knowledge?

I recognized a significant gap between how Computer Science students are trained in academia and the way they are expected to work in industry. Beyond just differences in technical skills, the stakes, goals, and overall context of academic learning versus real-world application are vastly different.

My goal was to bridge that gap more effectively. The idea of open-discussion assessments began around 2018. My tests have always focused on solving larger, complex problems rather than answering short, formulaic questions.

I introduced a system where, during an assessment, students could raise their hands and select another student they knew to be knowledgeable and willing to help. If the selected student agreed, they would move to a designated space and have a brief but entirely unrestricted discussion—supervised, but without the ability to write anything down. Once the discussion ended, they would return to their seats and continue independently, with no further talking.

This approach mirrors industry practices: when professionals encounter challenges, they seek advice from colleagues, receive guidance, and then return to their workstations to implement solutions independently.

Observing these discussions was eye-opening—I saw firsthand how difficult it was for students to articulate their thoughts clearly to get meaningful help. It became clear that verbal communication, collaboration, and problem-solving skills needed much more emphasis in traditional assessments.

Then, when we were forced into lockdowns, I faced a new challenge: administering online assessments without rampant copying. But this raised an even deeper question—why is “copying” not a concern in industry? The answer is simple: in the real world, no two people work on exactly the same thing at the same time. They may work on related tasks, but each has a unique role. In contrast, traditional academic assessments have every student solving the exact same problem, making it almost inevitable that answers will be shared. Expecting otherwise is, frankly, unrealistic.

So I redesigned my assessment model. I developed a system that takes a set of core questions and dynamically alters the values for each student. The problem-solving techniques remain the same, but every student receives a unique version of the test, with a personalised memo for grading.
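The per-student parameterisation described above can be sketched in code. This is a minimal illustration of the general technique, not Professor Ghaziasgar’s actual system; the template format, seeding scheme, and all names here are my own assumptions.

```python
import random
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class QuestionTemplate:
    """A core question whose numeric values are varied per student."""
    text: str                     # question with {placeholders} for values
    params: Dict[str, range]      # value ranges to sample each placeholder from
    solve: Callable[..., float]   # computes the memo (answer key) entry

def generate_paper(template: QuestionTemplate, student_id: str):
    """Produce a unique question and its memo entry for one student.

    Seeding the RNG with the student ID makes the paper reproducible:
    the same student always receives the same values, so the personalised
    memo can be regenerated at grading time without storing it.
    """
    rng = random.Random(student_id)
    values = {name: rng.choice(space) for name, space in template.params.items()}
    question = template.text.format(**values)
    answer = template.solve(**values)
    return question, answer

# Hypothetical example: a speed problem whose numbers differ per student,
# while the solving technique (distance / time) stays the same for everyone.
speed = QuestionTemplate(
    text="A train travels {d} km in {t} hours. What is its average speed in km/h?",
    params={"d": range(100, 501, 10), "t": range(2, 6)},
    solve=lambda d, t: d / t,
)

for sid in ["student-001", "student-002"]:
    q, a = generate_paper(speed, sid)
    print(sid, "->", q, "| memo:", round(a, 2))
```

Because every student’s values differ, discussing a question can only transfer the method, never the final answer, which is exactly the dynamic described above.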

Now, even if students attempt to share answers, they can only discuss how to solve the problem—not simply exchange solutions. This perfectly aligns with workplace dynamics, where collaboration revolves around strategies and methodologies rather than copying outputs.

The impact was immediate. Students were shocked to realise that every test was different. The new system fostered teamwork, adaptability, and verbal communication—key skills often neglected in conventional assessments.

I’ve successfully applied this method in my Machine Learning and Reinforcement Learning courses, and the results have been outstanding. Of course, the rise of generative AI introduces new challenges. But instead of resisting it, academia must embrace it. The question now is not whether to integrate AI into assessments, but how. That’s a conversation for another time—but rest assured, I’m already exploring ways to refine this model further in the AI-driven era.


7. Finally, as someone deeply involved in both research and practical applications of AI, what advice would you give to those looking to make an impact at the intersection of technology and social good?

I’m less inclined to provide any advice, but I do want to point out how rich and capable Computer Scientists can be when they really try to make a difference.

Computer Science is the most malleable field, in my opinion, merging seamlessly with virtually any other field. Just add “Computational” to the beginning of any other field, and it works.

If you have a good heart and you care about making a difference, Computer Science is a good field to be in.


Throughout our conversation, Professor Ghaziasgar provided remarkable insights into the intersection of artificial intelligence and assistive technology, shedding light on the transformative potential of AI-driven solutions for individuals with disabilities. His work not only bridges the gap between academia and industry but also highlights the profound impact that innovation can have on accessibility and inclusion.

From pioneering sign language recognition systems to redefining educational assessment models, Professor Ghaziasgar’s contributions emphasize the importance of collaboration, cultural awareness, and real-world impact in AI research. His dedication to both technological advancement and mentorship ensures that the next generation of computer scientists is equipped to drive meaningful change.

A huge thank you to Professor Mehrdad Ghaziasgar for sharing his expertise and experiences with me. His work serves as an inspiring reminder that AI is not just about innovation. It’s about creating solutions that empower and transform lives.


What do you think the future holds for AI in accessibility? Which areas do you think require the most urgent innovation? Let’s discuss in the comments below!


You can learn more about Professor Ghaziasgar here

As well as on his LinkedIn profile here

