Distinguished Speakers

Keynote Speakers

Prof. Khalid Elgazzar
Canada Research Chair, Ontario Tech University, Canada
(Profile, Google Scholar)

Prof. Khalid Elgazzar is a Canada Research Chair in the Internet of Things and an Associate Professor with the Faculty of Engineering and Applied Science at Ontario Tech University, Canada. He is also an adjunct professor at Queen's University. Dr. Elgazzar is the founder and director of the IoT Research Laboratory at Ontario Tech University. Prior to joining Ontario Tech, he was an assistant professor at the University of Louisiana at Lafayette and a research scientist at the Carnegie Mellon School of Computer Science. Dr. Elgazzar received the Outstanding Achievement in Sponsored Research Award from UL Lafayette in 2017 and the Distinguished Research Award from Queen's University in 2014, along with several recognitions and best paper awards at top international venues. He is a leading authority in the areas of the Internet of Things (IoT), intelligent software systems, real-time data analytics, and mobile computing. Dr. Elgazzar is currently an associate editor for Frontiers in the Internet of Things, Springer Peer-to-Peer Networking and Applications, Future Internet, and others. He has also chaired several IEEE conferences and symposia on mobile computing, communications, and IoT. Dr. Elgazzar is an IEEE Senior Member and an active volunteer on technical program and organizing committees for both IEEE and ACM events.

Abstract: Enhancing vehicle perception models is crucial for the successful integration of assisted and autonomous driving vehicles. By refining a model's perceptual capabilities so that it can accurately anticipate the actions of vulnerable road users, the overall driving experience can be significantly improved, ensuring higher levels of safety. Existing research on predicting pedestrians' crossing intentions has relied predominantly on vision-based deep learning models. However, these models continue to exhibit shortcomings in robustness under adverse weather conditions and in domain adaptation. Furthermore, little attention has been given to evaluating their real-time performance. In this talk, I will present an innovative framework we developed to address these limitations and accurately predict pedestrian crossing intentions. At the core of the framework is an image enhancement pipeline that detects and rectifies various defects that may arise during unfavorable weather conditions. We then employ a transformer-based network with a self-attention mechanism to predict pedestrians' crossing intentions. This pipeline enhances the model's robustness and classification accuracy. We assessed our framework on the widely used JAAD dataset. Performance metrics indicate that our model achieves state-of-the-art results while maintaining significantly low inference times.
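The self-attention step at the core of such a transformer can be sketched in a few lines. The minimal NumPy example below is purely illustrative (the feature dimensions, weights, and linear head are hypothetical, not the authors' implementation): per-frame pedestrian features attend over the whole clip before a linear head scores the crossing intention.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of frame features X of shape (T, d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (T, T) pairwise frame affinities
    weights = softmax(scores, axis=-1)        # each frame attends over all frames
    return weights @ V, weights

# Toy clip: 8 frames of 16-dimensional per-frame pedestrian features
rng = np.random.default_rng(0)
T, d = 8, 16
X = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
ctx, attn = self_attention(X, Wq, Wk, Wv)
# Pool the attended features and apply a linear head for a crossing/not-crossing logit
logit = float(ctx.mean(axis=0) @ rng.normal(size=d))
```

The attention weights let late frames draw on early posture cues anywhere in the clip, which is what makes the mechanism attractive for intent prediction over fixed-size convolutions.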

Dr. Mohamed Elhoseiny
King Abdullah University of Science and Technology (KAUST),
Saudi Arabia
(Profile, Google Scholar)

Dr. Mohamed Elhoseiny is an assistant professor of Computer Science at KAUST and a senior member of AAAI and IEEE. Previously, he was a visiting faculty member at the Stanford Computer Science Department (Oct 2019-Mar 2020), a visiting faculty member at Baidu Research (Mar-Oct 2019), and a postdoctoral researcher at Facebook AI Research (Nov 2016-Jan 2019). Dr. Elhoseiny earned his Ph.D. in 2016 from Rutgers University, where he was part of the Art & AI Lab and spent time at SRI International in 2014 and at Adobe Research (2015-2016). His primary research interest is computer vision, especially efficient multimodal learning with limited data in zero-/few-shot learning and vision and language (including vision LLMs). He is also interested in affective AI, especially in understanding and generating novel visual content (e.g., art and fashion). He received an NSF Fellowship in 2014, the Doctoral Consortium award at CVPR'16, and the Best Paper award at ECCVW'18 on Fashion and Design, and was selected as an MIT 35 Under 35 semi-finalist in 2020. His zero-shot learning work was featured at the United Nations, and his creative AI work was featured in MIT Tech Review, New Scientist Magazine, Forbes Science, and HBO's Silicon Valley. He has served as an Area Chair at major CV/AI conferences, including CVPR'21, ICCV'21, IJCAI'22, ECCV'22, ICLR'23, CVPR'23, ICCV'23, NeurIPS'23, ICLR'24, and CVPR'24, and has organized the Closing the Loop Between Vision and Language workshops at ICCV'15, ICCV'17, ICCV'19, ICCV'21, and ICCV'23.

Abstract: Most existing AI learning methods can be categorized as supervised, semi-supervised, or unsupervised. These approaches rely on defining empirical risks or losses on the provided labeled and/or unlabeled data. Beyond extracting learning signals from labeled/unlabeled training data, this talk reflects on a class of methods that can learn beyond the vocabulary they were trained on and can compose or create novel concepts. Specifically, we address the question of how these AI skills may assist species discovery, content creation, self-driving cars, emotional health, and more. We refer to this class of techniques as imagination AI methods, and we will dive into how we developed several approaches to build machine learning methods that can See, Create, Drive, and Feel. See: recognize unseen visual concepts through imaginative learning signals, and how that extends to a continual setting where seen and unseen classes change dynamically. Create: generate novel art and fashion through creativity losses. Drive (covered briefly): improve trajectory forecasting for autonomous driving by modeling hallucinative driving intents. Feel: generate emotional descriptions of visual art that are metaphoric and go beyond grounded descriptions, and build these AI systems to be more inclusive of multiple cultures. I will conclude by pointing out future directions where imaginative AI may help develop better assistive technology for a multicultural and more inclusive metaverse, emotional health, and drug discovery.
Along with these stations, I will also cover vision language models. I aim to cover VisualGPT, ChatCaptioner, MiniGPT-4, and MiniGPT-v2 (a newer version we finished in late September), as well as recent models that use LLMs to generate images of visual stories from their language descriptions. I will also cover applications in affective vision and language, and how we build these technologies and methods to be inclusive of many languages and cultures.

Distinguished Speaker

Dr. Zag ElSayed
University of Cincinnati, USA

Dr. Zag ElSayed was born in Odessa; he is a computer engineering scientist specializing in brain-machine interfaces, artificial intelligence, machine learning, VLSI design, cybersecurity, IIoT, and IoE. He received his B.Sc. and M.Sc. with distinction and honors from Alexandria University in 2005, where he implemented the earliest framework architecture for Industrial Internet of Things (IIoT) implementation. Zag earned his second M.Sc. and his Ph.D. in Computer Engineering from the University of Louisiana at Lafayette in 2016. Currently, Zag is an assistant professor at the School of Information Technology at the University of Cincinnati, Ohio. He is also an active IT automation consultant for industrial companies in Louisiana, Oklahoma, and Texas. He has worked as a research engineer in Africa, Europe, and the USA. Since 2014, he has been a system designer, automation architect, and developer for leading oil and gas research companies. He is fluent in nine languages, a nationally recognized painter, and a registered Red Cross ERV volunteer. Zag believes the key to understanding the Universe is ciphered in the human brain. He has given a talk at a TEDx event.

Brain-Computer Interface (BCI) systems have advanced significantly due to the convergence of technology and neuroscience in the ever-changing field of computer engineering. This keynote explores the fascinating field where computer engineering and brain waves collide, shedding light on the revolutionary possibilities that emerge when the human mind's ability to communicate naturally with technology is tapped. The talk will delve into the complexities of BCI technology, including new developments that allow for a direct brain-to-computer connection. Participants will follow the development of BCIs from their inception as experimental devices to today's state-of-the-art solutions, and consider what the future may hold. At the heart of the keynote is the investigation of brain waves as a means of improving human-computer interaction. The audience will learn about the many kinds of brain waves, why they matter for creating responsive and adaptable computing systems, and how the decoding and interpretation of brain signals can transform a variety of industries, from immersive virtual environments to assistive technologies. As we stand at the nexus of neuroscience and computer engineering, this keynote invites participants to envision a future where the symbiotic relationship between the human mind and machines not only enhances our capabilities but also fosters a new era of innovation. Join us for an enlightening exploration of the present and future possibilities that arise when we dare to unlock the immense potential lying dormant within the neural intricacies of the human brain.

Distinguished Speaker

Dr. Rasha Gargees
Central Michigan University, USA

Dr. Rasha Gargees is an Assistant Professor in the Department of Computer Science at Central Michigan University. She received her Ph.D. in Computer Science from the University of Missouri, where she also completed a master's degree in the same field. Following her doctoral studies, Dr. Gargees served as a Postdoctoral Fellow at the University of Missouri. Her research interests encompass smart cities, cloud and fog computing architectures, artificial intelligence, machine learning, parallel and distributed computing, medical diagnosis, geospatial intelligence, the Internet of Things (IoT), computer networks, and big data. She actively participates in program and organizing committees for prestigious venues. Dr. Gargees has been recognized with several accolades, including the PFFFD Postdoctoral Fellowship, the HCED Scholarship, the Best Presenter Award at IEEE CCWC 2020, the 1907 Women in Engineering Award, and Recognized Reviewer certificates from leading journals.

Smart cities are rapidly evolving, leveraging technological advancements to enhance urban living. Central to this evolution is the efficient processing and management of data generated by various sources. This talk presents a comprehensive framework that integrates edge and cloud computing with intelligent agents to address the challenges of data processing in smart cities. At the core of the framework is the concept of edge computing, where data is processed closer to its source, reducing latency and bandwidth requirements. Edge nodes are equipped with intelligent agents that analyze and process data, sending only relevant information to the cloud for further processing. This distributed approach optimizes resource utilization and ensures timely responses to events in the city.

The cloud component of the framework provides scalable storage and computing resources for processing large volumes of data. Machine learning models deployed in the cloud analyze aggregated data to derive insights and make informed decisions. One of the key advantages of the framework is the automation enabled by intelligent agents. These agents can autonomously perform tasks based on predefined functions and machine learning models. This automation not only improves the efficiency of city operations but also enhances the overall responsiveness and adaptability of the city's infrastructure.
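The edge-filtering idea described above can be sketched in a few lines of Python. Everything in this toy example is hypothetical (the agent, the threshold, and the "relevance" rule are stand-ins, not part of the presented framework): an edge agent processes readings locally and forwards only anomalous events to a cloud aggregator.

```python
from dataclasses import dataclass, field

@dataclass
class EdgeAgent:
    """An intelligent agent at an edge node: processes data close to its source."""
    threshold: float                               # readings above this count as "relevant"
    forwarded: list = field(default_factory=list)  # events queued for the cloud

    def ingest(self, sensor_id, reading):
        # Local processing: routine readings are dropped at the edge,
        # saving bandwidth; only anomalies travel to the cloud.
        if reading > self.threshold:
            self.forwarded.append({"sensor": sensor_id, "value": reading})

class Cloud:
    """Aggregates only the pre-filtered events sent by edge agents."""
    def __init__(self):
        self.events = []

    def receive(self, batch):
        self.events.extend(batch)

agent = EdgeAgent(threshold=80.0)
for sid, value in [("noise-3", 42.0), ("noise-3", 95.5),
                   ("air-7", 61.0), ("air-7", 88.2)]:
    agent.ingest(sid, value)

cloud = Cloud()
cloud.receive(agent.forwarded)  # only the readings that crossed the edge filter arrive
```

In this sketch only two of the four readings reach the cloud, which is the bandwidth and latency saving the framework relies on; a real deployment would replace the fixed threshold with the agents' machine learning models.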

The framework presented in this talk offers a holistic approach to leveraging edge and cloud computing with intelligent agents for efficient data processing, decision-making, and automation in smart cities. By embracing this framework, cities can harness the power of technology to create safer, more sustainable, and more livable urban environments.

Distinguished Speaker

Dr. Linxi Zhang
Central Michigan University, USA

Dr. Linxi Zhang is an Assistant Professor at Central Michigan University with a research focus on automotive cybersecurity, including intrusion detection systems, machine learning, and wireless network and mobile system security. Her current work centers on in-vehicle network security and the development of machine learning-based intrusion detection systems for Controller Area Network (CAN) bus systems. Dr. Zhang's contributions have been recognized at prestigious conferences such as the IEEE Conference on Computer Communications (INFOCOM) and the SAE World Congress Experience (WCX). Additionally, she actively participates in organizing and technical program committees for various professional conferences and workshops.

In today's world, cars are more than just vehicles; they are becoming smart devices on wheels, connected to the internet, and equipped with various advanced features. This progress, while exciting, brings new challenges in keeping these vehicles safe from cyber attacks. As cars get smarter, so do the threats against them, creating a pressing need for stronger security measures. This talk presents a fresh approach that combines machine learning techniques with Intrusion Detection Systems (IDS) to boost the security of car systems. We explore a unique mix of traditional detection methods and modern machine learning to create an IDS framework that is both flexible and precise in spotting threats. A key part of our strategy is using Binarized Neural Networks (BNNs), which are specially designed to quickly detect threats without needing a lot of computing power. This presentation will share insights from research and practical methods, highlighting the progress in using machine learning to detect and stop cyber threats effectively. Our goal is to develop a robust, modern security framework that can protect cars from the growing range of cyber threats, ensuring the safety and privacy of everyone in this connected world.
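To give a flavor of why BNNs suit this setting, the sketch below (a generic illustration of binarized networks, not Dr. Zhang's IDS; the features, layer sizes, and decision rule are invented) shows the core trick: weights and activations are constrained to +1/-1, so each dot product reduces to XNOR-and-popcount operations on real hardware, which keeps inference cheap enough for in-vehicle controllers.

```python
import numpy as np

def binarize(x):
    """Sign binarization: every weight and activation becomes +1 or -1."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def bnn_layer(activations, weights):
    # A binarized layer: +/-1 activations times +/-1 weights. On hardware this
    # dot product becomes XNOR + popcount instead of floating-point multiplies,
    # which is what makes BNNs attractive for resource-constrained ECUs.
    return binarize(binarize(activations) @ binarize(weights))

rng = np.random.default_rng(1)
x = rng.normal(size=8)        # e.g. features extracted from a window of CAN frames
W1 = rng.normal(size=(8, 4))  # first binarized layer
w2 = rng.normal(size=4)       # binarized scoring head
h = bnn_layer(x, W1)
score = int(binarize(h) @ binarize(w2))  # toy rule: a positive score flags the window
```

In practice the binarized network would be trained with real CAN traffic (using straight-through gradient estimators); the point here is only that the arithmetic after binarization needs no floating-point hardware.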

Distinguished Speaker

Dr. Alexander Jesser
University of Heilbronn, Germany

Dr. Alexander Jesser holds a diploma degree in Computer Engineering from the University of Paderborn, Germany, and a Ph.D. in Computer Engineering from the Johann Wolfgang Goethe University of Frankfurt am Main, Germany. Since 2013 he has been a full Professor of Embedded Systems and Communications Engineering at the University of Heilbronn, Germany. From 2019 until 2023 he was the Dean of Studies for the Bachelor's and Master's programs in Electrical Engineering at the same university. In 2021 he founded the Institute of Intelligent Cyber-Physical Systems (ICPS; www.hs-heilbronn.de/icps) in Heilbronn. He conducts research in the field of cyber-physical systems and supervises more than 20 PhD students.

Cyber-physical systems are at the heart of disruptive new technologies such as Industry 4.0 (the Industrial Internet) and the Internet of Things (IoT). Artificial intelligence and machine learning methods are indispensable to this field.
Together with several industrial partners, the Institute for Intelligent Cyber-Physical Systems (ICPS) is researching applied intelligent solutions with the aid of artificial intelligence.
This presentation will provide an overview of the practice-oriented research carried out together with industrial partners from a wide range of sectors. Research projects in image and speech signal processing, formal verification of heterogeneous systems, and telecommunications technology will be presented, and the latest scientific findings will be outlined.

Distinguished Speaker

Dr. Fahmi Khalifa
Morgan State University, USA

Dr. Fahmi Khalifa, Assistant Professor of ECE at Morgan State University (MSU), received his BS and MS degrees in Electronics and Communication Engineering from Mansoura University, Egypt, in 2003 and 2007, respectively, and his PhD degree in 2014 from the Electrical and Computer Engineering (ECE) Department, University of Louisville (UofL), USA. Dr. Khalifa has more than 15 years of hands-on experience in the fields of artificial intelligence, image/signal processing, machine learning, biomedical data analysis, and computer-aided diagnosis, with more than 180 publications in top-rank journals and international conferences in addition to five US patents. Dr. Khalifa is an associate editor for IEEE Access, IEEE JBHI, and Frontiers in Neuroscience; has guest-edited multiple special issues; and serves as a reviewer for 50+ journals and conferences. Dr. Khalifa's honors and awards include the Mansoura University scholarship for distinctive undergraduate students for four consecutive years (1999-2002), the Theobald Scholarship Award (ECE, UofL, 2013), the ECE Outstanding Student award twice (ECE, UofL, 2012 and 2014), the John M. Houchens award for outstanding dissertation (UofL, 2014), the second-place Post-Doctoral Fellow award at Research! Louisville (UofL, 2014), the PowerLIVE Award for faculty commitment to students and their academic success (MSU, 2023), and selection as a finalist for the "Instructional Innovator of the Year" award (MSU, 2023).