South Dakota professor ranks among world’s top 2 percent of scientists

Nirmalya Thakur sits in rare scientific company.
The South Dakota Mines professor’s research recently landed him on a list of the world’s top 2 percent of scientists. Compiled by Stanford University and publishing company Elsevier, the ranking evaluates more than 9 million scientists globally based on standardized citation metrics that measure the global impact of scientific research.
The analysis classifies researchers into 22 broad fields and 174 subfields, identifying those who rank among the top 2 percent in their respective subfields. Thakur’s inclusion is in the broad field of information and communication technologies, with primary recognition in artificial intelligence and image processing and secondary recognition in computation theory and mathematics.
Thakur’s work spans big data, data analysis, human-computer interaction, machine learning and natural language processing. He has co-authored more than 50 peer-reviewed publications in leading conferences and journals.
“My work leverages interaction data from people’s everyday activities, both within smart home environments and on digital platforms like social media, to address critical societal challenges,” he explained.
“In the area of AI-powered infoveillance and social media analytics, I develop computational frameworks that capture the dynamics of social media discourse across linguistic and cultural boundaries, enabling systematic analysis of how public sentiment, misinformation and health-related anxiety emerge and evolve during global crises. I also develop assistive technologies that support healthy aging and independent living, including human activity recognition, user-specific activity recommendations, indoor location detection, fall detection and the identification of cognitive impairment from user interactions.”
Thakur grew up in India, where travel gave him early exposure to older adults in some regions of the country striving to maintain independence while navigating day-to-day difficulties that often intensify with age.
“In some regions, I also observed how information shaped decision-making: Uncertainty escalated rapidly when health guidance was unclear, inconsistent or difficult to verify, and individual decisions were shaped not only by medical evidence but also by community narratives and circulating claims,” he said. “These aging-related and information-related challenges and their impact on day-to-day lives shaped the research questions I became determined to pursue: how technology can support safety and autonomy in everyday life and how computational analysis can help understand and mitigate misinformation and anxiety during health crises.”
He holds a Ph.D. in computer science and engineering from the University of Cincinnati and worked as an assistant professor of computer science at Emory University for two years before joining South Dakota Mines in 2024 as an assistant professor in the department of electrical engineering and computer science.
“South Dakota Mines aligned strongly with the direction of my teaching and research, given its mission ‘to educate scientists and engineers to address global challenges, innovate to reach our creative potential and engage in partnerships to transform society,’ and its vision ‘to develop world-class leaders in science and engineering to benefit society,’” he said.
“These institutional priorities align closely with the focus of my research and mentorship: developing AI-driven systems to address consequential problems in health, aging and public information while preparing students to become technically excellent, socially responsible leaders who can translate research into real-world impact.”
Also recognized by the IEEE Computer Society as one of the Top 30 Early Career Professionals of 2024, Thakur recently developed the first emotion-driven navigation system for AI agents. His project, Emotional Detours, enables AI agents to recover from setbacks and continue efficiently toward their goals, mirroring how people overcome challenges in daily life.
“Dr. Thakur’s work is the type of research that puts South Dakota on a global stage,” said Joni Ekstrum, executive director of South Dakota Biotech. “The research being done in our state is world-class, and it’s exciting to see it recognized. Top talent are drawn to work like Dr. Thakur is achieving, and an honor like this will be noticed.”
We sat down with Thakur for a closer look at his work.
What sparked your interest in your field?
My interest in this field began with a simple realization: Risk and uncertainty tend to take two closely related forms. One is physical and immediate: falls, mobility limitations, cognitive changes and the everyday safety challenges that become more consequential with age. The other is informational: misinformation, fake news, anxiety and rapidly shifting narratives that influence how people interpret health guidance and respond during a crisis. Considering these vulnerabilities together convinced me that addressing high-stakes human problems requires approaches that are human-centered in design and computationally rigorous in both development and validation.
During my doctoral studies at the University of Cincinnati, this perspective shaped the direction of my research in ambient-assisted living and AI-driven smart-home technologies. I investigated several research questions central to making safety- and independence-supporting smart-home systems deployable: how to infer activities and indoor location from multimodal, time-varying behavioral data, and how to detect safety-critical events such as falls with reliability that generalizes across individuals and living environments. The work required designing and evaluating AI-driven systems using behavioral and sensor data that are inherently noisy, heterogeneous across individuals and homes, and shaped by context and activities. The emphasis, therefore, extended beyond developing models that perform well in carefully controlled settings; it centered on developing intelligent systems that remain reliable under real-world variability and on establishing evaluation practices that make the resulting inferences credible for real-world safety monitoring and decision support.
In parallel, I became increasingly interested in information-related risk during global health crises, particularly the ways in which uncertainty is produced and amplified at scale through social media. During crises, social media discourse evolves rapidly; guidance changes, competing interpretations circulate and unverified claims can diffuse widely, shaping individual behavior and collective response. These dynamics motivated my interest in computational infoveillance and social media analytics, where machine learning and natural language processing, combined with temporal modeling and network-aware analysis, enable systematic characterization of how sentiment, misinformation, fake news and anxiety emerge, propagate and persist across communities, languages and cultural contexts. In summary, the spark was not a single moment, but the recognition of a shared scientific challenge across both domains: developing and validating computational systems that transform complex, heterogeneous behavioral evidence, captured through daily activity and expressed through communication, into trustworthy, evidence-based conclusions under conditions of uncertainty.
How would you describe your research to someone not in the scientific community?
I work on human-centered AI that helps people recognize and respond to risk earlier, at both the individual and population levels. In practical terms, I develop AI-driven systems that learn from two complementary sources of behavioral data, longitudinal patterns in everyday activities and large-scale public discourse on social media, to detect emerging risks and support timely action. These AI-driven systems, spanning smart-home assistance and computational infoveillance, provide systematic, evidence-based situational awareness and decision support that enable earlier detection of safety and functional risks in daily living and more timely, better-targeted crisis communication and interventions as narratives, misinformation, fake news and anxiety evolve on social media.
One major area of my work focuses on safety and independence in the home, particularly for older adults. In IoT-based smart homes, sensors can provide longitudinal observations of daily activities, but converting these data into reliable indicators of safety and functional change requires systems that can handle noise, missingness and strong context dependence. People’s everyday interactions vary widely, and sensor readings are not always perfect. Moreover, the events of most significant concern, such as falls, are relatively rare, making detection a low-frequency, high-stakes problem in which false alarms and missed events both carry substantial costs. My work addresses core capabilities that make practical assistance possible, such as activity recognition, indoor location awareness, fall detection and interaction-based indicators linked to cognitive or functional change, while emphasizing robustness, personalization and evaluation practices that reduce false alarms and support credible decision-making. The broader aim is to enable earlier recognition of emerging risk and more timely support without compromising autonomy or dignity.
A second, closely connected area of my work addresses risk during global crises by analyzing large-scale public discourse on social media platforms. During crises, what communities believe and how they behave are shaped not only by official guidance but also by how claims, conspiracy theories, fake news, false reports and misinformation circulate and gain traction. Misinformation and anxiety can spread rapidly, and the resulting shifts in sentiment and trust can influence behavior at scale. My work in computational infoveillance uses machine learning and natural language processing, paired with time-aware and diffusion-aware modeling, to characterize how narratives, sentiment, misinformation, fake news and anxiety evolve across communities, languages and cultural contexts. The goal is to provide systematic, evidence-based situational awareness that can inform crisis communication and help target interventions where they are most needed.
Across both areas of my research, the scientific focus is the same: developing AI-driven systems that yield trustworthy, actionable evidence from complex human behavior, whether that behavior is reflected in daily activities or expressed through social media, so that individuals and institutions can make better decisions when uncertainty is high and the consequences are substantial.
What sort of impact do you hope your research can achieve? What takeaways do you hope people gain from it?
The impact I hope my research will have is the development of trustworthy AI systems that measurably improve how people navigate high-stakes health contexts, both in everyday life and in the information ecosystems that shape collective responses. The emphasis is on rigorous, human-centered system development with demonstrated real-world utility: enabling earlier recognition and quantification of emerging risk, supporting timely and better-informed decisions and reducing avoidable harm. Responsible development and deployment of such systems can strengthen individual well-being while also supporting the institutions and communities that must act under uncertainty.
In healthy aging and independent living, this impact is reflected in practical benefits that older adults, caregivers and clinicians can experience directly. Many life-altering events, such as falls, rapid loss of functional independence or cognitive decline, are often preceded by subtle behavioral changes that are difficult to detect consistently in real time. Assistive technologies can help by identifying meaningful deviations from routine behavioral patterns, estimating elevated risk earlier and enabling timely, personalized support rather than a one-size-fits-all approach. More broadly, the goal here is to improve early detection and prevention of avoidable harm, reduce preventable injuries and ease caregiver burden by providing clearer, evidence-based indicators of when additional support is warranted while preserving the autonomy, dignity and privacy of individuals as they age.
This commitment to early detection and decision support also informs my work on public discourse during global crises. During virus outbreaks, public understanding and behavior can shift quickly as narratives evolve across social media platforms and communities. Misinformation and emotionally charged content can decrease trust, intensify anxiety and shape decisions in ways that affect health outcomes at scale. My work in AI-powered infoveillance seeks to provide actionable situational awareness by quantifying how sentiment, anxiety and misinformation change over time and across languages, communities and platforms. The goal here is explicitly decision-oriented: enabling stakeholders to identify emerging dynamics early enough to improve the timing and targeting of communication, reduce confusion and strengthen trust when clarity is most consequential.
The takeaways I hope people draw from this work are twofold. First, well-being is shaped by both physical and informational factors, and adverse outcomes are more likely when physical risk and information quality are addressed in isolation rather than treated as interconnected drivers of well-being. Second, AI should be evaluated by the quality of support it provides: whether it improves safety and decision-making and whether it does so in ways that are transparent, privacy-conscious and consistent with human dignity. For me, success is achieved when these AI-driven systems extend beyond scholarly dissemination into practice and contribute to the practical infrastructure of care and public health, helping older adults remain safely independent, assisting communities to respond to crises with greater clarity and demonstrating that rigorous AI can deliver societal benefit at a meaningful scale.
What was your reaction on learning you’d been named to a list of the top 2 percent of scientists worldwide?
Learning that I had been included on the Stanford–Elsevier list of the top 2 percent of scientists worldwide was genuinely humbling. The first response was gratitude because recognition of this kind reflects years of sustained scholarship rather than a single moment, and it prompted me to pause and appreciate the broader trajectory of my work. The listing also felt, immediately and unmistakably, like shared recognition. The publications that contribute to visibility and influence are rarely the result of solitary effort. They reflect the commitment of students who carried out careful analyses and experiments, collaborators who strengthened the ideas and methods, mentors who shaped my development and colleagues whose feedback improved the work at critical stages.
The recognition underscored the importance of the commitments I bring to my work: technical rigor, ethical and transparent dissemination of findings and a focus on problems with clear societal and public-health relevance. It reinforced my commitment to mentoring students toward excellence in research and to advancing work that is technically rigorous, transparent in its claims and consequential for practice and public understanding.
What has kept you at South Dakota Mines? What do you enjoy about your role there?
As an assistant professor in the department of electrical engineering and computer science, I conduct research in computer science, teach undergraduate and graduate computer science courses and engage in service to the department, the university and the broader scientific community.
In my courses, I focus on equipping students with a strong foundation in core concepts while emphasizing real-world use cases that illustrate the relevance of the course topics. I find it especially rewarding to see students progress from understanding theoretical concepts to successfully applying them to assignments that replicate real-world challenges. Through these assignments, students improve their understanding of course concepts by approaching them from a real-world perspective, gaining insights into how theoretical knowledge translates into practical application.
In my research group, students actively participate in various stages of ongoing projects, ranging from shaping research questions to conducting data analysis and preparing manuscripts. I find mentoring students particularly rewarding when I see them achieve significant milestones such as co-authoring their first research paper or successfully presenting their research at academic conferences. Several students I have mentored have co-authored papers at different international conferences in the United States, Sweden, Denmark, Italy, China and Switzerland, demonstrating strong student outcomes and international scholarly visibility.
I find these experiences, both in the classroom and through research, highly rewarding because they highlight my students’ growth, professional development and readiness to succeed as professionals in computer science. The opportunity to combine rigorous teaching with sustained student research engagement, and to watch students grow through both experiences, is what has kept me at South Dakota Mines.
What’s next for you in your work and research?
In the near future, my research will prioritize trustworthy AI: systems designed not only for high performance but also for robustness, interpretability, uncertainty-aware inference and reporting, and rigorous, bias-aware evaluation so that they can be used responsibly in high-impact, real-world settings.
In ambient-assisted living, my upcoming projects will advance assistive systems for healthy aging that are dependable under day-to-day conditions. Some of the key directions will include improving fall detection and activity-aware monitoring while reducing false positives, strengthening stability across individuals and living situations and supporting personalization as routines and needs change over time. Technical performance alone is insufficient in assistive contexts because these systems succeed only when people are willing and able to use them consistently; frequent false alarms, unclear system behavior or poorly designed interactions can decrease trust and ultimately limit real-world benefit. For this reason, my work will integrate human-centered design as a core methodological component, explicitly specifying system behavior and failure modes, designing interactions that support user control and comprehension, and adopting evaluation protocols that reflect usability, burden and trust under realistic conditions.
In AI-powered infoveillance and natural language processing, my upcoming projects will apply the same commitments to reliability and accountability in the analysis of social media discourse during global crises. Beyond investigating sentiment, misinformation, fake news and anxiety, I will emphasize interpretability and uncertainty-aware reporting so that findings are communicated with appropriate qualification, particularly when narratives shift rapidly and when language and platform dynamics vary across communities. This includes improving robustness across platforms and languages, conducting bias-aware evaluations and making model limitations explicit so conclusions remain appropriately scoped. These advances are intended to support crisis communication with timely, evidence-based assessments that are both analytically rigorous and responsibly reported. In summary, my upcoming projects will focus on building trustworthy, validated AI that turns behavioral evidence into decision support, supporting safety and autonomy in everyday life and enabling more reliable responses during global crises.
I treat trustworthiness as an engineering requirement rather than an aspirational goal: Accuracy in AI systems is necessary, but it is not sufficient without robustness, transparent treatment of uncertainty and bias-aware evaluation that clarifies both strengths and limitations. I am equally committed to mentoring students to develop and uphold these standards, preparing them to pursue scholarship that is technically rigorous, ethically responsible and communicated effectively to both technical and nontechnical audiences. Alongside scholarly dissemination, I also prioritize public-facing communication of these ideas; for example, I recently delivered an invited talk as part of the IEEE Systems Council – Early Career Speakers Program on an AI-based early-warning framework for infoveillance from social media. If there is one takeaway I would emphasize, it is that AI earns trust in health-related settings only when it is developed and evaluated in ways that are reliable in practice, interpretable in context and accountable to the people and communities it is intended to serve.