Between Prompts and Principles
How PhD Researchers Navigate AI Tools, Tacit Knowledge & Ethical Verification
This project investigates how PhD scholars and HCI researchers integrate AI tools like ChatGPT, Perplexity, and Gemini into their research workflows. It explores patterns of adoption, ethical concerns, and verification practices across three experience levels: novices, proficient users, and experts. The study highlights both benefits (efficiency, accessibility, language support) and risks (skill erosion, creativity loss, over-reliance). Through qualitative interviews and quantitative surveys, the research uncovers how ethics and emotions shape trust, usage, and accountability in academic contexts.
Skills
Literature Mapping & Thematic Review, Qualitative Interviews & Transcription, Quantitative Survey & Statistical Analysis, Thematic & Semantic Analysis
Industries
Human–Computer Interaction (HCI), Research Training, Higher Education & Academia
This project examines how PhD scholars and HCI researchers integrate AI tools like ChatGPT, Gemini, and ResearchRabbit across research stages such as literature review, writing, data analysis, and ideation. Through interviews, surveys, and thematic analysis, six key themes emerged: AI as an assistive aid, ethical sensitivity, vigilant verification, fears of skill loss, creativity risks, and varied adoption styles. While novices show enthusiasm and rely heavily on AI, proficient researchers are cautious, and experts use it selectively for advanced tasks. The findings highlight that ethical responsibility strongly shapes verification practices, with broader implications for research training, policy, and the design of transparent, responsible AI tools.
The study shows that AI tools are widely used in HCI research for literature reviews (78%), writing (65%), and data analysis (54%). Novices rely heavily on AI for speed, proficient researchers use it cautiously with ethical checks, and experts adopt it selectively for advanced tasks. Key concerns include skill erosion (62%), creativity risks (49%), and a strong need for verification (88%). Importantly, ethical responsibility correlates strongly with rigorous verification, underscoring integrity as the foundation of responsible AI use in research.
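To make the correlation claim concrete, the sketch below shows one way such a relationship could be checked: a Spearman rank correlation between two Likert-scale survey items, which suits ordinal data better than Pearson's r. The column names (`ethical_responsibility`, `verification_rigor`) and the sample values are hypothetical illustrations, not the study's actual data.

```python
# Minimal sketch: rank correlation between two Likert-scale survey items.
# Column names and values are hypothetical, for illustration only.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical responses on a 1-5 Likert scale
responses = pd.DataFrame({
    "ethical_responsibility": [5, 4, 5, 3, 4, 2, 5, 4],
    "verification_rigor":     [5, 4, 4, 3, 5, 2, 5, 3],
})

# Spearman's rho handles ordinal (ranked) responses without
# assuming equal spacing between Likert points
rho, p_value = spearmanr(responses["ethical_responsibility"],
                         responses["verification_rigor"])
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
```

A significant positive rho in an analysis of this shape would support the reported link between ethical responsibility and rigorous verification practices.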

