I was recently invited to give a talk to a cohort of DBA (Doctorate in Business Administration) students as part of an event looking at AI and Digital tools for research. It was a good session with plenty of interesting ideas about how to use AI and other technology in research.
I recorded the event, and thought I would turn it into a blogpost in a different way. I combined the transcript of my talk, the slides I prepared, and an example of a previous post of mine. Using this, I asked ChatGPT 4o to compose a blog post covering the points I made, in the style of my previous blogposts.
This also gave me a chance to use the ‘Canvas’ tool, designed for editing writing and code. It allows you to highlight specific passages and ask for granular changes, rather than having to rewrite entire documents.
What follows is therefore AI-generated, but built on a foundation of material that was mine, and edited by me with AI assistance. I don’t expect to write like this very often, but for a situation like this, where the material already exists in one format and the job is to convert it into another, the approach may have promise.
Any comments are welcome, both on the actual content, and the method of writing.
AI Tools in Research: A Powerful (But Unreliable) Assistant
Artificial Intelligence is rapidly reshaping the research process. With tools that can summarise vast amounts of information, generate new ideas, and assist in data analysis, AI can serve as an invaluable research assistant—albeit an unreliable one. The challenge, then, is not just how to use AI, but how to use it well.
A useful way to think about AI in research is to imagine it as a highly intelligent but occasionally misleading research assistant. If you had such an assistant, what tasks would you delegate to them? More importantly, how would you supervise their work to ensure accuracy?
The key is to understand when AI is most helpful and when it is best to rely on human expertise. AI excels at processing large volumes of information quickly, summarising text, and generating ideas. However, it struggles with tasks requiring deep comprehension, guaranteed accuracy, or creative nuance.
Despite its strengths, there are times when AI should be avoided. These points are adapted from Ethan Mollick’s work on AI usage in research (link to original article):
- When you need deep understanding: Summaries can be helpful, but AI cannot replace thorough engagement with the material.
- When accuracy is critical: AI models still hallucinate facts and misinterpret data. Trust, but verify.
- When the effort is the point: Learning, developing arguments, and engaging with literature are core to academic growth. Over-reliance on AI can diminish these skills.
Research generally follows a series of steps, each of which AI can support in different ways:
1. Idea Generation
AI can help brainstorm research topics, refine initial ideas, and identify gaps in the literature. By providing structured prompts, researchers can generate a range of potential questions and refine them based on feasibility and data availability. For example, a useful prompt might be: “Act as an experienced researcher in [field]. Suggest five novel, feasible research topics that fill gaps in current knowledge about [topic]. Provide a brief rationale for each, including potential methods or data sources and any anticipated challenges.”
This type of prompt directs AI towards structured, targeted idea generation while ensuring responses remain within realistic academic constraints. (See: Nature article on AI idea generation)
2. Literature Review
AI-powered search engines, such as Consensus, can summarise academic literature, highlight key findings, and even extract methodological details. A useful prompt might be: “Summarise key themes from recent research on [topic], identifying major studies, methodologies used, and consensus findings. Provide citations where possible.”
This ensures AI output remains grounded in existing literature rather than generating speculative claims.
3. Research Questions & Objectives
Once a research question is formulated, AI can evaluate its feasibility based on scope, data access, and methodological constraints. A detailed prompt could be: “Evaluate the feasibility of this research question: [insert question]. Consider factors such as data availability, methodological requirements, ethical concerns, and time constraints. Suggest refinements if necessary.”
4. Study Design & Methodology
AI can assist in selecting appropriate methodologies, outlining study designs, and suggesting potential pitfalls. This is a task particularly well suited to reasoning models such as o1 and DeepSeek, which are designed to provide structured and logical outputs when tackling complex problem-solving scenarios. A structured prompt might include: “Develop a comprehensive study design for the research question: [insert question]. Include methodology (qualitative, quantitative, or mixed methods), data collection strategies, analysis approach, ethical considerations, and limitations.”
5. Data Collection
AI can assist with finding and cleaning data. If using AI for scraping, a prompt could be: “Write a Python script to scrape weather data from the NOAA database, ensuring the data is formatted correctly for statistical analysis.” A sketch of the kind of script this might produce follows after this list. (See: Recent study on AI-generated datasets)
6. Data Analysis
AI can generate statistical outputs, visualise data, and suggest interpretations. However, it is best used as a check rather than the primary analyst. A useful prompt might be: “I have a dataset containing [briefly describe the columns and data type]. Perform a basic exploratory data analysis, including summary statistics, missing value assessment, and visualisations. Provide Python code where applicable.” A second sketch after this list shows the sort of code this can return.
7. Interpretation of Findings
AI can assist with hypothesis testing and robustness checks. However, human insight remains critical. A prompt to guide AI might be: “Based on this dataset and analysis, what are the key findings? Are there any significant patterns or trends? Highlight possible limitations of the interpretation.”
8. Writing the Paper
AI can be a useful drafting tool, providing sentence alternatives and improving clarity. However, relying on AI-generated text risks losing the author’s academic voice. A prompt for refining writing might be: “Here is my original sentence: [insert sentence]. Provide five alternative phrasings that preserve the same meaning but differ in style and word choice.”
9. Revision & Submission
AI-powered grammar and style checkers can streamline editing. Additionally, AI can help tailor articles to specific journals by adapting tone and structure. A useful prompt might be: “Revise this abstract to align with the style and requirements of [specific journal]. Ensure clarity, conciseness, and adherence to academic conventions.” (See: AI-assisted journal submission strategies)
AI will continue to evolve, making research more efficient while also raising new challenges. Future researchers will likely work alongside AI, refining research design, collecting physical data, and ensuring replication efforts maintain integrity. (See: AI’s future role in research)
The question is not whether researchers should use AI, but how they can use it effectively while maintaining rigorous academic standards. By treating AI as an assistant—one that needs careful supervision—researchers can leverage its strengths while mitigating its weaknesses.
This post is based on a lecture delivered at De Montfort University for a cohort of DBA students in January 2025, incorporating additional material from the original slides. Links referenced in this post are taken from the presentation slides and lecture discussion.