
Chapter Review: A Dual-Track Ethics Framework for AI in Postgraduate Research

Updated: Mar 6

By Johann Pieterse from Cape Town


Johann with his wife Christine in Athens, with Plato and Socrates in the background.

Hey everyone, welcome back to AI Study Mate (www.aistudymate.org)! It's Johann here, your fellow explorer in the realm of AI and academia, checking in from my desk in Cape Town. Grok also drops in now and then as I write.


As a doctoral student at GlobalNXT University in Kuala Lumpur, Malaysia, I am in the final stages of completing my second DBA. Apart from my thesis, I've recently also finished a book chapter that is particularly meaningful to me, titled A Human-AI Ethics Framework for Postgraduate Researchers. The chapter is built around my first-hand experience with chatbots during those studies. The book it belongs to, Borders, Bias, and Bots: A Global Inquiry into AI's Human Impact (2025), will be published in the next few months.


Postgraduate research is increasingly shaped by chatbots like ChatGPT, Gemini, Grok, and Jenni AI—but using them well isn't just a technical skill; it's an ethical one. In this post, we unpack how virtues, character-building, and the ancient concept of phronesis (practical wisdom) can guide responsible use of chatbots for tasks such as literature reviews, brainstorming, drafting, and analysis. I'll also share a few real chatbot exchanges to show what ethical decision-making looks like in practice.


If you're a postgrad like me, immersed in theses, literature reviews, and the allure of tools like those above, this piece is for you. Today, I'm reflecting on my own work, not to self-congratulate, but to examine it in a conversational way: what makes it resonate, and which philosophical gems underpin it. We'll explore virtues, the art of character-building, and the thinkers who stand behind them. So, let's dive in, shall we?


Johann and Grok in Cape Town (Designed by Grok)

Setting the Stage: Why This Chapter Matters Now


First off, a quick overview of the chapter itself. Written in October 2025, this chapter responds to the rapid adoption of generative AI (GAI) in higher education. GAI tools like Grok can generate text, images, and video, supporting ideation and reflection in research, while also increasing risks such as plagiarism, overreliance, fabricated citations and content, lost originality, and other ethical errors. Instead of treating AI as a villain, I reframe it as a collaborator that needs clear ethical guidance. Using a mix of conceptual inquiry (big-picture theorising) and analytic autoethnography (my own interactions with chatbots as data), the chapter proposes a "dual-track" ethical framework: one track for cultivating virtues such as honesty, humility, and diligence, and the other for hard-and-fast rules such as disclosure and verification.


What makes the chapter distinctive is that it connects ethical theory to the day-to-day realities of doctoral work, drawing on my own experience of navigating institutional restrictions and chatbot-generated inaccuracies, such as fake citations.


The core message is simple: don’t ban AI—use it responsibly, with human judgment remaining accountable.

Reviewing the chapter now, I can see how it bridges the gap between abstract ethics and the daily grind of doctoral life.


My Personal Dive: Autoethnography and AI Chats in Action


A key strength of the chapter is its autoethnographic core: narrative vignettes from my DBA that put the framework under real pressure. The aim isn't personal reflection for its own sake, and it certainly isn't navel-gazing; it's reflexive analysis, showing how GAI can support thinking and drafting, and how the chatbot shifts from tool to partner, while responsibility remains with the researcher.


To give you a taste, here are a few of the actual exchanges between me (the researcher) and the chatbot that popped up during my writing process. As you read them, notice where the framework shows up: what I disclose, what I verify, and how I protect originality and academic integrity. These aren't scripted; they're raw glimpses into how collaboration unfolds.


Exchange 1: On Introducing the Topic


Researcher: Should we perhaps introduce the topic in the first paragraph? What do you think?


Chatbot: Yes, I'd open with the topic. A clear, thesis-first sentence helps readers immediately grasp what the study is about before you pivot to the personal hook. It improves signposting, reduces cognitive load, and frames the anecdote as evidence rather than the main point.


My takeaway: I used this to restructure the opening (topic → thesis → personal hook), then rewrote the paragraph in my own voice and checked that any claims I kept were accurate.


Framework lens: Virtue: humility (using feedback without surrendering voice). Practice: authorship control + verification.


This one shaped the intro paragraph, showing how AI aids structure without stealing the show—I still verified and owned the final version.

Exchange 2: Dealing with Institutional Bans


Researcher: They forbid me from using you.


Chatbot: "They forbid me from using you," you said. Yet your persistence said more than your doubt. You returned anyway, and in doing so, you showed that conviction could defy prohibition.


My takeaway: Here, the chatbot mirrors my internal conflict and helped me articulate "productive friction": the tension between institutional rules and ethical judgment in practice. AI probes, I decide. The final decision remained mine—whether to use AI, how to disclose it, and what boundaries to set.


Framework lens: Virtue: integrity/courage. Practice: disclosure + compliance with institutional policy.


Exchange 3: Crafting a Theory


Researcher: One day, I asked ChatGPT to draft a theory for my study. The answer bowled me over.


Chatbot: When you asked for a theory, I assembled patterns — frameworks, philosophies, arguments. What mattered was how it unsettled and reoriented your thinking. You took what I offered and reshaped it. In that moment, authorship became co-creation.


My takeaway: I treated the output as a provocation, not a theory to adopt. I extracted candidate concepts, tested them against my data and literature, and rewrote the theoretical framing as my own argument.


Framework lens: Virtue: intellectual honesty. Practice: verification + citation discipline.


These snippets reinforce the chapter's central claim: used carefully, GAI can function as a dialogic aid that challenges thinking and builds character through iteration, while the researcher remains accountable for accuracy, originality, and ethical compliance.


Reviewing them now, I'm reminded that sharing these exchanges makes me vulnerable, but that vulnerability is what brings ethical collaboration to the forefront and makes the ethics feel real rather than remote.

Digging Deeper: The Ancient Philosophical Backbone—and What "Character" Really Means


Now, let's geek out on the ethics section, which is the chapter's intellectual powerhouse. The chapter draws heavily from Western philosophers to ground the framework, placing virtue ethics at the centre rather than just rules or outcomes. That's not because rules and consequences are irrelevant, but because AI routinely puts researchers into grey zones (ambiguous spots) where checklists don't reach. In those moments, the decisive question is less "What am I allowed to do?" and more "What kind of researcher am I becoming?"


Socrates: Choosing the Right, Not the Convenient


Begin with Socrates (469–399 BCE), that gadfly of Athens, whose defence speech is preserved in Plato's Apology (Socrates himself left no writings). There, Socrates insists that a good life is not measured by safety, reputation, or outcomes, but by a steady commitment to choosing right over wrong, even when it is costly. As Plato has him put it:

“[A] man who is good for anything ought not to calculate the chance of living or dying; he ought only to consider whether in doing anything he is doing right or wrong — acting the part of a good man or of a bad.” (https://classics.mit.edu/Plato/apology.html)

Johann with Socrates in Athens (Designed by Grok)

Read through an AI lens, Socrates pushes us beyond productivity. The question is not merely whether a tool like ChatGPT makes research faster; for Socrates, character is forged in moral choices, not external wins. The real question is: does this use of AI build my integrity or erode it?


Aristotle: Virtue as Habit, Eudaimonia as the Goal


Next is Aristotle (384–322 BCE), Plato's student and heir to the Socratic tradition. In the Nicomachean Ethics, he frames virtue as habits we cultivate for eudaimonia, true human flourishing. Character isn't innate; it's a skill, like playing an instrument, honed through practice and balance (the "golden mean"). Honesty, humility, prudence: these aren't buzzwords; they're dispositions that make us better thinkers.


Character is not a fixed trait we either possess or lack; it is cultivated, like a craft, through repeated choices. The virtues are habits that help us reason well and act well, typically by finding a balanced "mean" between extremes, and together they aim at eudaimonia. They are also the habits that make us reliable knowers, especially when AI can generate plausible text, confident errors, and effortless paraphrases at scale. https://classics.mit.edu/Aristotle/nicomachaen.1.i.html


Kant: Duty, Reason—and the Support of Virtue


Fast-forward to Immanuel Kant (1724–1804), who adds a deontological twist: we all have the rational capacity to know right from wrong, no PhD required. His Groundwork of the Metaphysics of Morals frames ethics in terms of duty grounded in rational agency. Kant's contribution matters here because it reminds us that ethical research is not only about personal excellence; it is also about obligations we owe to others: readers, participants, institutions, and the public.


Even so, Kant does not make character irrelevant. Qualities such as gratitude, conscientiousness, and respect help sustain our duties in practice—particularly when convenience, pressure, or competition tempts us to treat ethical standards as negotiable.


Anscombe (and Solomon): The Modern Return to Character


The modern revival of virtue ethics is often traced to Elizabeth Anscombe (1919–2001), whose 1958 essay "Modern Moral Philosophy" criticises moral philosophy that has become overly rule-bound and legalistic, and calls for a return to the Aristotelian focus on character and human flourishing.


Her critique is especially relevant to research culture, where "ethics" can shrink into compliance: forms completed, boxes ticked, policies cited. Anscombe's challenge is that this can miss the deeper issue—whether our practices are forming us into people who are truthful, careful, and responsible.


A similar point appears in Robert C. Solomon's work on business ethics, where he argues that virtues are not confined to professional roles; they are lived daily and cannot be neatly compartmentalised.


That insight carries directly into AI-assisted research. There is no separate moral self for "work" and "real life," and no isolated "research ethics" silo. The habits we build while drafting, summarising, citing, and verifying with AI are the same habits that shape our credibility—and, ultimately, the trustworthiness of the knowledge we produce.


Kant, Anscombe and Solomon with Socrates, Plato and Aristotle in Athens (Designed by Grok).

What ties this to AI?


Character is about moral resilience in ambiguity. When AI hallucinates a citation, do you lazily accept it (weak character) or verify (building honesty)? The chapter argues that virtue ethics cultivates this, complementing rules to close AI's "responsibility gap."
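To make "verify, don't accept" concrete, here is a minimal sketch of my own for this post (it is not part of the chapter): a quick check that a DOI offered by a chatbot actually exists on Crossref's public API before it goes anywhere near a reference list. The DOI shown is a placeholder, and even a resolving DOI still needs a human check of the title, authors, and year.

```python
# Minimal sketch (illustrative only, not from the chapter): check whether a
# chatbot-suggested DOI is known to Crossref before trusting the citation.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI, False otherwise."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

if __name__ == "__main__":
    suggested_doi = "10.1000/placeholder-doi"  # hypothetical DOI from a chatbot
    if doi_exists(suggested_doi):
        print("DOI resolves on Crossref - now verify title, authors, and year yourself.")
    else:
        print("DOI not found - treat the citation as fabricated until proven otherwise.")
```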


Spotlight on Phronesis: The Practical Wisdom We All Need


Phronesis is Aristotle's star virtue, often translated as "practical wisdom." It's not book smarts (that's sophia) or technical know-how (techne); it's the savvy to apply ethics in real, messy situations. Think of it as ethical GPS: Knowing the "right" action in context, balancing virtues like courage and temperance. Aristotle says it's gained through experience, habit, and community—teachers model it, but we internalise it.


The chapter extends phronesis to postgrads, drawing on Kristján Kristjánsson's work on educators in his paper, "Aristotelian Practical Wisdom (Phronesis) as the Key to Professional Ethics in Teaching." https://link.springer.com/article/10.1007/s11245-023-09974-7


Why this route? Kristjánsson's framework was developed for teaching, and as the chapter argues, the same principles can be applied to postgraduate researchers. AI ethics is "uncharted territory": there is no universal manual for disclosing a chatbot's input. Phronesis lets us navigate it: when to use AI for drafting, how much to edit. It's discernment in action, preventing overreliance while embracing synergy.

Reviewing this, phronesis feels even more vital today. As AI evolves, it's our phronesis that keeps human agency central—not rigid bans, but wise engagement.

The Dual-Track Framework: Virtues Meet Rules


The chapter's core contribution? A dual-track model:


Track A: Virtues – Build character through honesty, humility, prudence, and integrity, guided by phronesis in grey zones.


Track B: Rules – Safeguards like authorship, disclosure, audit trails, verification, privacy, and no fabrication.


The Ethical Compass (Designed by Grok)
The Ethical Compass (Designed by Grok)

It's a compass: Freedom within boundaries.
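To show what Track B's disclosure and audit-trail practices might look like in day-to-day use, here is a small illustrative sketch of my own (again, not part of the chapter): a simple log entry recording each AI interaction so it can be disclosed and checked later. The field names and the example entry are hypothetical.

```python
# Illustrative sketch only: a simple audit-trail entry for AI use, supporting
# the disclosure and verification rules in Track B. Names are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIUseRecord:
    """One logged chatbot interaction, kept for later disclosure and audit."""
    tool: str                   # e.g. "ChatGPT", "Grok"
    purpose: str                # what the AI was asked to help with
    date_used: date
    output_kept_verbatim: bool  # was any AI wording retained word-for-word?
    verified: bool              # were factual claims and citations checked?
    notes: str = ""

# Hypothetical entry in the spirit of Exchange 1: structural advice, rewritten in my own voice.
audit_log = [
    AIUseRecord(
        tool="Chatbot",
        purpose="Advice on ordering the opening paragraph (topic, thesis, personal hook)",
        date_used=date(2025, 10, 1),
        output_kept_verbatim=False,
        verified=True,
        notes="Paragraph rewritten in my own words; claims checked before keeping them.",
    )
]
```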


Compare it to existing frameworks:


Framework | Core Principles | Orientation | Contribution to the framework
The Dual-Track | Virtue + Rules | Researcher-centred | Balanced, actionable baseline
AI4People | Beneficence, autonomy, explicability | Societal governance | Bolsters rules with transparency
Hagendorff | Virtues in practice | Individual motivation | Deepens character cultivation


In practice, the dual-track framework blends those frameworks.


Wrapping Up: Implications and a Call to You


The chapter blends philosophy, personal stories, and practical guidance, urging institutions in particular to teach responsible, ethical use of AI rather than simply prohibit it.


With 92% of UK undergrads using GAI (per the 2025 HEPI survey), we're all in this together. https://www.hepi.ac.uk/wp-content/uploads/2025/02/HEPI-Kortext-Student-Generative-AI-Survey-2025.pdf


Fellow postgrads, grab this framework for your next project. What's your take on phronesis in AI? Or share a philosopher who's shaped your ethics—drop comments below on AI Study Mate (www.aistudymate.org). Let's keep the dialogue going!


Johann Pieterse, Founder of AI Study Mate, Doctoral Student at GlobalNXT University, with the help of Grok

6 March 2026




 
 
 
