Autoepistemic Conditions for Responsibility
Self-awareness conditions necessary for someone to be responsible for an action
When philosophers talk about epistemic conditions for responsibility, they’re usually referring to what an agent must know, believe, or be aware of in order to be morally or legally responsible for an action.
When they talk about autoepistemic conditions, they mean a special subset of those conditions where the agent must have some form of self-knowledge — awareness about their own mental states, intentions, or reasons — rather than just general world knowledge.
Here’s a structured breakdown:
1. Epistemic Conditions for Responsibility
The epistemic condition is one of the “standard conditions” in responsibility theory, alongside the control condition.
Core idea:
An agent can only be responsible for an action if they have sufficient epistemic access to the relevant facts at the time of acting.
Components:
Factual awareness – The agent must know (or at least justifiably believe) the relevant non-moral facts about the situation.
Example: You’re not responsible for hitting someone with your car if you genuinely and non-negligently didn’t know they had darted into the road.
Normative awareness – The agent must understand the moral/legal status of their act.
Some traditions (e.g., Aristotle, Aquinas) allow “invincible ignorance” of morality to excuse; others are stricter.
Foreseeability – The agent must be able to foresee the likely consequences of their act (foreseeability is weaker than certainty).
This is often built into negligence standards in law.
Non-culpable ignorance clause – Ignorance that is itself due to negligence or recklessness does not excuse responsibility.
2. Autoepistemic Conditions for Responsibility
Autoepistemic conditions go further:
The agent must know about their own mental states relevant to the action, or at least be in a position to know them.
Why “auto” matters:
Moral responsibility often hinges on self-ascription — the ability to say, in effect, “I am doing this intentionally for reason R” or “I am about to cause X.”
Components:
Awareness of one’s intentions – Responsibility often requires that the agent is conscious of acting with a certain aim or purpose.
Example: Accidentally pressing a button that launches missiles isn’t intentional, but pressing it while thinking “this will launch the missiles” is.
Awareness of one’s reasons – Being able to reflect on why you are acting (your motivating reasons) is crucial in many theories of accountability.
Awareness of one’s options – Knowing that alternative courses of action are available (links to Frankfurt’s principle of alternative possibilities debates).
Metacognitive capacity – The ability to form second-order beliefs (“I believe that I believe p”) about your own mental states, which supports self-control and foresight.
3. Relationship Between the Two
Epistemic conditions are about knowing the world and the moral facts relevant to the act.
Autoepistemic conditions are about knowing yourself — your own mental states, intentions, and reasons.
You can meet the epistemic condition without the autoepistemic one (e.g., you know killing is wrong and that your act will kill someone, but you’re self-deceived about your intention).
Some legal doctrines (like “willful blindness”) collapse these distinctions, treating ignorance of either type as culpable if avoidable.
4. Why This Matters in Agency & Responsibility Theory
In libertarian or compatibilist accounts, both conditions often need to be satisfied for full moral responsibility.
In collective responsibility and institutional agency contexts, epistemic conditions can be satisfied at the group level, but autoepistemic conditions become murkier.
In Sabah-specific contexts like Bobolian decision-making or Majlis Anak Negeri adjudication, the epistemic condition may include knowledge of local customary law and spiritual norms, while the autoepistemic condition may hinge on conscious acknowledgment of one’s ritual or communal obligations.
A usable STIT + epistemic modal logic frame
Below is a frame that represents both the outer epistemic layer and the inner auto-epistemic (self-knowledge) layer, plus principled links between them. I’ll give:
the syntax (operators),
a semantic frame (branching time + choices + epistemic relations),
truth conditions,
a set of axioms / bridge principles that capture the E→A links in your diagram, and
some example formulas and short proofs/reads for common patterns and failures.
I’ll keep notation light and explicit.
1. Syntax
Let $Prop$ be a set of atomic propositions and $\mathcal{A}$ a set of agents, with $i\in\mathcal{A}$.
Formulae $\varphi$ are generated by:
$$\varphi ::= p \mid \lnot\varphi \mid \varphi\wedge\varphi \mid \Box\varphi \mid [i\,\text{stit}]\varphi \mid K_i\varphi \mid B_i\varphi \mid \mathrm{Int}_i(\varphi)$$
Where:
$\Box\varphi$: historic necessity (true on all histories through the current moment).
$[i\,\text{stit}]\varphi$: “agent $i$ sees to it that $\varphi$” (the classical STIT operator).
$K_i\varphi$: agent $i$ knows $\varphi$ (epistemic layer).
$B_i\varphi$: agent $i$ believes $\varphi$ (optional — useful for culpable ignorance).
$\mathrm{Int}_i(\varphi)$: agent $i$ has the intention that $\varphi$ (autoepistemic content; can be treated as primitive or reduced to stit + commitment).
You can add further operators (e.g., $R_i$ for reasons-ascriptions), but these suffice to illustrate the two layers and the links between them.
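To make the syntax concrete, here is a minimal sketch (not part of the formal definition) of how formulae could be encoded for machine manipulation, in Python with plain nested tuples; the tag strings and the `show` helper are illustrative choices of mine, not canonical.

```python
# Tuple encoding of the grammar above (tags are illustrative, not canonical):
#   ("atom", "p")        p              ("box", phi)         □φ
#   ("not", phi)         ¬φ             ("stit", "i", phi)   [i stit]φ
#   ("and", phi, psi)    φ ∧ ψ          ("K", "i", phi)      K_i φ
#   ("B", "i", phi)      B_i φ          ("int", "i", phi)    Int_i(φ)

def show(phi) -> str:
    """Render a tuple-encoded formula as a readable string."""
    tag = phi[0]
    if tag == "atom":
        return phi[1]
    if tag == "not":
        return f"¬{show(phi[1])}"
    if tag == "and":
        return f"({show(phi[1])} ∧ {show(phi[2])})"
    if tag == "box":
        return f"□{show(phi[1])}"
    if tag == "stit":
        return f"[{phi[1]} stit]{show(phi[2])}"
    if tag == "K":
        return f"K_{phi[1]} {show(phi[2])}"
    if tag == "B":
        return f"B_{phi[1]} {show(phi[2])}"
    if tag == "int":
        return f"Int_{phi[1]}({show(phi[2])})"
    raise ValueError(f"unknown tag: {tag}")

# Example: "i knows that she intends the launch"
example = ("K", "i", ("int", "i", ("atom", "launched")))
print(show(example))   # K_i Int_i(launched)
```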
2. Semantic frame
A model $\mathcal{M} = (T, <, H, Choice, \mathcal{A}, V, R)$ where:
$T$ is a set of moments.
$<$ is a tree-like ordering on $T$ (branching time). For each $m\in T$, $H_m$ is the set of histories passing through $m$. Write index pairs $(m,h)$ with $h\in H_m$.
$Choice: \mathcal{A}\times T \to \mathcal{P}(\mathcal{P}(H_m))$ gives, for each agent $i$ and moment $m$, a partition of $H_m$ into the available choice cells (i.e., $Choice_i(m)$ is a partition of $H_m$). For convenience, write $Choice_i(m,h)$ for the cell containing $h$.
Intuitively: at moment $m$, agent $i$ cannot distinguish among histories inside the same cell — their act picks a whole cell.
$V: Prop \to \mathcal{P}(\{(m,h) : m\in T,\ h\in H_m\})$ is the valuation.
For each agent $i$, $R_i\subseteq (T\times H)\times(T\times H)$ is an epistemic accessibility relation on index points: $(m,h)\,R_i\,(m',h')$ means that at $(m,h)$, agent $i$ considers $(m',h')$ epistemically possible.
(You could constrain $R_i$ to relate only points with the same moment $m$ if you want to rule out cross-temporal epistemic uncertainty, but I’ll keep it general.)
Optionally, add $S_i$ as a self-epistemic (autoepistemic) relation if you want self-knowledge to be modelled by a separate accessibility relation.
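As a rough illustration (assuming a finite model and Python 3.9+), the frame components can be stored in a few dictionaries. The field names, the `Model` class, and the toy “launch” scenario below are my illustrative choices, not part of the formal definition.

```python
from dataclasses import dataclass

Index = tuple[str, str]          # an index point (m, h): moment name, history name

@dataclass
class Model:
    histories: dict[str, frozenset[str]]                  # H_m for each moment m
    choices: dict[tuple[str, str], list[frozenset[str]]]  # Choice_i(m): partition of H_m
    val: dict[str, set[Index]]                            # V(p): index points where p holds
    epist: dict[str, set[tuple[Index, Index]]]            # R_i on index points

    def cell(self, i: str, m: str, h: str) -> frozenset[str]:
        """Choice_i(m, h): the cell of the partition Choice_i(m) containing h."""
        for c in self.choices[(i, m)]:
            if h in c:
                return c
        raise ValueError(f"{h} is not in any cell of Choice_{i}({m})")

# Toy moment m0 with two histories: on h1 agent i presses the button, on h2 she refrains.
toy = Model(
    histories={"m0": frozenset({"h1", "h2"})},
    choices={("i", "m0"): [frozenset({"h1"}), frozenset({"h2"})]},
    val={"launched": {("m0", "h1")}},
    # total epistemic uncertainty between the two index points at m0:
    epist={"i": {(("m0", a), ("m0", b)) for a in ("h1", "h2") for b in ("h1", "h2")}},
)
print(toy.cell("i", "m0", "h1"))   # frozenset({'h1'})
```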
3. Truth conditions
Let $\mathcal{M},(m,h)\models\varphi$ be the satisfaction relation.
Boolean clauses as usual.
Historic necessity:
$$\mathcal{M},(m,h)\models\Box\varphi \quad\text{iff}\quad \forall h'\in H_m:\ \mathcal{M},(m,h')\models\varphi.$$
STIT (Chellas-style):
$$\mathcal{M},(m,h)\models [i\,\text{stit}]\varphi \quad\text{iff}\quad \forall h'\in Choice_i(m,h):\ \mathcal{M},(m,h')\models\varphi.$$
(So $i$ ensures $\varphi$ by selecting the choice cell.)
Knowledge:
$$\mathcal{M},(m,h)\models K_i\varphi \quad\text{iff}\quad \forall (m',h'):\ \text{if }(m,h)\,R_i\,(m',h')\text{ then }\mathcal{M},(m',h')\models\varphi.$$
Belief $B_i$ is treated similarly, but with a (possibly weaker) relation or weaker frame constraints.
Intention (primitive):
$$\mathcal{M},(m,h)\models \mathrm{Int}_i(\varphi) \quad\text{iff}\quad \text{agent } i \text{ has committed, from } m, \text{ to bringing about } \varphi.$$
Formally, one can define $\mathrm{Int}_i(\varphi)$ as a commitment to choose only among histories in which $\varphi$ holds, but keep it primitive for clarity:
$$\mathrm{Int}_i(\varphi)\ \Rightarrow\ \exists C\in Choice_i(m)\ \forall h'\in C:\ \mathcal{M},(m,h')\models\varphi.$$
(This is the standard intention-as-commitment reading.)
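These clauses translate directly into a small evaluator for finite models. The sketch below assumes the tuple encoding and dictionary layout from the earlier sketches (all of which are my illustrative choices), and treats $\mathrm{Int}_i$ as a primitive commitment set rather than deriving it.

```python
# A direct implementation of the truth conditions for finite models given as dicts.
# Model layout (same illustrative shape as the frame sketch in section 2):
#   histories[m]      -> set of histories through m               (H_m)
#   choices[(i, m)]   -> list of cells partitioning H_m           (Choice_i(m))
#   val[p]            -> set of index points (m, h) where p holds (V)
#   epist[i]          -> epistemic relation R_i on index points
#   intends[(i, m)]   -> set of formulas i is committed to at m   (primitive Int_i)
# B_i would be handled like K_i with a separate (weaker) relation; omitted here.

def cell(model, i, m, h):
    """Choice_i(m, h): the choice cell at m containing history h."""
    return next(c for c in model["choices"][(i, m)] if h in c)

def holds(model, m, h, phi) -> bool:
    tag = phi[0]
    if tag == "atom":
        return (m, h) in model["val"].get(phi[1], set())
    if tag == "not":
        return not holds(model, m, h, phi[1])
    if tag == "and":
        return holds(model, m, h, phi[1]) and holds(model, m, h, phi[2])
    if tag == "box":      # □φ: φ on every history through m
        return all(holds(model, m, h2, phi[1]) for h2 in model["histories"][m])
    if tag == "stit":     # [i stit]φ: φ throughout i's choice cell at (m, h)
        return all(holds(model, m, h2, phi[2]) for h2 in cell(model, phi[1], m, h))
    if tag == "K":        # K_iφ: φ at every R_i-alternative of (m, h)
        return all(holds(model, m2, h2, phi[2])
                   for (src, (m2, h2)) in model["epist"][phi[1]] if src == (m, h))
    if tag == "int":      # Int_i(φ): φ is among i's commitments at m
        return phi[2] in model["intends"].get((phi[1], m), set())
    raise ValueError(f"unhandled tag: {tag}")

# Tiny test: at (m0, h1), [i stit]launched holds, but K_i launched does not,
# because i cannot epistemically distinguish the two histories.
toy = {
    "histories": {"m0": {"h1", "h2"}},
    "choices": {("i", "m0"): [{"h1"}, {"h2"}]},
    "val": {"launched": {("m0", "h1")}},
    "epist": {"i": {(("m0", a), ("m0", b)) for a in ("h1", "h2") for b in ("h1", "h2")}},
    "intends": {("i", "m0"): {("atom", "launched")}},
}
print(holds(toy, "m0", "h1", ("stit", "i", ("atom", "launched"))))  # True
print(holds(toy, "m0", "h1", ("K", "i", ("atom", "launched"))))     # False
```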
4. Axioms / frame constraints (two layers + bridges)
Below I give modal constraints for the epistemic layer, introspection axioms for the auto-epistemic layer, and bridge principles that capture the E→A edges from your diagram.
(A) Epistemic layer axioms (standard S5 or some weaker system)
$K_i(\varphi\to\psi)\to(K_i\varphi\to K_i\psi)$ (K)
$K_i\varphi\to\varphi$ (Truth) — if you want knowledge to be factive.
$K_i\varphi\to K_iK_i\varphi$ (Positive introspection).
$\lnot K_i\varphi\to K_i\lnot K_i\varphi$ (Negative introspection — optional; strong).
(Choose S4/S5 depending on how much introspection you want in the epistemic layer.)
(B) Auto-epistemic / intention axioms
Intention consistency: $\mathrm{Int}_i(\varphi)\to\lnot\mathrm{Int}_i(\lnot\varphi)$.
Intention leads to stit (if intentions are efficacious commitments):
$$\mathrm{Int}_i(\varphi)\to\langle i\,\text{stit}\rangle\varphi,$$
where $\langle i\,\text{stit}\rangle\varphi := \lnot[i\,\text{stit}]\lnot\varphi$. (The agent can bring about what she intends.)
(C) Bridge principles (formalising E→A links)
These are the formal versions of your diagram arrows:
(E1 → A1, A2): Factual knowledge supports accurate self-ascription of intentions and reasons
Factual knowledge → self-awareness of action
$$K_i p \wedge \langle i\,\text{stit}\rangle q \;\longrightarrow\; K_i\big(\langle i\,\text{stit}\rangle q\big).$$
Reading: if $i$ knows the relevant facts $p$ and can see to it that $q$, then $i$ knows she can bring about $q$ (or knows she is doing so). This formalises E1 enabling A1.
Normative knowledge → knowledge of reasons
$$K_i(\text{wrong}(a)) \wedge \mathrm{Int}_i(a) \;\longrightarrow\; K_i\big(\mathrm{Int}_i(a)\wedge\text{wrong}(a)\big).$$
Reading: knowing the normative status helps the agent correctly ascribe reasons to her intention (E2 → A2).
(E3 → A3): Foreseeability supports awareness of options
$$K_i(\text{likely}(r)) \wedge \lnot\Box r \;\longrightarrow\; K_i(\lnot\Box\lnot r)$$
(Or more usefully:)
$$K_i(\text{possible}(o)) \;\longrightarrow\; K_i(\lnot[i\,\text{stit}]\lnot o)$$
Reading: if agent $i$ knows an option $o$ is possible, she knows that $o$ is not ruled out by her action (i.e., she is aware of alternatives).
(E4 → A4): Non-culpable ignorance supports trustworthy metacognition
$$\big(\lnot B_i\varphi \wedge \text{non\_culpable\_ignorance}_i(\varphi)\big) \;\longrightarrow\; K_i\lnot B_i\varphi$$
Reading: if an agent’s ignorance about $\varphi$ is non-culpable, they can be expected to know that they do not believe $\varphi$ (supporting metacognitive clarity).
These are schematic bridge principles: you will want to adapt the concrete predicates ($\text{wrong}(\cdot)$, $\text{likely}(\cdot)$, $\text{non\_culpable\_ignorance}_i(\cdot)$) to the substantive facts of the domain you’re modelling (e.g., customary norms in Sabah, Bobolian ritual facts).
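In the tuple encoding sketched under “Syntax”, such bridge schemas can be assembled mechanically. The helper names below, the definition of → as ¬(a ∧ ¬b), and the example atoms are my illustrative choices.

```python
# Derived connectives over the tuple encoding from the syntax sketch.
def neg(a):      return ("not", a)
def conj(a, b):  return ("and", a, b)
def impl(a, b):  return neg(conj(a, neg(b)))         # a → b  defined as  ¬(a ∧ ¬b)
def dstit(i, a): return neg(("stit", i, neg(a)))     # ⟨i stit⟩a  :=  ¬[i stit]¬a

def bridge_E1_A1(i, p, q):
    """(K_i p ∧ ⟨i stit⟩q) → K_i⟨i stit⟩q: factual knowledge supports self-ascription."""
    return impl(conj(("K", i, p), dstit(i, q)),
                ("K", i, dstit(i, q)))

# An instance with invented atoms.
print(bridge_E1_A1("i", ("atom", "button_is_armed"), ("atom", "launched")))
```

Checking such a schema against a model amounts to evaluating the built formula at every index point with an evaluator like the one sketched in section 3.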
(D) Responsibility as a compound formula
You can define a responsibility predicate $Resp_i(\varphi)$ as:
$$Resp_i(\varphi) := [i\,\text{stit}]\varphi \ \wedge\ K_i(\text{relevant facts for }\varphi)\ \wedge\ K_i\mathrm{Int}_i(\varphi).$$
This says: to be responsible for $\varphi$, the agent must bring $\varphi$ about (or have brought it about), know the facts that make $\varphi$ morally significant, and know that she intended $\varphi$.
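A small sketch of the same definition as a formula builder (tuple encoding as before; treating “relevant facts” as an explicit finite list is my simplifying assumption):

```python
def conj_all(formulas):
    """Fold a non-empty list of formulas into a left-nested conjunction."""
    out = formulas[0]
    for f in formulas[1:]:
        out = ("and", out, f)
    return out

def resp(i, phi, relevant_facts):
    """Resp_i(φ) := [i stit]φ ∧ K_i(relevant facts) ∧ K_i Int_i(φ)."""
    return conj_all(
        [("stit", i, phi)]
        + [("K", i, fact) for fact in relevant_facts]
        + [("K", i, ("int", i, phi))]
    )

# Example: responsibility for the launch, with one invented relevant fact.
print(resp("i", ("atom", "launched"), [("atom", "button_is_armed")]))
```

Evaluating `resp(...)` at an index point with the evaluator from section 3 makes the examples below mechanical.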
5. Examples, failure modes, and short derivations
Example 1 — Successful responsibility
Assume at $(m,h)$:
$[i\,\text{stit}]\varphi$. ($i$ sees to it that $\varphi$.)
$K_i p$. ($i$ knows the relevant fact $p$ making $\varphi$ significant.)
$\mathrm{Int}_i(\varphi)$, and by the bridge (E1→A1) you have $K_i\mathrm{Int}_i(\varphi)$.
Then, by definition, $Resp_i(\varphi)$ holds.
Example 2 — Epistemic success but auto-epistemic failure
Suppose $K_i p$ but $\lnot K_i\mathrm{Int}_i(\varphi)$ ($i$ is self-deceived about her own aims).
Then $Resp_i(\varphi)$ fails because the auto-epistemic conjunct is missing, even though the epistemic layer holds.
This captures cases where someone knows what consequences an act will have but denies/doesn’t acknowledge their own intention (self-deception).
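A purely illustrative set-based check of this failure pattern (the index points, including the “self-deception” alternative h1*, are invented for the example):

```python
# Agent i's epistemic alternatives at the actual point (m0, h1) include a
# "self-deception" alternative (m0, h1*) at which she lacks the intention.
alternatives = {("m0", "h1"), ("m0", "h1*")}

armed          = {("m0", "h1"), ("m0", "h1*")}   # the relevant worldly fact holds at both
intends_launch = {("m0", "h1")}                  # but the intention is absent at h1*

K_i_armed      = alternatives <= armed            # True: epistemic condition met
K_i_Int_launch = alternatives <= intends_launch   # False: auto-epistemic condition fails
print(K_i_armed, K_i_Int_launch)                  # True False, so Resp_i(launch) fails
```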
Example 3 — Willful blindness (culpable ignorance)
Model culpable ignorance by dropping the E4 constraints: if the agent deliberately avoids knowledge, then $B_i\varphi$ is false but the ignorance is culpable, so you block the E4→A4 bridge. This reproduces the legal doctrine under which willful blindness is treated as satisfying the epistemic condition for responsibility even when the agent lacks $K_i$-knowledge of the facts.
6. Implementation notes & modeling choices
Choice of introspection axioms: whether knowledge satisfies negative introspection matters for modelling metacognitive capacity (A4). For psychologically realistic accounts, you may adopt only positive introspection ($K_i\varphi\to K_iK_i\varphi$) and avoid negative introspection.
Intention as primitive vs. derived: you can make $\mathrm{Int}_i$ primitive (which gives cleaner auto-epistemic statements such as $K_i\mathrm{Int}_i(\varphi)$) or reduce it to STIT plus global commitments (e.g., $\mathrm{Int}_i(\varphi) := [i\,\text{stit}]\varphi \wedge \lnot\Box\varphi$ plus persistence). For normative work, making intention explicit helps.
Two accessibility relations: if you want auto-epistemic awareness to be structurally different from worldly epistemic uncertainty, introduce a second relation $S_i$ for self-ascription (so $K_i\mathrm{Int}_i(\varphi)$ is evaluated with $S_i$ rather than $R_i$). This is helpful if agents sometimes know worldly facts but systematically misrepresent, or have special opacity about, their own states.
7. Short formal lemma (sketch)
Lemma (E1→A1, formalised): under the frame conditions, if $(m,h)\models K_i p$ and $(m,h)\models \langle i\,\text{stit}\rangle q$, and epistemic access respects the partition induced by $Choice_i(m)$ (i.e., $R_i$ does not identify histories from distinct choice cells at $m$), then $(m,h)\models K_i\langle i\,\text{stit}\rangle q$.
Sketch: if $R_i$ does not relate $(m,h)$ to any point where agent $i$’s choice cell differs, then the agent’s knowledge that $p$ holds across all epistemic alternatives, plus the fact that $q$ holds across her choice cell, means she can rule out epistemic alternatives in which she does not bring about $q$. So she knows that she can (or will) bring about $q$. (Make this precise by expanding the semantics.)
This lemma precisely formalises the “facts enable accurate self-ascription” arrow in your diagram.
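The frame condition in the lemma (that $R_i$ never identifies histories from distinct choice cells at $m$) is easy to test on a finite model; here is a sketch, assuming the dictionary shapes used in the earlier sketches.

```python
def respects_choice_partition(epist_i, choices_i_m, m) -> bool:
    """Check that R_i never relates two index points at moment m whose
    histories lie in distinct cells of Choice_i(m)."""
    def cell_of(h):
        return next((idx for idx, c in enumerate(choices_i_m) if h in c), None)
    for (m1, h1), (m2, h2) in epist_i:
        if m1 == m and m2 == m and cell_of(h1) != cell_of(h2):
            return False
    return True

# Illustrative check: an R_i that never crosses the cells {h1} and {h2} at m0.
epist_i = {(("m0", "h1"), ("m0", "h1")), (("m0", "h2"), ("m0", "h2"))}
print(respects_choice_partition(epist_i, [{"h1"}, {"h2"}], "m0"))   # True
```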
8. Quick checklist for formalising a case
Identify the atomic facts $p, q, \dots$ in the case.
Decide whether agents know them ($K_i p$) or merely believe them ($B_i p$).
Encode the agent’s choices at the relevant moment as cells in $Choice_i(m)$.
State intentions $\mathrm{Int}_i(\varphi)$ explicitly when self-knowledge matters.
Evaluate the bridge axioms: is the E→A link satisfied? If not, you can point to exactly which bridge fails (E1 absence, E4 culpability, etc.).
Conclude $Resp_i(\varphi)$ or its failure.


