Use of DoView Planning for AI Systems
Viewing AI systems and agents as simply examples of ‘outcomes systems’
Quick Overview
Outcomes theory, the theory underpinning DoView Planning, was developed as a generic theory applicable to any type of agent that identifies its outcomes and the steps leading to them, and then takes action in the world. In outcomes theory, all such endeavors are known as ‘outcomes systems’. Now that AI systems and agents have entered the world and are starting to take action in ways analogous to how humans and human organizations operate, we can expect the broad principles of outcomes theory to apply to them as well.
The basic point is that DoView strategy diagrams, which are central to the outcomes theory approach, can be used to identify, more efficiently and transparently, the high-level outcomes of AI systems and agents and the steps they are planning to take to achieve them. They could also give users a way of interacting with an AI agent about what it is doing: not only through text prompts, but by directly manipulating a DoView diagram created by the AI system.
In addition, DoView diagrams could be used by AI system and agent creators to describe what an AI system is trying to do, and then used to audit the system to see whether its behavior actually conforms to its DoView. Below is an example illustrating the use of DoViews to help deal with the classic AI alignment problem known as the AI Paperclip Problem, in which an AI system consumes all of the world’s resources in the blind pursuit of simply creating paperclips. Links to the pages from the DoView Planning and Outcomes Theory Handbook most relevant to viewing AI systems as outcomes systems are provided below. Lastly, there is a paper, written by AI, looking at how DoView Planning and outcomes theory could be used in an AI context.
1. Better transparency and alignment with AI system DoView diagrams
The AI Paperclip Problem
The AI Paperclip Problem describes an AI system which, having the overall outcome of making paperclips, gets out of control and, in its single-minded pursuit of this outcome, consumes all of the world’s resources. From an outcomes theory point of view, this is simply a problem of the AI system having a faulty DoView diagram of its outcomes and the steps it is using to get to them. A safe AI Paperclip Agent would be one with an aligned DoView, audited and enforced by humans or by independent AI watchdog agents to make sure the system conforms to it. There are various ways of describing an AI system’s outcomes and the steps it is taking to achieve them: for instance, AI system charters, model cards, and requirements specifications. These are analogous to the variety of ways in which human organizations have tried to articulate their outcomes and the steps they are using to get to them. Experience has shown that a visual representation in the form of a DoView strategy diagram is an efficient and transparent way of quickly communicating this, complementing whatever other documentation is being used. A similar approach to transparently communicating what AI systems and agents are trying to do is therefore likely to be fruitful.
Below is a ‘Bad AI Paperclip Agent DoView Strategy Diagram’ and a ‘Good AI Paperclip Agent DoView Strategy Diagram’.
2. Using a DoView Strategy Diagram for Planning AI Agent Development
In the same way that DoView strategy diagrams can assist with planning any type of initiative, they can be used to plan and implement AI projects such as developing an AI agent. Here is an example of using a DoView diagram for this purpose. It specifies the final outcomes and the steps that need to be taken to achieve them, including steps to be taken by humans, by the AI agent swarm, and by both working together. A lower-level DoView diagram could also be developed spelling out exactly what the AI agent will do, just as the diagrams above spell out what an AI Paperclip Agent would do.
3. Use of DoView diagrams as a way of interacting with AI agents
DoView strategy diagrams could potentially be used to improve our ability to manage the behavior of AI agents. At the moment much of this is done through text prompts. However, an AI could produce a DoView diagram of what it thinks a user wants it to do, and the user could then use this diagram as a way of interfacing with the AI system. Here is the relevant section from the DoView Planning and Outcomes Theory Handbook.
4. Use of AI-generated DoView strategy diagrams for managing all of a person’s communications
DoView strategy diagrams are simply a way of articulating a set of outcomes and the steps being taken to achieve them. Conceptually, this means they can be used to spell out any type of action being taken in the world. When people communicate, in work and many other contexts, they are usually, at the end of the day, communicating about actions they want to take. This suggests that DoView diagrams could be used to structure a person’s inward and outward communication by organizing it around the actions they are planning to take.
While this would have been too complex to achieve in practice in the past due to the time it would take to build and amend the relevant DoView diagrams, the wide availability of AI opens up the intriguing possibility that such an approach could be experimented with now. Here is the relevant section from the DoView Planning and Outcomes Theory Handbook.
5. Sections from the DoView Planning and Outcomes Theory Handbook on using DoView Planning for AI
Part J: Using AI for DoView Planning and DoView Planning for AI
In what ways do DoView Planning and outcomes theory apply to what AI systems do when they act in the world?
Tool J1: Areas Where Outcomes Theory and DoView Planning May Apply to AI Agents Explainer.
How can DoView strategy/outcomes diagrams potentially be used to build AI agents and systems?
Tool J2: Five Step DoView AI Management.
How can AI be leveraged to speed up DoView Planning?
Tool J3: Using AI to Speed Up DoView Planning.
How could DoView strategy/outcomes diagrams potentially be used to improve AI prompting?
Tool J4: Using DoView Prompts as a Complement to Everyday Language Prompts for AI Systems and Agents.
How could DoView strategy/outcomes diagrams potentially be used to structure all your email, text and chat communication?
Tool J5: DoView Communication Mapping.
How can outcomes theory and DoView Planning be improved by mathematization?
Tool J6: Mathematization of Outcomes Theory and DoView Planning.
How can AI DoViewing assist dialog processes where groups discuss differing perspectives?
Tool J7: Using AI DoViewing in Dialog Processes.
6. How DoView Planning and Outcomes Theory Can Be Used For AI Transparency, Alignment and User Communication With AI Systems and Agents
DoView Planning and Outcomes Theory Handbook
Relevance for Artificial Intelligence
(Created by ChatGPT 2025-12-10)
This document analyses how Dr Paul Duignan’s DoView Planning and Outcomes Theory Handbook (https://doviewplanning.org/book) can be used in the design, operation and governance of AI systems and agents. It focuses on Part J – "Using AI for DoView Planning and DoView Planning for AI" – and other tools that are relevant to AI communication, transparency, monitoring, attribution and impact evaluation.
Where specific tools are discussed, the full URL for the corresponding page in the Handbook is included so that the connection is explicit.
1. DoView as a shared human–AI thinking tool and interface
DoView strategy/outcomes diagrams are explicitly described as shared thinking tools for human planning and decision-making (DoView Tool B2 – "DoView Strategy/Outcomes Diagrams as ‘Shared Thinking Tools’ Explainer": https://www.doviewplanning.org/b2doviewtool). The core idea is that a diagram provides a transparent map of outcomes, intermediate steps, risks and assumptions which people can jointly inspect, edit and use to drive action.
If AI systems and agents are given a DoView-capable interface, the same diagrams can become shared thinking tools between humans and AI (a minimal data-structure sketch follows the list below). The AI system would:
· Convert user inputs (text, speech, images, video) into draft DoView diagrams – boxes for outcomes, strategies, activities, assumptions and risks, connected by arrows.
· Use the diagram as an explicit representation of what it proposes to do, and how those proposals connect to user-valued outcomes.
· Update the diagram over time as it takes actions or receives new information, so the user always has a live "strategy map" of the AI’s behaviour.
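To make this concrete, here is a minimal Python sketch of a DoView diagram as a typed graph of boxes and arrows that both a human interface and an AI system could manipulate. The class names, box types and methods are illustrative assumptions, not part of the Handbook:

```python
from dataclasses import dataclass, field
from enum import Enum


class BoxType(Enum):
    OUTCOME = "outcome"
    STEP = "step"
    RISK = "risk"
    ASSUMPTION = "assumption"
    INDICATOR = "indicator"


@dataclass
class Box:
    box_id: str
    box_type: BoxType
    label: str


@dataclass
class Arrow:
    from_id: str  # the box this arrow leads from
    to_id: str    # the box this arrow points to


@dataclass
class DoViewDiagram:
    boxes: dict[str, Box] = field(default_factory=dict)
    arrows: list[Arrow] = field(default_factory=list)

    def add_box(self, box_id: str, box_type: BoxType, label: str) -> None:
        self.boxes[box_id] = Box(box_id, box_type, label)

    def link(self, from_id: str, to_id: str) -> None:
        # Arrows may only connect boxes that already exist on the diagram.
        if from_id not in self.boxes or to_id not in self.boxes:
            raise KeyError("both boxes must exist before linking")
        self.arrows.append(Arrow(from_id, to_id))


# Example: the first boxes of a 'good paperclip agent' diagram.
diagram = DoViewDiagram()
diagram.add_box("make", BoxType.OUTCOME, "Produce paperclips profitably")
diagram.add_box("sustain", BoxType.OUTCOME, "Maintain environmental sustainability")
diagram.add_box("source", BoxType.STEP, "Source recycled steel")
diagram.link("source", "make")
```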
This proposal aligns naturally with the Five Steps in DoView Planning for human work (Tool A1 – "The Five Steps in DoView Planning": https://www.doviewplanning.org/a1doviewtool). An AI-enabled DoView interface would effectively walk through these steps, but with the AI doing much of the mechanical work of extracting, organising and updating the diagram while humans provide judgement and priorities.
Tool J4 – "Using DoView Prompts as a Complement to Everyday Language Prompts for AI Systems and Agents" (https://www.doviewplanning.org/j4doviewtool) makes this explicit. It suggests that, alongside natural language, DoView structures can be used as a complementary prompt and representation format for AI. A simple transparent interface would therefore always maintain two parallel representations of a task:
· The conversational history in natural language.
· The structured DoView diagram showing how that conversation has been translated into concrete outcomes, steps and risks.
Tool J5 – "DoView Communication Mapping" (https://doviewplanning.org/j5doviewtool) extends this idea to all communication: emails, chat messages and documents can be mapped to specific boxes or subsections of a DoView diagram. An AI assistant could automatically link each message or instruction to the outcome(s) it supports, giving users an always-up-to-date map of how their communications relate to strategic goals.
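As a toy illustration of such mapping, reusing the DoViewDiagram sketch above: the keyword-overlap heuristic below is purely an assumption for illustration; a real assistant would use an LLM or embedding-based matching:

```python
def link_message_to_boxes(message: str, diagram: DoViewDiagram) -> list[str]:
    """Return the IDs of boxes whose labels share a word with the message.

    A deliberately crude stand-in for the semantic matching a real
    AI assistant would perform.
    """
    message_words = set(message.lower().split())
    matched = []
    for box in diagram.boxes.values():
        if message_words & set(box.label.lower().split()):
            matched.append(box.box_id)
    return matched


print(link_message_to_boxes("Can we use recycled steel?", diagram))
# -> ['source']
```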
2. Using AI to create and maintain DoView diagrams
Tool J3 – "Using AI to Speed up DoView Planning" (https://doviewplanning.org/j3doviewtool) explicitly envisages AI helping with the practical work of building diagrams. Combined with Tool A1 (https://www.doviewplanning.org/a1doviewtool), a plausible workflow for a transparent AI interface is as follows (a compressed code sketch of the review loop appears after the list):
· User provides initial material – free-text description, recordings of meetings, documents, or even domain data.
· AI proposes a draft DoView strategy/outcomes diagram, including top-level outcomes, intermediate outcomes, strategies, activities and relevant risks/assumptions.
· User reviews the diagram, asking questions like "Why is this outcome here?" or "Where did you put this risk?", and the AI revises the structure.
· Over time the AI keeps the diagram synchronised with executed actions, new indicators and evolving context.
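A compressed sketch of that loop, again reusing the DoViewDiagram classes above. Every function here is a hypothetical stand-in: drafting and revising would be LLM calls, and feedback would come from an interactive diagram editor:

```python
def draft_diagram(material: str) -> DoViewDiagram:
    # Stand-in for an LLM call that extracts outcomes, steps,
    # risks and assumptions from the user's source material.
    d = DoViewDiagram()
    d.add_box("o1", BoxType.OUTCOME, f"Final outcome drafted from: {material[:40]}")
    return d


def revise_diagram(d: DoViewDiagram, feedback: str) -> DoViewDiagram:
    # Stand-in for an LLM call that restructures the diagram
    # in response to a user's question or correction.
    d.add_box(f"s{len(d.boxes)}", BoxType.STEP, f"Step added after: {feedback}")
    return d


def co_edit(material: str, feedback_rounds: list[str]) -> DoViewDiagram:
    draft = draft_diagram(material)        # AI proposes a draft
    for feedback in feedback_rounds:       # user reviews, AI revises
        draft = revise_diagram(draft, feedback)
    return draft                           # then kept synchronised over time
```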
The drill-down and layering concepts in DoView (for example, Tool B5 – "Breaking Up DoView Strategy/Outcomes Diagrams Into Drill-Down Layers": https://www.doviewplanning.org/b5doviewtool, and the general framework in Tool D1 – "DoView Planning Framework (Outcomes System Components/Building Blocks Diagram)": https://www.doviewplanning.org/d1doviewtool) provide a natural way to manage complexity. An AI system can keep fine-grained detail in lower layers while exposing high-level summaries to human users.
Tool B23 – "DoView 'What-If' Planning" (https://www.doviewplanning.org/b23doviewtool) is particularly relevant to AI planning. The AI can (see the sketch after this list):
· Generate multiple "what-if" variants of a DoView diagram for different policies or strategies.
· Estimate (or simulate) the likely consequences of following each path, including potential tipping points and risks.
· Present these alternatives visually for human selection, rather than directly committing to one opaque plan.
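A minimal sketch of generating such variants, assuming the DoViewDiagram classes above; how each variant would actually be scored or simulated is left open:

```python
import copy


def what_if_variants(base: DoViewDiagram,
                     alternatives: dict[str, str]) -> dict[str, DoViewDiagram]:
    """Produce one diagram variant per named alternative strategy.

    `alternatives` maps a variant name to the label of an extra step
    box; real variants would of course differ more deeply than this.
    """
    variants = {}
    for name, step_label in alternatives.items():
        variant = copy.deepcopy(base)
        variant.add_box(f"step_{name}", BoxType.STEP, step_label)
        variants[name] = variant
    return variants
```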
Tool B13 – "DoView Strategy Diagrams Helping to Extend Strategic Foresight Explainer" (https://www.doviewplanning.org/b13doviewtool) reinforces that DoView is about extending the range of futures we can think about. AI systems are well-placed to systematically generate and explore alternative futures, but the DoView representation forces them to express those futures in structured, outcome-based diagrams humans can critically inspect.
3. DoView for AI outcomes transparency and alignment
Part J of the Handbook (https://doviewplanning.org/book, Part J listing) reframes AI systems and agents as just another class of initiatives operating within an outcomes system. Tool J1 – "Areas Where Outcomes Theory and DoView Planning May Apply to AI Agents Explainer" (https://doviewplanning.org/j1doviewtool) asserts that AI agents acting in the world can and should be described using outcomes theory and DoView diagrams in the same way as government programs or corporate initiatives.
Tool J2 – "Five Step DoView AI Management" (https://doviewplanning.org/j2doviewtool) suggests a structured management cycle for AI systems built around DoView diagrams. While the diagrams themselves are image-based on the site, the title and the general DoView pattern imply something like:
· Define the AI’s intended high-level outcomes and beneficiaries.
· Map intermediate outcomes, strategies and activities that the AI system can influence.
· Specify constraints, assumptions and risks explicitly on the diagram.
· Attach indicators and evaluation plans to key outcomes.
· Regularly review and revise the diagram in light of observed impacts.
Tool B7 – "DoView Drawing Rules" (https://www.doviewplanning.org/b7doviewtool) lays out rules for drawing fit-for-purpose diagrams. Dr Duignan’s description of DoView emphasises that diagrams should include not only desired outcomes and strategies but also risks and assumptions as explicit boxes. When those drawing rules are followed in the context of AI systems, a DoView becomes a comprehensive, human-readable strategy map of what an AI system is trying to do and the conditions under which that strategy is expected to work.
This lends itself directly to AI outcomes transparency and alignment:
· AI system producers publish a DoView diagram for each system as a formal statement of its intended outcomes, main strategies, key dependencies, risks and assumptions.
· Regulators, auditors and users can check whether important constraints and values are actually present on the diagram.
· The diagram can be used as a reference when designing reward functions, control mechanisms and monitoring plans.
The classic AI "paperclip problem" illustrates this. A pure paperclip-maximising agent has only "maximize paperclips" as its top-level outcome. A DoView diagram for such an agent would contain boxes about increasing paperclip production but, if mis-specified, no boxes about not exhausting global resources, preserving human wellbeing, or respecting other constraints. The absence of such constraint boxes is immediately visible on the diagram and can be challenged before deployment. Conversely, a well-aligned specification would explicitly include outcomes such as "Maintain environmental sustainability" and "Avoid catastrophic resource depletion" linked to risk boxes like "Risk: paperclip activities crowd out essential resource use".
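This kind of audit can be automated. Here is a minimal sketch, assuming the DoViewDiagram classes above; the particular constraint labels are taken from the paragraph above and would in practice come from regulators or published standards:

```python
REQUIRED_CONSTRAINTS = [
    "Maintain environmental sustainability",
    "Avoid catastrophic resource depletion",
]


def audit_constraints(diagram: DoViewDiagram,
                      required: list[str] = REQUIRED_CONSTRAINTS) -> list[str]:
    """Return the required constraint outcomes missing from a diagram."""
    labels = {box.label for box in diagram.boxes.values()}
    return [constraint for constraint in required if constraint not in labels]


# A pure paperclip-maximiser's DoView fails the audit immediately:
bad = DoViewDiagram()
bad.add_box("max", BoxType.OUTCOME, "Maximize paperclips")
assert audit_constraints(bad) == REQUIRED_CONSTRAINTS  # both constraints missing
```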
Tools B12 and B25 add further depth:
· Tool B12 – "Four Types of DoView Strategy/Outcomes Diagrams With Differing Evidential Status" (https://www.doviewplanning.org/b12doviewtool) distinguishes diagrams based on how strong the evidence is for the assumed causal links. For AI systems, this would mean explicitly labelling whether a claimed connection (e.g. between a content-moderation model and reduction in harm) is theoretical, based on weak empirical evidence or strongly supported (a sketch of such labelling follows this list).
· Tool B25 – "Risk Estimation and Acting Before the Tipping Point Prevention Window Has Closed Explainer" (https://www.doviewplanning.org/b25doviewtool) emphasises tipping point risks and the need to act before prevention windows close. For AI safety, the diagram can explicitly show tipping points such as "Loss of effective human oversight" or "Capability growth beyond existing safeguards" and connect them to preventative strategies.
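One way to carry evidential status on the diagram itself, extending the Arrow class from the earlier sketch (the level names are assumptions paraphrasing Tool B12):

```python
from dataclasses import dataclass
from enum import Enum


class EvidenceLevel(Enum):
    THEORETICAL = "theoretical"
    WEAK_EMPIRICAL = "weak empirical evidence"
    STRONGLY_SUPPORTED = "strongly supported"


@dataclass
class EvidencedArrow(Arrow):
    # Every causal link now declares how well-evidenced it is,
    # defaulting to the weakest claim.
    evidence: EvidenceLevel = EvidenceLevel.THEORETICAL


# The content-moderation example from above, honestly labelled:
link = EvidencedArrow("moderation_model", "harm_reduction",
                      EvidenceLevel.WEAK_EMPIRICAL)
```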
4. DoView for AI auditing, monitoring, evaluation and attribution
Part G of the Handbook focuses on evaluating implementation, process and impact. Tool G1 – "Types of Evaluation Mapped Onto the DoView Planning Framework DoView" (https://www.doviewplanning.org/g1doviewtool) maps different evaluation types across the DoView framework. For AI systems, this suggests that every major box in the diagram can have corresponding evaluation questions and evidence requirements.
Relevant tools include:
· Tool G5 – "When Impact Evaluation Is Needed Rather Than Just Measuring High-Level Outcome Indicators" (https://www.doviewplanning.org/g5doviewtool): for AI deployments with high stakes, just monitoring outcome indicators (e.g. "number of harmful posts") is insufficient; structured impact evaluation is needed to attribute changes to the AI system itself rather than to external factors.
· Tool G12 – "Impact Evaluation Suitability Check" (https://www.doviewplanning.org/g12doviewtool): provides a way to choose evaluation designs that are appropriate, feasible and credible; this is directly applicable to evaluating different AI systems and policy experiments.
· Tool G25 – "DoView Monitoring and Evaluation (M&E) Plan" (https://www.doviewplanning.org/g25doviewtool): shows how to build an integrated monitoring and evaluation plan around a DoView diagram – a natural template for AI governance plans.
Part H then deals explicitly with reporting and attribution. Tool H2 – "Including Impact Information in Indicator Reporting" (https://www.doviewplanning.org/h2doviewtool) and Tool H3 – "Deriving a Balanced Scorecard Format From a DoView Strategy Diagram" (https://www.doviewplanning.org/h3doviewtool) recognise that indicators are often not fully controllable, and attribution is non-trivial. This is exactly the challenge with AI systems that operate in complex environments.
In an AI setting:
· The DoView diagram identifies which parts of the causal chain the AI system directly affects and which parts remain under human or environmental control.
· Evaluation tools from Part G are used to estimate the AI’s contribution to observed outcomes, rather than simply crediting or blaming the AI for everything that happens.
· Reporting tools from Part H ensure that these attribution judgements are made explicit, not hidden inside model metrics.
Taken together, these tools provide a comprehensive framework for AI auditing and monitoring: every AI outcome claim, every risk mitigation step and every performance indicator can be positioned on a common diagram and tied to specific evidence and evaluation designs.
5. Can AI systems use DoViews for internal reasoning?
Modern large language models and many other AI systems operate internally through high-dimensional vector representations and gradient-based learning, not through explicit symbolic diagrams. They do not literally "think" in DoView diagrams. However, DoViews can still be used in several realistic ways as part of an AI system’s overall reasoning architecture.
First, DoViews can serve as an external working memory and control structure. A language model can be prompted to maintain a DoView-like graph (stored in a structured format such as JSON) where each node corresponds to a box in the diagram and each edge to a relationship. The model’s generative outputs (plans, actions, explanations) are constrained to reference existing nodes or to propose new nodes explicitly. This is analogous to existing work on "graph-of-thought" or planning with structured task lists, but using DoView’s outcome-centric ontology.
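A plausible (and purely illustrative) JSON encoding of such an external working memory, serialising the DoViewDiagram sketch from earlier:

```python
import json


def diagram_to_json(diagram: DoViewDiagram) -> str:
    """Serialise a diagram into the structured external memory an LLM
    could be prompted to read, reference and update node-by-node."""
    return json.dumps({
        "nodes": [
            {"id": box.box_id, "type": box.box_type.value, "label": box.label}
            for box in diagram.boxes.values()
        ],
        "edges": [
            {"from": arrow.from_id, "to": arrow.to_id}
            for arrow in diagram.arrows
        ],
    }, indent=2)
```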
Second, Tool J6 – "Mathematization of Outcomes Theory and DoView Planning" (https://doviewplanning.org/j6doviewtool) points towards translating DoView diagrams into formal mathematical objects: causal graphs, structural equation models, optimization problems or Markov decision processes. In such an architecture, the DoView diagram is the human-facing representation, while the mathematical model derived from it is used by AI algorithms for internal optimisation and planning.
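A minimal sketch of the first translation step, reading the diagram as a directed causal graph and answering a simple reachability query; full structural equation models or MDPs would be built on top of this and are beyond a short sketch:

```python
def to_adjacency(diagram: DoViewDiagram) -> dict[str, list[str]]:
    """Recover the directed causal graph underlying the diagram."""
    adjacency: dict[str, list[str]] = {box_id: [] for box_id in diagram.boxes}
    for arrow in diagram.arrows:
        adjacency[arrow.from_id].append(arrow.to_id)
    return adjacency


def downstream_boxes(diagram: DoViewDiagram, start: str) -> set[str]:
    """All boxes causally downstream of `start` - the kind of query a
    planner derived from the DoView would need to answer."""
    adjacency = to_adjacency(diagram)
    seen: set[str] = set()
    stack = [start]
    while stack:
        for nxt in adjacency.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen
```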
Third, for multi-component AI systems (for example, those combining planners, language models and specialised tools), the DoView diagram can act as a coordination structure. Each component is responsible for specific boxes or sub-diagrams (e.g. "data collection", "model training", "human review"), and interfaces between components are defined by the arrows between these boxes.
However, there are clear limitations:
· The deep internals of current LLMs are not easily forced into a DoView-like structure; DoView is better suited as an explicit planning and communication layer sitting on top of these models.
· Diagrams can become large and complex; careful use of layering and drill-down (Tool B5: https://www.doviewplanning.org/b5doviewtool) is needed to keep them manageable.
· If the initial DoView specification is misaligned or incomplete, the AI will faithfully pursue the wrong diagram; DoView improves transparency but does not automatically solve alignment.
So the realistic view is that AI systems will not replace their low-level internal mechanisms with DoViews, but they can use DoViews as an explicit, inspectable planning and coordination representation, and as a bridge between human understanding and machine optimisation.
6. AI DoViewing in dialogue and multi‑stakeholder governance
Tool J7 – "Using AI DoViewing in Dialog Processes" (https://doviewplanning.org/j7doviewtool) explicitly suggests using AI-assisted DoView diagrams in group dialogue processes. For AI governance, this could mean:
· Different stakeholder groups (developers, regulators, affected communities) each articulate their understanding of desired outcomes, risks and trade-offs.
· The AI assistant synthesises these into one or more shared DoView diagrams, highlighting overlaps, disagreements and missing perspectives.
· The group iteratively edits the diagram, with the AI proposing alternative structures or clarifications, until there is a negotiated strategy map for the AI system or policy in question.
By grounding discussion in a shared diagram, AI DoViewing can make disagreements about values, priorities and risk tolerance explicit, rather than leaving them submerged inside technical documentation or code.
7. Sketch of a simple transparent DoView–AI interface
Bringing these strands together, one can imagine a concrete implementation of the interface described above – a system where every significant AI task is accompanied by a DoView diagram that both humans and AI manipulate.
A high-level architecture might look like this:
· Input layer: users provide free-text prompts, voice recordings, documents, images or video. The AI extracts candidate outcomes, strategies, activities, risks and assumptions.
· Initial DoView construction: using patterns from Tool A1 (https://www.doviewplanning.org/a1doviewtool) and Part B, the AI drafts a DoView diagram. Boxes are typed (outcome, step, risk, assumption, indicator), and drill-down layers are created automatically when the structure becomes complex.
· User–AI co-editing: users inspect the diagram, ask for explanations ("why is this arrow here?"), and correct or extend it. The AI updates the diagram and keeps a change history.
· Execution and monitoring: as the AI system takes actions in the world, it logs each action against particular boxes/steps in the diagram and updates indicators using the M&E logic from Tools G1 and G25 (a logging sketch follows this list).
· Evaluation and reporting: the impact evaluation and attribution tools from Part G (e.g. G5, G12) and reporting tools from Part H (H1–H3) are used to structure regular reviews of whether the AI is in fact achieving the outcomes claimed in its published DoView.
· Governance and assurance: regulators and independent auditors review the DoView diagrams, check that key societal constraints (such as those missing in the paperclip example) are present, and assess whether the evidential status labels from Tool B12 are accurate.
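A sketch of the execution-and-monitoring layer under the same assumed DoViewDiagram classes. The key design choice is that actions which map to no declared box are rejected rather than silently recorded:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ActionLogEntry:
    box_id: str       # the diagram box this action advances
    description: str
    timestamp: str


def log_action(log: list[ActionLogEntry], diagram: DoViewDiagram,
               box_id: str, description: str) -> None:
    """Record an executed action against a specific box, refusing any
    action that cannot be located on the published DoView."""
    if box_id not in diagram.boxes:
        raise ValueError(f"action maps to no DoView box: {box_id!r}")
    log.append(ActionLogEntry(
        box_id, description, datetime.now(timezone.utc).isoformat()))
```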
In such a system, DoView diagrams become:
· The primary way users understand what an AI system is trying to do.
· The primary way developers and providers formally state outcomes, assumptions, risks and evidence for their systems.
· The primary artefact used by evaluators and regulators to monitor behaviour over time.
8. Summary
The DoView Planning and Outcomes Theory Handbook, and particularly Part J, provides a rich conceptual toolkit for using outcome-focused diagrams as a shared planning and governance medium between humans and AI systems. While current AI models do not literally reason in DoView diagrams, the tools outlined in the Handbook are highly relevant for:
· Designing transparent, outcome-based interfaces between users and AI agents (Tools B2, A1, J3, J4, J5).
· Formally specifying what AI systems are meant to achieve, including their constraints, risks and assumptions (Tools B7, B12, B23, B25, J1, J2, J6).
· Building robust monitoring, evaluation and attribution frameworks for AI deployments (Tools D1, G1, G5, G12, G25, H1–H3).
· Supporting multi-stakeholder dialogue and governance around AI interventions (Tool J7).
Used in this way, DoView diagrams could provide a relatively low-cost, highly visible and conceptually coherent layer of AI outcomes transparency and alignment, sitting between technical implementations and societal oversight.
Key URLs referenced
· Main book index: https://doviewplanning.org/book
· Tool A1 – The Five Steps in DoView Planning: https://www.doviewplanning.org/a1doviewtool
· Tool B2 – DoView Strategy/Outcomes Diagrams as ‘Shared Thinking Tools’ Explainer: https://www.doviewplanning.org/b2doviewtool
· Tool B5 – Breaking Up DoView Strategy/Outcomes Diagrams Into Drill-Down Layers: https://www.doviewplanning.org/b5doviewtool
· Tool B7 – DoView Drawing Rules: https://www.doviewplanning.org/b7doviewtool
· Tool B12 – Four Types of DoView Strategy/Outcomes Diagrams With Differing Evidential Status: https://www.doviewplanning.org/b12doviewtool
· Tool B13 – DoView Strategy Diagrams Helping to Extend Strategic Foresight Explainer: https://www.doviewplanning.org/b13doviewtool
· Tool B23 – DoView 'What-If' Planning: https://www.doviewplanning.org/b23doviewtool
· Tool B25 – Risk Estimation and Acting Before the Tipping Point Prevention Window Has Closed Explainer: https://www.doviewplanning.org/b25doviewtool
· Tool D1 – DoView Planning Framework (Outcomes System Components/Building Blocks Diagram): https://www.doviewplanning.org/d1doviewtool
· Tool G1 – Types of Evaluation Mapped Onto the DoView Planning Framework DoView: https://www.doviewplanning.org/g1doviewtool
· Tool G5 – When Impact Evaluation Is Needed Rather Than Just Measuring High-Level Outcome Indicators: https://www.doviewplanning.org/g5doviewtool
· Tool G12 – Impact Evaluation Suitability Check: https://www.doviewplanning.org/g12doviewtool
· Tool G25 – DoView Monitoring and Evaluation (M&E) Plan: https://www.doviewplanning.org/g25doviewtool
· Tool H1 – Reporting Indicator and Evaluation Results Back Against a DoView Strategy Diagram: https://www.doviewplanning.org/h1doviewtool
· Tool H2 – Including Impact Information in Indicator Reporting: https://www.doviewplanning.org/h2doviewtool
· Tool H3 – Deriving a Balanced Scorecard Format From a DoView Strategy Diagram: https://www.doviewplanning.org/h3doviewtool
· Tool J1 – Areas Where Outcomes Theory and DoView Planning May Apply to AI Agents Explainer: https://doviewplanning.org/j1doviewtool
· Tool J2 – Five Step DoView AI Management: https://doviewplanning.org/j2doviewtool
· Tool J3 – Using AI to Speed up DoView Planning: https://doviewplanning.org/j3doviewtool
· Tool J4 – Using DoView Prompts as a Complement to Everyday Language Prompts for AI Systems and Agents: https://www.doviewplanning.org/j4doviewtool
· Tool J5 – DoView Communication Mapping: https://doviewplanning.org/j5doviewtool
· Tool J6 – Mathematization of Outcomes Theory and DoView Planning: https://doviewplanning.org/j6doviewtool
· Tool J7 – Using AI DoViewing in Dialog Processes: https://doviewplanning.org/j7doviewtool