Mind of Apollo project
Growing a collective rational reasoner
What is a collective rational reasoner?
Belief - a statement or assumption that an individual accepts as true. It can describe facts about the world, interpretations of evidence, or general principles about how things work.
Belief system - the organized set of such beliefs that together form a person's understanding of reality. It includes not only what they believe but also how those beliefs support and relate to one another, creating a structured and coherent picture of the world.
What distinguishes a collective reasoner from an individual is that its belief system is stored in a database rather than a brain. Updates to this belief system arise from a collaborative workflow involving numerous editors. This workflow ensures that the belief system remains coherent and that every update is guided by principles of rationality and critical thinking.
Tell me more
A collective rational reasoner is a system designed to think, learn, and update its beliefs through a structured collaboration among many human contributors. Unlike an individual reasoner, whose beliefs are stored in a single mind, the collective reasoner's beliefs are explicit, written, and stored in a shared database. This makes them open to inspection, critique, and refinement by others.
Its reasoning process is governed by a workflow, a set of clear rules, procedures, and guidelines that determine how new information is added, how conflicting arguments are evaluated, and how coherence is maintained across the entire belief system. Each contributor, or editor, can propose arguments, counterarguments, or revisions. These proposals are then assessed according to rational standards such as logical consistency, evidential support, and clarity of reasoning rather than popularity or authority.
Over time, as more editors engage with it and as its rules of reasoning are themselves refined, the collective rational reasoner becomes increasingly accurate and sophisticated. It evolves as a distributed mind - one whose understanding of the world grows not through the experience, intuition or emotion of a single person, but through the disciplined interaction of many critical thinkers working within a rational framework.
In essence, the collective rational reasoner aims to embody the ideal of reason itself: a system that continuously corrects its own errors, integrates diverse perspectives into a coherent whole, and seeks to align its beliefs with reality as closely as possible.
Where is the part that does the reasoning?
At the end of the day, it's the individual editors who do the reasoning. However, they do it in a special way. When making reasoning steps such as evaluating arguments and deriving conclusions, editors have to treat the existing beliefs of the collective reasoner (the ones in the database) as true.
That way the worldview of the collective reasoner remains consistent, and it makes sense to think of it as one virtual person rather than a mix of separate perspectives. It's as if an individual editor reasons on behalf of the collective reasoner and shares the results with the others. Then the next person picks them up and makes the next reasoning step, and so on.
So is this like an oppressive cult?
It is in no way necessary for the editors to actually agree with the positions of the collective reasoner. They only need to assume its current position when making changes to its beliefs.
Let's consider a person who doesn't agree with the collective reasoner about the assumptions that a particular argument is based on. This might make them uncomfortable or unmotivated to judge the strength of the argument from the perspective of the collective reasoner. In that case, another editor who is more comfortable with those assumptions will make the judgement.
There is no requirement to conform to one opinion. On the contrary, a diversity of perspectives among editors improves the collective reasoner's rationality.
I'm not sure I understand. Could you provide an example?
Let's say an editor picks up the task of evaluating the strength of the following argument. "Drinking sweet drinks with sugar substitutes is healthy because it helps to prevent diabetes."
Let's also suppose that the database of beliefs the collective reasoner currently holds contains the following statement: "Consuming drinks with artificial sweeteners causes diabetes." Because of this, the editor should judge the argument as weak, regardless of whether they personally agree with the statement or not.
Of course the editor can choose to challenge this underlying belief by providing arguments against it. But until their own arguments take effect and change the collective reasoner's belief, the editor can't make updates on the basis of it not being true.
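To make this concrete, here is a minimal sketch in TypeScript of how the situation above could be represented. The types, identifiers, and the 0-to-1 strength value are purely illustrative assumptions, not the platform's actual data model; the contradiction itself is still spotted by the human editor.

```typescript
// Illustrative types only - not the platform's actual data model.
interface Belief {
  id: string;
  text: string;
}

interface ArgumentUnderReview {
  conclusion: string;
  premises: string[];
}

// A belief currently held by the collective reasoner (stored in the database).
const storedBelief: Belief = {
  id: "b1",
  text: "Consuming drinks with artificial sweeteners causes diabetes.",
};

// The argument the editor has picked up for evaluation.
const argument: ArgumentUnderReview = {
  conclusion: "Drinking sweet drinks with sugar substitutes is healthy.",
  premises: ["Sugar substitutes help to prevent diabetes."],
};

// The editor notices that the premise contradicts the stored belief.
// Because evaluation must assume stored beliefs are true, the argument
// is judged weak, whatever the editor personally thinks about sweeteners.
const premiseContradictsStoredBelief = true; // the editor's own judgement
const strength = premiseContradictsStoredBelief ? 0.1 : 0.5; // hypothetical 0..1 scale

console.log(`"${argument.conclusion}" rated at strength ${strength} given ${storedBelief.id}`);
```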
Who are the editors?
The editors are people who want to participate in the collective reasoning process of their own accord. This could be because of one of the following motivations:
- supporting the project's mission
- convincing others
- getting closer to truth
- debating as a sport
What are the principles of critical thinking and rationality?
This list outlines general principles that apply to any rational reasoner, not just the collective one. It expresses the guiding intentions and aspirations for the project; while the exact methods for realizing each principle are still in development, they provide a direction for how the collective rational reasoner should ultimately function.
Foundations: aim for truth over comfort, separate truth from desire, define terms clearly, distinguish facts from interpretations, keep an open but selective mind.
Framing the problem: state the question precisely, surface and examine assumptions, note stakes and context, decompose complex issues, specify what would count as an answer.
Evidence and sources: seek relevant evidence before judging, prefer primary and high-quality sources, check credibility and conflicts of interest, triangulate independent lines of evidence, distinguish data from anecdotes.
Causation and explanation: don't infer causation from correlation, look for plausible mechanisms, compare competing explanations, prefer simpler explanations when other things are equal, test whether explanations make novel predictions.
Reasoning and inference: use valid logical steps, avoid contradictions, generalize only as far as evidence warrants, apply rules consistently across similar cases, quantify when possible.
Uncertainty and updating: express degrees of belief, update with new evidence, run sensitivity checks on key assumptions.
Counter-checks and error finding: seek disconfirming cases, consider strong alternatives before choosing, test predictions against reality, probe edge cases and exceptions, replicate or re-derive important results.
Coherence and integration: connect claims into a consistent model, flag and resolve tensions with established knowledge, maintain dependency maps between beliefs, avoid double-counting evidence, separate descriptive claims from value claims.
Decision and action: weigh expected benefits against costs and risks, consider opportunity costs and reversibility, choose experiments that maximize learning, time-box analysis when returns diminish, plan contingencies.
Communication and collaboration: show your reasoning steps, cite sources and uncertainties, invite and steelman critique, distinguish factual from value disagreements, use shared rules for revision and conflict resolution.
Reflection and improvement: keep a log of predictions and outcomes, analyze past errors and biases, keep heuristics that work and drop those that don't, pre-commit to standards before seeing results, iterate the process regularly.
Why would the editors follow those principles?
There are several sources of motivation for editors to follow the principles of critical thinking and rationality: internal, structural, and social.
Firstly, there are internal motivations. Many editors genuinely want to be good reasoners and to seek truth, motivations that most people share, though for some they are especially strong.
Secondly, there are structural motivations built into the system itself. The platform is designed to organise reasoning in a methodical way that constantly demands justification. For example, you can't simply post long paragraphs of text; you must make your arguments explicit, separate your claims from your evidence, and show how one set of premises follows from another. The workflow itself enforces clarity, coherence, and accountability in reasoning.
Thirdly, there are social motivations. Editors are kept in check by their critics: both those who hold opposing views and those who simply enjoy scrutinising arguments. If an editor fails to follow the principles of rational and critical thinking, their arguments will receive low strength scores, which is discouraging. Moreover, they will fail to persuade the collective rational reasoner, and that matters, because its aggregated position is what ultimately gets published and recognised. As the collective rational reasoner gains credibility as an epistemic authority, it becomes desirable to have one's reasoning validated by it. Even if someone manages to "win" temporarily through poor reasoning, their success will soon be undone when a more competent critic examines their arguments.
Are you talking about an AI?
No, this is not an artificial intelligence. It's the individual human editors who do the actual reasoning. The system that pulls their reasoning together is fairly simple in technical terms. It doesn't attempt to understand the text that is fed into it. It's more of a framework with some formulas and simple algorithms built in.
How is it different from ...?
Wikipedia, Kialo, etc.
These platforms serve only as repositories of knowledge and arguments. Unlike them, a collective reasoner reaches its own conclusions. This means that it actually does the work of inferring its interconnected beliefs from arguments.
ChatGPT
The reasoning process of a collective rational reasoner is transparent and intelligible. It's done by real people in the open. It's up for examination and improvement by anyone who joins and follows the rules.
The reasoner is purpose-built for rationality and critical thinking. Its content is structured as a network of statements, arguments, critical questions, etc. Disputes over the strength of arguments are settled by referring to critical thinking guidelines rather than by voting.
Why do we need it?
Imagine a mind devoted entirely to the pursuit of truth and wisdom and devoid of ego. Imagine a thinker whose cognitive capacity greatly surpasses that of any individual. Now picture that every belief held by this entity, and every step of its reasoning, is open to anyone, anytime. Unlike a politician or influencer, it never deflects or obscures its logic. Instead, its reasoning is clear, accessible, and shaped openly by all who engage with it.
Its rational, open and well informed judgment will gain public trust. Policy decisions made under its influence will have better outcomes. Many disagreements will be resolved by deferring to the collective rational reasoner, reducing disinformation, polarization and violence.
Why do you think a group of random people on the internet will be particularly smart?
The collective rational reasoner relies on structure rather than the brilliance of individuals. It provides a collaborative workflow where people contribute through small, well-defined tasks, and the system integrates these into coherent reasoning. To help participants contribute effectively, there will be learning materials and clear guidelines for each microtask.
The system is designed to encourage editors to follow the rules of critical thinking and rationality. By rewarding clarity and consistency, it makes emotional manipulation and other less noble tactics ineffective, ensuring that honest reasoning prevails. Although it isn't meant to be creative on its own, its structure allows the best ideas to rise to the surface.
Why not leave this up to the scientists and other qualified professionals?
The scientific process is rigorous but also very time consuming. We are forced to make important decisions under conditions of uncertainty, before all the relevant research is ready. If it's there, great: the collective rational reasoner should make use of that knowledge. Otherwise, it has to piece the evidence together by relying on the cognitive resources of its editors.
Is there one truth, or does everyone have their own?
In the context of this project the word "truth" refers to objective truth. It corresponds to the way things actually are in the world. Objective truth is independent of anyone's beliefs, opinions or feelings.
Wouldn't the collective reasoner be just as biased as its editors?
Editors have different and often competing perspectives, so arguments from all sides are added to the collective reasoner's database. The relative weight of each argument is determined by how well it meets standards of critical thinking, not by popularity or voting. As long as there are editors representing diverse viewpoints, the stronger arguments, regardless of who presents them, will prevail.
Wouldn't editors' bias affect the weighing of the arguments?
The collective reasoner's knowledge is structured granularly and its reasoning is fine-grained. Different statements are pulled apart and evaluated separately, starting from basic assumptions and moving up the reasoning chain step by step.
By dissecting arguments into specific sub-points, logic and evidence are applied one piece at a time. Objectivity is easier when judging narrowly defined points against explicit criteria. When these vetted pieces recombine into a conclusion, the result may diverge from the initial opinions of the majority of the editors.
If the collective conclusion conflicts with majority sentiment, it either prompts a re-examination for mistakes in the collective reasoning process or signals that the majority should update its view in light of a more rigorously justified position.
Different people want different things. How would it decide who deserves what?
Let's take an example: one person wants to ban high-sugar foods because they harm health, while another opposes the ban because it would take away a source of joy. Reason alone can't decide whether health is more important than joy. Their relative importance differs from person to person.
What reason can do is predict how health, joy, and other factors would actually be affected by the proposed ban. It might turn out that the outcomes are very different from what either side expected, prompting them to reconsider. It might also reveal alternative policies that achieve the best of both worlds. And finally, reason might show that the disagreement really boils down to fundamental differences in values, in which case, it's better to stop arguing and focus on more productive pursuits.
So, will the collective reasoner stay silent on the most important questions of what should be done?
No, but with a caveat. It may give different answers to different people depending on their fundamental values. Once the collective rational reasoner is given a specific moral basis it will determine if something is good or bad, should or should not be done. Please find more details on how this can be accomplished below.
Wouldn't people just calibrate their "moral basis" to produce the conclusions they already agree with, rather than to discover what's actually right?
There is definitely such a danger. One positive limiting factor here is that the same moral basis is used to decide many different questions. If a person calibrates their basis to get a desired answer to one question, they might get an undesired answer to another question. Then the person will be forced to choose which moral basis is truly theirs and which unwelcome answer they will have to confront.
Conflict is part of human nature. Can it really be solved? Should it?
This project is not an attempt to eliminate conflict. Its goal is to change the rules of conflict, to change how it is conducted.
The mind of the collective rational thinker is a place designed to elevate rational arguments and make less noble tactics, such as emotional manipulation, ineffective. Within such an environment, disagreement between its editors becomes a productive activity. It generates deeper knowledge and understanding of the issues, helps to reveal which solutions are truly best, clarifies what is at stake for all parties, and may lead to the discovery of “best of both worlds” solutions.
Such an environment can even serve as a fair and satisfying arena for settling disputes through reason before they escalate into violence.
Isn't it dangerous to create such an authority?
Like with any innovation, there is a potential for it to go wrong. Too many people may start trusting the collective reasoner too much.
However, since the reasoning will happen completely in the open, outside critics of the system will have a good view of its flaws. They will limit the influence of the collective reasoner by showing that its conclusions are not beyond doubt.
If one collective reasoner gets corrupted, hopefully another will spring up, showing what went wrong with it and pulling resources and influence away from it.
At the end of the day it's a fallible human system, and people should remember that. The collective rational reasoner should have a powerful voice, but never an unquestionable one.
How will we do it?
An early version of the platform is already up and running at mindofapollo.org. Please watch the ▶️ Quick Start Guide before giving it a try.
The platform is free and open source. At this stage the most important thing is to use it and share your feedback. Let Dante know what you think!
Why should I join?
Here's what editors can look forward to when joining the Mind of Apollo at this stage of the project:
The thrill of intellectual battle. Choose your side and defend your view. Try to convince Apollo of your reasoning better than others can.
A community of clear and deep thinkers. Engage with people who value truth, critical thinking, and intellectual honesty, people who cultivate the virtues of good reasoning in themselves.
The novelty of being part of a collective brain. Become one of the first contributors to an entirely new kind of mind — a collective rational reasoner. Help it grow from its humble beginnings into something with superhuman reach and insight.
The chance to make a real difference. By helping the project gain momentum, you'll be contributing to a more rational, wise, and intelligent world, and the sooner that happens, the more harm can be prevented.
The joy of meaningful interaction. Experience the satisfaction of seeing thoughtful, rational responses to what you write. Feel heard, understood, and challenged — surprised by new perspectives, and sharpened by constructive criticism.
What content is allowed on this platform?
The goal of the platform is to grow a collective rational reasoner. In general, censoring information or fixing certain beliefs in stone would go against this goal. However, there are valid reasons to maintain some limitations.
1. Compliance with the law. The platform must not host illegal content. This includes messages that “incite a violation of the law that is both imminent and likely.”
2. Maintaining a positive public face. Like any public platform, or any individual, Apollo has a public face that shapes first impressions. Since its growth depends heavily on public perception, it should present itself thoughtfully. Statements that most people would consider highly offensive should not be placed in prominent, public-facing areas (for example, on the homepage) unless there is a strong reason to do so. Such content should still remain accessible to those specifically seeking it, but it shouldn't confront people who didn't ask for it.
3. Respectful communication. Criticism and commentary on others' contributions should be expressed politely. Rude or hostile behaviour discourages participation and undermines the collaborative spirit the project aims to foster.
How do I get access?
Registration is currently open! Please register here.
I watched the video and have some questions.
Who owns the arguments?
Arguments are not owned by anyone, similar to how no one owns a Wikipedia article. Every editor can edit any argument as long as they follow the rules.
What is a critical question?
Critical questions are used to test the strength of arguments. For example, consider an argument based on an expert's opinion. One of the critical questions to ask is whether the expert's expertise is relevant to the opinion they are giving.
What is a critical statement?
A critical statement is a statement used to answer a critical question. For example, if the critical question is "Does the source have relevant expertise?", a critical statement could be "Richard J. Evans is an expert on 19th and 20th century history of Germany."
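As a rough sketch, the relationship between an argument, its critical questions, and the critical statements that answer them could be pictured like this. These are hypothetical TypeScript shapes, not the platform's actual schema:

```typescript
// Hypothetical shapes - for illustration only.
interface CriticalStatement {
  text: string; // answers a critical question
}

interface CriticalQuestion {
  text: string;
  answers: CriticalStatement[];
}

interface ExpertOpinionArgument {
  claim: string; // the statement the expert's opinion is meant to support
  expert: string;
  criticalQuestions: CriticalQuestion[];
}

const example: ExpertOpinionArgument = {
  claim: "Some claim supported by the expert's testimony",
  expert: "Richard J. Evans",
  criticalQuestions: [
    {
      text: "Does the source have relevant expertise?",
      answers: [
        { text: "Richard J. Evans is an expert on 19th and 20th century history of Germany." },
      ],
    },
  ],
};

console.log(example.criticalQuestions[0].answers[0].text);
```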
Who will judge the arguments?
Any editor can judge any argument as long as the editor follows the rules.
Will judgements be any good?
The quality of the judgements will depend on the quality of the rules that govern them. If a judgement violates those rules, editors who disagree can correct it to ensure compliance.
Over time, the rules themselves will improve through philosophical reflection, probabilistic analysis, and the accumulation of edge cases and failures that reveal their weaknesses. Whenever a rule is misunderstood or proves difficult to interpret unambiguously, it will be revised to make its meaning clearer.
Even though the early judgements are likely to be seriously flawed, the system will inevitably improve through this continuous process, eventually reaching a very high standard of objectivity and accuracy.
What if editors disagree about a judgement?
Wikipedia faces a very similar challenge and has built a dispute resolution process to address it. Mind of Apollo will take a similar approach.
There are two reasons why a judgement can be wrong: either it doesn't comply with the rules of judgement, or the rules themselves are wrong or ambiguous. If an editor sees a mistake, they can make the change themselves as long as they provide a valid reason.
If others disagree with this change, they can revert it and provide their reasons. If the disagreement continues, there should be a reasoned discussion between them with the goal of getting to the core of the disagreement and resolving it. The outcome could be a consensus, a clarification of the rules, seeking external input, escalation, etc.
Where are you getting those argument strength numbers from?
Each argument type comes with a scale that helps with this process. The scale is divided into multiple brackets - segments of the scale with lower and upper bounds. The first step is to look at the descriptions of the brackets and determine which of them best describes the argument you are judging. If you can't decide between two neighbouring brackets, pick the percentage point that divides them.
Otherwise, look at the brackets surrounding the one you have picked and assess how similar their descriptions are to the argument you are evaluating. If the similarity is about the same, pick the percentage in the middle of your chosen bracket. Otherwise, move the percentage point towards the neighbouring bracket that seems more similar, in proportion to how much more similar it is.
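Here is one possible way to turn that verbal procedure into a formula, sketched in TypeScript. The bracket bounds, descriptions, and similarity inputs are made-up assumptions; the similarity judgements still come from the editor, and the platform is not claimed to compute the score exactly this way.

```typescript
// One possible way to turn the bracket procedure into a number.
// Bracket bounds and similarity inputs are illustrative assumptions.

interface Bracket {
  lower: number; // lower bound in percent
  upper: number; // upper bound in percent
  description: string;
}

// Example scale for some argument type (made-up bounds and descriptions).
const scale: Bracket[] = [
  { lower: 0, upper: 25, description: "premises barely support the conclusion" },
  { lower: 25, upper: 50, description: "premises weakly support the conclusion" },
  { lower: 50, upper: 75, description: "premises moderately support the conclusion" },
  { lower: 75, upper: 100, description: "premises strongly support the conclusion" },
];

/**
 * Combine the editor's bracket choice with their judgement of how similar
 * the neighbouring brackets' descriptions are (each on a 0..1 scale).
 */
function strengthFromBrackets(
  chosenIndex: number,
  similarityBelow: number,
  similarityAbove: number
): number {
  const bracket = scale[chosenIndex];
  const mid = (bracket.lower + bracket.upper) / 2;
  const halfWidth = (bracket.upper - bracket.lower) / 2;

  const total = similarityBelow + similarityAbove;
  if (total === 0) return mid; // no pull in either direction

  // Shift from the middle towards the more similar neighbour,
  // in proportion to how much more similar it is.
  const shift = ((similarityAbove - similarityBelow) / total) * halfWidth;
  return mid + shift;
}

// Editor picks the 50-75% bracket and finds the upper neighbour's
// description noticeably closer than the lower one's.
console.log(strengthFromBrackets(2, 0.2, 0.6)); // 68.75, i.e. roughly 69%
```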
Where did you get those scales?
The scales were generated by ChatGPT and I checked that they roughly make sense. This was done to save time and effort. My goal was to create something workable that editors can try out and improve on in the future. Eventually the method of using scales to measure argument strengths may be replaced with something more advanced and specific to each argument type.
How are the confidences in statements calculated?
The algorithm for calculating confidences from argument strengths can be found in statementConfidence.ts. There is also a confidence calculator you can play with and check if it agrees with your intuitions.
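For intuition only, here is one simple way argument strengths could be combined into a confidence. This is an illustrative sketch and not necessarily what statementConfidence.ts actually implements; check the source file and the calculator for the real behaviour.

```typescript
// Illustrative only - NOT necessarily what statementConfidence.ts implements.

// Combine several argument strengths (each 0..1) as the chance that at
// least one of them "lands" (a noisy-OR style combination).
function combine(strengths: number[]): number {
  return 1 - strengths.reduce((acc, s) => acc * (1 - s), 1);
}

// Map the balance of supporting vs. opposing arguments into a 0..1
// confidence, with 0.5 meaning the evidence is balanced.
function statementConfidence(supporting: number[], opposing: number[]): number {
  const pro = combine(supporting);
  const con = combine(opposing);
  return 0.5 + (pro - con) / 2;
}

console.log(statementConfidence([0.7, 0.4], [0.3])); // ≈ 0.76
```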
Why not use Bayes Nets instead?
When I experimented with formal, probabilistic approaches, I quickly ran into their limitations. For example, trying to formalise a complex question like gun control turned out to be extremely challenging, even for those well-trained in probability theory. The diversity and complexity of real-world situations demand the creation of countless custom mathematical models, which in turn require serious funding, research programmes, and long-term commitment. Because of this, very few volunteers are both able and willing to engage at that level of formality.
Instead, I decided to focus on a less formal but more attainable approach, one that makes Apollo's reasoning scalable and accessible. The field of informal logic suits this much better: it is not only more flexible but also more familiar and intuitive to people. Its reasoning steps resemble how humans naturally think and argue, making it easier for new editors to participate meaningfully.
Once the community grows and the workflow becomes stable, the sophistication, precision, and theoretical grounding of Apollo's reasoning can be gradually increased, potentially incorporating Bayesian networks in certain contexts. This mirrors how a human mind develops, starting with intuitive reasoning and refining it over time through structure and theory.
Is there an example of judging a prescriptive claim / policy?
There is "everyone should tip" example prescriptive claim. You will need to select a moral profile from the dropdown below the statement to see the result of the judgement.
Judging a prescriptive claim is quite different from judging a descriptive claim. First, we determine what are the likely outcomes of following this prescription. These need to be reduced to the most fundamental level of human values, to changes in well-being, equality, autonomy etc. These outcomes need to be assessed numerically, which is not a straightfoward task. Finally the outcomes need to be weighed against each other. The relative weights of the fundamental values are captured in a moral profile. These differ for different people. Hence the answer to whether the prescription is net beneficial may change depending on which profile is applied.
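A toy example of the weighing step, sketched in TypeScript. The outcome scores, value names, and moral profiles are all made up for illustration; none of these numbers come from Apollo itself.

```typescript
// Toy numbers and value names - none of this comes from Apollo itself.

// Estimated effect of following "everyone should tip" on fundamental values,
// on an arbitrary -1..1 scale (negative = the value is harmed).
const outcomes: Record<string, number> = {
  wellBeing: 0.3, // e.g. service workers' incomes rise
  equality: 0.1,
  autonomy: -0.2, // e.g. diners feel socially obliged to pay extra
};

// Two hypothetical moral profiles: relative weights of the fundamental values.
const profiles: Record<string, Record<string, number>> = {
  welfareFocused: { wellBeing: 0.6, equality: 0.3, autonomy: 0.1 },
  autonomyFocused: { wellBeing: 0.3, equality: 0.1, autonomy: 0.6 },
};

// Net benefit of the prescription = weighted sum of outcome scores.
function netBenefit(profile: Record<string, number>): number {
  return Object.entries(outcomes).reduce(
    (sum, [value, score]) => sum + (profile[value] ?? 0) * score,
    0
  );
}

for (const [name, profile] of Object.entries(profiles)) {
  console.log(name, netBenefit(profile).toFixed(2));
}
// welfareFocused:  0.6*0.3 + 0.3*0.1 + 0.1*(-0.2) =  0.19 -> net beneficial
// autonomyFocused: 0.3*0.3 + 0.1*0.1 + 0.6*(-0.2) = -0.02 -> marginally net harmful
```

Because the same outcome estimates are weighed differently under each profile, the same prescription can come out net beneficial for one person and net harmful for another, which is exactly why a moral profile has to be selected before a verdict is shown.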
Hasn't Apollo reached its conclusion prematurely?
At this early stage of the project all of Apollo's conclusions are tentative. As the project matures Apollo's reasoning process will become more robust and its knowledge of certain topics will become comprehensive. At that stage the collective reasoner will start making official conclusions. Those conclusions will be presented to everyone, not just the editors. Apollo should then be judged by how justified and accurate its official conclusions will turn out to be.
Who is Apollo?
Apollo is a collective rational reasoner in the making. The Mind of Apollo is an online platform that hosts this virtual being and lets editors interact with it.
Why did you choose that name?
The name Apollo was inspired by its philosophical meaning as a symbol of reason, clarity, harmony, and self-knowledge. In philosophy, Apollo represents the rational and ordered aspect of human nature, the pursuit of understanding through light, structure, and truth. From the ancient Greek saying "Know thyself," written at Apollo's temple in Delphi, to Nietzsche's idea of the Apollonian spirit, the name evokes disciplined intellect and the harmony between thought, art, and morality, symbolizing the human drive to bring order and meaning to the world.
Any exciting features you haven't mentioned yet?
On Wikipedia, if you read an article about a contentious issue, you'll often notice that its slant changes depending on the language version. Having Apollo think different things in different languages would be like giving it a multiple-personality disorder.
That's why all content on the platform is automatically translated into all the popular languages. This allows editors from around the world, not just English-speaking countries, to contribute their reasoning and perspectives to Apollo. This collective reasoner is free from the bubble of a single language and less biased as a result. Looking from a different angle, this feature opens the doors to productive, cross-cultural discussion and debate within the Mind of Apollo.
Who are you?
Hi! My name is Dante. I am a software dev with a Maths & Computer Science degree, living in the UK. I am a philosophy enthusiast who spent a whole year full time exploring these concepts as well as several years iterating on them in my spare time.
Why are you doing this?
1) I want the horrible things that are going on in the world to stop.
2) I want to be able to work full time on a meaningful, innovative and creative project like this.
3) I'm excited to use my programming, philosophical and critical thinking skills.
How will the project be funded?
It will be funded from donations, similar to Wikipedia.