Core choice logging and self-improvement readiness
Current state
- AGInt (`agents/core/agint.py`): Logs perception, rule-based decision, orient (LLM) response, and BDI delegation via `memory_agent.log_process()` to a per-agent `process_trace.jsonl` under `data/memory/.../agent_workspaces/<agent_id>/process_trace.jsonl`. No single stream of "choices" for auditing.
- BDI in backend (`mindx_backend_service/main_service.py`): Sets `chosen_agent` from keyword rules, updates `bdi_state` and the in-memory `activity_log`, and appends text lines to `data/logs/agint/agint_cognitive_cycles.log`. No structured choice record.
- mindXagent (`agents/core/mindXagent.py`): `_log_action_choices()` only appends to the in-memory `self.action_choices` (for the UI). No persistent log. The improvement loop uses the rule-based `_identify_improvement_opportunities` and `_prioritize_improvements` (no LLM); Ollama is used for chat/`inject_user_prompt` and is notified at startup.
- StartupAgent (`agents/orchestration/startup_agent.py`): After connecting to Ollama, calls `_notify_mindxagent_startup(ollama_result)`. `autonomous_startup_improvement()` uses Ollama (`ollama_api.generate_text`) to analyze startup and suggest improvements; results are logged via `log_process` but not as a normalized "core choice" with rationale/outcome.
To show whether or not mindX is a Gödel machine, we need a single, accurate log of core choices: what was perceived, which options were considered, what was chosen, why, and (when available) the outcome.
1. Gödel choice schema and global log
- Schema (one JSON object per line in a single file):
  - `source_agent`, `cycle_id` (or `request_id`), `timestamp_utc`
  - `choice_type`: e.g. `agint_decision` | `bdi_agent_selection` | `mindx_improvement_selection` | `startup_ollama_improvement` | `mindx_action_choice`
  - `perception_summary`: short description of input/context
  - `options_considered`: list of options (e.g. agent names, improvement goals)
  - `chosen_option`: the selected option
  - `rationale`: why this was chosen (rule name, LLM excerpt, or fixed string)
  - `outcome`: optional, filled in when known (e.g. success/failure, or deferred)
  - Optional: `llm_model` when the choice involves Ollama/LLM output
- Location: `data/logs/godel_choices.jsonl` (single global file, append-only).
- Implementation: Add `log_godel_choice(self, choice_record: Dict[str, Any]) -> Optional[Path]` on MemoryAgent (see the sketch after this list) that:
  - Ensures `data/logs` (and optionally `data/logs/agint`) exists.
  - Appends one JSON line to `data/logs/godel_choices.jsonl`.
  - Optionally also calls the existing `log_process("godel_core_choice", choice_record, {"agent_id": choice_record.get("source_agent", "system")})` so choices remain queryable via the existing process_trace/timestamped memory.
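A minimal sketch of `log_godel_choice()`, written as a coroutine to be added to MemoryAgent. It assumes the existing `log_process()` is awaitable and that relative paths resolve from the project root; both are assumptions to verify against the real class.

```python
# Minimal sketch, to be added as a method on MemoryAgent. Assumes log_process()
# is awaitable and that relative paths resolve from the project root.
import json
from datetime import datetime, timezone
from pathlib import Path
from typing import Any, Dict, Optional

GODEL_LOG = Path("data/logs/godel_choices.jsonl")

async def log_godel_choice(self, choice_record: Dict[str, Any]) -> Optional[Path]:
    """Append one Gödel choice record as a JSON line to the global log."""
    try:
        record = dict(choice_record)
        record.setdefault("timestamp_utc", datetime.now(timezone.utc).isoformat())
        GODEL_LOG.parent.mkdir(parents=True, exist_ok=True)  # ensure data/logs exists
        with GODEL_LOG.open("a", encoding="utf-8") as fh:
            fh.write(json.dumps(record, ensure_ascii=False) + "\n")
        # Optionally mirror into the existing per-agent process trace.
        await self.log_process(
            "godel_core_choice",
            record,
            {"agent_id": record.get("source_agent", "system")},
        )
        return GODEL_LOG
    except Exception:
        # Logging must never break the calling agent's decision loop.
        return None
```

Because each record is one JSON object per line, the file can be tailed, grepped, or loaded line by line for auditing.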
2. Instrument core decision points
- AGInt (`agents/core/agint.py`): After the rule-based decision in `_orient_and_decide` (and after the LLM orient step, if used), build a Gödel choice record from:
  - `source_agent` = `self.agent_id`
  - `choice_type` = `agint_decision`
  - `perception_summary` = truncated perception or directive
  - `options_considered` = list of decision types (e.g. BDI_DELEGATION, RESEARCH, etc.) relevant to the rule
  - `chosen_option` = the actual `decision_type` (and target, if any)
  - `rationale` = rule reason or orient response summary
  Then call `memory_agent.log_godel_choice(record)` (if `memory_agent` exists).
- Backend BDI (`mindx_backend_service/main_service.py`): Immediately after `chosen_agent` is set and before `update_bdi_state`, build a Gödel choice record (see the sketch after this list):
  - `source_agent` = e.g. `bdi_directive_handler`
  - `choice_type` = `bdi_agent_selection`
  - `perception_summary` = the directive (truncated)
  - `options_considered` = `list(available_agents.keys())`
  - `chosen_option` = `chosen_agent`
  - `rationale` = the `bdi_reasoning` string
  Obtain `memory_agent` from the same place as other backend logging (e.g. `command_handler.mastermind.memory_agent`) and call `await memory_agent.log_godel_choice(record)`.
- mindXagent (`agents/core/mindXagent.py`): In `_log_action_choices()`, after appending to `self.action_choices`, build a Gödel choice record from the same `context` and `choices` (`options_considered` = goals/reasons, `chosen_option` = first item), and call `await self.memory_agent.log_godel_choice(record)` so every improvement selection is persisted.
- mindXagent autonomous improvement loop: When an improvement is executed (success or failure), append a second record (or extend `outcome`) so the log contains both "selected" and "executed" entries with the outcome (e.g. success/failure or an error message).
- StartupAgent (`agents/orchestration/startup_agent.py`): In `autonomous_startup_improvement()`, when Ollama returns analysis and suggestions and the agent applies or rejects them, log a Gödel choice:
  - `choice_type` = `startup_ollama_improvement`
  - `perception_summary` = e.g. "startup log analysis"
  - `options_considered` = list of suggested improvements (from the Ollama response)
  - `chosen_option` = the one applied (or "none" if none was applied)
  - `rationale` = excerpt from the Ollama analysis
  - `llm_model` = the model used
  Use `await self.memory_agent.log_godel_choice(record)`.
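As one concrete example, a sketch of the backend BDI instrumentation point; the same record-building pattern applies to the AGInt, mindXagent, and StartupAgent points above. `chosen_agent`, `available_agents`, `bdi_reasoning`, and `command_handler.mastermind.memory_agent` are the names used in this plan; the wrapper function and `request_id` parameter are illustrative.

```python
# Sketch of the backend BDI instrumentation point. Assumes chosen_agent,
# available_agents, bdi_reasoning, and command_handler are in scope as described
# above; the wrapper function and request_id parameter are illustrative.
from datetime import datetime, timezone
from typing import Dict

async def _record_bdi_choice(command_handler, directive: str,
                             available_agents: Dict, chosen_agent: str,
                             bdi_reasoning: str, request_id: str) -> None:
    memory_agent = getattr(command_handler.mastermind, "memory_agent", None)
    if memory_agent is None:
        return  # best-effort logging; never block the directive
    await memory_agent.log_godel_choice({
        "source_agent": "bdi_directive_handler",
        "request_id": request_id,
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "choice_type": "bdi_agent_selection",
        "perception_summary": directive[:200],
        "options_considered": list(available_agents.keys()),
        "chosen_option": chosen_agent,
        "rationale": bdi_reasoning,
    })
```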
3. Ollama-driven self-improvement readiness
- Already in place: StartupAgent connects to Ollama on startup and notifies mindXagent via `_notify_mindxagent_startup(ollama_result)`; mindXagent uses Ollama for chat and `inject_user_prompt`. StartupAgent's `autonomous_startup_improvement()` uses Ollama to analyze startup and suggest improvements.
- No code change is required to "start" self-improvement from Ollama beyond ensuring the above logging is in place, so that when you run autonomous startup improvement (or the mindXagent improvement loop), every core choice is recorded.
- Optional: If you want the mindXagent improvement loop to use Ollama for prioritization (instead of only rule-based priority), that is a separate enhancement: call Ollama in `_prioritize_improvements` or after `_identify_improvement_opportunities` and log that as a Gödel choice (`choice_type` e.g. `mindx_ollama_prioritization`), as sketched below.
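A sketch of what the optional Ollama-backed prioritization could look like, assuming `ollama_api.generate_text()` is awaitable and that `log_godel_choice()` exists as described in section 1; the prompt wording and helper name are illustrative, and the actual ordering can stay rule-based.

```python
# Optional enhancement sketch: ask Ollama to comment on priorities and record the
# result as a Gödel choice. Assumes ollama_api.generate_text() is awaitable and
# that log_godel_choice() exists as in section 1; prompt wording is illustrative.
from datetime import datetime, timezone
from typing import Any, Dict, List

async def _prioritize_with_ollama(self, opportunities: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    goals = [str(o.get("goal", o)) for o in opportunities]
    prompt = ("Rank these improvement opportunities from highest to lowest impact "
              "and briefly justify the top pick:\n- " + "\n- ".join(goals))
    analysis = await self.ollama_api.generate_text(prompt)  # assumed async helper
    await self.memory_agent.log_godel_choice({
        "source_agent": self.agent_id,
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "choice_type": "mindx_ollama_prioritization",
        "perception_summary": f"{len(goals)} improvement opportunities identified",
        "options_considered": goals,
        "chosen_option": goals[0] if goals else "none",
        "rationale": (analysis or "")[:500],
        "llm_model": getattr(self.ollama_api, "model", "unknown"),
    })
    # Keep the existing rule-based ordering (or parse `analysis` to reorder).
    return opportunities
```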
4. API and UI (optional)
- API: Add a read-only endpoint, e.g. `GET /godel/choices?limit=50&source_agent=...`, that reads the last N lines from `data/logs/godel_choices.jsonl`, parses the JSON lines, and returns them (newest first). This allows external auditing and dashboards. A sketch follows this list.
- UI: A small widget on the Platform or Admin tab: a "Last N core choices" table (source, type, chosen option, rationale, time) that fetches from the new API.
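A sketch of the read-only endpoint, assuming `main_service.py` exposes a FastAPI app (an assumption; adapt the decorator to whatever framework the service actually uses):

```python
# Sketch of a read-only audit endpoint. Assumes the backend is a FastAPI app;
# in the real service this handler would be added to the existing app instance.
import json
from pathlib import Path
from typing import Optional

from fastapi import FastAPI

app = FastAPI()
GODEL_LOG = Path("data/logs/godel_choices.jsonl")

@app.get("/godel/choices")
async def get_godel_choices(limit: int = 50, source_agent: Optional[str] = None):
    if not GODEL_LOG.exists():
        return {"choices": []}
    records = []
    for line in GODEL_LOG.read_text(encoding="utf-8").splitlines():
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip partially written or corrupt lines
        if source_agent and rec.get("source_agent") != source_agent:
            continue
        records.append(rec)
    return {"choices": list(reversed(records[-limit:]))}  # newest first
```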
5. File and dependency summary
| Area | File(s) | Change |
|---|---|---|
| Core directive | docs/survive.md | New file: inference-first mandate, mindX.sh commands, API/rate limits, find inference, free to paid, THOT/DAIO |
| Schema + global log | agents/memory_agent.py | Add log_godel_choice(), ensure data/logs and godel_choices.jsonl |
| StartupAgent | agents/orchestration/startup_agent.py | When _auto_connect_ollama fails, call ensure_ollama_available() then retry; log startup_ollama_bootstrap; in autonomous_startup_improvement log Gödel choice |
| AGInt | agents/core/agint.py | After decide, build and log Gödel choice |
| BDI backend | mindx_backend_service/main_service.py | After choosing agent, get memory_agent and log Gödel choice |
| mindXagent | agents/core/mindXagent.py | In _log_action_choices, persist via log_godel_choice; in improvement loop log execution outcome |
| API | mindx_backend_service/main_service.py | Add GET /godel/choices (optional) |
| UI | mindx_frontend_ui/components/platform-tab.js or admin tab | “Last core choices” widget (optional) |
6. Order of implementation
- Core directive: Create docs/survive.md with the inference-first mandate and mindX.sh commands (see section 0).
- StartupAgent as inference controller: Add bootstrap-on-failure in StartupAgent: when `_auto_connect_ollama()` returns `connected: False`, call `ensure_ollama_available()` and then retry; log the Gödel choice `startup_ollama_bootstrap`/`inference_acquired` (a sketch follows this list). This requires `log_godel_choice()` to exist, so step 3 may be done first or in parallel.
- Add `log_godel_choice()` on MemoryAgent and create `data/logs/godel_choices.jsonl` on first write.
- Instrument StartupAgent `_auto_connect_ollama` (success and bootstrap), then `autonomous_startup_improvement`; then AGInt, BDI in main_service, mindXagent `_log_action_choices`, and improvement execution.
- Add `GET /godel/choices` and the UI widget if desired.
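A sketch of the StartupAgent bootstrap-on-failure step, assuming `_auto_connect_ollama()` returns a dict with a `connected` flag and that `ensure_ollama_available()` attempts to install or start Ollama; the wrapper name and exact return shapes are assumptions.

```python
# Sketch of bootstrap-on-failure in StartupAgent. Assumes _auto_connect_ollama()
# returns a dict with a "connected" flag and ensure_ollama_available() installs or
# starts Ollama; both names come from this plan, their signatures are assumed.
from datetime import datetime, timezone

async def _connect_ollama_with_bootstrap(self) -> dict:
    result = await self._auto_connect_ollama()
    bootstrapped = False
    if not result.get("connected"):
        await self.ensure_ollama_available()  # acquire inference, then retry once
        result = await self._auto_connect_ollama()
        bootstrapped = True
    await self.memory_agent.log_godel_choice({
        "source_agent": self.agent_id,
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "choice_type": "startup_ollama_bootstrap",
        "perception_summary": "Ollama connection attempt at startup",
        "options_considered": ["use_existing_ollama", "bootstrap_ollama"],
        "chosen_option": "bootstrap_ollama" if bootstrapped else "use_existing_ollama",
        "rationale": ("initial connect failed; bootstrapped Ollama" if bootstrapped
                      else "Ollama already reachable"),
        "outcome": "inference_acquired" if result.get("connected") else "failed",
    })
    return result
```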
After this, all core choices (including inference acquisition and Ollama-driven startup/improvement) are logged in one place, so you can audit whether mindX behaves like a Gödel machine (inference as lifeblood; self-improvement decisions and rationale visible and traceable).
