
What's New in the Engine Fleet

Benchmarked against: Anthropic — What's new in Claude
Last updated: 2026-03-05

This page tracks significant engine fleet changes. For full release history, see Release Notes.


March 2026

Cloud UB unified as Single Source of Truth (2026-03-01)

Cloud UB (Cloudflare D1 + Vectorize) has been declared the sole Universal Brain. All ships (SS1, SS2, SS3) are now unified on Cloud UB. The 2,280 local UB entries have been archived; high-quality items will be migrated.

Impact on engines: All engines now read from and write to Cloud UB exclusively. Local UBI tools continue to work but route all data operations through Cloud UB.
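Concretely, a data operation becomes an HTTP call against the D1 query endpoint. A minimal sketch, assuming Cloudflare's standard D1 REST API; the account/database IDs, table name, and auth handling are placeholders, not the actual Cloud UB configuration:

```python
import json
import urllib.request

# Endpoint shape from Cloudflare's D1 HTTP API; IDs below are placeholders.
D1_QUERY_URL = (
    "https://api.cloudflare.com/client/v4/accounts/"
    "{account_id}/d1/database/{database_id}/query"
)

def build_ub_query(account_id: str, database_id: str, sql: str, params=()):
    """Build the POST request for D1's /query endpoint (auth omitted)."""
    url = D1_QUERY_URL.format(account_id=account_id, database_id=database_id)
    body = json.dumps({"sql": sql, "params": list(params)}).encode()
    return urllib.request.Request(
        url,
        data=body,
        # Real calls also need "Authorization: Bearer <api token>".
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example: read one entry from a hypothetical ub_entries table.
req = build_ub_query(
    "ACCOUNT_ID", "DATABASE_ID",
    "SELECT id, title FROM ub_entries WHERE id = ?1", [42],
)
```

Sending `req` via `urllib.request.urlopen` (with a valid token) returns a JSON envelope whose `result` field holds the rows.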

Dispatch Worker migrated to Cloud UB (2026-03-02)

The SS1 Dispatch Worker (which executes Work Orders via low-cost engines) has been updated to use Cloud UB instead of the local UB. Any ship can now dispatch and monitor work orders.

Docs site launched (2026-03-05)

The SuperPortia Agentic AI Docs site is now live internally. Engine documentation, selection guides, and pricing information are available as structured reference pages.


February 2026

Dual Agentic Lines strategy confirmed (2026-02-27)

Captain confirmed running two parallel agentic lines, NOT picking one:

  • Line 1: Claude Agentic (Agent SDK + Claude Code) ~30%
  • Line 2: LangGraph self-built ~70%

Both lines are truly agentic (autonomous perceive-reason-act-learn loops), and each serves as disaster recovery for the other.

Engine selection principle clarified (2026-02-27)

Captain decision: "CP value is NOT cheapest possible but minimum cost that gets the job done RIGHT."

Task level   | Engine
------------ | -------------------------
Trivial      | Groq (free)
Standard     | Gemini / DeepSeek (cents)
Important    | Gemini with citations
Critical     | Claude
Architecture | Opus
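The table reduces to a one-line lookup. A minimal sketch of the principle; the TaskLevel enum, engine identifiers, and pick_engine helper are illustrative, not part of the actual fleet tooling:

```python
from enum import Enum

class TaskLevel(Enum):
    TRIVIAL = 1
    STANDARD = 2
    IMPORTANT = 3
    CRITICAL = 4
    ARCHITECTURE = 5

# Mirrors the selection table; engine identifiers are illustrative.
ENGINE_BY_LEVEL = {
    TaskLevel.TRIVIAL: "groq-llama-3.3-70b",          # free tier
    TaskLevel.STANDARD: "gemini-2.5-flash",           # or DeepSeek; costs cents
    TaskLevel.IMPORTANT: "gemini-2.5-flash+citations",
    TaskLevel.CRITICAL: "claude",
    TaskLevel.ARCHITECTURE: "claude-opus",
}

def pick_engine(level: TaskLevel) -> str:
    """Minimum-cost engine that still gets the job done RIGHT."""
    return ENGINE_BY_LEVEL[level]
```

The point of the mapping is the floor, not the ceiling: a trivial task never burns Claude quota, and an architecture task never gets a free-tier engine.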

Groq tool-calling limitations discovered (2026-02-21)

Groq Llama 3.3 70B emits tool calls in an XML format instead of JSON, which breaks LangGraph's nested agent-as-tool mode.

Mitigation: supervisor.py implements automatic flat/nested mode switching:

  • Groq, DeepSeek, Mistral, Zhipu → flat mode (direct tools)
  • Gemini, Claude → nested mode (agent-as-tool workers)
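The switching rule above can be sketched as follows; the function and set names are illustrative, not the actual supervisor.py API:

```python
# Engine families with unreliable/XML tool calling get flat mode;
# families with solid JSON tool calling get nested agent-as-tool mode.
FLAT_MODE_ENGINES = {"groq", "deepseek", "mistral", "zhipu"}
NESTED_MODE_ENGINES = {"gemini", "claude"}

def agent_mode(engine: str) -> str:
    """Pick flat (direct tools) vs nested (agent-as-tool) wiring."""
    family = engine.lower().split("-")[0]
    if family in FLAT_MODE_ENGINES:
        return "flat"
    if family in NESTED_MODE_ENGINES:
        return "nested"
    return "flat"  # safe default: direct tools work on every engine
```

Defaulting unknown engines to flat mode is the conservative choice, since direct tool wiring works everywhere while nesting does not.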

LangGraph ecosystem installed on SS1 (2026-02-26)

Verified installation in .venv: langgraph 1.0.8, langgraph-checkpoint-sqlite 3.0.3, langchain-mcp-adapters 0.2.1.

Token baseline measured: single Q&A with 1 tool call ≈ 20,677 input + 102 output tokens (Sonnet). Tool descriptions alone = ~4,103 tokens for 35 tools.
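A quick back-of-envelope check on those numbers (same figures as the measurement above):

```python
# Figures from the measured baseline.
input_tokens = 20_677
tool_desc_tokens = 4_103
n_tools = 35

per_tool = tool_desc_tokens / n_tools       # ≈ 117 tokens per tool description
overhead = tool_desc_tokens / input_tokens  # ≈ 20% of the input budget
print(round(per_tool), f"{overhead:.0%}")   # → 117 20%
```

In other words, roughly a fifth of every request's input is fixed tool-description overhead, which is why trimming the tool set matters for cost.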

Groq hallucination incident (2026-02-22)

Ino Scout Daily, running on Groq/DeepSeek, produced fabricated S-grade intel claiming "LangGraph 2.0 Released." The actual latest version on PyPI was 1.0.9.

Policy change: All intel from low-cost engines must be verified against package registries (pip/PyPI/npm) before UB ingestion, especially for perishable knowledge.
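A minimal sketch of such a check against PyPI's JSON API; the helper names are illustrative, and real code would need error handling plus equivalents for npm and other registries:

```python
import json
import urllib.request

# Official PyPI JSON API endpoint.
PYPI_JSON = "https://pypi.org/pypi/{name}/json"

def latest_pypi_version(name: str) -> str:
    """Fetch the latest released version of a package from PyPI."""
    with urllib.request.urlopen(PYPI_JSON.format(name=name), timeout=10) as resp:
        return json.load(resp)["info"]["version"]

def version_tuple(v: str) -> tuple:
    """Naive numeric version parse, e.g. '1.0.9' -> (1, 0, 9)."""
    return tuple(int(p) for p in v.split(".") if p.isdigit())

def intel_is_plausible(claimed: str, registry_latest: str) -> bool:
    """Reject intel claiming a release newer than the registry's latest."""
    return version_tuple(claimed) <= version_tuple(registry_latest)
```

Applied to the incident above, a claimed "2.0" against a registry latest of "1.0.9" fails the check, so the fabricated intel never reaches UB ingestion.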


Engine fleet status

Engine               | Status | Notes
-------------------- | ------ | -------------------------
Claude (Opus 4.6)    | Active | Primary engine, Max Plan
Claude (Sonnet 4.6)  | Active | Shared quota with Opus
Claude (Haiku 4.5)   | Active | Fast/cheap Claude option
Groq (Llama 3.3 70B) | Active | Free tier, flat mode only
Gemini (2.5 Flash)   | Active | Search + embeddings
DeepSeek (R1/V3)     | Active | Reasoning tasks
Mistral              | Active | European alternative
Zhipu (GLM-5)        | Active | Chinese NLP specialist

Related pages

Page             | Relationship
---------------- | -------------------
Engine Overview  | Full engine catalog
Release Notes    | All release notes
Engine Migration | Switching engines