Conceptual STEAM · learning sciences

Wonderment.

Where AI waits for your thinking first.

A learning platform that pulls students into conceptual depth before technical detail. Built for any subject, any age, any pedagogy — configurable in eight steps, grounded in thirty years of learning-sciences research.

Most AI tutors give you the answer.
Wonderment makes you ask the question first — then sketch your model, defend your claim, debate three peers, and only then meet the canonical answer. Concepts before procedures. Always.

The wonderment-question principle, after Scardamalia & Bereiter (1994); Chin & Osborne (2008)

01
How it works

Four stages. One discipline: think before you ask the AI.

Every learning session moves through the same four cognitive stages. The third stage — the struggle gate — is the design's centre of gravity. Until the learner externalises a sketch, a prediction, and a claim, the AI's canonical answer stays locked.

i.
Wonder

Five candidate questions appear, drawn from the wonderment taxonomy: mechanism, limit, analogy, contradiction, transfer. Pick one — or write a better one.

ii.
Model

Sketch the forces. Predict the limit case. State your claim and your reason. The AI's answer is gated until you commit to a model of your own.

iii.
Debate

A creative peer, a skeptic, and a mediator respond — not to give the answer, but to pressure-test yours. You adjudicate.

iv.
Refine

Now the canonical answer appears, alongside what your model got right and what shifted. Then: what extension question would you ask next?
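The struggle gate in stage ii can be pictured as a simple predicate over three artefacts. A minimal sketch, assuming the platform tracks them as session state; the names here (`GateArtefacts`, `gate_open`) are illustrative, not the product's API:

```python
from dataclasses import dataclass

@dataclass
class GateArtefacts:
    """The three artefacts a learner must externalise at the Model stage."""
    sketch_submitted: bool = False
    prediction: str = ""   # the limit-case prediction
    claim: str = ""        # "state your claim..."
    reason: str = ""       # "...and your reason"

def gate_open(a: GateArtefacts) -> bool:
    """The canonical answer unlocks only once all three artefacts exist."""
    return (
        a.sketch_submitted
        and bool(a.prediction)
        and bool(a.claim)
        and bool(a.reason)
    )
```

Until `gate_open` returns true, the Refine stage stays locked; the Debate agents see the artefacts, not the answer.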

02
The signature move

What would the most creative student ask?

Wonderment generates candidate questions in five canonical types. They aren't the questions the curriculum trains learners to ask. They're the questions that change how the problem looks.

mechanism

If forces are equal and opposite, why doesn't the skater stop the moment she pushes?

limit case

What if the wall were made of paper? Or what if it were a planet?

analogy

Is this like firing a cannon, or more like jumping off a boat?

contradiction

The wall pushes me back, but I don't feel it pushing — does it really?

transfer

Does this also explain how rockets work in empty space?

03
For everyone in the school

One platform, four ways in.

Each role sees a workspace shaped to their work — the student thinks, the teacher monitors, the school admin governs, the super admin scales. Same product, four different surfaces.

For learners

Student

A workspace that rewards your thinking. Pick a creative question, sketch your model, debate three peers, then meet the canonical answer.

  • Wonderment journal of every question you've asked
  • Personal concept map — see what you've mastered
  • Multi-agent peer debate with mediator
  • Bilingual English / Chinese with on-grade reading support
For instructors

Teacher

An eight-step wizard configures any subject, any pedagogy, any age. Live-monitor every learner's stage, review artefacts, and adjust scaffolding mid-session.

  • Parametric session authoring across seven dimensions
  • Live monitor with stage-level telemetry
  • Artefact review — sketches, claims, peer transcripts
  • Class analytics: mastery, equity, wonderment depth
For institutions

School Admin

Govern the platform at the school level. Approve presets for school use, organise teachers and classes, watch equity dashboards, align with curriculum standards.

  • Teacher and student roster management
  • Preset approval and school-level forks
  • Curriculum alignment to NGSS, IB, HKDSE, etc.
  • Termly mastery and equity reports
For platform operators

Super Admin

Multi-tenant control plane. Provision schools, configure AI-provider routing and failover, curate the master preset library, audit every event.

  • Tenant lifecycle: provision, suspend, archive
  • AI provider routing across Claude / GPT / Gemini / local
  • Master preset library and curriculum standards registry
  • Immutable audit log and billing oversight
04
Configurability

Eight steps. Any subject.

A wizard exposes seven parametric dimensions plus a launch step. Not just STEM. Not just one pedagogy. The configuration becomes a JSON document the platform turns into a live session.

01.
Domain & topic

STEM · Humanities · Arts · Social sciences · Languages · Interdisciplinary

02.
Learner profile

Age, prior knowledge, language, reading level, target misconceptions

03.
Pedagogical mix

Weighted blend across Socratic, scaffolded, productive failure, self-explanation, analogical, conceptual change

04.
Learning format

Scenario · Problem · Case · Project · Phenomenon · Game · Design · Debate

05.
Agent ensemble

1–4 active agents from creative peer, skeptic, naïve partner, mediator, expert, reflection coach

06.
Wonderment types

Mechanism · Limit · Analogy · Contradiction · Transfer · Counterfactual · Aesthetic

07.
Scaffolding & assessment

Gate artefacts, hint delay, reveal threshold, O×A involvement weighting

08.
Preview & launch

Save as preset · export JSON · assign to a class · or fork an existing one
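As an illustration, a preset produced by the eight steps might serialise to something like the JSON below. Every field name and value here is a hypothetical sketch mapping one key to each of the seven dimensions; the platform's actual schema is not published in this copy.

```json
{
  "domain": { "area": "STEM", "topic": "Newton's third law" },
  "learner_profile": {
    "age_range": [13, 15],
    "language": "en",
    "reading_level": "on-grade",
    "target_misconceptions": ["equal and opposite forces must cancel"]
  },
  "pedagogy_mix": { "socratic": 0.3, "productive_failure": 0.5, "self_explanation": 0.2 },
  "format": "phenomenon",
  "agent_ensemble": ["creative_peer", "skeptic", "mediator"],
  "wonderment_types": ["mechanism", "limit", "contradiction"],
  "scaffolding": {
    "gate_artefacts": ["sketch", "prediction", "claim"],
    "hint_delay_seconds": 90,
    "reveal_threshold": 0.6
  }
}
```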

05
Research foundation

Built on what the learning sciences actually know.

Wonderment is not vibes. Every design choice maps to a published finding. Below: the empirical anchors that determined how the workspace works.

  • Wonderment questions

    Self-generated "I wonder why…" questions predict deeper conceptual gain, yet most classrooms see fewer than 0.2 student-generated questions per hour. Scardamalia & Bereiter, 1994; Chin & Osborne, 2008.

  • Productive failure

    Struggling on a novel problem before canonical instruction improves transfer. The struggle gate is the workspace's operationalisation. Kapur, 2008, 2014; meta-analysis Sinha & Kapur, 2021.

  • Self-explanation

    Prompting learners to explain why drives conceptual change. The "claim + reason" cell at the gate is exactly this prompt. Chi et al., 1989; meta-analysis Bisra et al., 2018.

  • Conceptual change

    Misconceptions are coherent p-prims, not gaps. They yield to dissatisfaction plus a plausible alternative — the work the skeptic agent does. diSessa, 1993; Chi, 2008.

  • Multi-agent ITS

    AutoTutor's lineage of dialogic tutoring systems shows that multiple agent voices outperform a single tutor on transfer tasks. Graesser, Person, McNamara, et al.

  • Conceptual ↔ procedural

    Conceptual and procedural knowledge co-develop iteratively — neither comes strictly first. Wonderment's pedagogy mix lets instructors balance the two. Rittle-Johnson & Schneider, 2015.

06
What changes

When learners think first, everything else moves.

+8%
mastery gain at the gate-to-debate transition

The largest single learning gain in the session occurs just after the struggle gate, consistent with the productive-failure literature.

4.1 / 5
average wonderment-question depth

After three weeks, learners reach for limit and contradiction questions on their own — the deepest types in the taxonomy.

0.66
average O×A — observable × authentic

Learner-visible work balanced with AI co-construction; full transparency for teachers, parents, and the learner themselves.

Pilot Wonderment
at your school.

We're working with a small set of partner schools across Hong Kong, mainland China, and Southeast Asia. Get the full demo, a sample preset library for your subject area, and a forty-five-minute walkthrough with the team.

Pilot inquiry