Enrich test.assert-eq / test.assert failure output with actual vs expected values #422

Closed
opened 2026-04-24 16:16:22 +00:00 by navicore · 1 comment
navicore commented 2026-04-24 16:16:22 +00:00 (Migrated from github.com)

Motivation

When a Seq test fails today, the output reports only the failing test's name:

test-mutual ... FAILED

========================================
Results: 0 passed, 1 failed

TEST FAILURES:

/var/folders/.../test-05-mutual.seq::test-mutual
  test-mutual ... FAILED

A failing test word typically has several test.assert-eq calls. From this output, the caller can't tell which assertion failed, what value was actually on the stack, or which line of source to look at.

This is especially painful in Seqlings (https://github.com/navicore/seqlings), where the whole value proposition is "quick, educational feedback as you iterate." Right now a failure just tells the learner "something is off" — they retry blindly or fall back to the hint. Richer failure output would let the learner reason about their mistake directly.

Proposal

Have test.assert-eq and test.assert emit actual vs expected values (and a source line number when feasible) on failure. Approximate shape:

test-mutual ... FAILED
  at line 23: expected 8, got 13

For test.assert (boolean), something like:

test-fizzbuzz ... FAILED
  at line 17: expected true, got false

Multiple failures within a single test word should each get a line. If multiple failures would overwhelm output, emitting just the first (with a "+N more failures" footer) is also fine.
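The truncation policy above ("first failure plus a `+N more failures` footer") can be sketched as a small formatter. This is a hypothetical illustration, not patch-seq code; `AssertFailure` and `format_failures` are invented names.

```rust
// Hypothetical sketch of the "first failure + footer" output policy.
// AssertFailure and format_failures are illustrative names only.

struct AssertFailure {
    line: u32,
    expected: String,
    actual: String,
}

/// Render a failed test's report, showing at most `max_shown` assertion
/// failures and summarizing the rest in a footer line.
fn format_failures(test_name: &str, failures: &[AssertFailure], max_shown: usize) -> String {
    let mut out = format!("{} ... FAILED\n", test_name);
    for f in failures.iter().take(max_shown) {
        out.push_str(&format!(
            "  at line {}: expected {}, got {}\n",
            f.line, f.expected, f.actual
        ));
    }
    if failures.len() > max_shown {
        out.push_str(&format!("  +{} more failures\n", failures.len() - max_shown));
    }
    out
}

fn main() {
    let failures = vec![
        AssertFailure { line: 23, expected: "8".into(), actual: "13".into() },
        AssertFailure { line: 31, expected: "5".into(), actual: "0".into() },
    ];
    // Prints the first failure and a "+1 more failures" footer.
    print!("{}", format_failures("test-mutual", &failures, 1));
}
```

Either policy (show all, or show first plus footer) drops out of the same structure; only `max_shown` changes.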

Who benefits

  • Seqlings learners get immediate, specific feedback: they know which comparison they missed and the actual value their code produced.
  • Anyone writing Seq test suites gets the same ergonomic win that Rust's `assert_eq!` (with its `left: ... right: ...` output), Go's `t.Errorf`, or Python's pytest provides.
  • Seqlings-as-acceptance-suite-for-Seq benefits: when an exercise regression turns out to be a Seq compiler bug, the assertion line tells us exactly where divergence happens instead of requiring local bisection.

Implementation notes (non-prescriptive)

  • The source line number is the hardest part; the test runner needs it plumbed through from the AST. If that's too invasive for a first pass, shipping just `expected X, got Y` without the line would still be a huge win.
  • `test.assert-eq` already has both values on the stack when it decides to fail — formatting them is the whole lift.
  • Output format should stay plain text (Seqlings parses stdout). Structured output (JSON) would be nice but isn't required.
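The second bullet, that the values are already on the stack at the moment of failure, can be sketched as a builtin with an optional source line threaded in from the call site. Everything here is hypothetical: `Value`, `assert_eq_builtin`, and the stack representation are invented for illustration and are not patch-seq types.

```rust
// Hypothetical sketch of test.assert-eq's failure path. The builtin pops
// expected and actual (both already on the stack) and, on mismatch,
// formats them into the proposed message. `line` stands in for a source
// line carried on the call-site AST node / debug info, if available.

#[derive(Debug, Clone)]
enum Value {
    Int(i64),
    Bool(bool),
}

impl std::fmt::Display for Value {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Value::Int(n) => write!(f, "{}", n),
            Value::Bool(b) => write!(f, "{}", b),
        }
    }
}

fn assert_eq_builtin(stack: &mut Vec<Value>, line: Option<u32>) -> Result<(), String> {
    let expected = stack.pop().ok_or("stack underflow in assert-eq")?;
    let actual = stack.pop().ok_or("stack underflow in assert-eq")?;
    let equal = match (&expected, &actual) {
        (Value::Int(a), Value::Int(b)) => a == b,
        (Value::Bool(a), Value::Bool(b)) => a == b,
        _ => false,
    };
    if equal {
        Ok(())
    } else {
        // Degrades gracefully: without line info, just "expected X, got Y".
        let loc = line.map(|l| format!("at line {}: ", l)).unwrap_or_default();
        Err(format!("{}expected {}, got {}", loc, expected, actual))
    }
}

fn main() {
    // Caller pushed actual (13), then expected (8).
    let mut stack = vec![Value::Int(13), Value::Int(8)];
    if let Err(msg) = assert_eq_builtin(&mut stack, Some(23)) {
        println!("{}", msg); // at line 23: expected 8, got 13
    }
}
```

The `Option<u32>` line mirrors the phasing suggested above: ship `None` first (no AST plumbing), add the line number later without changing the message shape.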

Downstream follow-up

Seqlings has a design note for how it would consume this: `docs/design/FAILED-ASSERTION-DETAILS.md`. Seqlings would pick up the richer output with no code change (it forwards `seqc test` stdout verbatim), and may add a complementary source-scanning bridge regardless.

navicore commented 2026-04-24 16:48:21 +00:00 (Migrated from github.com)
https://github.com/navicore/patch-seq/pull/423