Record once. Replay. Export to CI.
Interact with the task app below, hit Stop, then Play —
the app rewinds to your starting state and replays every action with visual highlights.
Hit Export Playwright to get a ready-to-commit .spec.ts
with real expect() assertions for DOM, classes, styles, hidden,
disabled, aria-*, network requests, and WebSocket messages.
What is captured — and what is not
The recorder uses a DOM event listener + a scoped MutationObserver.
Understanding the boundaries helps you write zero-maintenance tests.
- ✓ Click, input value, checkbox state, change, scroll
- ✓ DOM nodes added / removed inside the observed scope
- ✓ CSS class changes (JS-driven), e.g. task-item--done
- ✓ Inline style changes, e.g. display:none → block
- ✓ aria-*, hidden, disabled attribute changes
- ✓ Network requests via fetch → mocked in Playwright export
- ✗ Pure CSS :hover/:focus appearance — no DOM mutation fires. Fix: toggle a class in a mouseover handler, or use page.screenshot() visual regression in Playwright
- ✗ Canvas / WebGL visual output
- ✗ Browser-native UI: file picker, date picker, alert()
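The :hover fix above can be sketched as a class toggle: mirror the hover state into a CSS class so the scoped MutationObserver sees a real attribute mutation it can assert on. A minimal sketch, assuming a hypothetical is-hover class name (not part of the tool's API):

```javascript
// Sketch of the :hover fix: mirror hover state into a CSS class so the
// scoped MutationObserver records an attribute mutation it can assert on.
// The 'is-hover' class name is a hypothetical example.

// Pure helper: compute the next class list for a given hover state.
function withHoverClass(classes, hovering) {
  const next = new Set(classes);
  if (hovering) next.add('is-hover');
  else next.delete('is-hover');
  return [...next];
}

// Browser wiring (run in the page; shown here as the intended usage):
// const el = document.querySelector('[data-qa="task-item"]');
// el.addEventListener('mouseover', () => el.classList.add('is-hover'));
// el.addEventListener('mouseout',  () => el.classList.remove('is-hover'));

console.log(withHoverClass(['task-item'], true));            // [ 'task-item', 'is-hover' ]
console.log(withHoverClass(['task-item', 'is-hover'], false)); // [ 'task-item' ]
```

Because the class change is a DOM mutation, the recorder captures it like any other JS-driven class change, and the exported spec can assert on it directly.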
Recording approach for assessors
Use this workflow to speed up assessor checks and generate tests from real user flows.
- Open the app (or staging URL) you need to verify.
- Call o.startRecording(observeRoot) — e.g. o.startRecording('#task-app') or o.startRecording('#app') — to scope recording to the relevant root.
- Perform the user flows to be verified (clicks, inputs, scrolls, submit, keydown, focus, blur).
- Call o.stopRecording() and store the result.
- Export: o.exportTest(recording) for Objs tests, or o.exportPlaywrightTest(recording) for Playwright.
- Optional: in dev builds, run o.playRecording(recording) to verify replay before exporting.
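The steps above, as a script. This is a sketch using a stub in place of the real `o` object so the call order is runnable outside the browser: startRecording, stopRecording, and exportPlaywrightTest are the documented calls, but the stub's internals (the `_recording` field, the exported string) are invented for illustration.

```javascript
// Minimal stub of the `o` recorder API so the workflow above can be shown
// end to end. Only the call order mirrors the docs; the stub's internals
// are invented for illustration.
const o = {
  _recording: null,
  startRecording(observeRoot) { this._recording = { observeRoot, actions: [] }; },
  stopRecording() { const r = this._recording; this._recording = null; return r; },
  exportPlaywrightTest(recording) {
    return `// Playwright spec for scope ${recording.observeRoot}`;
  },
};

// 1. Scope recording to the relevant root.
o.startRecording('#task-app');
// 2. Perform the user flows here (clicks, inputs, scrolls, ...).
// 3. Stop and store the result.
const recording = o.stopRecording();
// 4. Export a Playwright spec from the recording.
const spec = o.exportPlaywrightTest(recording);
console.log(spec); // "// Playwright spec for scope #task-app"
```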
Best practices
- Use stable selectors (data-qa / o.autotag) so exports survive CSS and layout changes.
- Pass an observe selector to limit MutationObserver and assertion noise to the app container.
- Review auto-generated assertions in the exported file before committing.
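One way to read the stable-selector advice: prefer a data-qa attribute over ids and classes when building a selector, since classes churn with styling refactors. A hypothetical helper (not part of the o API) that sketches that priority order:

```javascript
// Hypothetical helper (not part of the o API): build a selector for a node,
// preferring the stable data-qa attribute over id and classes, which tend
// to change with styling refactors.
function stableSelector({ dataQa, id, classes = [], tag = '*' }) {
  if (dataQa) return `[data-qa="${dataQa}"]`;                // survives CSS refactors
  if (id) return `#${id}`;                                   // fairly stable
  if (classes.length) return `${tag}.${classes.join('.')}`;  // brittle fallback
  return tag;
}

console.log(stableSelector({ dataQa: 'task-add-btn', classes: ['btn'] }));
// [data-qa="task-add-btn"]
console.log(stableSelector({ id: 'add', classes: ['btn', 'btn--primary'], tag: 'button' }));
// #add
```

Exports that resolve to `[data-qa="…"]` selectors keep working after a CSS rewrite, which is what "zero-maintenance tests" above depends on.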
Live demo
Scope: o.startRecording('#task-app') — only mutations inside the task
app are observed, so the dev panel and page chrome produce no noise in assertions.
Try adding tasks, toggling checkboxes, pressing Enter to submit — all events, attribute changes,
and assertions are captured. After replay you get an optional manual check (e.g. hover effects).
Try the test overlay
The 🧪 Tests overlay (bottom-right) shows results of all test runs: auto steps and manual checks. For assessors: after replay, open the overlay to see if all auto tests passed and which manual checks failed. Run the example below — then click the overlay button to see pass/fail for each step.
Test function examples
Complete test functions you can paste into your app or get from Export Objs test. Run any example to see pass/fail.
Unit test — o.addTest
o.addTest('Task list sanity', [
['list container exists', () => !!document.querySelector('#task-app')],
['store has tasks array', () => Array.isArray(taskStore.tasks)],
]);
With before/after hooks
let fixture;
o.addTest('With hooks', [
['fixture is set', () => fixture === 1],
], { before: () => { fixture = 1; }, after: () => { fixture = 0; } });
From recording — o.exportTest() style
o.addTest('Recorded test', [
['click on [data-qa="task-add-btn"]', () => {
const el = document.querySelector('[data-qa="task-add-btn"]');
if (!el) return 'element not found';
el.click(); return true;
}],
], () => { /* teardown */ });
Manual check — o.testConfirm (dev-only)
const r = await o.testConfirm('Manual check', ['Item verified']);
// r.ok === true if all checked; r.errors = unchecked labels