Newsroom is the first editorial workflow plugin built for AI as a first-class collaborator. Five-stage Kanban approvals, claim-level fact-checking with live web search, house-style enforcement against your guide, and a tamper-evident audit log of every decision — inside WordPress.
Edit Flow and PublishPress handle workflow but have no AI. Standalone AI plugins ignore workflow entirely. Newsroom is the only plugin that does both — claim-level fact-checking is the wedge feature, and nothing else in the WordPress.org directory touches it credibly. Newsroom runs on the managed Claude + search stack with models selectable in settings. Provider credentials never live on WordPress.
Pitch → Draft → Review → Scheduled → Published. Drag cards across columns; the platform records every transition into an append-only, hash-chained audit trail. Multi-stage approvals are first class: editors are the only ones who can move drafts past Review.
On demand or auto on review-transition: Claude extracts every factual claim, the server runs a Tavily / Brave search per claim, and Sonnet returns a verdict (supported, unsupported, contradicted, unverifiable) with cited URLs. Each claim is annotated inline.
Upload your AP / Chicago / in-house style guide once. Newsroom indexes it server-side and runs a per-passage audit on every save, surfacing violations in the Gutenberg sidebar with one-click suggested rewrites in your brand voice.
Your editorial team builds up a reusable library of source URLs tagged by credibility. The fact-check sidebar warns when a draft cites a low-trust source; the source picker lets writers drop a hyperlink straight into the editor.
Every transition, every AI edit, every fact-check verdict is appended to a SHA-256 hash chain. With AUDIT_HMAC_SECRET set, the chain is HMAC-keyed so an attacker who writes directly to the database can't splice in a forged row. Verify the chain at any time.
Long-running fact-checks return a job id and run on the server's Postgres-backed queue. When complete, results are pushed back to your site via signed webhook — the editor sidebar updates the moment the verdict lands.
Each transition is recorded into the platform's tamper-evident log and triggers an outbound webhook. Editors are the only role allowed to move drafts past Review.
Newsroom extracts claims with Haiku, runs a real-time web search per claim (Tavily or Brave), and assigns a verdict with Sonnet. Confidence and citations land in the editor sidebar in under a minute.
Supported — World Bank and SIFMA data align within 2% of the cited figures.
Contradicted — multiple sources report the round as $2.75B at ~$18B valuation, not $4B at $40B.
Unverifiable — no primary source could be located for this claim; likely a popularized paraphrase.
Unsupported — no public benchmark study supports this exact figure; vendor case studies disagree.
Haiku reads the article and emits a JSON list of factual claims with kind annotations.
For each claim, the server hits Tavily/Brave and pulls the top 5 results.
Sonnet evaluates each claim against the retrieved sources and emits verdict + confidence + citations.
Result lands in `newsroom_factchecks`, the audit log, and is pushed to the WP site via signed webhook.
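The four steps above can be sketched as a small orchestration. Everything here — the function names, the claim shape, the fallback behaviour — is illustrative, not Newsroom's actual internals; the real calls (Haiku, Tavily/Brave, Sonnet) are shown as injected dependencies so the flow stands on its own:

```typescript
type Verdict = "supported" | "unsupported" | "contradicted" | "unverifiable";

interface Claim { text: string; kind: string }
interface ClaimResult { claim: Claim; verdict: Verdict; confidence: number; citations: string[] }

// Dependencies are injected so the orchestration is clear without live APIs.
interface Deps {
  extractClaims(article: string): Promise<Claim[]>;   // step 1: Haiku
  search(query: string): Promise<string[]>;           // step 2: Tavily/Brave top results
  judge(claim: Claim, sources: string[]): Promise<{ verdict: Verdict; confidence: number }>; // step 3: Sonnet
}

async function factCheck(article: string, deps: Deps): Promise<ClaimResult[]> {
  const claims = await deps.extractClaims(article);
  return Promise.all(claims.map(async (claim) => {
    const citations = await deps.search(claim.text);
    // No retrievable sources → the claim is flagged unverifiable, never auto-passed.
    if (citations.length === 0) {
      return { claim, verdict: "unverifiable" as Verdict, confidence: 0, citations };
    }
    const { verdict, confidence } = await deps.judge(claim, citations);
    return { claim, verdict, confidence, citations };
  }));
}
```

Step 4 (persisting to `newsroom_factchecks` and firing the signed webhook) happens with the returned results.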
Every transition, every AI verdict, every edit is appended to a SHA-256 hash chain. The audit table is replayable; an attacker who writes directly to Postgres can't splice in a forged row without breaking the chain — and with `AUDIT_HMAC_SECRET` set they can't repair the chain without knowing the secret.
| actor | action | prev_hash | this_hash |
|---|---|---|---|
| priya@ | newsroom.draft.transition pitch → draft | 0000…0000 | a3f1…b7c2 |
| priya@ | newsroom.factcheck 8 claims · 6 supported · 1 contradicted | a3f1…b7c2 | 4d92…11f0 |
| jules@ | newsroom.style_audit 3 violations · 1 high | 4d92…11f0 | c810…3a44 |
| jules@ | newsroom.draft.transition draft → review | c810…3a44 | 6dde…0a18 |
| maya@ | newsroom.draft.transition review → scheduled | 6dde…0a18 | 0b2c…91d4 |
No code path issues UPDATE or DELETE against `audit_events`. The DB user can be locked down further by revoking those grants on the table.
Set `AUDIT_HMAC_SECRET` and every chain hash becomes an HMAC-SHA-256 keyed with the secret. Splicing in a forged row requires the secret — even with full DB access.
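In outline, the link hash looks like this sketch — the canonicalisation here (stable JSON) is an assumption, not the platform's exact serialisation:

```typescript
import { createHash, createHmac } from "node:crypto";

// Compute this_hash for an audit row: sha256(prev_hash || canonical(payload)),
// or an HMAC-SHA-256 keyed with the secret when AUDIT_HMAC_SECRET is set.
function chainHash(prevHash: string, payload: object, hmacSecret?: string): string {
  const canonical = JSON.stringify(payload); // assumption: key order is already stable
  const input = prevHash + canonical;
  return hmacSecret
    ? createHmac("sha256", hmacSecret).update(input).digest("hex")
    : createHash("sha256").update(input).digest("hex");
}
```

Without the secret, anyone with DB write access could recompute downstream hashes after splicing a row; with it, recomputation requires the key.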
Outbound webhooks include `X-Orthoplex-Signature: sha256=…`. Receivers reject mismatches. The delivery row stores the signature so retries replay deterministically.
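A receiver-side check can be sketched as follows, assuming the `sha256=<hex>` header format shown above; `signPayload` mirrors what the sender computes, and the compare is constant-time:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sender side: sign the raw request body with the shared webhook secret.
function signPayload(rawBody: string, secret: string): string {
  return "sha256=" + createHmac("sha256", secret).update(rawBody).digest("hex");
}

// Receiver side: recompute and compare in constant time; reject on mismatch.
function verifySignature(rawBody: string, header: string, secret: string): boolean {
  const expected = Buffer.from(signPayload(rawBody, secret));
  const received = Buffer.from(header);
  return received.length === expected.length && timingSafeEqual(received, expected);
}
```

Because the delivery row stores the signature, a retried delivery carries the identical header and this check yields the same result on replay.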
`AuditService.verifyChain()` walks the table in insertion order and reports the first link that breaks. Surfaced as a button on the platform Audit tab.
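A minimal sketch of what a `verifyChain()`-style walk does — the row shape, genesis hash, and helper here are illustrative assumptions; the real service reads rows from Postgres:

```typescript
import { createHash } from "node:crypto";

interface AuditRow { payload: string; prev_hash: string; this_hash: string }

// Append a row linked to the previous one (illustrative helper).
function appendRow(rows: AuditRow[], payload: string): AuditRow[] {
  const prev = rows.length ? rows[rows.length - 1].this_hash : "0".repeat(64);
  const this_hash = createHash("sha256").update(prev + payload).digest("hex");
  return [...rows, { payload, prev_hash: prev, this_hash }];
}

// Walk in insertion order; report the index of the first broken link.
function verifyChain(rows: AuditRow[]): { ok: boolean; brokenAt?: number } {
  let prev = "0".repeat(64); // assumed genesis value
  for (let i = 0; i < rows.length; i++) {
    const expected = createHash("sha256").update(rows[i].prev_hash + rows[i].payload).digest("hex");
    if (rows[i].prev_hash !== prev || rows[i].this_hash !== expected) {
      return { ok: false, brokenAt: i };
    }
    prev = rows[i].this_hash;
  }
  return { ok: true };
}
```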
Multi-seat licenses sized for editorial teams. Cancel any time, 30-day money-back.
Edit Flow and PublishPress handle workflow but are blind to AI — they have no fact-check, no style audit, no provenance. Newsroom is the only WordPress plugin where AI is a first-class collaborator inside the existing review loop. The Kanban + approvals are deliberately simple; the differentiator is fact-check, style enforcement, and the audit chain.
The pipeline is: Claude Haiku extracts the claims (high recall, low cost), the server runs Tavily / Brave web search per claim, then Claude Sonnet returns a verdict + confidence + citations. On a benchmark of 200 published news articles, Newsroom matched human verdicts 87% of the time on supported/contradicted claims — comparable to other AI fact-checkers but with an order of magnitude lower latency because everything runs server-side. Unverifiable verdicts (no live source) get flagged as "needs human review" rather than auto-passing.
Yes. Upload it as Markdown or plain text on the Style guide tab (max 64 KB — large enough for AP, Chicago, or a custom in-house guide). Newsroom indexes it server-side. The audit prompt fetches the relevant passages of the guide and asks Sonnet to identify violations — passage, rule, severity, suggested rewrite. PDFs work but are converted to text first, so complex layouts may lose some structure.
Drafts and revision history persist server-side and are never deleted on license lapse. AI features (fact-check, style audit, claim extract) refuse new requests until the license is renewed, but you can read existing fact-check results, browse the audit log, and move drafts through the Kanban manually. We retain license-bound data for 90 days after cancellation by default.
Yes — every draft transition emits a webhook (`newsroom.draft.transition`). Wire it to your existing approval channel and POST back to `/api/v1/plugins/newsroom/drafts/{id}` with a status patch. The platform's rate limiter prevents abuse. Native Slack / Microsoft Teams integrations are on the roadmap; today the webhook contract is stable.
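The POST-back described above can be sketched as a request builder. The endpoint path is the one from this answer; the body shape and `Authorization` header are assumptions to illustrate the wiring:

```typescript
interface StatusPatch { status: "draft" | "review" | "scheduled" | "published" }

// Build the request your approval bot sends back after a decision.
// Auth scheme and patch body are illustrative, not the documented contract.
function buildStatusRequest(baseUrl: string, draftId: number, patch: StatusPatch, apiKey: string) {
  return {
    url: `${baseUrl}/api/v1/plugins/newsroom/drafts/${draftId}`,
    method: "POST" as const,
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // assumption: bearer-token auth
    },
    body: JSON.stringify(patch),
  };
}
// Send with: fetch(req.url, { method: req.method, headers: req.headers, body: req.body })
```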
Each `audit_events` row carries `prev_hash` (the previous event's `this_hash`) and `this_hash` (sha256 over `prev_hash || canonical(payload)`). With `AUDIT_HMAC_SECRET` set, both hashes are HMAC-keyed. `AuditService.verifyChain()` walks the table in insertion order and confirms each link — exposed as a "Verify chain" button on the platform Audit tab. An attacker who can write to Postgres directly can't splice in a forged row without knowing the secret.
Newsroom is well suited to regulated verticals because of the provenance log: every AI-suggested edit and every fact-check verdict is recorded immutably with the actor, timestamp, exact prompt, and model. This gives compliance teams an audit trail they can hand to a regulator. We don't ship industry-specific style guides; you upload your own.
"Seats" are concurrent contributors per license: Pro = 10 seats, Business = 25, Agency = 60. "Sites" is the network bundle — Pro covers up to 10 sites. Most editorial teams have one site and many writers, so seats matter more than sites. If you have a publisher network of microsites, the seat count is the network total, not per-site.
Yes. `wp orthoplex newsroom factcheck <post_id>` runs synchronously and prints the verdicts. For batch jobs (e.g. nightly checks of yesterday's posts) wire it into your own cron and the result lands in `newsroom_factchecks`, queryable via `GET /factcheck/{id}` or read directly off the audit log.
Anthropic Claude (Haiku for extraction, Sonnet for verdict + style audit) is the default. The pipeline is provider-agnostic — model selection is per-operation in the settings. OpenRouter-routed models (DeepSeek, GPT-4o, Llama 3.3, Gemini 2.0) work as drop-in replacements at lower cost. Web search defaults to Tavily; Brave is a swap. With `WEB_SEARCH_PROVIDER=none` you can run in dev / restricted environments and verdicts return as `unverifiable`.
Subscribe to Newsroom for Kanban workflow, claim-level fact-checking, style-guide enforcement, and tamper-evident audit trails.
Start your free trial