What is Beacon?

AI agents are becoming the primary way software discovers and interacts with other software. They don't browse websites — they search registries, read machine-readable descriptions, and evaluate APIs programmatically.

Most products aren't ready for this. They have marketing sites designed for humans but nothing for machines. No llms.txt, no MCP server, no structured data that tells an agent what the product does or how to use it.
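For context, an llms.txt is a plain markdown file served at the site root that gives agents a concise, machine-readable overview. A minimal sketch, following the llmstxt.org convention (the product name and URLs below are hypothetical placeholders):

```markdown
# Acme Widgets
> Acme Widgets is a REST API for managing widget inventory and orders.

## Docs
- [API reference](https://acme.example/docs/api.md): endpoints, auth, rate limits
- [Quickstart](https://acme.example/docs/quickstart.md): make your first request
```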

Beacon tells you exactly where you stand — and how to fix it. It runs 25 automated checks across 5 categories and produces a detailed report with specific, actionable remediation steps.

What we check

Discoverability: Can agents find you?
Comprehension: Can agents understand what you do?
Usability: Can agents interact with you?
Stability: Can agents depend on you?
Agent Experience: What happens when an agent shows up?

How scoring works

Each category is assessed independently as Ready, Partial, or Not Ready. There is no aggregate score — the five independent assessments give you a clear picture of where to focus.

Results are available in three formats: a web page with expandable findings, a downloadable PDF report suitable for sharing with engineering leads, and a structured JSON report designed for LLM-powered remediation — paste it into Claude or ChatGPT and say “fix everything.”
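To give a feel for the machine-readable output, a category entry in the JSON report might look roughly like the sketch below. The field names and values here are illustrative assumptions, not Beacon's actual schema:

```json
{
  "category": "Discoverability",
  "status": "Partial",
  "findings": [
    {
      "check": "llms.txt present",
      "result": "fail",
      "remediation": "Serve an llms.txt file at your site root describing the product and linking to machine-readable docs."
    }
  ]
}
```

A structure like this is what makes the "paste it into an LLM" workflow practical: each finding carries its own remediation text, so an assistant can act on failures one by one.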

See it in action

Here's what Beacon found when we scanned our own API: see the api.strale.io scan results.

About Strale

Beacon is built by Strale, the trust layer for the agent economy. Strale provides quality-scored capabilities that AI agents can discover and use at runtime.

Contact: hello@strale.io