AI Seer + ArAIstotle in 2025: Post-Genesis Milestones
- Matthew Northey
- Mar 1
- 4 min read

2025 was the year AI Seer and ArAIstotle stopped being an idea about truth and became a live verification stack people could actually use. This year, we opened access through token-gated workflows, expanded distribution across major surfaces, and validated the model in public, under real usage and real community pressure. What follows is a walk through the key roadmap milestones we delivered in 2025.
Genesis launch
Our Genesis launch on Virtuals marked the transition from concept to live verification infrastructure. It was the moment the network became operational and the community aligned around measurable truth workflows: what gets checked, how it gets checked, and how verification output is structured and made repeatable.
Token-gated platform launch
On September 19th, 2025, we launched the token-gated platform and made aligned access real. Wallet connection became the front door to verification tools, points, and participation, so the people contributing to truth workflows could be recognized and coordinated through an incentive-compatible system. This was the turning point from verification as content to verification as a product surface, where usage, contribution, and alignment could scale without losing quality.
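Token-gating of this kind usually comes down to checking a connected wallet's token balance against access thresholds. Here is a minimal sketch of that pattern; the tier names and thresholds are hypothetical, since the post does not specify the actual gating mechanics:

```python
# Hypothetical access tiers, ordered highest threshold first.
# Actual thresholds and mechanics are not specified in this post.
TIERS = [
    (100_000, "full"),      # full verification tools + participation
    (10_000, "standard"),   # verification tools + points
    (0, "basic"),           # read-only access
]

def access_tier(token_balance: int) -> str:
    """Map a wallet's token balance to the first tier it qualifies for."""
    for threshold, tier in TIERS:
        if token_balance >= threshold:
            return tier
    return "none"  # unreachable with a 0-threshold floor, kept for safety
```

In practice the balance would come from an onchain read at wallet-connect time; the gate itself is just this threshold comparison.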
Base app launch
We launched on the Base App, meeting users where onchain distribution and social discovery already happen inside the Base ecosystem. This expanded our reach beyond “having a product on Base” to being discoverable where Base users browse, share, and act, reducing the friction between seeing a claim and verifying it. Distribution is a bottleneck for truth products: even the best verification engine doesn’t matter if it can’t show up at the moment misinformation spreads.
Validated by Grok / Perplexity
One of the most important 2025 developments was independent, public cross-checking, where other systems and users validated our outputs and methodology. In a world full of confident hallucinations, the signal we care about is not that an answer sounds right, but that it holds up when checked. These moments helped reinforce confidence that our verification format is legible, testable, and repeatable, even when evaluated outside our own ecosystem.
Staking launch + staking milestone
On November 6th, 2025, we launched $FACY staking to align long-term incentives and system sustainability. Participation scaled quickly, and we are now at almost 10 million tokens staked, with a 5.64% staking reward rate.
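For a rough sense of what that rate means for a staker, here is the arithmetic, treating 5.64% as a simple annualized rate (the actual reward mechanics may compound or vary over time):

```python
def annual_staking_reward(staked_tokens: float, rate: float = 0.0564) -> float:
    """Estimate yearly rewards at a simple annualized staking rate."""
    return staked_tokens * rate

# A wallet staking 10,000 tokens at 5.64% would earn roughly 564 tokens per year.
reward = annual_staking_reward(10_000)
```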
Draper Associates Approval
We were recognized within the Draper ecosystem, underscoring that verification is a foundational problem for AI and modern information systems. And this wasn’t a one-off shoutout: Draper Associates has referenced our work in more than one context, reinforcing that the need for credible, explainable validation is increasingly seen as core infrastructure rather than a “nice-to-have.”
Telegram Bot Launch
We launched the ArAIstotle Fact-Checker on Telegram, bringing verification directly into the messaging layer where claims and links spread fastest. Instead of forcing users to leave the conversation to verify something, the bot makes fact-checking a native action: send a claim or a URL, and ArAIstotle extracts key statements, checks them, and returns structured results inside the chat. Telegram is where communities coordinate, debate, and share information in real time, so adding verification there turns truth from a separate destination into an everyday workflow.
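The flow described above, extract key statements, check each one, return structured results, can be sketched in a few lines. This is an illustrative outline, not the bot's actual code: the claim splitter is deliberately naive and `check_claim` is a placeholder for the real verification call.

```python
import re
from dataclasses import dataclass, field

@dataclass
class ClaimVerdict:
    claim: str
    verdict: str              # e.g. "supported", "refuted", "unverifiable"
    sources: list = field(default_factory=list)

def extract_claims(message: str) -> list:
    """Naively split an incoming message into one candidate claim per sentence."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", message) if s.strip()]

def check_claim(claim: str) -> ClaimVerdict:
    """Placeholder for the real verification backend."""
    return ClaimVerdict(claim=claim, verdict="unverifiable")

def fact_check(message: str) -> list:
    """Structured per-claim results the bot can render back into the chat."""
    return [check_claim(c) for c in extract_claims(message)]
```

The structured output is the point: each claim comes back with its own verdict and sources, so the chat reply stays legible even when a message bundles several claims together.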
Events
NTT Startup Challenge
On September 27th, 2025, we were selected as a Top 20 finalist for the NTT Startup Challenge 2025, advancing from thousands of applicants. That mattered because it wasn’t a crypto room judging us; it was an innovation funnel evaluating whether verification-as-infrastructure is a real category with real demand.
TOKEN2049 Singapore
TOKEN2049 was where we took ArAIstotle into the global arena of crypto and AI builders and made the case that verification is not optional in AI-driven information markets. We publicly tested our truth-detection tech for the first time, and it was a great success. The theme was simple: if information is going to move at internet speed, verification must as well, without becoming centralized censorship or slow manual review. We used the event to show how decentralized verification can be structured, incentive-aligned, and usable.
GITEX Global
On October 16th, we showcased the verification stack to enterprise and government stakeholders at GITEX Dubai, and Dennis spoke publicly about real-time fact-checking. It pushed the product conversation beyond social-media misinformation into operational trust: organizations need verification outputs that are structured, auditable, and deployable in workflows where the cost of being wrong is high.
TechCrunch Disrupt 2025: Startup Battlefield 200
AI Seer was selected for the TechCrunch Disrupt 2025 Startup Battlefield 200, placing us in the top 3–4% of startups globally in that cohort. We spent the week meeting founders, investors, and builders, and made high-leverage connections around knowledge infrastructure, most notably conversations with the Internet Archive, which maintains one of the world’s largest repositories of articles, books, and research materials.
Singapore Management University
We presented at Singapore Management University, engaging academic and policy audiences on AI verification and trust. The panel discussion pushed the conversation toward rigorous standards: how verification should be evaluated, how provenance should be treated, and how public institutions think about trust layers as infrastructure rather than features.
Qualcomm AI Program for Innovators
On December 5th, we participated in Qualcomm’s AI Program for Innovators Demo Day in Seoul, pitching and showcasing our enterprise software, Facticity Edge. We were selected as one of the top five AI projects from Singapore for this program. This was a strong signal that the work isn’t only relevant to public social platforms; it also maps to enterprise needs, where verification, compliance, and reliability become core requirements, not nice-to-haves.
None of this scales without the community that actually uses the system, pushes edge cases, and forces the product to get better. Throughout 2025, community participation, fact-checks, feedback loops, staking, and public advocacy, directly shaped releases and helped turn ArAIstotle from a tool into a living verification layer.
If you believe verification should be the default layer in AI, not an optional extra, tag @ArAIstotle when you see claims worth checking, share this recap, and help push truth from “a feature” into infrastructure.
Happy Fact-Checking
AI Seer Team