Digital trust intelligence for the AI-generated internet.
An AI-powered credibility engine that helps users evaluate links, articles, reviews, profiles, and media for possible misinformation, scams, fake expertise, fake reviews, fake identities, deepfake risk, and AI-generated manipulation.
TrustLens AI was created by Aziz Firdaus as part of a broader vision: AI for Public Infrastructure & Digital Trust. The platform explores how artificial intelligence can help people, institutions, and communities evaluate online content more safely in an era of scams, synthetic media, fake expertise, misinformation, and AI-generated manipulation.
Founder + systems thinker + public-impact innovator.

TrustLens AI scores warning patterns while avoiding certainty claims. Results are educational signals meant to prompt deeper human verification.
Estimates whether text may be AI-generated using writing-pattern analysis, repeated phrasing, and low-specificity signals.
Extracts risky claims and estimates possible misinformation indicators, vague sourcing, and unsupported certainty language.
Checks urgency, fake authority, guaranteed profit wording, phishing language, impersonation, and suspicious payment requests.
Detects repeated wording, unnatural praise, generic testimonials, suspicious sentiment patterns, and AI-generated review style.
Flags unverifiable credentials, vague institution claims, exaggerated expertise, suspicious affiliations, and credibility gaps.
Planned image upload, metadata/provenance analysis, and deepfake risk estimation for manipulated media review.
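As a toy illustration of the scam checker's pressure-wording signals listed above (urgency, guaranteed-profit claims, suspicious payment requests), a simple keyword-pattern pass could look like the sketch below. The patterns and function names are assumptions for illustration only, not TrustLens AI's actual detector.

```python
import re

# Hypothetical signal patterns; TrustLens AI's real analyzers are not public.
SCAM_PATTERNS = {
    "urgency": r"\b(act now|limited time|expires (today|soon)|last chance)\b",
    "guaranteed_profit": r"\b(guaranteed (returns?|profits?)|risk[- ]free|double your money)\b",
    "payment_pressure": r"\b(wire transfer|gift cards?|crypto(currency)? only|upfront fee)\b",
}

def scam_signals(text: str) -> dict:
    """Count case-insensitive matches for each scam-pressure pattern."""
    lowered = text.lower()
    return {name: len(re.findall(pattern, lowered))
            for name, pattern in SCAM_PATTERNS.items()}

print(scam_signals(
    "Act now! Guaranteed returns, risk-free. Pay the upfront fee by wire transfer."
))
# → {'urgency': 1, 'guaranteed_profit': 2, 'payment_pressure': 2}
```

Raw match counts like these are one reason outputs stay probabilistic: legitimate text can trip the same patterns, so counts are a signal to investigate, not a verdict.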
Load safe sample content into the relevant analyzer to see how TrustLens AI presents risk signals, possible indicators, and verification guidance.
Analyze an urgent investment offer claiming guaranteed returns.
Check whether a profile uses vague credentials, inflated authority, or unverifiable claims.
Evaluate a viral post with emotional claims and missing evidence.
Detect repeated wording, generic praise, or suspicious AI-like testimonials.
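One low-specificity way to surface "repeated wording" across a batch of reviews is to measure how many word n-grams recur in more than one review. This is a hypothetical sketch of that idea, not the platform's scoring method:

```python
from collections import Counter

def repeated_phrase_score(reviews: list, n: int = 3) -> float:
    """Fraction of distinct word n-grams that appear in more than one
    review — a crude repeated-wording signal for illustration only."""
    per_review_ngrams = []
    for review in reviews:
        words = review.lower().split()
        # Deduplicate within a review so a phrase counts once per review.
        per_review_ngrams.append(
            {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
        )
    counts = Counter(ng for ngrams in per_review_ngrams for ng in ngrams)
    if not counts:
        return 0.0
    shared = sum(1 for c in counts.values() if c > 1)
    return shared / len(counts)
```

A high score suggests templated or copy-pasted testimonials; a low score is expected for independently written reviews. As with all such signals, it is an indicator for human review, not proof of fakery.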
Choose a checker, paste content, and receive structured scores, warning signs, verification steps, a safer interpretation, and an educational disclaimer.
Submit content to generate scores, warning signs, safer interpretation, and verification steps.
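The structured result described above might resemble the following shape. Every field name and value here is an assumption for illustration; TrustLens AI's actual response schema is not published.

```python
# Illustrative result shape only — field names are assumptions, not the platform's API.
sample_result = {
    "risk_score": 62,            # 0-100, maps to the published risk bands
    "risk_band": "High Risk",
    "warning_signs": [
        "Unsupported certainty language",
        "Urgency pressure wording",
    ],
    "safer_interpretation": "Treat the claim as unverified until corroborated.",
    "verification_steps": [
        "Compare against official records or primary sources.",
        "Consult a qualified expert before acting.",
    ],
    "disclaimer": "Educational signal only; not a determination of truth.",
}
```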
TrustLens AI is an applied digital-trust experiment focused on misinformation risk, AI-generated content signals, scam patterns, review authenticity, and online credibility indicators.
AI-generated content, synthetic media, impersonation, and high-speed misinformation can make online decisions harder. TrustLens AI explores practical risk signals that support safer human verification.
The platform estimates AI-likelihood, possible misinformation indicators, scam pressure, suspicious review language, profile credibility concerns, source credibility, and future media provenance signals.
Outputs are probabilistic, transparent, cautious, and educational. The system avoids certainty claims and encourages comparison with trusted sources, official records, and qualified experts.
Risk scores can miss context and may produce false positives or false negatives. TrustLens AI does not determine truth, identity, fraud, authorship, or authenticity with certainty.
Risk scores fall into four bands: 0-25 Low Risk, 26-50 Moderate Risk, 51-75 High Risk, and 76-100 Critical Risk.
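The published bands reduce to a simple threshold mapping; a minimal sketch, assuming integer scores in the 0-100 range:

```python
def risk_band(score: int) -> str:
    """Map a 0-100 risk score to TrustLens AI's published risk bands."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score <= 25:
        return "Low Risk"
    if score <= 50:
        return "Moderate Risk"
    if score <= 75:
        return "High Risk"
    return "Critical Risk"

print(risk_band(62))  # → High Risk
```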
TrustLens AI is not a fact-checking authority. It provides credibility signals that require trusted human verification.
AI detection, scam analysis, review scoring, and profile checks can produce false positives and false negatives.
Image upload, provenance metadata, and deepfake risk analysis are planned and marked as Coming Soon.