AI-Rating in Real-Time: Why Are We Not More Concerned?
- Jaymie Johns
- Jan 26
- 5 min read
On January 14, 2026, a relatively unheard-of company, Datavault AI, gave us a closer look at its latest innovation: a real-time broadcast “bias-checker”.
While it may sound appealing on the surface (finally, a chance to get rid of the “fake news” that peaked in recent years), a closer look makes it evident that this technology is not only severely flawed but immensely dangerous. Examining the methods and implications, it becomes clear that this is one frightening step toward an Orwellian groupthink system.
What is AI Rating Technology?

The name is inherently misleading: one would assume “AI Rating” means an AI system trained on innumerable datasets would scan available information and compare a claim against those datasets to determine what is most likely true. In reality, Datavault AI is leaving it up to the users.
When reading the patent, one finds it blatantly stated that “The user may also rate the truthfulness of the content from 0% true (or false) to 100% true (or completely true). In one embodiment, a sliding scale may be utilized to select the truthfulness of the content as determine[d] by the user.”
In essence, this means that the viewers of the program are deciding what qualifies as truth, rather than the actual truth. Fact by popular vote.
Further on in the patent text, the methodology is revealed to be even more insidious: “In some cases, users may be given added weight because of past successful history in categorizing content, education, profession, or so forth.”
The document blatantly admits that the system will give priority to the opinions of those with more education, particular professions, and so on, creating a hierarchy of voters.
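To make the mechanism concrete, here is a minimal sketch of the weighted crowd scoring the patent describes. The class names, the weight values, and the default for an unrated claim are my own illustration, not anything from Datavault’s actual system:

```python
from dataclasses import dataclass

@dataclass
class Rating:
    score: float   # the patent's sliding scale: 0.0 (false) to 1.0 (completely true)
    weight: float  # multiplier for past history, education, profession (per the patent)

def crowd_truth_score(ratings: list[Rating]) -> float:
    """Weighted average of user ratings: 'truth' is whatever the weighted crowd says."""
    total_weight = sum(r.weight for r in ratings)
    if total_weight == 0:
        return 0.5  # assumption: no votes means 'unknown'
    return sum(r.score * r.weight for r in ratings) / total_weight

# A credentialed voter (weight 3.0) counts three times as much as a layperson (1.0).
ratings = [Rating(0.2, 1.0), Rating(0.9, 3.0), Rating(0.4, 1.0)]
print(f"{crowd_truth_score(ratings):.2f}")  # 0.66, versus an unweighted mean of 0.50
```

Note how the single high-weight vote drags the result from 0.50 to 0.66. The hierarchy is not a side effect; it is the point.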
Projections / Expansion

The press release from January 14 read: “The AI driven bias meter and ADIO®-enabled polling will launch in Fintech TV's programming pilot season, which runs from January through April with plans for broader rollout to enhance global fintech discourse, viewership and interaction.”
"Broader rollout" directly implies scaling beyond the pilot, meaning these “bias-checkers” not only could be implemented in areas outside of the cryptocurrency and fintech industry, but that it is intended to be.

The press release also stated: “…monetization across industries including sports & entertainment, biotech, education, fintech, real estate, healthcare, and energy.” Datavault isn’t attempting to hide the fact that they intend to expand into other industries; they’re hoping we don’t notice that we’re living in The Orville’s ‘Majority Rule’ episode, where people vote on what medicines to take, until we’re already three flu seasons in.
Fintech
Fintech is technology focused on monetary applications such as banking and payments (Venmo, CashApp, etc.), typically aiming to cut costs and maximize the fintech company’s profit.
Datavault AI is a fintech company focused on cryptocurrency and secure monetary transactions. So why does the patent behind their Fintech TV “bias-checker” contain 126 mentions of political terminology: “liberal” (35), “conservative” (34), “political” (21), “far left” (3), “far right” (3), and dozens more?

Pairing this startling count with their stated plans for broader rollout, one can't help but conclude that Datavault AI is positioning this tool for use in politics.
The patent mentions “fake news” 7 times, always as a category of unreliable content that needs crowd polling for verification. For a supposed cryptocurrency bias detector, this is another sign the tool was built for general political news, not financial purposes.
The text even spells out how it wants to visually reinforce politics:
"For example, content perceived as having a liberal bias may be shown under a liberal heading, in blue, on the left-hand side of a webpage whereas content perceived as having a conservative bias may be shown under a conservative heading in red, on the right-hand side of a webpage."
A 'bias-checker' for cryptocurrency news has zero reason to default to dividing everything into left/liberal (blue) and right/conservative (red) sides of the screen. This is a damning admission directly from the patent itself.
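For illustration, the layout rule in that quoted passage reduces to a trivial lookup. This sketch is my own reading of the quote, not code from the patent; the neutral fallback is my assumption:

```python
# The patent's described placement: liberal content under a blue heading on the
# left of the page, conservative content under a red heading on the right.
LAYOUT = {
    "liberal":      {"side": "left",  "color": "blue"},
    "conservative": {"side": "right", "color": "red"},
}

def place(headline: str, perceived_bias: str) -> str:
    # Assumption: content without a left/right label lands in a neutral center slot.
    slot = LAYOUT.get(perceived_bias, {"side": "center", "color": "gray"})
    return f"[{slot['side']:>6} | {slot['color']:>4}] {headline}"

print(place("Fed rate cut looms", "liberal"))        # [  left | blue] ...
print(place("Crypto ETF approved", "conservative"))  # [ right |  red] ...
```

Every piece of content is forced into one of two tribal camps before a single viewer has voted.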
Seamless or Surreptitious?
Here’s where the plot thickens to become gelatinous:
This “financial technology” bias-checker will prompt users’ phones to display a rating question via inaudible sounds. The press release stated: “Embedded in Fintech TV broadcasts, these inaudible audio signals trigger mobile quick responses on viewers' devices, allowing real-time polls, feedback, and engagement without interrupting the viewing experience.”
This technology (ADIO® Inaudible Tones®, patented by Datavault AI themselves) is nefarious to its core: send out a frequency that only devices can detect, prompting a survey on users’ phones without the users ever realizing what triggered it.
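The press release doesn’t document how ADIO® tones are encoded, but the general technique (ultrasonic audio beacons) is well established. Here is a rough sketch of how a poll trigger could be embedded in a broadcast as a near-ultrasonic signal; the carrier frequencies, bit scheme, and file name are my assumptions, not Datavault’s actual implementation:

```python
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 48_000   # Hz; must exceed twice the carrier frequency (Nyquist)
SYMBOL_SECONDS = 0.1   # duration of each encoded bit

def tone(freq_hz: float, seconds: float) -> np.ndarray:
    """A low-amplitude sine tone most adults cannot hear but a phone mic can detect."""
    t = np.linspace(0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    return 0.05 * np.sin(2 * np.pi * freq_hz * t)

def encode_poll_trigger(poll_id: int, n_bits: int = 8) -> np.ndarray:
    """Frequency-shift keying: one near-ultrasonic carrier per bit value."""
    carriers = {0: 18_500.0, 1: 19_500.0}  # assumed frequencies at the edge of hearing
    bits = [(poll_id >> i) & 1 for i in range(n_bits)]
    return np.concatenate([tone(carriers[b], SYMBOL_SECONDS) for b in bits])

# Mix this into the broadcast audio; a listening app decodes it and pops the poll.
signal = encode_poll_trigger(poll_id=42)
wavfile.write("poll_trigger.wav", SAMPLE_RATE, (signal * 32767).astype(np.int16))
```

A companion app continuously sampling the microphone could detect those carriers with a simple FFT and open the rating prompt: exactly the “without interrupting the viewing experience” flow the press release describes.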

Many studies over the years have shown that inaudible high-frequency sound can subtly affect mood. Oohashi et al. (2000) reported that inaudible components above roughly 22 kHz enhanced relaxation-related brain activity, leaving listeners feeling more relaxed and pleasant, while Fukushima et al. (2014) found that inaudible components below roughly 32 kHz were associated with more negative feelings and higher stress.
This raises a very serious concern: could the ADIO® tones themselves be used to sway voters, nudging them toward the viewpoint the company prefers before they even cast a rating? And if so, how could viewers ever verify that they aren’t? With Datavault AI indicating its intention to expand Fintech TV into industries well outside of finance, this adds one more layer to the pile of reasons to be skeptical, if not outright suspicious, of this technology.
The Reality
Datavault AI’s “real-time bias-checker” is not the neutral, truth-seeking tool they claim. Instead, it is a profit-driven system built on crowd-sourced subjectivity, weighted voter hierarchies, partisan red/blue sorting, and subtle inaudible audio nudges, all packaged in fintech marketing and vague promises of “responsible AI.”

The patent behind it is full of U.S. political binaries from page one. The company openly plans to scale it beyond cryptocurrency news and into healthcare, education, entertainment, and more. Datavault AI admits they will give extra power to “adept” voters based on education and profession. In addition to all of this, they will embed inaudible tones that could quietly tilt mood and responses before anyone even votes. They frame the whole thing as a defense of free speech while designing a machine that manufactures consensus and divides content into tribal camps.
They boast of transparency, yet in that very openness Datavault AI reveals that the system is not merely fundamentally flawed but intentionally so: crowd-voted reality with a corporate thumb on the scale. When a micro-cap company starts tokenizing and monetizing public judgment on truth, bias, medicine, politics, and more, while hoping we don’t notice until it’s already deployed, we’re not just facing flawed tech. We’re being led into engineered groupthink, where popularity (and profit) decides fact.
We shouldn’t be merely concerned; we should be alarmed.
And we should demand answers before votes decide truth.