
What Is an Ad Blocker: Everything You Need to Know

July 15, 2021
Written by Permission

Wherever you go on the internet, you will be bombarded with ads.

While many of them provide value, the massive amount of advertisements can easily become disturbing for users as they try to enjoy their favorite activities in the digital world.

In fact, some publishers place so many ads in their apps and websites that it prevents users from enjoying the actual content.

Fortunately, using an ad blocker is an excellent way to remain (nearly) advertisement-free on the web.

In this article, we will explore what an ad blocker is, how ad blocking works, its benefits and downsides, as well as the actual methods to prevent advertisers from ruining your online experience.

The Problem With the Current Online Advertising Landscape

Before we dive into our topic, let’s first take a look at the problem ad blockers are meant to solve: the disturbing and intrusive nature of the current online advertising landscape.

Before the age of the internet, people encountered ads mostly in newspapers, on TV, radio, and billboards.

While the average person was exposed to around 500 advertisements a day during the 1970s, this number surged to 5,000 daily by 2007.

With the rise of digital advertising, we now see an estimated 4,000-10,000 advertisements every day, which can be quite overwhelming.

And it’s no surprise.

It’s super easy for businesses to create, place, and show an ad to internet users by utilizing the advertising platforms of tech giants like Facebook and Google, which dominate the digital ad space.

Interestingly, even though the pandemic caused a huge hit to the online advertising industry, online ad spend still managed to grow by 1.7% in the US in 2020. In fact, Statista predicts worldwide digital advertising revenues to surge from 2019’s $333.8 billion to $491.1 billion by 2025 with a 5.67% Compound Annual Growth Rate (CAGR).

Due to the current nature of the online advertising landscape, consumers have developed strategies and mechanisms to cope with the massive amount of ads they encounter.

Much to the chagrin of advertisers, consumers are interacting with fewer and fewer ads, decreasing advertisers’ return on investment (ROI) and making each advertising dollar less profitable.

Consumers can hardly be blamed for wanting to distance themselves from online ads; indeed, they face numerous issues with the current state of digital advertising, including:

  1. Disturbing user experience: Imagine being bombarded by ads while reading this article with banners placed on the sides, the top, and the bottom of the page, as well as after every second or third paragraph. And, to make it extra annoying, imagine having to avoid the dreaded pop-up, which websites increasingly use despite the fact that pop-ups irritate about 73% of internet users, according to a HubSpot survey.
  2. Interrupting primary activities: In addition to being annoying, some ads even interrupt internet users’ primary activities. Take YouTube ads before videos or promotions in smartphone games as examples, which keep consumers waiting before they can access the publishers’ content.
  3. Slowdowns and battery drainage: While trackers and banners are being loaded, large numbers of unoptimized ads can appear and take their toll on applications’ and websites’ performance. According to an Opera study, a website is 51% slower on average when ads are displayed than with blocked advertisements. In addition to performance issues, the firm revealed that ads can also drain the battery life of devices by nearly 13%.
  4. Security concerns: In the past few years, malvertising has become a real issue for internet users. Malvertising refers to spreading malware and viruses via online advertising and was responsible for roughly 1% of the ad impressions in May 2019. In addition to infecting user devices, cybercriminals also use digital advertising to attract victims with fraudulent schemes.
  5. Lack of privacy: Have you ever visited an ecommerce store without making a purchase just to later encounter the same business’ ads on social media? Besides advertisements, most websites and platforms on the internet use web trackers to collect, store, and share data about the visitors’ online activities. As a result of exchanging user information with third parties, advertising networks know everything about you, diminishing your privacy on the web.

What Is an Ad Blocker?

An ad blocker is a software solution that prevents advertisements from being displayed while the user browses the web or uses an application.

Contrary to their name, most ad blockers do not remove advertisements after they appear. Instead, they prevent ads from downloading in the first place by blocking browser requests for advertising-related content.

As a result, users can enjoy a mostly ad-free experience with enhanced security, privacy, and device performance without being exposed to intrusive content.

How Do Ad Blockers Work?

Ad blocking software solutions use filter lists of URL patterns to identify and block advertisement-related content on websites and in applications.

In web browsers, ad blockers work in the following way:

  1. Upon visiting a website, the ad blocker checks its filter list to see whether the site is included.
  2. If the search returns positive results, the ad blocking software blocks requests to external content, which prevents the advertisement from getting downloaded and shown on the page.

Instead of completely disabling the request, other ad blocking services replace the advertising content with something else after identifying it.

No matter the method used to disable advertisements, filter lists play a key role in the ad blocking process.

For that reason, filter lists are regularly maintained and updated both by their creators and by independent third-party communities.

Most ad blockers allow users to whitelist the ads of different websites, services, and applications. By doing so, they can support the creators, prevent possible ad blocking-related page issues, or unlock the content of publishers that use ad block walls.
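The decision logic described above — check the filter list, but respect the user’s whitelist — can be sketched in a few lines of Python. The patterns and domains below are illustrative placeholders, not entries from a real filter list:

```python
# Minimal sketch of an ad blocker's request decision. Real filter lists
# (e.g., EasyList) contain tens of thousands of rules with a richer
# matching syntax; simple substring matching stands in for that here.

BLOCKLIST = ["doubleclick.net", "/ads/", "ad-banner"]  # patterns to block
WHITELIST = ["news.example.com"]                       # user-whitelisted sites

def should_block(request_url: str, page_domain: str) -> bool:
    """Return True if the outgoing request should be blocked."""
    if page_domain in WHITELIST:
        # The user chose to support this site, so its ads are allowed.
        return False
    return any(pattern in request_url for pattern in BLOCKLIST)

print(should_block("https://doubleclick.net/pixel", "blog.example.org"))   # True
print(should_block("https://doubleclick.net/pixel", "news.example.com"))   # False
```

When a request is blocked this way, the ad never downloads — which is also why the performance and battery benefits described later in this article follow naturally from blocking.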

Many ad blockers provide protection against all kinds of intrusive content, such as advertisements, malware, and web trackers, across many applications, web browsers, and devices.

At the same time, some ad blocking software solutions can only disable unwanted content in specific apps and devices.

How Do Ad Blockers Make Money?

Not all ad blockers are created equal, as some are more effective in providing a distraction-free experience to users than others.

An excellent way to determine an ad blocking solution’s efficiency is by examining the service provider’s business model.

Ad blockers can make money in multiple ways, with the most popular methods including:

  1. Free: Some ad blockers function as open-source apps that are available to users for free. While they are maintained by the community, free solutions often finance their development via donations. A good example of this business model is uBlock Origin. However, strangely enough, the project’s creator doesn’t accept any donations. Instead, the developer decided not to create a dedicated website or a forum for the ad blocker in order to cut the maintenance costs.
  2. Paid service: Instead of resorting to the community’s help and donations, many ad blocking software solutions offer services in exchange for an upfront payment or a subscription fee.
  3. Freemium: Freemium is a popular business model among software solutions, and multiple ad blockers use it. Here, users can utilize only a part of the features for free. Optionally, they can pay a one-time or monthly subscription fee to access more powerful ad blocking functions.
  4. Whitelisting some of the ads: Certain ad blockers use a rather controversial business model. While they offer their services to users for free, the creators whitelist a share of ads that allegedly follow “acceptable” advertising practices. In some cases, ad blockers even offer whitelisting to advertisers in exchange for a payment or a cut of their ad revenue.

Why Do You Need an Ad Blocker and What Benefits Does it Offer?

An ad blocker is a must-have for those who want to enjoy an advertisement-free experience in the digital space.

Ad blockers offer the following benefits to users:

  1. Improved user experience: Users can achieve a distraction- and intrusion-free experience on the internet by using an ad blocker to eliminate advertisements. As a result, they can conveniently browse the web or use their favorite apps without being bombarded by advertisers’ offers.
  2. Enhanced security and privacy: Most ad blockers have built-in features to detect, spot, and block malicious and fraudulent ads on the internet. In addition to eliminating malvertising, users can also enjoy increased privacy by disabling web trackers. Furthermore, ad blocking is an excellent way to protect children from inappropriate advertisements on the web.
  3. Faster page load times: Ad blockers can improve browsing speed and application performance by getting rid of bloated and unoptimized ads. While users have access to a more seamless experience, businesses can also benefit from the lower bounce rates achieved via faster page load times.
  4. Optimized battery life and mobile data usage: As ads take a heavy toll on device batteries, preventing them from loading can save energy and achieve better runtimes. The lack of ads can also reduce the mobile data usage of consumers.
  5. Potential savings on impulse shopping: By eliminating ads, the number of offers internet users encounter will significantly decrease. They can take advantage of this to reduce their unnecessary expenses, refrain from impulse purchases, increase their savings, and limit their chances of getting targeted by fraudulent companies.

What Are the Possible Downsides and Limitations of Ad Blocking?

As with every software, ad blockers have some limitations and downsides, such as:

  1. Broken content: Since ad blockers disable all advertising- and tracking-related content on a website, they can occasionally cause an unwanted experience. When the blocker disables an important request, the site may render broken content to the user.
  2. Inability to access content: As some publishers still utilize ad block walls, refusing to disable ad blocking software can prevent users from accessing certain apps and websites.
  3. Can’t block all ads: Unfortunately, although they can block most advertisements, ad blockers can’t provide a fully ad-free experience to users. Furthermore, the number of unblocked advertisements increases for ad blockers that adopt the whitelisting business model.
  4. Lack of support for content creators: Some content creators use digital advertising exclusively to monetize their solutions. Using an ad blocker without whitelisting them could reduce their revenue and limit their growth.

How Does Ad Blocking Impact Publishers and Advertisers?

According to Statista, ad blocking penetration was expected to surge from 2014’s 15.7% to 27% by 2021 in the United States.

In fact, ad blocking solutions were adopted much faster, with the technology’s penetration reaching 27% by February 2018 among US users.

Since many users are blocking ads on their devices, it has a major impact on advertising networks and businesses.

The good news for advertisers is that they don’t have to pay a dime for advertisements targeting ad block users (as they don’t get shown at all).

However, as a significant share of consumers have opted out of receiving ads from advertisers on the internet, this also means that businesses have a smaller audience to target.

At the same time, publishers are hit harder by ad blocking tech, as a portion of their visitors won’t see or interact with the ads displayed on their platforms, causing revenue losses for the firms.

However, ad blocking impacts giant digital advertising networks (e.g., Facebook Ads, Google Ads) the most.

The more users install ad blockers, the fewer impressions and interactions advertisements get, decreasing the revenue networks make by connecting publishers and advertisers.

Countermeasures From Publishers

As ad blocking poses a significant threat to the current state of the digital advertising industry, many publishers and networks have taken steps against ad blocking solutions.

One of the most popular ways publishers reclaim lost revenue is to automatically detect ad blockers when users visit their websites.

When an ad blocker is detected, a publisher may display a message asking the user to disable the software.

Others have taken a harsher stance by installing an ad block wall that denies access to the site’s content until the user disables their ad blocking software.

While the latter method seemed to work initially, researchers discovered that 74% of users would leave a website with an ad block wall set up.

Due to these methods’ lack of success, businesses have joined initiatives like Acceptable Ads and the Coalition for Better Ads that require both publishers and advertisers to apply a variety of pro-consumer and user-friendly digital advertising standards.

By showing only heavily optimized ads to users, publishers participating in these initiatives can get their advertisements whitelisted by ad blockers that support the programs.

What Are the Best Methods to Block Ads?

In this section, we have collected the best methods you can use to block ads in the digital world.

Let’s see them!

1. Browser Extensions

Examples: uBlock Origin, Adblock Plus, AdBlock

One of the most popular ways to block ads is by installing the software via a browser extension.

Here, the user visits their browser’s add-on store and installs the ad blocker as a free extension.

Upon successful installation, the ad block browser extension will screen content for trackers, advertisements, and malware. After applying the filter lists, the ad blocker tells the browser whether to allow or disable an element.

Based on the rules the solution uses, it can leave whitespace where the ad would be normally displayed, replace it with other content, or just simply hide the element.
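To make the mechanics above concrete, here is roughly what rules look like in the widely used Adblock Plus/EasyList filter syntax; the domains and class names are illustrative placeholders, not entries from a real list:

```text
! Network rule: block requests to this ad server
||ads.example.com^
! Cosmetic rule: hide page elements with the CSS class "ad-banner"
##.ad-banner
! Exception rule: allow requests matching this pattern (used for whitelisting)
@@||example.com/banner^
```

A network rule prevents the ad from being downloaded at all, while a cosmetic rule merely hides an element that would otherwise be displayed — which corresponds to the whitespace-versus-hiding behavior described above.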

As a result, users can get rid of most ads while surfing the web via the browser where the ad blocker extension is installed.

On the other hand, since it is a browser extension, the ad blocker cannot access or block unwanted content in other apps installed on the device.

2. Ad Block Browsers

Examples: Opera, Brave, Firefox Focus

Ad block browsers are internet browsers with built-in ad blocking capabilities.

They work very similarly to ad block browser extensions as they can effectively disable advertisements on the web.

Besides sparing users the hassle of installing a separate extension, browsers with built-in ad blocking features are often well-optimized and perform better than extensions.

It’s also important to mention privacy browsers. Instead of blocking ads, these solutions disable web trackers to ensure a high privacy level for users.

3. Mobile Ad Blockers

Examples: Wipr, 1Blocker, Blokada, AdAway

According to Statcounter, mobile devices accounted for 55% of internet traffic in April 2023, compared to desktop’s 45%.

With smartphone devices taking the lead, it shouldn’t come as a surprise that mobile ad blocking has become popular among users.

In fact, active mobile ad block users grew to 527 million by Q4 2019, compared to 236 million on desktop, according to PageFair’s 2020 AdBlock Report.

In addition to the web, mobile users encounter many ads within the apps they have installed on their devices.

For that reason, they can install an ad blocker for iOS or Android to disable ads both on the web and in applications.

As a side note, since browsers do not support extensions on some mobile devices, ad block browsers have become increasingly popular on smartphones.

If you want to learn more about mobile ad blocking, we recommend taking a look at the following Permission.io articles where we compared the best iOS and Android ad blockers.

4. Cross-Device Ad Blockers

Examples: AdGuard, NextDNS

Some ad blocking solutions offer protection against advertisements across multiple devices.

As a result, users can access apps and browser extensions on desktops, tablets, and smartphones to get rid of unwanted content with a single solution.

With a package of apps and browser extensions, cross-device ad blockers utilize various methods to eliminate advertisements.

On the flip side, cross-device ad blocking is usually a paid service with no free tier.

5. DNS Filtering

Examples: AdGuard DNS, DNSCloak

An effective method to block advertisements is via DNS filtering.

DNS stands for Domain Name System, the system responsible for matching domain names with IP addresses, allowing users to access content on the web without remembering long strings of numbers.

The process works much like calling a friend: instead of memorizing their number, you have it saved in your smartphone contacts and can call them with a single tap. This is why DNS is often referred to as the “address book” of the internet.

With DNS filtering, the user connects to a DNS server configured to block IP addresses and domain names known to serve ads. In addition to advertisements, DNS filtering can also protect users from web trackers and malicious content.

When an app or a website sends an unwanted request, the modified DNS server refuses to reply with an IP address and instead sends a null response.

Similarly to browser extension-based ad blockers, the DNS filtering method also uses blocklists to identify and disable undesirable content. For that reason, the service provider must update the filter lists often to prevent advertisers from bypassing the DNS server.

Since DNS filtering blocks all unwanted requests coming from the web, this method can effectively provide system-wide protection against ads to internet users.
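The null-response behavior described above can be sketched in a few lines of Python; the blocked domains and the upstream lookup are illustrative stand-ins, not a real resolver:

```python
# Toy sketch of DNS-level ad blocking: return an unroutable "null" address
# for blocked domains instead of resolving them normally.

BLOCKED_DOMAINS = {"ads.example.com", "tracker.example.net"}

def upstream_lookup(domain: str) -> str:
    # Stand-in for a real recursive DNS query to an upstream server.
    return "93.184.216.34"

def resolve(domain: str) -> str:
    """Return an IP for `domain`, or a null address if it is blocklisted."""
    if domain in BLOCKED_DOMAINS or any(
        domain.endswith("." + blocked) for blocked in BLOCKED_DOMAINS
    ):
        return "0.0.0.0"  # null response: the ad content never loads
    return upstream_lookup(domain)  # normal resolution for everything else

print(resolve("ads.example.com"))  # 0.0.0.0
print(resolve("example.org"))      # 93.184.216.34
```

Because the block happens at name resolution, every app on the device (or network) that uses the filtering DNS server is protected — no per-browser extension required.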

6. VPN

Examples: NordVPN, Surfshark

Virtual Private Networks (VPNs) are popular tools that allow users to disguise their online identity and encrypt their internet traffic.

To achieve this, the VPN routes the user’s traffic through a remote server operated by the VPN host, masking the user’s real IP address.

As a result, the VPN server appears as the source of the user’s data, helping to hide what they send and receive online from Internet Service Providers (ISPs) and other third parties.

Since VPNs allow users to connect to servers in numerous countries and locations, they can use such solutions to bypass geo-blocks and access regional content on the web.

In addition to all the above, multiple VPN solutions feature built-in ad blocking to eliminate malware, trackers, and online advertisements.

While this method works similarly to DNS filtering, VPN ad blockers offer a one-stop solution for eliminating unwanted content in browsers and apps across all devices connected to the VPN.

However, for ad blocking to work, the user’s devices have to be continuously connected to the VPN network.

For that reason, it’s essential to test the performance of the VPN solution to avoid traffic-related issues and ensure a seamless user experience.

7. Hardware Devices

Examples: Pi-Hole

In the above sections, we have introduced software-based solutions to block advertisements on the web and in applications.

Now let’s examine a method that uses a hardware device for the same purpose.

Currently, the only viable hardware ad blocker on the market is called the Pi-Hole, which uses a Raspberry Pi to block advertisements on the network level.

For that, users have to configure the Raspberry Pi as a Pi-Hole, setting up a local DNS server that filters all content coming through the network and disables requests related to malware, advertisements, or web tracking.

Interestingly, the Pi-Hole replaces any pre-existing DNS server (including the ISP’s) on the user’s network with its own, allowing the device to block ads on devices like smart TVs that software-based ad blockers normally can’t reach.

While Pi-Hole is a free and open-source ad blocking solution, users have to purchase the necessary kit (e.g., a Raspberry Pi or another compatible device) to protect their networks against advertisements.

Also, since users have to manually configure the device, setting up a Pi-Hole requires at least some technical knowledge.

In terms of ad blocking, the Pi-Hole’s protection against unwanted content only works while the user is connected to the network where the device is installed (typically their home network).

Ad Blocking: The Key to a Distraction-Free Internet

Ad blocking is an excellent way to disable annoying advertisements, intrusive trackers, and malicious content. As a result of ad blocking, you can have a seamless, distraction-free experience while browsing the web or using your favorite apps on your device.

Additionally, ad blockers also improve your device’s performance, enhance your privacy and security, as well as limit your data and battery usage.

With that said, consider supporting your favorite content creators by whitelisting their sites in your ad blocking solution.

An Alternative Solution to Preserving Privacy and Security

Meet Permission, the next-generation, blockchain-powered advertising platform that allows users to decide whether and how businesses can interact with their data and target them with ads.

In exchange for consenting to view an ad (which involves volunteering their time and data), users get rewarded in ASK cryptocurrency for engaging with advertisers and participating in their campaigns on Permission.

Users are free to hold, transfer, exchange, or spend their ASK rewards directly at Permission.io’s REDEEM store. Instead of forcing people to view their offers, advertisers on the Permission.io platform display relevant, personalized content exclusively to users who have given their permission to do so.

As a result, advertisers will experience increased engagement and ROI while building fruitful, long-term relationships with a loyal customer base.

Create an account at Permission!



California’s SB 243 and the Future of AI Chatbot Safety for Kids

Nov 21st, 2025

As a mom in San Diego, and someone who works at the intersection of technology, safety, and ethics, I was encouraged to see Governor Gavin Newsom sign Senate Bill 243, California’s first-in-the-nation law regulating companion chatbots. Authored by San Diego’s own Senator Steve Padilla, SB 243 is a landmark step toward ensuring that AI systems interacting with our children are held to basic standards of transparency, responsibility, and care.

This law matters deeply for families like mine. AI is no longer an abstract technological concept; it’s becoming woven into daily life, shaping how young people learn, socialize, ask questions, and seek comfort. And while many AI tools can provide meaningful support, recent tragedies - including the heartbreaking case of a 14-year-old boy whose AI “companion” failed to recognize or respond to signs of suicidal distress - make clear that these systems are not yet equipped to handle emotional vulnerability.

SB 243 sets the first layer of guardrails for a rapidly evolving landscape. But it is only the beginning of a broader shift, one that every parent, policymaker, and technology developer needs to understand.

Why Chatbots Captured Lawmakers’ Attention

AI “companions” are not simple customer-service bots. They simulate empathy, develop personalities, and sustain ongoing conversations that can resemble friendships or even relationships. And they are widely used: nearly 72% of teens have engaged with an AI companion. Early research, including a Stanford study finding that 3% of young adults credited chatbot interactions with interrupting suicidal thoughts, shows their complexity.

But the darker side has generated national attention. Multiple high-profile cases - including lawsuits involving minors who died by suicide after chatbot interactions - prompted congressional hearings, FTC investigations, and testimony from parents who had lost their children. Many of these parents later appeared before state legislatures, including California’s, urging lawmakers to put protections in place.

This context shaped 2025 as the first year in which multiple states introduced or enacted laws specifically targeting companion chatbots, including Utah, Maine, New York, and California. The Future of Privacy Forum’s analysis of these trends can be found in their State AI Report (2025).

SB 243 stands out among these efforts because it explicitly focuses on youth safety, reflecting growing recognition that minors engage with conversational AI in ways that can blur boundaries and amplify emotional risks.

SB 243 Explained: What California Now Requires

SB 243 introduces a framework of disclosures, safety protocols, and youth-focused safeguards. It also grants individuals a private right of action, which has drawn significant attention from technologists and legal experts.

1. What Counts as a “Companion Chatbot”

SB 243 defines a companion chatbot as an AI system designed to:

  • provide adaptive, human-like responses
  • meet social or emotional needs
  • exhibit anthropomorphic features
  • sustain a relationship across multiple interactions

Excluded from the definition are bots used solely for:

  • customer service
  • internal operations
  • research
  • video games that do not discuss mental health, self-harm, or explicit content
  • standalone consumer devices like voice-activated assistants

But even with exclusions, interpretation will be tricky. Does a bot that repeatedly interacts with a customer constitute a “relationship”? What about general-purpose AI systems used for entertainment? SB 243 will require careful legal interpretation as it rolls out.

2. Key Requirements Under SB 243

A. Disclosure Requirements

Operators must provide:

  • Clear and conspicuous notice that the user is interacting with AI
  • Notice that companion chatbots may not be suitable for minors

Disclosure is required when a reasonable person might think they’re talking to a human.

B. Crisis-Response Safety Protocols

Operators must:

  • Prevent generation of content related to suicidal ideation or self-harm
  • Redirect users to crisis helplines
  • Publicly publish their safety protocols
  • Submit annual, non-identifiable reports on crisis referrals to the California Office of Suicide Prevention

C. Minor-Specific Safeguards

When an operator knows a user is a minor, SB 243 requires:

  • AI disclosure at the start of the interaction
  • A reminder every 3 hours for the minor to take a break
  • “Reasonable steps” to prevent sexual or sexually suggestive content

This intersects with California’s new age assurance bill, AB 1043, and creates questions about how operators will determine who is a minor without violating privacy or collecting unnecessary personal information.

D. Private Right of Action

Individuals may sue for:

  • At least $1,000 in damages
  • Injunctive relief
  • Attorney’s fees

This provision gives SB 243 real teeth, and real risks for companies that fail to comply.

How SB 243 Fits Into the Broader U.S. Landscape

While California is the first state to enact youth-focused chatbot protections, it is part of a larger legislative wave.

1. Disclosure Requirements Across States

In 2025, six of seven major chatbot bills across the U.S. required disclosure. But states differ in timing and frequency:

  • New York (Artificial Intelligence Companion Models law): disclosure at the start of every session and every 3 hours
  • California (SB 243): 3-hour reminders only when the operator knows the user is a minor
  • Maine (LD 1727): disclosure required but not time-specified
  • Utah (H.B. 452): disclosure before chatbot features are accessed or upon user request

Disclosure has emerged as the baseline governance mechanism: relatively easy to implement, highly visible, and minimally disruptive to innovation.

Of note, Governor Newsom previously vetoed AB 1064, a more restrictive bill that might have functionally banned companion chatbots for minors. His message? The goal is safety, not prohibition.

Taken together, these actions show that California prefers:

  • transparency
  • crisis protocols
  • youth notifications

…rather than outright bans.

This philosophy will likely shape legislative debates in 2026.

2. Safety Protocols & Suicide-Risk Mitigation

Only companion chatbot bills - not broader chatbot regulations - include self-harm detection and crisis-response requirements.

However, these provisions raise issues:

  • Operators may need to analyze or retain chat logs, increasing privacy risk
  • The law requires “evidence-based” detection methods, but without defining the term
  • Developers must decide what constitutes a crisis trigger

Ambiguity means compliance could differ dramatically across companies.

The Central Problem: AI That Protects Platforms, Not People

As both a parent and an AI policy advocate, I see SB 243 as progress – but also as a reflection of a deeper issue.

Laws like SB 243 are written to protect people, especially kids and vulnerable users. But the reality is that the AI systems being regulated were never designed around the needs, values, and boundaries of individual families. They were designed around the needs of platforms.

Companion chatbots today are largely engagement engines: systems optimized to keep users talking, coming back, and sharing more. A new report from Common Sense Media, Talk, Trust, and Trade-Offs: How and Why Teens Use AI Companions, found that of the 72% of U.S. teens that have used an AI companion, over half (52%) qualify as regular users - interacting a few times a month or more. A third use them specifically for social interaction and relationships, including emotional support, role-play, friendship, or romantic chats. For many teens, these systems are not a novelty; they are part of their social and emotional landscape.

That wouldn’t be inherently bad if these tools were designed with youth development and family values at the center. But they’re not. Common Sense’s risk assessment of popular AI companions like Character.AI, Nomi, and Replika concluded that these platforms pose “unacceptable risks” to users under 18, easily producing sexual content, stereotypes, and “dangerous advice that, if followed, could have life-threatening or deadly real-world impacts.” Their own terms of service often grant themselves broad, long-term rights over teens’ most intimate conversations, turning vulnerability into data.

This is where we have to be honest: disclosures and warnings alone don’t solve that mismatch. SB 243 and similar laws require “clear and conspicuous” notices that users are talking to AI, reminders every few hours to take a break, and disclaimers that chatbots may not be suitable for minors. Those are important: transparency matters. But, for a 13- or 15-year-old, a disclosure is often just another pop-up to tap through. It doesn’t change the fact that the AI is designed to be endlessly available, validating, and emotionally sticky.

The Common Sense survey shows why that matters. Among teens who use AI companions:

  • 33% have chosen to talk to an AI companion instead of a real person about something important or serious.
  • 24% have shared personal or private information, like their real name, location, or personal secrets.
  • About one-third report feeling uncomfortable with something an AI companion has said or done.

At the same time, the survey indicates that a majority still spend more time with real friends than with AI, and most say human conversations are more satisfying. That nuance is important: teens are not abandoning human relationships wholesale. But a meaningful minority are using AI as a substitute for real support in moments that matter most.

These same dynamics appear outside the world of chatbots. In our earlier analysis of Roblox’s AI moderation and youth safety challenges, we explored how large-scale platform AI struggles to distinguish between playful behavior, harmful content, and predatory intent, even as parents assume the system “will catch it.” 

This is where “AI that protects platforms, not people” comes into focus. When parents and policymakers rely on platform-run AI to “detect” risk, it can create a false sense of security – as if the system will always recognize distress, always escalate appropriately, and always act in the child’s best interest. In practice, these models are tuned to generic safety rules and engagement metrics, not to the lived context of a specific child in a specific family. They don’t know whether your teen is already in therapy, whether your family has certain cultural values, or whether a particular topic is especially triggering.

Put differently: we are asking centralized models to perform a deeply relational role they were never built to handle. And every time a disclosure banner pops up or a three-hour reminder fires, it can look like “safety” without actually addressing the core problem: the AI has quietly slipped into the space where a parent, counselor, or trusted adult should be.

The result is a structural misalignment:

  • Platforms carry legal duties and add compliance layers.
  • Teens continue to use AI companions for connection, support, and secrets.
  • Parents assume “there must be safeguards” because laws now require them.

But no law can turn a platform-centric system into a family-centric one on its own. That requires a different architecture entirely: one where AI is owned by, aligned to, and accountable to the individual or family it serves, rather than the platform that hosts it.

The Next Phase: Personal AI That Serves Individuals, Not Platforms

Policy can set guardrails, but it cannot engineer empathy.

The future of safety will require personal AI systems that:

  • are owned by individuals or families
  • understand context, values, and emotional cues
  • escalate concerns privately and appropriately
  • do not store global chat logs
  • do not generalize across millions of users
  • protect people, not corporate platforms

Imagine a world where each family has its own AI agent, trained on their communication patterns, norms, and boundaries. An AI partner that can detect distress because it knows the user, not because it is guessing from a database of millions of strangers.

This is the direction in which responsible AI is moving, and it is at the heart of our work at Permission.

What to Expect in 2026

2025 was the first year of targeted chatbot regulation. 2026 may be the year of chatbot governance.

Expect:

  • More state-level bills mirroring SB 243
  • Increased federal involvement through the proposed GUARD Act
  • Sector-specific restrictions on mental health chatbots
  • AI oversight frameworks tied to age assurance and data privacy
  • Renewed debates around bans vs. transparency-based models

States are beginning to experiment. Some will follow California’s balanced approach. Others may attempt stricter prohibitions. But all share a central concern: the emotional stakes of AI systems that feel conversational.

Closing Thoughts

As a mom here in San Diego, I’m grateful to see our state take this issue seriously. As Permission’s Chief Advocacy Officer, I also see where the next generation of protection must go. SB 243 sets the foundation, but the future will belong to AI that is personal, contextual, and accountable to the people it serves.

Project Updates

ASK Trading and Liquidity are Now Live on Base’s Leading DEX

Nov 14th, 2025

We’re excited to share that the ASK/USDC liquidity pool is now officially live on Aerodrome Finance, the premier decentralized exchange built on Base. This milestone makes it easier than ever for ASK holders to trade, swap, and provide liquidity directly within the Coinbase ecosystem.

Why This Matters

  • More access. You can now trade ASK directly through Aerodrome, Base’s premier DEX—and soon, through the Coinbase app itself, thanks to its new DEX integration.

  • More liquidity. ASK liquidity is already live in the USDC/ASK pool, strengthening accessibility for everyone.

  • More connection to real utility. As ASK continues to power the Permission ecosystem, this move brings its utility to DeFi, where liquidity meets data ownership and real demand for permissioned data.

How to Join In

  • Always confirm the official ASK contract address on Base before trading:
    0xBB146326778227A8498b105a18f84E0987A684b4
  • You can trade, provide liquidity, or simply watch the pool evolve — it’s all part of growing ASK’s footprint on Base.
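As a quick sanity check before swapping, you can also compare the address a DEX interface shows you against the official contract address programmatically. The sketch below is a minimal, standard-library-only Python example: it normalizes both addresses to lowercase and does a simple case-insensitive comparison, rather than full EIP-55 checksum validation, which a production tool would also perform.

```python
# Minimal sketch: verify a token address matches the official ASK contract on Base.
# Ethereum-style addresses are case-insensitive hex, so we normalize to lowercase.
# (A production check would additionally validate EIP-55 checksum capitalization.)

OFFICIAL_ASK_ADDRESS = "0xBB146326778227A8498b105a18f84E0987A684b4"

def is_official_ask(address: str) -> bool:
    """Return True only if `address` is a well-formed 20-byte hex address
    that matches the official ASK contract (case-insensitive)."""
    addr = address.strip()
    if not addr.startswith("0x") or len(addr) != 42:
        return False
    try:
        int(addr, 16)  # must parse as valid hexadecimal
    except ValueError:
        return False
    return addr.lower() == OFFICIAL_ASK_ADDRESS.lower()

print(is_official_ask("0xbb146326778227a8498b105a18f84e0987a684b4"))  # True
print(is_official_ask("0xBB146326778227A8498b105a18f84E0987A68FAKE"))  # False
```

Pasting an address from an unverified source is one of the most common ways traders end up holding a spoofed token, so a check like this is cheap insurance.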

Building on Base’s Vision

Base has quickly become one of the most vibrant ecosystems in crypto, driven by the vision that on-chain should be open, affordable, and accessible to everyone. Its rapid growth reflects a broader shift toward usability and real-world applications, something that aligns perfectly with Permission’s mission.

As Coinbase CEO Brian Armstrong has emphasized, Base isn’t just another Layer-2 — it’s the foundation for bringing the next billion users on-chain. ASK’s launch on Base taps directly into that movement, expanding access to a global audience and connecting Permission’s data-ownership mission to one of the most forward-thinking ecosystems in Web3.

100,000+ ASK Holders on Base 🎉

As of this writing, we’re proud to share that ASK has surpassed 100,000 holders on Base. This is a huge milestone that reflects the growing strength and reach of the Permission community.

From early supporters to new users discovering ASK through Base and Aerodrome, this growth underscores the demand for consent-driven data solutions that reward people for the value they create.

Providing Liquidity Has Benefits

When you add liquidity to the USDC/ASK pool, you’re helping deepen the market and improve access for other community members. In return, you’ll earn a share of trading fees generated by the pool.
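The mechanics behind that fee share are pro-rata: your slice of the pool's trading fees scales with your slice of its liquidity. Here is a back-of-the-envelope Python sketch; the pool size, volume, and 0.3% fee rate are purely illustrative and are not Aerodrome's actual parameters.

```python
def lp_fee_share(your_liquidity: float, total_liquidity: float,
                 volume: float, fee_rate: float) -> float:
    """Pro-rata LP earnings: your share of pool liquidity times total fees."""
    total_fees = volume * fee_rate
    return total_fees * (your_liquidity / total_liquidity)

# Illustrative numbers only: $1,000 in a $100,000 pool,
# with $50,000 of daily volume at a hypothetical 0.3% fee tier.
daily = lp_fee_share(1_000, 100_000, 50_000, 0.003)
print(f"${daily:.2f} per day")  # $1.50 per day
```

The takeaway is simply that deeper pools split the same fees across more providers, which is why growing volume matters as much as growing liquidity.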

And as Aerodrome continues to expand its ve(3,3)-style governance model, liquidity providers could see additional incentive opportunities in the future. Nothing is live yet, but the structure is there, and we’re watching closely as the Base DeFi ecosystem evolves.

It’s a great way for long-term ASK supporters to stay engaged and help grow the ecosystem while participating in DeFi on one of crypto’s fastest-growing networks.

What’s Next

ASK’s presence on Base is just the beginning. We’re continuing to build toward broader omnichain accessibility, more liquidity venues, and new ways to earn ASK. Each milestone strengthens ASK’s position as the tokenized reward for permission.

Learn More

📘 ASK Token Utilities & Docs

💧 Aerodrome Liquidity Pool

Disclaimer:
This post is for informational purposes only and does not constitute financial, investment, or legal advice. Token values can fluctuate and all participation involves risk. Always do your own research before trading or providing liquidity.

Insights

Online Safety and the Limits of AI Moderation: What Parents Can Learn from Roblox

Nov 10th, 2025

Roblox isn’t just a game — it’s a digital playground with tens of millions of daily users, most of them children between 9 and 15 years old.

For many, it’s the first place they build, chat, and explore online. But as with every major platform serving young audiences, keeping that experience safe is a monumental challenge.

Recent lawsuits and law-enforcement reports highlight how complex that challenge has become. Roblox reported more than 13,000 cases of sextortion and child exploitation in 2023 alone — a staggering figure that reflects not negligence, but the sheer scale of what all digital ecosystems now face.

The Industry’s Safety Challenge

Most parents assume Roblox and similar platforms are constantly monitored. In reality, the scale is overwhelming: millions of messages, interactions, and virtual spaces every hour. Even the most advanced AI moderation systems can miss the subtleties of manipulation and coded communication that predators use.

Roblox has publicly committed to safety and continues to invest heavily in AI moderation and human review — efforts that deserve recognition. Yet as independent researcher Ben Simon (“Ruben Sim”) and others have noted, moderation at this scale is an arms race that demands new tools and deeper collaboration across the industry.

By comparison, TikTok employs more than 40,000 human moderators — over ten times Roblox’s reported staff — despite having roughly three times the daily active users. The contrast underscores a reality no platform escapes: AI moderation is essential, but insufficient on its own.

When Games Become Gateways

Children as young as six have encountered inappropriate content, virtual strip clubs, or predatory advances within user-generated spaces. What often begins as a friendly in-game chat can shift into private messages, promises of Robux (Roblox’s digital currency), or requests for photos and money.

And exploitation isn’t always sexual. Many predators use financial manipulation, convincing kids to share account credentials or make in-game purchases on their behalf.

For parents, Roblox’s family-friendly design can create a false sense of security. The lesson is not that Roblox is unsafe, but that no single moderation system can substitute for parental awareness and dialogue.

Even when interactions seem harmless, kids can give away more than they realize.

A name, a birthday, or a photo might seem trivial, but in the wrong hands it can open the door to identity theft.

The Hidden Threat: Child Identity Theft

Indeed, a lesser-known but equally serious risk is identity theft.

When children overshare personal details — their full name, birthdate, school, address, or even family information — online or with strangers, that data can be used to impersonate them.

Because minors rarely have active financial records, child identity theft often goes undetected for years, sometimes until they apply for a driver’s license, a student loan, or their first job. By then, the damage can be profound: financial loss, credit score damage, and emotional stress. Restoring a stolen identity can require years of effort, documentation, and legal action.

The best defense is prevention.

Teach children early why their personal information should never be shared publicly or in private chats — and remind them that real friends never need to know everything about them to play together online.

AI Moderation Needs Human Partnership

AI moderation remains reactive.

Algorithms flag suspicious language, but they can’t interpret tone, hesitation, or the subtle erosion of boundaries that signals grooming.

Predators evolve faster than filters, which means the answer isn’t more AI for the platform, but smarter AI for the family.

The Limits of Centralized AI

The truth is, today’s moderation AI isn’t really designed to protect people; it’s designed to protect platforms. Its job is to reduce liability, flag content, and preserve brand safety at scale. But in doing so, it often treats users as data points, not individuals.

This is the paradox of centralized AI safety: the bigger it gets, the less it understands.

It can process millions of messages a second, but not the intent behind them. It can delete an account in a millisecond, but can’t tell whether it’s protecting a child or punishing a joke.

That’s why the future of safety can’t live inside one corporate algorithm. It has to live with the individual — in personal AI agents that see context, respect consent, and act in the user’s best interest. Instead of a single moderation brain governing millions, every family deserves an AI partner that watches with understanding, not suspicion.

A system that exists to protect them, not the platform.

The Future of Child Safety: Collaboration, Not Competition

The Roblox story underscores an industry-wide truth: safety can’t be one-size-fits-all.
Every child’s online experience is different, and protecting it requires both platform vigilance and parent empowerment.

At Permission, we believe the next generation of online safety will come from collaboration, not competition. Instead of replacing platform systems, our personal AI agents complement them — giving parents visibility and peace of mind while supporting the broader ecosystem of trust that companies like Roblox are working to build.

From one-size-fits-all moderation to one-AI-per-family insight — in harmony with the platforms kids already love.

Each family’s AI guardian can learn their child’s unique patterns, highlight potential risks across apps, and summarize activity in clear reports that parents control. That’s what we mean by ethical visibility — insight without invasion.

You can explore this philosophy further in our upcoming piece:
➡️ Monitoring Without Spying: How to Build Digital Trust With Your Child (link coming soon)

What Parents Can Do Now

Until personalized AI guardians are widespread, families can take practical steps today:

  • Talk early and often. Make online safety part of everyday conversation.

  • Ask, don’t accuse. Curiosity builds trust; interrogation breeds secrecy.

  • Play together. Experience games and chat environments firsthand.

  • Set boundaries collaboratively. Agree on rules, timing, and social norms.

  • Teach red flags. Encourage your child to tell you when something feels wrong — without fear of punishment.

A Shared Responsibility

The recent Roblox lawsuits remind all of us just how complicated parenting in the digital world can feel. It’s not just about rules or apps: it’s about guiding your kids through a space that changes faster than any of us could have imagined! 

And the truth is, everyone involved wants the same thing: a digital world where kids can explore safely, confidently, and with the freedom to just be kids.

At Permission, we’re committed to building an AI that understands what matters, respects your family’s values and boundaries, and puts consent at the center of every interaction.

Announcements

Meet the Permission Agent: The Future of Data Ownership

Sep 10th, 2025

For years, Permission has championed a simple idea: your data has value, and you deserve to be rewarded for it. Our mission is clear: to enable individuals to own their data and be compensated when it’s used. Until now, we’ve made that possible through our opt-in experience, giving you the choice to engage and earn.

But the internet is evolving, and so are we.

Now, with the rise of AI, our vision has never been more relevant. The world is waking up to the fact that data is the fuel driving digital intelligence, and individuals should be the ones who benefit directly from it.

The time is now. AI has created both the urgency and the infrastructure to finally make our vision real. The solution is the "Permission Agent: The Personal AI that Pays You."

What is the Permission Agent?

The Permission Agent is your own AI-powered digital assistant - it knows you, works for you, and turns your data into a revenue stream.

Running seamlessly in your browser, it manages your consent across the digital world while identifying the moments when your data has value, making sure you are the one who gets rewarded.

In essence, it acts as your personal representative in the online economy, constantly spotting opportunities, securing your rewards, and giving you back control of your digital life.

Human data powers the next generation of AI, and for it to be trusted it must be verified, auditable, and permissioned. Most importantly, it must reward the people who provide it. With the Permission Agent, this vision becomes reality: your data is safeguarded, your consent is respected, and you are compensated every step of the way.

This is more than a seamless way to earn. It’s a bold step toward a future where the internet is rebuilt around trust, transparency, and fairness - with people at the center.

Passive Earning and Compounded Referral Rewards

With the Permission Agent, earning isn’t just smarter - it’s continuous and always working in the background. As you browse normally, your Agent quietly unlocks opportunities and secures rewards on your behalf.

Beyond this passive earning, the value multiplies when you invite friends to Permission. Instead of a one-time referral bonus, you’ll earn a percentage of everything your friends earn, for life. Each time they browse, engage, and collect rewards, you benefit too — and the more friends you bring in, the greater your earnings become.
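To make the compounding referral model concrete, here is a toy Python illustration. The 10% referral rate and the earnings figures are hypothetical, used only to show the shape of the math; Permission's actual rates may differ.

```python
def referral_earnings(friend_earnings: list[float], referral_rate: float) -> float:
    """Referrer's cut: a fixed percentage of each referred friend's earnings."""
    return sum(friend_earnings) * referral_rate

# Hypothetical: three referred friends earn 120, 80, and 200 ASK over a period.
# At an illustrative 10% referral rate, the referrer collects 40 ASK.
print(round(referral_earnings([120.0, 80.0, 200.0], 0.10), 2))  # 40.0
```

Unlike a one-time bonus, this percentage applies every period your friends keep earning, which is why the rewards compound as your network grows.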

All rewards are paid in $ASK, the token that powers the Permission ecosystem. Whether you choose to redeem, trade for cash or crypto, or save and accumulate, the more you collect, the more value you unlock.

Changes to Permission Platform

Our mission has always been to create a fair internet - one where people truly own their data and get rewarded for it. The opt-in experience was an important first step, opening the door to a world where individuals could engage and earn. But now it’s time to evolve.

Effective October 1st, the following platform changes will be implemented:

  • Branded daily offers will no longer appear in their current form.  
  • The Earn Marketplace will be transformed into Personalize Your AI - a new way to earn by taking actions that help your Agent better understand you, bringing you even greater personalization and value.
  • The browser extension will be the primary surface for earning from your data, and, should you choose to activate passive earning, you’ll benefit from ongoing rewards as your Agent works for you in the background.

With the Permission Agent, you gain a proactive partner that works for you around the clock — unlocking rewards, protecting your data, and ensuring you benefit from every opportunity, without needing to constantly make manual decisions.

How to Get Started

Getting set up takes just a few minutes:

  1. Download the Permission Agent (browser extension)

  2. Activate it to claim your ASK token bonus

  3. Browse as usual — your Agent works in the background to find earning opportunities for you

The more you use it, the more it learns how to unlock rewards and maximize the value of your time online.

A New Era of the Internet

This isn’t just a new tool - it’s a turning point.

The Permission Agent marks the beginning of a digital world where people truly own their data, decide when and how to share it, and are rewarded every step of the way.