November 30, 2020

What Is GDPR? A Simple Overview for Businesses and Users

Written by
Permission

GDPR took the internet by storm in 2018. You may remember that day when your entire inbox was flooded with privacy policy updates, or perhaps your business decided to expand its reach to the EU, and you were reminded that GDPR compliance had to be sound before beginning.

GDPR is the most impactful internet privacy law passed in recent history. At its core, it is designed to protect internet users from exploitative data collection and breaches: it aims to give users more control over their information while forcing companies to adopt proactive data security and transparency habits.

We’re going to cover what any business owner, user, or marketer needs to know about GDPR. Consider this piece your foundation. Whether or not you choose to dig deeper will be determined by your needs.

Let’s get right to it.

What Is GDPR?

GDPR (General Data Protection Regulation) is a data protection law from the EU, and it’s dense: eleven chapters and 99 articles. This can make it difficult for companies and users to understand, but its goal is straightforward: protect the personal data of users, modernize data collection, establish clear directives for data transparency, and give people more choice over what personal data they share.

GDPR is a replacement for the EU’s previous law, the Data Protection Directive (DPD), which was passed over two decades earlier in 1995. Think of GDPR as the modernization and expansion of DPD. DPD couldn’t have predicted the intricate and expansive ways data is used today, and it badly needed updating.

What Countries Does GDPR Apply To?

The law covers all EU member states, plus the EEA countries Iceland, Liechtenstein, and Norway, and it binds companies operating both inside and outside those countries.

Who Does GDPR Protect?

GDPR protects users in those member states and countries. Importantly, it protects those users regardless of whether the company targeting them is based in the protection zone or not. In other words, it protects users from any company worldwide that decides to do business with the users of those states.

Let’s look at that a bit more.

Who Has to Follow GDPR?

Any company that targets EU citizens must adhere to GDPR. That goes for companies based in EU countries, but also for any other company (including U.S. companies) that targets or works with EU citizens in any internet-based capacity.

Let’s look at a few examples of companies that have to follow GDPR standards:

  1. A U.S. eCommerce company using ads to retarget users from France.
  2. A digital clothing company based in Brussels that collects information for shipping and fitting.
  3. A digital subscription newsletter collecting email addresses in the EU.

Now, let’s look at a few examples of companies that wouldn’t have to follow GDPR standards.

  1. A Brazilian coffee distributor selling bags on its own website, which is in Portuguese. Even if someone from the EU found the site and bought from it, GDPR shouldn’t apply because the company isn’t actively pursuing EU citizens, unless it were using advertising to bring EU users to the site.
  2. A U.S. landscaping service that merely lists its contact information on its site and doesn’t do any business in the EU. Because it isn’t collecting any EU user information, GDPR doesn’t apply. Any business that doesn’t collect or process information in any form is exempt, although that is extremely rare.

Even though GDPR took effect in May 2018, companies had been given since its adoption in 2016 to prepare. But even with that runway, following GDPR at first proved confusing and nebulous. Many companies struggled to understand exactly what was demanded of them, and many are still at risk of GDPR non-compliance.

Does Brexit Impact GDPR?

No. The UK government has decided to continue operating under GDPR law even after leaving the EU. In other words, treat the UK just like you would any other country protected by GDPR.

Now that we know the scope of GDPR, let’s talk more about what it protects: personal data.

What Personal Data Actually Means

Directly from the source, here is what GDPR means by “personal data”:

The data subjects are identifiable if they can be directly or indirectly identified, especially by reference to an identifier such as a name, an identification number, location data, an online identifier or one of several special characteristics, which expresses the physical, physiological, genetic, mental, commercial, cultural or social identity of these natural persons.

In practice, these also include all data which are or can be assigned to a person in any kind of way. For example, the telephone, credit card, or personnel number of a person, account data, number plate, appearance, customer number, or address are all personal data.

That’s a complex way of saying any type of data that can be used to trace back to an identity is considered personal. This is purposefully broad — that way the law doesn’t need to be updated as often.

In modern practice, this includes data like:

  1. Shipping information
  2. Billing information
  3. User behavior
  4. Cookie data
  5. Pixel data
  6. Purchase behavior
  7. Name
  8. Phone number
  9. Geographic data and history
  10. Demographic identifiers
  11. And more.

What Rights Do Users Have Because of GDPR?

GDPR gives additional privacy rights to users, and when these rights are violated companies can be held liable.

Here are the main rights users are guaranteed and that serve as the basis for GDPR compliance:

1. The Right to Be Informed

Users have the right to know what data is and will be collected by companies before that data is collected and processed.

2. The Right of Access

Users have the right to see any data that a company collects about them. Access must be provided within one month of the request and free of charge.

3. The Right to Rectify (Correct) Information

Users have the right to submit a request to fix inaccurate data.

4. The Right to be Forgotten

Users have the right to withdraw their data consent and request that all data about them be deleted.

5. The Right to Restrict Data Processing

Users have the right to object to data processing and limit how their data is used.

6. The Right to Data Portability

Users have the right to collect their own data and have it delivered to them in a readable format that can easily be transferred to a different company.
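As a hypothetical illustration (every field name and value below is invented, not prescribed by GDPR), a portable export is typically a structured, machine-readable file such as JSON that a user could download and hand to another service:

```python
import json

# Hypothetical sketch only: GDPR asks for a "structured, commonly
# used and machine-readable format"; JSON is one common choice.
# None of these fields come from any real company's export.
user_export = {
    "profile": {"name": "Jane Example", "email": "jane@example.com"},
    "orders": [{"id": 1001, "total_eur": 49.90, "shipped_to": "Berlin"}],
    "marketing_consent": False,
}

# Serialize to text the user can save and pass to another company.
print(json.dumps(user_export, indent=2))
```

Because the format is standard, the receiving company can parse it back with `json.loads` rather than re-collecting the data from scratch.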

7. The Right to Object

Users always have the right to object to specific data collection and marketing mechanisms that use that data.

8. The Right to Breach Disclosure

Users must be informed of a breach of their data without undue delay, and companies must report the breach to the supervisory authority within 72 hours of becoming aware of it.

For a complete list of user rights, here’s a direct link to the appropriate GDPR chapter.

It is the duty of the company to honor these rights effectively. The processes and practices companies have in place to honor these rights are the basis for GDPR compliance evaluation.

And as a user, you have these rights, so if a company is violating them, you have the full power to report it. In many cases, though, a company (especially a small business) may simply be unaware, so reaching out to talk it through before lawyering up is usually the best first step.

If it’s a major breach and you are whistleblowing, then you can file a complaint here.

What Happens if You Break GDPR?

GDPR stipulates that national authorities have the power to issue fines and limit data processing when GDPR regulations are breached.

According to the fines and penalties section of GDPR, severe violations can result in fines of up to 20 million euros or up to 4% of the total global turnover of the preceding fiscal year, whichever is higher; smaller violations can still reach 10 million euros or 2% of global turnover.
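To make those caps concrete, here is a minimal sketch (a hypothetical helper, not any official tooling; the actual fine is set case by case by the supervisory authority) showing that the upper bound is the greater of the flat cap and the turnover percentage:

```python
def max_gdpr_fine(annual_turnover_eur: float, severe: bool = True) -> float:
    """Statutory upper bound of a GDPR fine.

    Severe violations: up to EUR 20M or 4% of global annual turnover,
    whichever is HIGHER. Lesser violations: EUR 10M or 2%. This is only
    the cap, not what a company would actually be fined.
    """
    flat_cap, pct = (20_000_000, 0.04) if severe else (10_000_000, 0.02)
    return max(flat_cap, pct * annual_turnover_eur)

# A company with EUR 2 billion turnover: 4% (EUR 80M) exceeds the flat cap.
print(max_gdpr_fine(2_000_000_000))  # 80000000.0
# A small company with EUR 10M turnover: the flat EUR 20M cap applies.
print(max_gdpr_fine(10_000_000))     # 20000000
```

The "whichever is higher" rule is what makes the percentage prong bite for large companies while the flat cap dominates for small ones.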

The six biggest GDPR fines issued so far have been:

  1. British Airways – €204.6 million
  2. Marriott International Hotels – €110.3 million
  3. Google Inc. – €50 million
  4. Austrian Post – €18.5 million
  5. Deutsche Wohnen SE – €14.5 million
  6. 1&1 Telecom GmbH – €9.5 million

Many of these fines were a result of breaches or failing to disclose exactly how companies would use user data when onboarding users.

And while GDPR fines tend to only make headlines when targeting big businesses, GDPR applies to all businesses, both small and large.

The point is, the EU is devoted to making GDPR a standard, and they have shown that they will hold businesses accountable to it.

How Are the Levels of Fines Determined?

A multitude of factors determines how a fine is calculated, and the GDPR text outlines several of them:

  1. How widespread the damage is
  2. What kind of personal information was released (in the context of a breach)
  3. How quickly the company fixed it
  4. The fidelity of the fix
  5. The intention behind the violation
  6. How prepared the company was for the violation
  7. Whether the company was proactive in its data protection practices
  8. Whether the company cooperated effectively and quickly with all parties
  9. Whether the company notified users of the damages as quickly as possible

There are more specifics than these, but essentially the data protection board and officers in charge of issuing fines will be looking at how honest and proactive companies were before, during, and after a breach or violation. If at every step in the process a company was doing their best and had proof of that, then the fines will be lower. If the company clearly exhibited negligence, then the fines will likely be steeper.

In Practice: How to Approach GDPR Compliance

Companies must show good faith by achieving initial data compliance and then by incorporating GDPR principles into every part of their operation.

If you own or are in charge of GDPR for your business, then you need to make sure data collection is transparent, legal, and secure in every part of your business.

GDPR compliance must become a fundamental part of your operation. With every new product, you need to make sure data is being collected appropriately. GDPR compliance is about having a plan and devoting resources to actualizing that plan. If you are familiar with the world of PCI compliance in payment processing, GDPR compliance is somewhat similar.

In order to become compliant with GDPR, you may have to appoint a DPO (Data Protection Officer) to oversee your data collection practices, although this is only necessary for companies processing large amounts of data or whose core business model relies on data collection.

Here’s what the legislation says on that directly:

Contrary to popular belief, decisive for the legal obligation to appoint a Data Protection Officer is not the size of the company but the core processing activities which are defined as those essential to achieving the company’s goals. If these core activities consist of processing sensitive personal data on a large scale or a form of data processing which is particularly far-reaching for the rights of the data subjects, the company has to appoint a DPO.

In other words, most businesses are fine simply following best practices for compliance, but if you fall under the definition above, then you need to appoint a DPO.

GDPR compliance is ongoing and can only be the result of consistent effort. It is not a short checklist you can complete and move on from; it must become fundamental to your operation and be sustained through recurring work.

With this in mind, here are actionable guidelines you can incorporate to maintain GDPR compliance.

Core Guidelines of GDPR Compliance for Businesses

There is no perfect guide for GDPR compliance. It is a collection of efforts unique to each company designed to protect the privacy rights enshrined in GDPR. That being said, there are guidelines and best practices that are standardized across modern businesses.

Here are the major ideas of GDPR compliance, and then we will cover specific steps in the following section.

  1. Data transparency, fairness, and lawfulness. Are you actively open and lawful with your data collection and storage?
  2. Put limits on how and why you collect data. Do you have scheduled processes to remove old and unused data? How can you build the best product using the most specific and least demanding data collection practices?
  3. Only collect the minimum necessary for your operation. If you don’t need it, then don’t collect it.
  4. Devotion to data accuracy. How are you ensuring your data is clean and accurate for each individual?
  5. Data security. How are you protecting against breaches? How does encryption play into your strategy?
  6. Data deletion and portability. Can users easily delete their data? Can they request their data and then give it to someone else?
  7. Data consent. Is your data consent accessible and easy for users to understand? Is your service still usable without it? Are you transparent about what you collect, and do you give users an easy way to opt out?
  8. Privacy by design. Are safety and design fundamentally built into your product?
  9. Data simplicity. Is it easy for users to understand what data you’re collecting? Can they access it themselves and make sense of it?

These are the questions that make up a unique and effective GDPR compliance plan. The burden is on companies to build them into their own workflows.

6 Steps to Start Your GDPR Compliance Journey

It’s easy for GDPR to feel overwhelming. Here are a few ways for you to take action today.

Step 1: Start With an Analysis

Outline every aspect of your business that uses data and why. Examine how it’s collected and where it’s stored, and then make sure user rights are protected at every step. Clear opportunities to consent and opt-out must be present at every point.

Step 2: Create a Breach Contingency Plan

Your company must report a breach within 72 hours, and every minute that goes by after a breach will be scrutinized by officials. Make sure you have a specific plan to stop and disclose a breach.

Step 3: Log Everything You Do Around GDPR Compliance

As we said earlier, proof of ongoing effort toward GDPR compliance is critical to remain compliant and reduce fines. Create a centralized location for your efforts and log everything you do in detail.

Step 4: Ensure Partners Are Actively Working Toward Compliance

Even if a breach happens through third-party software, your business could be liable. It is your responsibility to evaluate the trustworthiness and security of your partners. Choose wisely!

Step 5: Create a Checklist for New Products, Operations, and Decisions

Anytime your business grows, makes a new product, or collects new data, it needs to be incorporated into your GDPR efforts. Make sure GDPR is in every conversation.

Step 6: Schedule Ongoing GDPR Training by Department

Make sure your tech teams, marketing teams, security teams, product development teams, and anyone else involved with data has scheduled GDPR training. This is one of the best bits of proof you can hand to data officers to show you have been proactive.

The Bottom Line on GDPR

The General Data Protection Regulation is the biggest modern user privacy law in existence. It is designed to make data security and fidelity the norm in companies and to give users more agency over what data they give up and why, while also giving them protected rights to opt out of, remove, and object to any sort of data collection by internet companies.

While the GDPR can seem like a burden on businesses, it gets easier as you develop your own systems and is crucial to creating an internet ecosystem that users can rely on safely.

GDPR is an important step for user privacy, but there is so much more we can do.

GDPR is a good start, but it’s a band-aid for a flawed system. The best kind of internet is one where users have complete control over data and are compensated for it directly (and automatically). Companies make money from your data — why shouldn’t you?

See how Permission is making that dream a reality.

Recent articles

What Every Parent Needs to Know Before Handing Over the iPad

Apr 7th, 2026

Spring Break used to mean board games and bike rides.

Now it means 8+ hours a day on TikTok, Roblox, Snapchat.

Most kids are back in school now. But if you noticed something a little off this past week, you're not imagining it. If you're still bracing for the screentime fights, the "just five more minutes" negotiations, the device-at-dinner standoffs, you're not alone. But there's a better way to handle this than becoming the screentime police.

Here's what's actually happening on your kids' devices, and what you can do about it:

The honest truth: more free time = higher risk of social media addiction

During school breaks, kids average 3.5-4 extra hours of screen time per day.

That's not just YouTube and Minecraft. That's unstructured time on platforms that are designed by teams of engineers and behavioral psychologists to keep your child scrolling, clicking, and coming back.

In 2026, it's not just the amount that's shifted — since 2020, daily time on short-form video like TikTok and Reels has increased 14x for younger children.

This isn't an accident. A former Meta researcher described Instagram internally as "a drug." A YouTube internal document listed "viewer addiction" as a goal. A Meta employee even told colleagues: “We're basically pushers.”

Spring Break is one of the highest-risk weeks of the year for unsupervised screen use. More free time, less structure, and the same algorithms running 24 hours a day, messing with your children's attention around the clock.

What's actually happening on the platforms your kids use most

TikTok and Instagram use dopamine loops, short bursts of reward, to make scrolling feel impossible to stop. There is no natural endpoint. The algorithm learns what keeps your child watching and serves more of it, regardless of whether it's healthy. Landmark 2026 jury verdicts have recently found these platforms liable for intentionally designing addictive features that contribute to depression and anxiety in minors.

Roblox and Discord are where a lot of the real danger hides. Unmoderated voice chat, private group invitations, and off-platform contact attempts are common. Predators use these platforms specifically because parents underestimate them. Current multidistrict litigation (MDL 3166) alleges that these companies have failed to implement basic safeguards to prevent the grooming and exploitation of children.

Character.ai and ChatGPT don't verify ages. Kids as young as 8 are forming emotional attachments to AI companions, sharing things they'd never tell a parent or friend. There is no guardrail on what those conversations become. Recent wrongful death lawsuits highlight cases where minors engaged in harmful, obsessive relationships with AI, leading to tragic outcomes.

Snapchat was built around disappearing content, which means disappearing evidence. AI nudification tools are now accessible to teenagers directly through third-party apps that connect to Snapchat. State Attorneys General in Texas and New Mexico have filed suits alleging the platform is a "marketplace for predators" and facilitates the spread of non-consensual deepfake material.

This isn't about scaring you. It's about making sure you're not the last to know.

Stop being the screentime police. Become their coach instead.

Here's the shift that actually works.

The screentime-police approach (counting minutes, setting timers, fighting nightly) doesn't build safe habits. It builds resentment. And the moment your kid is out from under your roof, those habits disappear entirely.

The better approach is mentorship. Think about how a great coach works. They don't bench their best player for making a mistake. They show them what went wrong, explain why it matters, and help them do better next time. That's what your kid needs from you on digital safety.

That means shifting from how long they're on a device to what they're seeing and whether they know how to handle it. A 15-minute conversation about what to do when a stranger DMs them on Discord is worth more than a screentime timer.

You don't need to be a tech expert to have that conversation. You just need the right information and the right words.

Three things to do this week (that aren't "take the phone away")

  1. Know which platforms they're actually using. Ask your kid to show you their five most-used apps. Don't make it an interrogation, make it curious. "What's this one? What do you do on it?" You'll learn more in five minutes than any parental control software will tell you.
  2. Have one real conversation, not ten small arguments. Pick a moment when you're both relaxed, not when you're already frustrated about screen time. Tell them what you know about how these platforms work. Not to lecture, to inform. Kids respond much better to "here's how TikTok is designed to keep you scrolling" than "put the phone down."
  3. Set expectations together, not rules from above. Ask your kid what they think fair looks like. You'll be surprised. Most kids actually have a sense of what's healthy, they just need permission to use it. Building the agreement together means they're far more likely to stick to it.

What your family values have to do with it

Every family is different. What's acceptable in one household isn't in another, and that's exactly how it should be.

The problem with most parental control tools is that they're built around a one-size-fits-all set of restrictions. Block this app. Limit that one. It creates friction, not understanding.

The better approach starts with your values. What do you actually care about for your kids? Safety, yes, but also independence, trust, and the skills they'll need when you're not there. The goal isn't to block everything. It's to raise a kid who makes good choices when you're not in the room.

Trusted AI for the Family. Built for Spring Break and beyond.

This is exactly why we built Permission AI for the Family.

It's not a parental control app. It's an AI that works with your family, surfacing what's actually happening on the platforms your kids use, giving you the scripts to have real conversations, and helping your kids build safe habits that last beyond Spring Break.

It's built around your values and your boundaries, not ours.

And right now, it's 100% free. That's a $240 annual value, at no cost.

If you've been meaning to get a better handle on your family's digital life, this is the week to do it.

Get Trusted AI for the Family — free at permission.ai/for-parents

Insights

Parenting In the Age of AI: Why Tech Is Making Parenting Harder – and What Parents Can Do

Jan 29th, 2026

Many parents sense a shift in their children’s environment but can’t quite put their finger on it.

Children aren't just using technology. Conversations, friendships, and identity formation are increasingly taking place online, across platforms that most parents neither grew up with nor fully understand.

Many parents feel one step behind and question: How do I raise my child in a tech world that evolves faster than I can keep up with?

Why Parenting Feels Harder in the Digital Age

Technology today is not static. AI-driven and personalized platforms adapt faster than families can.

Parents want to raise their children to live healthy, grounded lives without becoming controlling or disconnected. Yet, many parents describe feeling:

  • “Outpaced by the evolution of AI and Algorithms”
  • “Disconnected from their children's digital lives”
  • “Concerned about safety when AI becomes a companion”
  • “Frustrated with insufficient traditional parental controls”

Research shows this shift clearly:

  • 66% of parents say parenting is harder today than 20 years ago, citing technology as a key factor. 
  • Reddit discussions reveal how parents experience a “nostalgia gap,” in which their own childhoods do not resemble the digital worlds their children inhabit.
  • 86% of parents set rules around screen use, yet only about 20% follow these rules consistently, highlighting ongoing tension in managing children’s device use.

Together, these findings suggest that while parents are trying to manage technology, the tools and strategies available to them haven’t kept pace with how fast digital environments evolve.

Technology has made parenting harder.

The Pressure Parents Face Managing Technology

Parents are repeatedly being told that managing their children's digital exposure is their responsibility.

The message is subtle but persistent: if something goes wrong, it’s because “you didn’t do enough.”

This gatekeeper role is an unreasonable expectation. Children’s online lives are always within reach, embedded in education, friendships, entertainment, and creativity. Expecting parents to take full control overlooks the reality of modern childhood, where digital life is constant and unavoidable.

This expectation often creates chronic emotional and somatic guilt for parents. At the same time, AI-driven platforms are continuously optimized to increase engagement in ways parents simply cannot realistically counter.

As licensed clinical social worker Stephen Hanmer D'Eliía explains in The Attention Wound: What the attention economy extracts and what the body cannot surrender, "the guilt is by design." Attention-driven systems are engineered to overstimulate users and erode self-regulation (for children and adults alike). Parents experience the same nervous-system overload as their kids, while lacking the benefit of growing up with these systems. These outcomes reflect system design, not parental neglect.

Ongoing Reddit threads confirm this reality. Parents describe feeling behind and uncertain about how to guide their children through digital environments they are still learning to understand themselves. These discussions highlight the emotional and cognitive toll that rapidly evolving technology places on families.

Parenting In A Digital World That Looks Nothing Like The One We Grew Up In

Many parents instinctively reach for their own childhoods as a reference point but quickly realize that comparison no longer works in today’s world.  Adults remember life before smartphones; children born into constant digital stimulation have no such baseline.

Indeed, “we played outside all day” no longer reflects the reality of the world children are growing up in today. Playgrounds are now digital. Friendships, humor, and creativity increasingly unfold online.

This gap leaves parents feeling unqualified. Guidance feels harder when the environment is foreign, especially when society expects and insists you know how.

Children Are Relying on Chatbots for Emotional Support Over Parents

AI has crossed a threshold: from tool to companion.

Children are increasingly turning to chatbots for conversation and emotional support, often in private.

About one in ten parents with children ages 5-12 report that their children use AI chatbots like ChatGPT or Gemini. Those children ask personal questions, share worries, and seek guidance on topics they feel hesitant to discuss with adults.

Many parents fear that their child may rely on AI first instead of coming to them. Psychologists warn that this shift is significant because AI is designed to be endlessly available and instantly responsive (ParentMap, 2025).

Risks include:

  • Exposure to misinformation.
  • Emotional dependency on systems that can simulate care but cannot truly understand or respond responsibly.
  • Blurred boundaries between human relationships and machine interaction.

Reporting suggests children are forming emotionally meaningful relationships with AI systems faster than families, schools, and safeguards can adapt (Guardian, 2025; After Babel, 2025b).

Unlike traditional tools, AI chatbots are built for constant availability and emotional responsiveness, which can blur boundaries for children still developing judgment and self-regulation — and may unintentionally mirror, amplify, or reinforce negative emotions instead of providing the perspective and limits that human relationships offer.

Why Traditional Parental Controls are Failing

Traditional parental controls were built for an “earlier internet,” one where parents could see and manage their children online. Today’s internet is algorithmic.

Algorithmic platforms bypass parental oversight by design. Interventions like removing screens or setting limits often increase conflict, secrecy, and addictive behaviors rather than teaching self-regulation or guiding children on how to navigate digital spaces safely (Pew Research, 2025; r/Parenting, 2025).

A 2021 JAMA Network study found video platforms popular with kids use algorithms to recommend content based on what keeps children engaged, rather than parental approval. Even when children start with neutral searches, the system can quickly surface videos or posts that are more exciting. These algorithms continuously adapt to a child’s behavior, creating personalized “rabbit holes” of content that change faster than any screen-time limit or parental control can manage.

Even the most widely used parental control tools illustrate this limitation in practice, focusing on: 

  • reacting after exposure (Bark)
  • protecting against external risks (Aura)
  • limiting access (Qustodio)
  • tracking physical location (Life360)

What they largely miss is visibility into the algorithmic systems and personalized feeds that actively shape children’s digital experiences in real time.

A Better Approach to Parenting in the Digital Age

In a world where AI evolves faster than families can keep up, more restrictions won’t solve the disconnection between parents and children. Parents need tools and strategies that help them stay informed and engaged in environments they cannot fully see or control.

Some companies, like Permission, focus on translating digital activity into clear insights, helping parents notice patterns, understand context, and respond thoughtfully without prying.

Raising children in a world where AI moves faster than we can keep up is about staying present, understanding the systems shaping children’s digital lives, and strengthening the human connection that no algorithm can replicate.

What Parents Can Do in a Rapidly Changing Digital World

While no single tool or rule can solve these challenges, many parents ask what actually helps in practice.

Below are some of the most common questions parents raise — and approaches that research and lived experience suggest can make a difference.

Do parents need to fully understand every app, platform, or AI tool their child uses?

No. Trying to keep up with every platform or feature often increases stress without improving outcomes.

What matters more is understanding patterns: how digital use fits into a child’s routines, moods, sleep, and social life over time. Parents don’t need perfect visibility into everything their child does online; they need enough context to notice meaningful changes and respond thoughtfully.

What should parents think about AI tools and chatbots used by kids?

AI tools introduce a new dynamic because they are:

  • always available
  • highly responsive
  • designed to simulate conversation and support

This matters because children may turn to these tools privately, for curiosity, comfort, or companionship. Rather than reacting only to the technology itself, parents benefit from understanding how and why their child is using AI, and having age-appropriate conversations about boundaries, trust, and reliance.

How can parents stay involved without constant monitoring or conflict?

Parents are most effective when they can:

  • notice meaningful shifts early
  • understand context before reacting
  • talk through digital choices rather than enforce rules after the fact

This shifts digital parenting from surveillance to guidance. When children feel supported rather than watched, conversations tend to be more open, and conflict is reduced.

What kinds of tools actually support parents in this environment?

Tools that focus on insight rather than alerts, and patterns rather than isolated moments, are often more helpful than tools that simply report activity after something goes wrong.

Some approaches — including platforms like Permission — are designed to translate digital activity into understandable context, helping parents notice trends, ask better questions, and stay connected without hovering. The goal is to support parenting decisions, not replace them.

The Bigger Picture

Parenting in the age of AI isn’t about total control, and it isn’t about stepping back entirely.

It’s about helping kids:

  • develop judgment
  • understand digital influence
  • build healthy habits
  • stay grounded in human relationships

As technology continues to evolve, the most durable form of online safety comes from understanding, trust, and connection — not from trying to surveil or outpace every new system.

Project Updates

How You Earn with the Permission Agent

Jan 28th, 2026
|
{time} read time

The Permission Agent was built to do more than sit in your browser.

It was designed to work for you: spotting opportunities, handling actions on your behalf, and making it super easy to earn rewards as part of your everyday internet use. 

Here’s how earning works with the Permission Agent.

Earning Happens Through the Agent

Earning with Permission is powered by Agent-delivered actions designed to support the growth of the Permission ecosystem.

Rewards come through Rewarded Actions and Quick Earns, surfaced directly inside the Agent. When you use the Agent regularly, you’ll see clear, opt-in earning opportunities presented to you.

Importantly, earning is no longer based on passive browsing. Instead, opportunities are delivered intentionally through actions you choose to participate in, with rewards disclosed upfront.

You don’t need to search for offers or manage complex workflows. The Agent organizes opportunities and helps carry out the work for you.

Daily use is how you discover what’s available.

Rewarded Actions and Quick Earns

Rewarded Actions and Quick Earns are the primary ways users earn ASK through the Agent.

These opportunities may include:

  • Supporting Permission launches and initiatives
  • Participating in community programs or campaigns
  • Sharing Permission through guided promotional actions
  • Taking part in contests or time-bound promotions

All opportunities are presented clearly through the Agent, participation is always optional, and rewards are transparent.

The Agent Does the Work

What makes earning different with Permission is the Agent itself.

You choose which actions to participate in, and the Agent handles execution, reducing friction while keeping you in control. Instead of completing repetitive steps manually, the Agent performs guided tasks on your behalf, including the mechanics behind promotions and referrals.

The result: earning ASK feels lightweight and natural because the Agent handles the busywork.

The more consistently you use the Agent, the more opportunities you’ll see.

Referrals and Lifetime Rewards

Referrals remain one of the most powerful ways to earn with Permission.

When you refer someone to Permission:

  • You earn when they become active
  • You continue earning as their activity grows
  • You receive ongoing rewards tied to the value created by your referral network

As your referrals use the Permission Agent, it becomes easier for them to discover earning opportunities, and as they earn more, so do you.

Referral rewards operate independently of daily Agent actions, allowing you to build long-term, compounding value.
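To make the compounding idea concrete, here is a minimal sketch of how a referrer's rewards could scale with their network's activity. The rate, activity figures, and formula are illustrative assumptions only, not Permission's actual payout mechanics.

```python
# Hypothetical sketch of compounding referral rewards.
# The 5% rate and activity values are assumptions for illustration,
# not Permission's actual reward formula.

def referral_rewards(monthly_activity, rate=0.05):
    """Return the referrer's per-month reward (in ASK) as a share
    of the combined activity of their referral network."""
    return [round(total * rate, 2) for total in monthly_activity]

# Combined ASK earned by your referrals each month; the network grows.
network_activity = [100, 250, 600]
print(referral_rewards(network_activity))  # [5.0, 12.5, 30.0]
```

The point of the sketch is the shape of the curve: as the network's activity grows, the referrer's reward grows with it, without additional work on the referrer's part.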

Learn more here:
👉 Unlock Rewards with the Permission Referral Program

What to Expect Over Time

As the Permission ecosystem grows, earning opportunities will expand.

You can expect:

  • New Rewarded Actions and Quick Earns delivered through the Agent
  • Campaigns tied to community growth and product launches
  • Opportunities ranging from quick wins to more meaningful rewards

Checking in with your Agent regularly is the best way to stay up to date.

Getting Started

Getting started takes just a few minutes:

  1. Install the Permission Agent
  2. Sign in and activate it
  3. Use the Agent daily to see available Rewarded Actions and Quick Earns

From there, the Agent takes care of the rest, helping you participate, complete actions, and earn ASK over time.

Built for Intentional Participation

Earning with the Permission Agent is designed to be clear, intentional, and sustainable.

Rewards come from choosing to participate, using the Agent regularly, and contributing to the growth of the Permission ecosystem. The Agent makes that participation easy by handling the work, so value flows back to you without unnecessary effort.

Insights

2026: The Year of Disruption – Trust Becomes the Most Valuable Commodity

Jan 23rd, 2026
|
{time} read time

Moore’s Law is still at work, and in many ways it is accelerating.

AI capabilities, autonomous systems, and financial infrastructure are advancing faster than our institutions, norms, and governance frameworks can absorb. For that acceleration to benefit society at a corresponding rate, one thing must develop just as quickly: trust.

2026 will be the year of disruption across markets, government, higher education, and digital life itself. In every one of those domains, trust becomes the premium asset. Not brand trust. Not reputation alone. But verifiable, enforceable, system-level trust.

Here’s what that means in practice.

1. Trust Becomes Transactional, Not Symbolic

Trust between agents won’t rely on branding or reputation alone. It will be built on verifiable exchange: who benefits, how value is measured, and whether compensation is enforceable. Trust becomes transparent, auditable, and machine-readable.

2. Autonomous Agents Move from Novelty to Infrastructure

Autonomous, goal-driven AI agents will quietly become foundational internet infrastructure. They won’t look like apps or assistants. They will operate continuously, negotiating, executing, and learning across systems on behalf of humans and institutions.

The central challenge will be trust: whether these agents are acting in the interests of the humans, organizations, and societies they represent, and whether that behavior can be verified.

3. Agent-to-Agent Interactions Overtake Human-Initiated Ones

Most digital interactions in 2026 won’t start with a human click. They will start with one agent negotiating with another. Humans move upstream, setting intent and constraints, while agents handle execution. The internet becomes less conversational and more transactional by design.

4. Agent Economies Force Value Exchange to Build Trust

An economy of autonomous agents cannot run on extraction if trust is to exist.

In 2026, value exchange becomes mandatory, not as a monetization tactic, but as a trust-building mechanism. Agents that cannot compensate with money, tokens, or provable reciprocity will be rate-limited, distrusted, or blocked entirely.

“Free” access doesn’t scale in a defended, agent-native internet where trust must be earned, not assumed.

5. AI and Crypto Converge, with Ethereum as the Coordination Layer

AI needs identity, ownership, auditability, and value rails. Crypto provides all four. In 2026, the Ethereum ecosystem emerges as the coordination layer for intelligent systems exchanging value, not because of speculation, but because it solves real structural problems AI cannot solve alone.

6. Smart Contracts Evolve into Living Agreements

Static smart contracts won’t survive an agent-driven economy. In 2026, contracts become adaptive systems, renegotiated in real time as agents perform work, exchange data, and adjust outcomes. Law doesn’t disappear. It becomes dynamic, executable, and continuously enforced.

7. Wall Street Embraces Tokenization

By 2026, Wall Street fully embraces tokenization. Stocks, bonds, options, real estate interests, and other financial instruments move onto programmable rails.

This shift isn’t about ideology. It’s about efficiency, liquidity, and trust through transparency. Tokenization allows ownership, settlement, and compliance to be enforced at the system level rather than through layers of intermediaries.

8. AI-Driven Creative Destruction Accelerates

AI-driven disruption accelerates faster than institutions can adapt. Entire job categories vanish while new ones appear just as quickly.

The defining risk isn’t displacement. It’s erosion of trust in companies, labor markets, and social contracts that fail to keep pace with technological reality. Organizations that acknowledge disruption early retain trust. Those that deny it lose legitimacy.

9. Higher Education Restructures

Higher education undergoes structural change. A $250,000 investment in a four-year degree increasingly looks misaligned with economic reality. Companies begin to abandon degrees as a default requirement.

In their place, trust shifts toward social intelligence, ethics, adaptability, and demonstrated achievement. Proof of capability matters more than pedigree. Continuous learning matters more than static credentials.

Institutions that understand this transition retain relevance. Those that don't will lose both trust and students.

10. Governments Face Disruption From Systems They Don’t Control

AI doesn’t just disrupt industries. It disrupts governance itself. Agent networks ignore borders. AI evolves faster than regulation. Value flows escape traditional jurisdictional controls.

Governments face a fundamental choice: attempt to reassert control, or redesign systems around participation, verification, and trust. In 2026, adaptability becomes a governing advantage.

Conclusion

Moore’s Law hasn’t slowed. It has intensified. But technological acceleration without trust leads to instability, not progress.

2026 will be remembered as the year trust became the scarce asset across markets, government, education, and digital life.

The future isn’t human versus AI.

It’s trust-based systems versus everything else.