
Is Privacy A Winnable Battle? | Andy Yen, Founder of Proton

AI is becoming the most powerful surveillance machine ever built and most people are feeding it their deepest secrets without realizing who can read them.
Dec 15, 2025 · 74 min read

Andy:
[0:00] AI, like social media, is intentionally designed to be addictive. In fact, some of these AIs will even change the way they talk to you to cater to your personality and sort of what it senses is the sort of answer that you want to hear. And that's a really scary thing. You mentioned that AI maybe knows you better than some of your best friends. I would actually argue that not so long from now, the AI could know you better than even you yourself. A lot of us humans are not so self-aware about who we are, right? AI will actually be able to exploit the weaknesses of your personality that even you are not aware of in order to compel you to keep using it and do what it wants.

Ryan:
[0:44] Welcome to Bankless, where we explore the frontier of digital privacy. This is Ryan Sean Adams. It's just me here today, so I am here to help you become more bankless. The question, is privacy a winnable battle? This is a very important conversation between myself and Andy Yen. Andy is the CEO of Proton. They are the makers of ProtonMail. I'm sure you've heard of it. A few things we discuss. Solving privacy in AI. How tech companies are screwing us. EU chat control legislation, encryption as a civil liberty, his views on crypto. Also, I think my favorite part of this episode is that it comes with a little bit of homework. There's some level ups. And I genuinely want you to consider the homework. I want you to consider leveling up your privacy in 2026, like making a project. Because going bankless is about freedom and you lose your freedom when you lose your privacy. I think a number of factors have accelerated this, most notably AI coming on the scene. And I want you to accept for a minute the framing of this episode as we get into it. What if your AI chatbot isn't your friend? What if it's a sycophantic super genius designed by a company to trick you into feeding it more data and then using that data to gain more leverage over you?

Ryan:
[2:07] Even if that's a little bit true,

Ryan:
[2:09] Don't you think it's concerning? There are ways to opt out and we explore them in today's episode. Let's get right to it. So Bankless Nation, there is a confluence of things happening in the world right now. There was a Coinbase data breach earlier this year that leaked customer email address information, phone number information. I know the crypto industry has been hit with in real life attacks on crypto people, particularly those who haven't been able to keep their information private. We live in a world of AI tools. We don't know what they're doing with the data. Like what is ChatGPT doing with the data? I don't feel like I have much control. I have personally adopted the entire Proton stack. I've done this lately to drastically improve my privacy and security posture. And I think every crypto user should go investigate it and take a look at it. So all this to say, it's a very good time to have an episode with the co-founder of Proton. Andy, welcome to Bankless.

Andy:
[3:08] Hey, thanks for having me. It's a pleasure to be here. And I think between the world of crypto, the world of encryption, which you're in, and the world of privacy, a lot of intersections. So hopefully we'll have a fun conversation to go through some of these pretty important topics, as you say.

Ryan:
[3:21] Very important topics and definitely some common cause. So you're the founder of Proton. This is one of the world's largest consumer digital privacy companies. I believe there's 100 million users of Proton worldwide. I feel like you're the perfect person to ask. So, Andy, on a scale of one to 10, how screwed are we on digital privacy today? For a normal person, they're using the typical web stack of Gmail. They've got an iPhone, Instagram, whatever, ChatGPT. How screwed are they?

Andy:
[3:49] I think it really depends on who you are, right? The average person is probably quite screwed, if I'm being completely honest. And I would say the more technical, sophisticated people are...

Andy:
[4:00] Have more means to protect themselves, but it's going to become harder and harder as, you know, if the current trends go on. So what we see actually, you know, people talk about AI, and I think you mentioned AI as well. And what AI is actually doing is it's simply an extension of a trend that's been going on for 50 years, because fundamentally AI is, it's actually a better and more efficient way for humans to communicate with computers. And so it's not dramatically changing any of our business models, but actually it's accelerating the existing models that really exist. So if you think about search, the information that you give into Google search, which allows Google to build a very detailed profile about who you are,

Andy:
[4:44] Well, an AI conversation with, say, Gemini, that is way more intimate, that is getting much more information about you. So what Google is able to do is they're probably able to accelerate by a factor of five or 10 their existing business model with the advent of AI. And this is something that is happening, you know, sort of across the board. So then it becomes harder and harder because AI now becomes the central tool of our lives. It's everywhere. So do you live in the stone age, so to speak, from a tech perspective, or do you participate in the acceleration of the loss of our privacy globally? And this is why I think to some extent, the average person who doesn't have any knowledge of this is pretty screwed. They can't take means to protect themselves. But if you're aware, then you can do something. And the problem is, I would say probably 80%, 90% of the population simply doesn't

Andy:
[5:31] understand some of the issues that we talk about, for example, on this show.

Ryan:
[5:34] Well, this episode is definitely for awareness, but also I hope toward the end we prompt some action, some small steps maybe, and first steps you could take, because privacy is definitely a journey, but there are some steps you can take that are relatively easy and you can start them now and then you can keep improving over time. That's the journey that I've been on personally. But since you brought up AI again, let's talk about AI.

Ryan:
[5:54] ChatGPT, tools like Gemini,

Ryan:
[5:58] If I'm using a chatbot-type tool, like something like ChatGPT, who can see my chats? So can employees at these companies see them? Can the government see them? Are they subpoenaable? Are they like open to everyone? Or is it like only under certain circumstances, kind of like they break the glass in order to get access to the chats? Who gets access to this?

Andy:
[6:21] Unfortunately, it's kind of all of the above. So let's break that down piece by piece. Of course, tech companies can see it because they are recording and essentially, you know, analyzing and saving every single conversation you ever have. So everything you ever type in is being recorded and more or less permanently retained.

Ryan:
[6:45] Precisely. They're not anonymizing it. They're not doing something to kind of mix it in.

Andy:
[6:50] Because you're logged in. It's your user ID. And they're actually looking at this information because that is how they try to improve these programs. So it's there. And they actively look at it and they use it because that is part of their business. And it's also being used to profile you and send you advertisements. And in the case of ChatGPT, even shopping recommendations, so they can tell you what to buy and recommend you products. And you can actually directly buy them through ChatGPT now, right? So the companies definitely see all of that. Now, what that implies is anything that a company has, it's actually obliged to give up to law enforcement. So if the FBI asks for it, if the feds come asking for it, if any police or prosecutor requests it, that information is also available. The government has access to this information. But a private party that sues you can also get access to information. And there's also, in fact, a famous New York Times case where the New York Times sued OpenAI. And as part of the lawsuit, they tried to require OpenAI to actually retain all the conversations because they wanted to use those conversations as evidence in their lawsuit against OpenAI. So that's the thing. Now, what is even worse sometimes is then there's also the inadvertent breaches that happen.

Andy:
[8:14] When you give something into OpenAI and ChatGPT, you're actually contributing to the knowledge of ChatGPT. So the information that you give it becomes part of its brain, so to speak. And if it's talking to somebody else, there's actually a very real possibility, in fact, quite high likelihood, that information that you've given it can then be regurgitated out and shown to somebody else as part of another conversation. Because that information is now in the corpus of information that is used to train and give answers to these models. And that has happened, right? If you put a password into ChatGPT and someone's very smart at prompt engineering, they can get ChatGPT to spit out the information that you gave it by accident. And then, of course, there's also bugs. I think there were a couple of examples of data breaches where an AI company accidentally left something open and revealed all the chats. Or sometimes, I think there's even one case where the chats were accidentally opened and indexed by Google, in which case anybody can get access to them. And that's the nature of information. Once you put it out there, it's out there. You cannot really take it back. You may be able to sometimes force them to delete it, but that may or may not be too late depending on your threat model. So unfortunately, the answer is it's all of the above in that whole list of things that you gave me. Because when you put it into ChatGPT, it is unfortunately no longer your data.

Ryan:
[9:36] What would be the worst case? Okay, let's talk about data breaches for a minute. Because at the beginning of the episode, I talked about a data breach that happened to Coinbase, which revealed a lot of customer AML/KYC type of information. So this would be like, you know, name, email address, physical location, all of the personal data. If something like Google Gemini or ChatGPT had a data breach, and at this level of sophistication maybe this would be a well-funded state actor or something, or, I don't know, there are some parties that could probably do this type of thing. What's the worst case scenario? Would they really have access to every user's chat logs, and be able to leverage that in the future?

Andy:
[10:18] Yeah, they would have your chat logs. And depending on what you say to ChatGPT, that could be quite compromising. And, you know, there are some people today who use ChatGPT for relationship advice, for personal advice. It is their psychologist. It is maybe even their virtual girlfriend or boyfriend, as the case may be. The information that you give to ChatGPT is incredibly intimate. It's literally a private conversation with somebody who, in some cases, is your best friend. And that is all potentially leakable, accessible, subpoenaable, and also available to hackers as well.

Ryan:
[10:51] There's been a scaling of this since you started Proton, I believe, in the early days of, you know, was it 2013 or so, Andy?

Andy:
[10:57] 2014, 2014.

Ryan:
[10:58] 2014, okay. So, a long time ago, but we have put more and more in digital format. So, you know, back in 2014, the most sensitive thing I could imagine for myself online would probably be my email address, maybe my search history. Now in 2025, it would be everything that I've ever said to ChatGPT or that it's been able to divine somehow, you know, depending on my usage pattern of ChatGPT. I mean, there's a real case for many users of these AI tools that the AIs know them better than most of their closest relationships in their lives. Like they know everything about them and they can also divine things about them based on particular patterns. So can you talk about that? Like as we are increasingly going into the digital age, it seems like we are giving more and more to the machine. And I guess if knowledge and information is power, then the machines become much more powerful, or the corporations that control those machines become much more powerful relative to the people. Like, I almost feel helpless with this. And I know I could stop using it at any time. And yet it is so useful for everyday life and to create economic output that that's not an option for me or most people listening to this. Can you talk about that?

Andy:
[12:18] AI, like social media, is intentionally designed to be addictive. In fact, some of these AIs will even change the way they talk to you to cater to, you know, your personality and sort of what it senses is the sort of answer that you want to hear. And that's a really scary thing. You know, you mentioned that AI maybe knows you better than some of your best friends. I would actually argue that in not so long from now, the AI could know you better than even you yourself. A lot of us humans are not so self-aware about who we are, right? AI will actually be able to exploit the weaknesses of your personality that even you are not aware of in order to compel you to keep using it and do what it wants. Because what is the purpose of something like ChatGPT and Gemini? Well, at the end of the day, it's engagement. They want you to keep using it. It's sort of a hamster wheel. Once you get on, they never want you to come off. So it is designed to tell you what you want to hear, to keep you coming back,

Andy:
and to ultimately make you dependent. That is not a bug. That's actually the core feature of this product.

Ryan:
[13:20] And we've seen this play out with social media algorithms, which were just basically AI lite.

Andy:
[13:24] Yes. But social media, when you do it, you sort of understand it's public. In the back of your mind, you know, if I share something on Facebook, I'm sort of expecting people to see it. But in AI, it's like a chat, which we assume by default is private, but actually it isn't. And this is why I do think it's quite scary, the consequences of this. And it's, of course, the machine knowing about you, but it's also who controls the machine. And who controls the machine are giant corporations that don't really have your best interests at heart. They're there to make money. And they're there to make as much money as possible by exploiting your data and by exploiting you ultimately.

Ryan:
[14:02] That's why privacy is very much tied to this. It's sort of, it's a political idea, isn't it? Because what we're describing is a world where there is great power asymmetry. And the large corporations with the information, chatbots and data centers, they have all the power. And the individual citizens and the individual users don't have that power. And so privacy is part of, that's what we'll talk about a bit more, but privacy is part of correcting that power asymmetry. I mean, do you see privacy as almost like a modern day digital civil liberty?

Andy:
[14:42] It is. And it's also a fundamental human right. So privacy in many ways is our last defense against the encroachment of surveillance capitalism, which today dominates the world. If you look at the largest companies on the stock exchange globally by market cap, they are really all companies who are actively engaged in AI. That is the biggest business, and the market cap of these companies added together is bigger than most countries, right? You add them up, you get, for example, you take your top two or three companies, that's bigger than the GDP of Germany.

Andy:
[15:21] So we are really at the stage where these companies have gotten so big that they're actually probably in many cases more powerful, more influential than even governments themselves. So the ability of governments to even regulate these companies is quite limited. We used to think about privacy as: we need privacy and encryption to serve as the last safeguard against the encroachment of the power of government on our individual freedoms. But these companies today are bigger and more powerful than most governments. So you can almost say that the government part is irrelevant. It's the corporates that are probably even worse in many cases. And government actually, in some sense, at least in a democratic world, is supposed to serve the will of the voter. So you have some control over that. But for an OpenAI or a Google... Your vote doesn't count. You don't have a vote. So it's actually even worse.

Ryan:
[16:21] Let's talk about AI maybe a bit more because I was starting to get some glimmers of hope when you listen to some of the CEOs. And when ChatGPT first came on the scene, it was actually refreshing to see a subscription-based business model for me, who's like, I'm very aware of surveillance capitalism and kind of the Google type ad model where they're harvesting your eyeballs and mining your information. That's how they monetize you. It was great to see ChatGPT and it was a subscription service. So I was like, okay, we're going to monetize this in a different way. Then I started to see, like, you know, months later and years later, the tremendous amount of CapEx that is being spent by OpenAI and all of these models. And when you kind of run the math, I don't see how it's possible to sustain that level of CapEx and investment from a subscription-based model, just because advertising and being able to take out all of the information about individuals and groups and segments and then sell them something based on that, it's got to be always a more revenue producing and more profitable endeavor. And so I've even seen ChatGPT pivoting towards that.

Ryan:
[17:28] There was one other thing that gave me some hope, and I want you to comment on all of this, but Sam Altman said recently that he wished that ChatGPT and

Ryan:
[17:37] AI models had more privacy and more confidentiality. He said that unlike with a doctor or a lawyer, we probably need legal protections because ChatGPT doesn't have that, but we're sending it information. We're having conversations as if we were talking to a doctor or lawyer, and those classes are protected. You know, you've got attorney-client privilege and that sort of thing. We don't get that with ChatGPT. And so he was pushing for more privacy regulation in that direction. I haven't seen any of that legislation come forward, but it was nice to hear him at least acknowledge that there is a privacy problem. Anyway, take all of this. Do you think that there is a world where the existing AI companies and maybe the governments can come together and say, okay, no, this is too much. There need to be some privacy regulations here. Or maybe the AI companies decide not to monetize our data for advertising and it's a subscription service and it works a bit more like Proton does.

Andy:
[18:36] I think a subscription doesn't mean that they will not violate your privacy. No, no. Honestly, from like a Big Tech perspective: if I can, you know, trick this person into giving me his data for free, but I can also make him pay me for that privilege, why wouldn't I do both? Right? This person will actually pay me to abuse his data? Hell, I'll take the money. I'm not going to leave the money on the table. Right?

Ryan:
[19:04] And he'll also harvest his eyeballs and sell him some ads.

Andy:
[19:07] Yes, exactly. Why not? We can collect even more money. And that's what these businesses are about. It's all about money. So fundamentally, it's a question of business model. The business model here is monetization at all costs. These are profit-driven companies that care only about profit, and they will squeeze a dollar out of you any way they can. So they're happy to take your subscription money and abuse your data at the same time. And this is what they do. So a subscription, I think, is not a strong enough safeguard to say that, you know, oh, it's private. You really have to go down to the business model, the business ethics, also what the business stands for. That's the key thing. Now, I think Sam is talking about, oh, you know, it'd be great to have some government regulations around privacy, et cetera, et cetera. He's probably more thinking about his New York Times lawsuit, right? He wants the government to protect him from third parties subpoenaing his information that he's collecting. So what he's basically saying is, I want regulation to ensure that the only person that can abuse your data is me and nobody else, right? That's effectively what he's saying. So it's regulation in sort of a, let's say, very self-serving way. He's definitely not asking, oh, let's have regulation that prevents me myself from abusing your data. No, he just wants everybody else to be locked out of his ecosystem so he can have a monopoly on, you know, your information. So yeah, I wouldn't say that just because he says that, he's actually going to go in that direction, because, you know, history has proven otherwise again and again. And Sam's a known quantity, right? He's been in the Valley for a long time. People know what he's about by now.

Ryan:
[20:35] Andy, would you go as far as maybe my intuition did and say that the only way AI companies are going to be able to show some return on investment for all the CapEx they're spending is the surveillance capitalism business model? Like that's the only way you can actually make investors whole and continue paying for the chips and data centers and energy.

Andy:
[20:58] I think in the long run, so here's the interesting thing about AI and sort of all technology shifts. In computing, there's a concept called Moore's Law. Are you familiar with Moore's Law? It's basically that computing power doubles every 18 months. And somehow it's amazingly held up for the last 30 years. It's a bit similar with AI. AI as a technology is going to rapidly commoditize. What would today cost maybe a billion dollars to train may in five or six years' time only cost 10 million. So I think there's two sides. One is the cost of AI is going to go down probably exponentially with time. So these giant cost projections that people are putting out today of how much it costs to build AI, that may not even be true. They may say, oh, it's a trillion dollars, but that could end up being only 100 billion over time. So I do think there is a way to make the business model work over time. This, I think, we can be pretty confident about. But I do think some of the promises that have been made and advertised today in terms of how much money we're going to spend, how much we're going to invest, how much we're going to build, those are unrealistic, not grounded in reality. And the businesses today, most AI businesses, are highly unprofitable.
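
To put rough numbers on that claim, here's a quick back-of-the-envelope check. This is a sketch: the $1 billion and $10 million figures are from Andy's example above, and the implied halving rate is derived from them rather than stated in the episode.

```python
# Back-of-envelope check of the cost-decline claim, assuming costs fall
# with a constant Moore's-Law-style halving period. The $1B -> $10M over
# five or six years figures come from Andy's example, not from data.
import math

cost_now = 1e9    # ~$1B to train a frontier model today
cost_later = 1e7  # ~$10M in five or six years, per the example
halvings = math.log2(cost_now / cost_later)  # 100x decline ~ 6.6 halvings

for years in (5, 6):
    months = years * 12 / halvings
    print(f"over {years} years: cost halves every ~{months:.1f} months")

# Prints roughly 9-11 months per halving, a faster cadence than the
# classic 18-month Moore's Law figure, which is the aggressive part of
# the claim.
```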

Andy:
[22:24] And that's okay in the early phase, but at some point investors are going to want a return. And investors may want these returns before the costs have dropped far enough on the exponential scale. And then you enter into a situation where these companies are going to be under increasing pressure to generate as much money as possible. And this is why ChatGPT gets into shopping.

Andy:
[22:42] It gets into, you know, promoting different things to you. This is why they get into, you know, browsers, so they can track your browsing activity and pop up, you know, things to sell you, because they're under this pressure to generate money.

Andy:
[22:53] But I do think long-term, the business model works. It's just going to take a while. And these companies may not have so long. So they're forced to be increasingly aggressive in abusing your data. And unfortunately, that's their business. And this is why when Proton, you know, we built our Lumo AI, we did it in a different way, because we realized that actually you need to have a different business model, or else it simply is going to lead to really bad outcomes for the users.

Ryan:
[23:18] Let's talk about the differences maybe between Luma and some of the existing ones. So Luma is Proton's AI.

Andy:
[23:24] Yeah, Lumo, Lumo. It ends with an O.

Ryan:
[23:26] Lumo is Proton's AI, and it preserves all privacy, encrypts all data. Let's say Sam Altman or Sundar from Google, they listen to this interview, and they call your bluff, and they say, no, Andy's got the wrong idea. This was never about surveillance capitalism. This was like, we always wanted to kind of, like, protect users. Could they just turn on encryption and have none of their employees have access to any of the data and have none of it be subpoenaable? And would that be legal? Is there anything technically or legally preventing them from just flipping a switch, or at least in user preferences, allowing users to say, hey, I want all of this to be fully encrypted and private?

Andy:
[24:13] Yeah, there's no technical limitation, in fact, that prevents them from doing what we're doing. What really prevents it is a business model limitation. And if we're being completely frank, it's a problem of capitalism. Capitalism drives them to make the highest possible profits. And Proton, of course, being predominantly owned by a nonprofit, doesn't have those same types of constraints. There's no technical barrier to say why they couldn't do the same thing as Lumo.

Ryan:
[24:38] Is there any legal barrier?

Andy:
[24:40] There's also no legal barrier as well, in fact.

Ryan:
[24:42] Okay, so governments around the world aren't saying, hey, we need subpoena rights just in case, you know, we're dealing with someone who is cooking up a bio device that's going to, like, kill, you know, thousands of people.

Andy:
[24:55] Well, it depends on the government. Russia and China would definitely have those requirements. But in the U.S., here in Switzerland, here in Europe, we thankfully are not there yet, right? I say yet because it's hard to tell the future, but we're at least not there yet. Let's put it that way. So what Lumo does that is quite unique is, number one, we don't keep a record of any of your conversations. Anything that is in your chat history is encrypted in a way that we cannot decrypt. So we are technically prevented from accessing your history. Number two, we don't use any of your conversations or any of your prompts to do model training and refinement. So there's no chance that your information kind of gets leaked out. And that also means, number three, our staff don't read your conversations, because they can't. And if we can't get access to information, it means if the government comes with a subpoena or a court order, even if fully legal, well, we cannot disclose information that we ourselves don't have access to. And so it's sort of the only chatbot, AI chatbot, where there's a strong technical guarantee that your conversation stays private. And that is what is different. And that's to your point.

Ryan:
[26:04] Google could technically build a similar system.

Andy:
[26:08] They just don't have the economic or business model incentives to do so because their business model is flipped. You know, I'm a subscription business and I take all of my money directly from subscribers. And that means that my incentives are aligned with our customers. Our customers pay us because we're private. And the instant we're not private, they stop paying us. So I have a financial incentive to keep doing that. But if you use Google, you're not actually Google's customer. You're the product. They're selling to the real customer, which is the advertiser. And that is a misalignment of incentives, which is never going to really put any real pressure on them to protect your privacy, because protecting your privacy, unfortunately, goes against their fundamental business interests.

Ryan:
[26:50] You know, Google used to have this expression, like, don't be evil. And I think in crypto, we adopted this expression, can't be evil, because this is what encryption really allows us to do. And I want to ask you: when you're talking about the chats, you don't have access to them. So just to be clear, they're fully encrypted and you have no way to access them. Proton has no way. No employees of Proton, no government subpoenas have any ability to actually decrypt the encrypted conversations. Is that correct?

Andy:
[27:21] Yeah. So when we talk about Lumo, your chat history is saved, but it's encrypted in a way

Andy:
that we cannot decrypt it. So it's encrypted essentially with your private key that we don't have access to. We just have an encrypted copy of that that, you know, we cannot decrypt. And that is the technical difference. And by the way, it's all open source, so people can take a look and see how it works. But yes, your history, you know, whatever sensitive discussions you have had with Lumo, that's your data, and we cannot get access to it.
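
As a rough illustration of that zero-access design, here is a minimal sketch using the Python cryptography library. This is an assumption-laden toy, not Proton's actual implementation: Lumo's real, open-source code uses a full key-management scheme rather than a single symmetric key.

```python
# Toy sketch of zero-access encrypted chat history: the key lives only on
# the client, and the server stores ciphertext it cannot decrypt.
from cryptography.fernet import Fernet

# Generated on the client and never sent to the server.
client_key = Fernet.generate_key()
cipher = Fernet(client_key)

# The client encrypts each chat message locally before syncing it.
message = b"a sensitive conversation with the assistant"
ciphertext = cipher.encrypt(message)

# The server, its staff, and anyone subpoenaing the server only ever see
# this ciphertext; without client_key it cannot be decrypted.
server_copy = ciphertext

# Only the client, holding client_key, can recover the history.
assert Fernet(client_key).decrypt(server_copy) == message
```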

Ryan:
[27:47] Very cool. All right. What model does Lumo run under the hood?

Andy:
[27:51] We use all of them. So, you know, but only open source models. We're strict on open source models. So we have, for example, models from Mistral. We even have, actually, OpenAI's open source model. We've got the Chinese open source models as well. Like the DeepSeeks. Yes, yes. And Kimi K2, you know, models like that. But of course, then we bring them in, we modify them, and we make sure actually we use the best model for every single query. And this is important because, you know, all the models have their own biases. So if you were to ask a Chinese model certain questions, it wouldn't give you, let's say, the correct answer, right? Sure. And we ensure that we do give the correct answer. And we also try to be as neutral as possible. You know, we don't want to be a right-wing model. We also don't want to be a woke model. So we try to, you know, train and calibrate our systems to be sort of, let's say, as neutral as we can with as few biases as possible. And this is something that we actively do.
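
To illustrate the per-query routing idea, here is a toy sketch. The model names and the classification heuristic are hypothetical placeholders; Proton's actual selection logic is not described in the episode beyond picking the best model for each query.

```python
# Toy sketch of routing each query to a suitable open-source model.
# All model names below are hypothetical placeholders.
OPEN_MODELS = {
    "code": "mistral-code",       # hypothetical code-tuned model
    "long_context": "kimi-k2",    # hypothetical long-context choice
    "general": "gpt-oss",         # hypothetical general-purpose default
}

def classify(query: str) -> str:
    """Crude heuristic standing in for a real query classifier."""
    if "def " in query or "import " in query:
        return "code"
    if len(query) > 8000:  # very long prompts need a long-context model
        return "long_context"
    return "general"

def route(query: str) -> str:
    """Pick which open-source model should answer this query."""
    return OPEN_MODELS[classify(query)]

print(route("def fibonacci(n): ..."))  # -> mistral-code
```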

Ryan:
[28:49] Open source models are pretty fantastic. I mean, it is the case that they're close to kind of what the frontier models, the closed-source models, are actually providing. If a user was switching maybe from Gemini or ChatGPT to Lumo, would they notice anything? Would they, like, lose anything? What would they lose? I think one of the ways ChatGPT has people locked in is it now has kind of a memory of all of their chats. And so it can recall context. It's also very, like, the interface works fantastic. It's very easy to use. How about Lumo? How closely is it able to replicate all of the bells and whistles of some of these frontier chats?

Andy:
[29:30] Well, Lumo has been on the market for five or six months. And these guys have, let's say, been out now at this point for over three years. So we will always be a little bit behind, let's say, the cutting edge, because we simply haven't been around as long, right? But as you say, what is interesting about the AI revolution is the gap between the best open models and the proprietary ones is really, really quite small. And that means that even if we're not 100% of the way there, it's actually pretty close. And we're quickly adding to the feature set to, you know, close that gap. So you talk about memory. Memory is something that we're actually working on right now that is probably going to come to Lumo within the next month or two. We have a major release essentially every two months. And it's a field that is improving very rapidly. So I think it's a gap that we'll close. And it's a gap that is pretty easy to close because of how good open source solutions are in this particular tech revolution, which is quite rare when it comes to tech revolutions overall. And I think that's quite important. On the topic of memory and sort of personalization, in fact, there is a possibility for you to personalize Lumo. You can tell Lumo how you want it to talk to you and how you want it to respond. But there are certain, let's say, features that we are going to decline to copy because we don't think they're right for society. So...

Andy:
[30:46] I don't really want Lumo to begin to change its answers and behave differently because of inferences it has made about your personality. You can do that optionally, if that's what you really want, by opening up the personalization and instructing it to do that. But by default, it's not going to do that, because I want to avoid some of the filter bubble stuff that we had on social media.

Ryan:
[31:12] Manipulation? That's what we call it in a human relationship.

Andy:
[31:14] Yes, yes, yes, yes. Yes, manipulation, right? I want to avoid the manipulation because I think that's harmful. And social media, the problem was if you were on the left, it kept feeding you more left content and it made you more and more extreme. It did the same to people on the right. And then we live now in a completely polarized world and then we wonder why, right? So I think Lumo, and AI that's run responsibly, should not cater to and try to reinforce our worst impulses, even if that gives higher engagement. It should actually try to do more to steer people in society towards the center, to see both points of view. And that means, I would say, probably less personalization by default, because personalization is what gives rise to a lot of that. So some people say, oh, well, but the fact that you don't do this is a bug. It may be, I'll say, a product bug, but it's probably a feature for society. And this is a trade-off that we try to get correct.

Ryan:
I think it's a long-term feature for the individual person too, right? I mean, a lot of social media, the dopamine, all of these things, the sycophancy from these chatbots, it's not good for people. It might feel good in the moment and might cause you to spend more time with the chatbot, but it's not necessarily good for your overall well-being, right? This really gets back to what I think we all need and want from AI, and what I'm not sure we will get at the end of the AI rainbow with some of the large companies pursuing this: I want an individual AI, and hopefully at some point an AGI, that protects my best interest and works for me and represents me and doesn't work for some other company and doesn't try to manipulate or harvest me or sell me something or use me for some sort of purpose or influence me. Like if an AI is trained as a lawyer, let's say, I want to be able to trust my AI as that lawyer. I want it to work for me on behalf of me to protect my civil liberties in a court case. I could tell it confidential information. It's not going to rat me out, you know? And every individual, I think, actually needs that protection in a world where everybody, large companies are going to have all of these AIs. Like that's how you actually make it democratizing. And I'm not sure that the path that ChatGPT and Google have for us is going to end up with a self-sovereign AI that sort of works on behalf of the user.

Andy:
[33:49] Yeah, it doesn't get there because it's not profitable enough. It's not their business model fundamentally. And the reason why at Proton, you know, we transformed the business into having a nonprofit as its biggest shareholder is because that is the way in which you resolve this, you know, conflict. So it's kind of funny, you know, because you can sort of see intentions from direction of travel. Sam Altman has spent the last three years trying as hard as possible to stop being a nonprofit. Whereas Proton went the other direction and actually went from a for-profit company that could have stayed for-profit into something that was primarily owned by a nonprofit. And I think that speaks a lot about intentions, of course. But you need to have that structure. That is the essential structure you need to have in order to ensure that you can carry out sort of the vision that you spoke of in the long run. And what a foundation structure really mandates is a legal structure that obliges you to put society's interests above financial self-interest. And this is the basis of a nonprofit foundation under Swiss law. And that's a key innovation from a business model standpoint. Because it gives you the flexibility and the freedom, actually, to make a decision that is good for the customer, but maybe not always best

Andy:
[35:05] for the bottom line.

Ryan:
[35:07] Let's talk about that. So in crypto, we're actually familiar very much with foundations. And, you know, a notable foundation that comes to mind is the Ethereum Foundation. And I think they might be actually registered in Switzerland as well.

Andy:
[35:20] They're all here. They're all here.

Ryan:
[35:22] They're all here. Okay, okay. And so just like you guys. I guess maybe let's talk about the pros and cons of that. So the foundation model: many of the benefits are what you said, but sometimes the disadvantages can be they can get stuck in bureaucracies. They can't move as fast as companies. They're not as well-funded. They're not as aggressive. They can't get maybe the talent from outside that they need. And so when you talk about Lumo being sort of a self-sovereign AI that works on behalf of its user, I want Lumo to be very well-funded, because I want it to be a good product. I want it to be a better product than ChatGPT. And in order for it to be a better product, you need the resources to hire the talent to make it a better product. And you also need the business model that supports it, right? The thing with AI is inference costs are high, the compute costs are high. And I can see where a ProtonMail kind of works on a subscription-based model because that's relatively static and its storage costs, whatever, are low. When you get to AI, I mean, does a $20 a month subscription even pay for AI when I'm going to be cranking on the thing and boosting up your server costs? So can you talk about the model and then how that fits in the foundation to make something like this sustainable?

Andy:
[36:40] So, maybe an unpopular opinion among your audience, but I would actually reject a bit of the comparison with the crypto foundations. Because, if we're being, let's say, completely frank, a lot of the crypto foundations that were created in Switzerland, and this was a lot of them in 2017, around sort of the ICO era, as we call it.

Ryan:
[36:58] Yes.

Andy:
[36:59] I mean, let's be honest. Most of those are scams, right?

Ryan:
[37:03] Many of them were.

Andy:
[37:04] The vast majority. And these foundations were not here for social benefit, right? These foundations were created because it was a convenient legal structure to legally launder large amounts of money received from unwitting investors who 99% of the time were defrauded at the end of the day. So I actually felt it was a bit of a mistake for Switzerland to cater to this business.

Ryan:
[37:32] It was early days.

Andy:
[37:33] They didn't realize, you know, a lot of those things happened. And some of these foundations are still around. They're still around, having cashed out tons of money to the founding teams. And then what happened? They went off and they did things that ultimately didn't have utility. Ethereum maybe is kind of one of the rare exceptions where I suppose some value was created. But a lot of these promised, you know, we're going to have this new blockchain to do this thing that will transform the world, and then it's vaporware, it hasn't shown up, and they've just been sitting around collecting salaries year over year, not shipping anything, and these networks are dead.

Ryan:
[38:06] Proton, you guys had your opportunity to launch a coin in 2017 and you didn't.

Andy:
Right? We did, we did. The problem is we had all the bankers and lawyers in Zug. We're in Geneva, so we're not in the ecosystem, but Switzerland, same country, right? They all showed up and said, oh, you know, we're going to do this great ICO for you. You're not going to have to give up any equity. You're going to create a new token or coin or blockchain.

Ryan:
[38:30] And you had to resist this.

Andy:
[38:31] And they promised us at least 100 million. And some of these things were raising billions. You have a big brand. You can do all this. And it was like, wow, this is great. Actually, I went to Zug, in fact, because I was intrigued. And it's like, if someone offers you a billion, you've got to go and hear them out, right? Yes. So I went there and I went up to the board. And of course, it was tempting. And they were signing entrepreneurs up left and right every single day.

Andy:
[38:55] But I realized, like, if I go down this path, what you're essentially asking me to do is defraud the customers and the community that has put great trust in the business, for essentially personal gain. And it was something that was simply incompatible with what Proton stands for. It's something that is incompatible with my values. I could not personally do it. So we refused. And then people looked at us like we were idiots. It's like, okay, you know, why are these people so stupid? People are raising 100 million left and right, and these guys are going to sit here and struggle in Geneva, barely having any money to get by, and try to grow the business. Like, we looked like idiots, right? But in retrospect, I think it was the right thing to do because it just wasn't correct. It was simply a fraud, and we didn't want to be involved in that at all, right? And now, what makes, I think, the Proton foundation structure different? So first of all, it's not created to defraud people, right? This is the number one thing. And the assets of the foundation didn't come from outside, you know, investors or users who put money in. That wasn't what we did, right?

Andy:
[39:58] But you're correct. It has to be sustainable. This is the key thing. A foundation technically doesn't have a profit interest. But if all your revenue is coming from a couple of donors, guess what? You're in the pocket of those donors. You work for those donors. And this is why Proton has what I call a hybrid structure. It's not purely a foundation. Actually, it's a foundation as the biggest shareholder of a for-profit company, because profit still needs to be there. But this is sort of a self-fulfilling prophecy in that the two sort of reinforce each other. And the way it works is the company does not have to take decisions that are bad for the users, because it's not going to come under pressure from its primary shareholder, a nonprofit, to do that, because a nonprofit actually cannot do that. And if the company were to go out and, let's say, tomorrow we decided to go into a surveillance capitalism business model, well, the foundation as the controlling shareholder is just going to block that. And that's the end of that, right? It's never going to happen.

Andy:
[40:56] So the foundation actually gives the company the freedom to do the right thing. But the foundation has as its asset actually a big shareholding in a highly profitable business. And that means the foundation doesn't need to go out and beg for money to sustain itself. It can collect the money from the company to operate on its own. And that means the foundation is actually independent, completely independent of any outside force. And so I think it's sort of a novel structure. If it was just a company, it wouldn't work. If it was just a foundation, it also wouldn't work. But the combination of the two of them together with this shareholding structure, this gives us a solution that is, as I say, self-reinforcing. And that is what we did that was really kind of innovative from a business setup standpoint. It hasn't actually been done before. There are really, I think, very few examples of this. So I'm under some pressure, because if we fail, then we've shown the world that this model doesn't work. If we succeed, hopefully others will say, hey, I can do business differently. This worked for Proton, it could work for us. And I think this is the future of how you get to responsible capitalism.

Ryan:
[41:59] That's fantastic. And I do wish Sam Altman would listen to this episode because that would be instructive in his organization of OpenAI, although that ship has probably sailed.

Andy:
[42:08] That ship has sailed for them, yes. Yeah, okay.

Ryan:
[42:10] You said highly profitable. So you're indicating that Proton is highly profitable right now. Specifically when it comes to the Lumo model, so in kind of the AI feature set, that seems somewhat different than some of the other subscription offerings at Proton, in that it has some higher variable cost when I'm pounding inference on your model. How is that priced? And how do you expect to make that sustainable?

Andy:
[42:35] Proton has, from the beginning, always been involved in unsustainable businesses that we're allowed to continue because we have stakeholders who are not purely profit-driven. And I'll give you kind of an example of this. We today have one of the world's largest free VPN services. And unlike most free VPNs, which monetize your data and abuse your privacy to make money, our free VPN is, in fact, not monetized. Doesn't log, doesn't track, doesn't even have a bandwidth limit, right? You can use it as much as you want. There's no limit. But not only that, we also spend millions of euros every year in R&D to ensure this VPN works in Russia and works in Iran. And by the way, these are two countries which are under sanctions, so even if these users wanted to upgrade to pay us, they can't, because there's no PayPal, there's no, you know, credit cards.

Andy:
[43:36] Yeah, they're locked out of the banking system. And yes, they could use crypto, but that's, let's say, a small percentage of the population, right? So we're actively investing millions to be present in a market where there is no prospect for monetization, because legally it would be illegal for them to pay us. So that's clearly a money-losing business. And Lumo, well, it's a bit early to say on Lumo, but it may yet turn a profit at some point. But I guess the point is we don't need to turn a profit on all of our businesses, because we are not under pressure to do that, right? If I was backed by VCs, they would have told me years ago, you know, kill off your Russian business. This is stupid. This is a waste of money.

Ryan:
[44:20] I get that, yeah.

Andy:
[44:20] And you saw all the Western companies simply left Russia when they couldn't collect money anymore because that was the prudent business thing to do. But our structure allows us to engage in activities in business lines that are not profitable, have maybe very little prospect of being profitable, but are aligned with our mission and are good for the world. And this is why I do want people to pay for Lumo. If you're a happy Lumo user, please pay us, right?

Andy:
[44:47] Even though you're not obliged to do it, because we also have a free version of it.

Ryan:
[44:50] It's subscription-based. There's a paid version.

Andy:
Yeah, there's a paid version, but let's say the free version is pretty good, like all of our products are. Probably too good, to be honest, right? But that is our mission. Our core mission is to make privacy accessible. I don't want privacy to be a luxury good. This is maybe Apple's model, right? Although they don't really believe in privacy; privacy for them is business marketing. I think privacy is a fundamental human right that needs to be available for everybody. It can't be a luxury good. And I want it to be open for anybody that needs it. And that comes at a cost, but it's a cost that Proton is willing to pay. And we are very fortunate to be positioned such that we're able to pay that, because we do have other paying customers who make us profitable. And they are essentially subsidizing the other parts of the business that today do not make money and cannot make money.

Ryan:
[45:39] Just for people listening to this, because I think they might be intrigued by the Proton ecosystem: what is the full suite of everything you guys offer? Just, like, line-iteming kind of the products.

Andy:
[45:49] This used to be so easy. I used to say we do email, we do VPN and that's it.

Ryan:
[45:53] Yeah, and that's when I last... So last time I checked on Proton, it was probably like four or five years ago, and it was just basic email. And then I went back this year and I was like, okay, I'm going to refocus on my privacy. And I was amazed by the slew of services that you now have and actually how much better everything has gotten. I mean, Proton truly is just, like, as good as Gmail and I don't notice a big difference. So you've made some big strides on the product side, but what's everything you offer?

Andy:
[46:24] Well, it's actually been incredibly difficult, because you have to get the existing products better and better, but then... People today don't really think about tech as products. Tech is actually ecosystems. Products don't really exist on their own anymore. That's right. So you must build the rest of the ecosystem out. So we have, of course, email. Then we built Calendar. There's also the VPN service. There's Proton Drive, but Proton Drive itself is a smaller product, and there's some photo capability on Drive. There's also Proton Docs, which is like... It's like the whole G Suite,

Ryan:
[46:56] Basically. You're replicating it, only it's private and encrypted.

Andy:
[46:58] Exactly. There's Proton Sheets, which is the Excel equivalent or the Google Sheets equivalent. But then there's also more as well. There's also the password manager. And I think it's honestly the best free password manager, because again, we're not so concerned about monetization. And then when we built the password manager, we had demand for a two-factor authentication app that was actually open and secure. So there's actually a Proton Authenticator.

Ryan:
[47:20] Really? Rather than Google Auth? You have the Proton Authenticator?

Andy:
[47:25] Yes. And there's also even a Bitcoin wallet, Proton Wallet, as well.

Ryan:
[47:28] Yes, yes, yes.

Andy:
[47:29] Right. And then there's Lumo, which is our privacy-focused AI. And something that is in beta, but not yet released, is actually Proton Meet, which is sort of a Zoom competitor that is end-to-end encrypted.

Ryan:
[47:40] I love this. Yeah, you guys are, it's fantastic. It's fantastic to see the growth here.

Andy:
[47:44] Well, I'm glad people appreciate it, because it is not easy to do so many things at the same time. And I don't want to do them for the sake of doing them, right? I want to do them and do them well. I know I may not be able to do it perfectly well in year one or year two, but I do eventually want all of these products to be best in class. And that takes probably a decade in general, because that's how long it takes to mature a product. But we are committing, on each product that we release, to actually make it eventually best in class. And it's a lot of complexity. It's a lot of hard work. A lot of people think that as an organization gets bigger, you sort of have more free time. You can relax a little bit. You can have a little bit less stress. It's simply not true at Proton, because the complexity continues to increase as you get bigger and bigger. But I'm excited to do it because it's something that's exciting for me and I think for the team. But it's also something that we owe the community. And this goes back to the history of Proton. Many people sort of forget about this, but Proton started through a crowdfunding campaign. It was people, ordinary citizens and users, many of them, in fact, from the crypto and Bitcoin space, who took their hard-earned money and made a crazy bet on a PhD student, a bunch of PhD students.

Ryan:
[49:03] Talk about a very wholesome ICO. That's what this was, except there's no token.

Andy:
[49:07] Right? Yeah, there was no token. But actually, it was a crazy bet because it wasn't even a speculative investment, right? They didn't get equity. What they got was a promise that when we eventually built a product, if we built a product, they would have a credit to use to subscribe to the product, which at the point they gave the money still didn't exist yet and was being built by people who had no track record of building a product, you know, with any success.

Andy:
[49:37] So I'm very, very thankful and very grateful to those initial users who took that crazy leap of faith in us. And I think we need to keep working for those customers because they have put their faith in us and we need to show, hopefully, eventually that we have earned that trust they put in us. And that's why we keep pushing. And that is, I think, a very strong motivation to also keep going. And also because I think we are doing the right thing. I think our vision of what the internet should be is the correct one. And I believe most people out there, if you ask them, do you believe more in Sam Altman's vision of the future or Google's vision of the future or Proton's vision of the future, they do align with our vision. So it's encouraging to know that I think most people out there would support and back what we're doing. And that really keeps us going. And that keeps us driven, even after all this time, to keep going as fast as possible to build more and more things and, you know, lose more sleep because we're doing too many things at the same time.

Ryan:
[50:37] One thing you didn't mention, and I don't want to add more pressure, product pressure, to you, but here's another one: peer-to-peer chat. So, you know, I think about the tools I use every day: Telegram, for instance, Discord, for instance. I noticed X added a new chat feature that is "encrypted," air quotes. Maybe just give me a rundown, because I know something like Gmail, if someone's using Gmail, Google can actually read your email, right? It's the same sort of thing we talked about with Gemini and ChatGPT.

Andy:
[51:10] Yes, exactly the same. Exactly the same.

Ryan:
[51:13] Now let's talk about chat. So Discord, Telegram, WhatsApp, Signal, the new XChat feature. What can the company see? What's subpoenaable? What's private? What's not? And is there a Proton chat coming anytime?

Andy:
[51:30] Well, I'll give you the rundown of all the ones you talked about. So Discord: no encryption, everything visible, everything discoverable, subpoenaable, it's fully open. Telegram, okay, I might get in trouble for saying this. Actually, I won't get in trouble for saying this. I'm not afraid of Pavel.

Andy:
[51:46] Advertised as encrypted, but not encrypted. Not by default. Right. So 99% of Telegram is simply not encrypted, right? And defaults matter. It's unencrypted by default.

Ryan:
[51:57] You have to go do a separate setting and create new, specific, like, chat rooms with encryption on. Yes. And I've never been involved in one of these chat rooms, because of the hundreds of Telegram chat rooms I'm in, it's all the default, which is not encrypted. Yeah.

Andy:
[52:13] And that's why there are some people who say, you know, Telegram is like an intelligence op of the Russian, you know, secret services or something like that, because it looks like a honeypot, right? I'm not going to speculate, but I would say a lot of people use it believing there's encryption, but in actuality, there isn't. So let's call a spade a spade, right? Signal: actually very well encrypted, encrypted everywhere, but encrypted to an extent that there are probably some significant usability trade-offs. It's maybe not the best for group collaboration. Group chat histories don't really appear as you expect, right? Because of, I mean, there are valid encryption reasons for that. And WhatsApp, actually also encrypted in many ways. The communities are not, but at least the DMs and small group chats are encrypted.

Andy:
[53:07] Owned by Meta, which is probably going to do everything they can with your metadata to try to, you know, mess with you and make money off of you somehow, right? So unfortunately, that's kind of not so great. There is actually a gap in the marketplace, because all the solutions are sort of imperfect in some way. I don't think it's possible to build a perfect solution either, because, you know, there's always compromises. But I do think, you know, when it comes to, like, Telegram, Discord, I assume that's probably where you spend a lot of your time, right? I think there's a gap there and I think someone can do a better job. Will it be Proton that does a better job? Well, we've got a lot of things spinning already right now. And users would probably kill us if we went off and did even more things before we improved some other stuff. So I would say not immediately, but yeah, we work for the customer. We work for the user. If users tell us, and you're a user as well, if you tell us the product should do it, and enough people say it, well, guess what?

Andy:
[54:02] If you work for the customer, you're obliged to listen to the customer. And that's how we decide how we build. If enough people say they want it, then actually we do it.

Ryan:
[54:10] Let's talk about some other pieces of the digital stack, maybe where you don't have product ambitions, but honestly, I'm just looking for advice here, because I don't know what I don't know. So let's talk about browsing and search. There's the search engine, and then there's also the browser that I use. So something like Chrome versus maybe a Firefox versus maybe a Brave. It was interesting, Andy. Gemini actually added a feature inside of Chrome such that when you turn it on, Gemini can look at all of your tabs, all of your open tabs, and actually allow you to ask questions based on what you're reading. This is like a power user type feature, right? So how brilliant would it be if I could be on a webpage and I have a question for Gemini about something I'm reading, or some graph that I don't quite understand. I say, hey, Gemini, what's this mean? Tell me about it. Give me the history of it. It's interactive like that. The trade-off, though, is I guess Gemini would be able to see every single tab and everything I'm looking at, and incorporate that into its model of me in the world. Anyway. Browsers?

Andy:
[55:18] Browsers are...

Ryan:
[55:19] What do you recommend here? What's good? What's not?

Andy:
[55:21] Yes. Well, what you see now is all the AI companies building browsers.

Ryan:
[55:26] Yes.

Andy:
[55:28] And that's not an accident. They didn't just do it for fun. They did it because they realized that if they're integrated into your browsing, they can combine your chat history with all your browsing activity. It's like exponentially increasing the data that they have. So just like Chrome is going to integrate Gemini, ChatGPT also wants you to be on a ChatGPT browser, because it's just a more effective way to suck up your data. And so I do think the choice of browser is very important. And again, here we kind of live in an imperfect world. Chrome is, well, if we're being honest here again, and this is not because I dislike the other browser options out there, but Chrome is the most performant browser. It's the most stable one, and it works with the most websites. Now, that's true also because they're sort of anti-competitive, right? They also do things to make certain sites not work as well on Firefox, to really cripple Firefox. So yeah, they didn't play fair with Firefox, I would say. And Mozilla, being largely funded by Google, had probably limited recourse against that. But in terms of overall reliability, stability, performance, Chrome is unfortunately number one. That's just the way it is.

Andy:
[56:45] Firefox has some performance issues, but honestly, they've closed the gap. It's quite good now. The interesting thing about Firefox, of course, is they've been involved in some controversy recently, because they have started to get into advertising, started to get into AI. They removed some of the promises that they had made to customers to shift towards a more commercial model, and that's raised some concerns in the community. Brave is, I think, a good option, but Brave always sort of had their Basic Attention Token and their crypto add-on, which I suppose the crypto community likes, but other people who don't want to be involved in that maybe don't like. So I'm going to give you sort of an outside choice here, one that doesn't come up very often, even though it's been around for a long time. I'm currently liking Vivaldi.

Ryan:
[57:32] That's a lot of choice.

Andy:
[57:33] Okay, why? It's Chromium-based. It isn't doing some of the crypto stuff that Brave is doing, which some people maybe are not the biggest fans of. And it's open source. And it actually works pretty well. But the space is constantly evolving, right? So if you ask me this question a year from now, maybe my answer is different. But this is the one that I think is actually a solid option among browsers.

Ryan:
[57:59] Let's talk about our phones. So one of the reasons I feel like I like my iPhone is because Apple places a priority on privacy. But I think you're going to tell me some of that is smoke and mirrors and propaganda, and maybe not as true as I hope it is. When it comes to the choice of a phone stack, what should privacy-conscious consumers and digital natives be wary of here?

Andy:
[58:31] Yeah. Well, the first thing I might say is, if somebody is spending billions of dollars putting up giant billboards saying privacy, you probably should be a little bit suspicious of why they need to spend so much money to convince you that it's private, if it's actually private, right? Actually, Apple has probably the same definition of privacy as OpenAI, where, you know, it's kind of funny, every single company has its own definition of privacy, right? And what they're really trying to do is redefine privacy. And I can give you some examples. Google, if you go to their web page or to any of the product pages: privacy, encryption, security, it's all over the place, right? And I call it privacy washing. But what is Google's actual definition of privacy? The definition is: we're going to give you more options over how we abuse your data. Privacy for them is about all the different options that they give you. This is Google's definition.

Ryan:
[59:26] It's so cynical, but I think it's true.

Andy:
[59:29] And then Apple's definition is: we're going to be the only ones who are allowed to abuse your data. No one else is allowed. Just us, right? So third-party cookies, all this other stuff, no. Just us. And Apple has a giant ad business. They do do lots of advertising, and they're putting it into their products. It's a $30 billion business today, the Apple advertising business. And they do a bunch of things that are just contrary to privacy. I'll give you a quick example: App Store fees. If you charge people that take subscriptions on mobile a 30% cut of revenue, what you're basically doing is incentivizing a surveillance capitalism business model, because a free app like Facebook pays zero. Oh no, actually they pay $99 per year, which is the developer fee, but that's it. So if you charge Proton 30% and you charge Facebook $99 a year, you clearly don't care about privacy, because you are essentially making the privacy business model a lot harder to sustain. So Apple has a bunch of things that are clearly contradictory to their privacy advertising. At the end of the day, they don't care about privacy as all the ads say; they care about money. And this is very clear when you look into it.

Andy:
[1:00:45] Unfortunately, today, mobile is a monopoly of two players. Every single mobile phone is either Android or it's iOS. And I think this is one of the hardest monopolies to break, because it cannot be broken with less than, I would say, probably anywhere from five to ten billion dollars. Why? Because the device manufacturers themselves are also complicit in maintaining the monopoly. These device manufacturers are paid by the big tech companies to pre-install certain applications, certain software, as a condition for getting access to, you know, Android, for example. And this is the whole thing behind the Epic lawsuit against Google, right? With sort of these deals that were being made. And so I feel the only way that we resolve this, actually, is regulation. I'm not, let's say, a very pro-government person in general. But in a monopoly situation, you've got to have regulators come in and say, this is a monopoly, and here are certain things that you cannot do because you are a monopoly. It's the only way, because it's gone too far now. There are only two left in the whole mobile space. There used to be BlackBerry, there used to be Nokia, there used to be other options. But now there are literally just two.

Ryan:
[1:02:02] I guess, Andy, so tell me what areas Apple is really breaching privacy on. So if my data, let's say, is in iCloud, they say it's encrypted. Is that all fully encrypted? Or, let's say, does Apple have any access to data on my phone? And how well does, you know, Face ID basically hold up? Can third parties really crack the encryption that's on my iPhone? I mean, there was some case back in the day, I think it was the FBI. Apple made much propaganda about this. The FBI couldn't crack our phones.

Andy:
[1:02:45] We held firm. But you know how that story ended?

Ryan:
[1:02:49] No.

Andy:
[1:02:50] That story ended because the FBI dropped the case because they found a way to crack it without Apple's cooperation, right?

Ryan:
[1:02:59] With their own three-letter agency stuff.

Andy:
[1:03:01] Yeah, yeah. So the FBI basically said at the end, never mind, it's okay, we didn't need the court win, we found a way in, so don't worry about it. Great. And that's why I say, look, I think among the big tech companies, Apple definitely is the best from a privacy standpoint. They do have a different business model of selling hardware, which allows them to do that. But Apple is a company that cares first and foremost about profit. Other principles kind of fall by the wayside after profit. If you look at what they've done from a competition standpoint: they were referred for criminal prosecution by a court in California for how blatantly they breached a court ruling that asked them to play fair with Epic and other developers. It's clear that this is a company that only cares about money. And every single time a court asks them to open up their ecosystem, to allow other privacy players like Proton to have a chance to succeed, they essentially engage in malicious compliance. If you go to the Wikipedia article on malicious compliance, some of the examples are Apple, right? And so it's a company that, yes, is more private than Google, but it doesn't really have a moral compass in any meaningful sense. It doesn't behave in a very ethical way.

Andy:
[1:04:19] And you see this in the way they act in every single case, right? Like, you know, the European Union said, look, you need to stop being abusive towards developers and you need to open up your App Store ecosystem. And to give you a quick example: if you're in certain countries, you can't get Proton. You can't get Proton because the App Store is the only way to get Proton, and Apple has decided to comply with a dictator in some country in removing certain apps, because that is more profitable for them than trying to fight it or pulling out of the market, right? So there are all these different examples where I just think Apple has strayed very far from what it originally stood for. It's almost like it's been contaminated or infected by this relentless pursuit of money, and only money, above all else. It's lost some of its fundamental values. So I do think it's more private than Google, but I find it very hard to put my trust in a company that is engaged in all sorts of behavior that, if you look from the outside, is really quite despicable.

Ryan:
[1:05:27] The question is, where do we put our trust? I will tell you the crypto consensus

Ryan:
[1:05:31] on this is you've got to trust the cryptography, and that's the only thing you can trust, basically. And maybe this brings in the conversation of governments, right? So we've been talking about big surveillance capitalism and some of the corporations. And there are ways where our governments are a check on that power. And they're democratic institutions, at least the democratic republics that we have are. And so they should represent the people, and they should be there to support civil liberties. But we don't always find that that's the case with respect to privacy and encryption. I'm much more familiar with some of the battles that are going on in the United States. We've had battles with cryptocurrency and financial surveillance and all of these things, and we could talk about that. But there are a couple that have popped up on my radar that I want to find out from you a bit more about. Maybe you would know which ones we should focus on. But the EU chat control legislation keeps popping up. And I think, Andy, maybe you could describe this, but does this not give EU countries the ability basically to pre-check communication inside of a chat and just make sure it's not child pornography or any illicit behavior,

Ryan:
[1:06:42] insert whatever the government thinks is illicit, illegal here. And essentially, doesn't it break cryptography? Can you talk about that legislation and anything other that you're seeing coming from world governments? That's pretty alarming.

Andy:
[1:06:56] Well, actually, before I jump on that, I want to go back to the point you made about trust, about who we trust, and the crypto point of, we should trust in the crypto. Actually, sorry to break it to you, but crypto is written by people, and people manage these crypto systems and the infrastructure on which your crypto runs. So I don't think you can say just trust in the crypto. At some point, you also need to trust the people as well. And so the people part is important. You can do the crypto correctly, but, to go back to the crypto scam examples, the ICOs: those were open-source projects. The crypto was probably correct in many cases, but the people were scammers. And unfortunately, if the crypto is correct but the guy that runs it is a scammer, what do you want to do, right? So as much as we try to remove people from it, there's ultimately an element of trusting a person, trusting a team. And when I look at the services I trust, I also look very closely at who runs it, what do they stand for, what have they said, and what might their values be. And by the way, the first red flag is if you can't even find out who they are, right? Because if they're not very visible, that's like, okay, why are they not discoverable? What are they trying to hide from, right?

Ryan:
[1:08:20] Interesting.

Andy:
[1:08:20] Yeah. So, sorry, I think the people part is very important.

Ryan:
[1:08:23] Going to EU chat control,

Andy:
[1:08:25] Chat control is not a new concept. The file has been open and debated and kicked back and forth at the European Commission for probably close to three years now. And what they want to do is, they want to say, okay, so I'll tell you what the pitch is. And I'll tell you why the pitch is bullshit,

Ryan:
[1:08:44] Right?

Andy:
[1:08:44] The pitch is basically: we need to be able to prevent, you know, child pornography and terrorism, and the best way to do that is that every single message on an encrypted app, before it is sent, is going to be scanned and sent to a government database, where it can be compared against known bad things. And then if it's bad, your phone needs to phone home to the government and report you to the police. And the police will look at your stuff and

Ryan:
[1:09:10] Maybe come and get you. Oh my God, that's really what it is. And by the way, would this include email? It's obviously chat.

Andy:
[1:09:17] By the way, chat control would force you to do that. Well, it's kind of, again, I may sound anti-Apple here, which I'm honestly not, because I try to be pretty objective in general, and I would say they're better than Google. But chat control is basically doing what Apple volunteered to do a couple of years ago.

Ryan:
[1:09:36] I remember this.

Andy:
[1:09:37] Right? And then they got a massive backlash for doing it. But actually, in some sense, this is almost an Apple invention, right? Apple was the one that kind of said, hey, you know, we're happy to do this voluntarily. We will voluntarily scan your shit and then call the police on you.

Ryan:
[1:09:50] To protect the kids, of course.

Andy:
[1:09:52] That's always the reason. And I thought that was a giant breach of privacy. I don't know why Apple proposed it, but Apple made this proposal voluntarily several years ago, and then they pulled back from it. But that's what chat control, in effect, is doing. Now, fortunately, chat control keeps popping up and then getting killed. So it's a zombie. It keeps coming back alive, but it's not being very successful. And what happened recently was the Danish presidency of the EU Council tried again to introduce it and push it forward. And predictably, it ran into opposition, because it's deeply unpopular across Europe. In fact, someone even set up an email campaign to bombard people at the European Commission over this, which was super effective. But in the end, the Danish presidency removed the mandatory detection orders from the current text. So presently, it's actually not possible for mandatory scanning to be enforced on tech companies. But it is possible for tech companies to do it voluntarily.
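[Editor's note: mechanically, what is being proposed is client-side scanning. Below is a toy sketch of the concept; every name and detail is invented for illustration, and real proposals match perceptual image hashes rather than exact digests. The point the sketch makes: whoever controls the hash database controls what every phone reports.]

```python
# Toy sketch of client-side scanning, the mechanism chat control proposals
# and Apple's withdrawn 2021 plan describe. The scan runs on the sender's
# device BEFORE encryption, which is why it can coexist with "end-to-end
# encryption" on paper while defeating it in practice. Everything here is
# invented for illustration.
import hashlib

# A database of identifiers of "known bad things", pushed to every device.
FLAGGED_HASHES = {hashlib.sha256(b"known-bad-content").hexdigest()}

def report_to_authorities(message: bytes) -> None:
    print("phone home: message flagged and reported")

def encrypt_and_relay(message: bytes) -> None:
    print("ciphertext relayed to recipient")

def send_message(plaintext: bytes) -> None:
    # The check happens pre-encryption, on the device itself.
    if hashlib.sha256(plaintext).hexdigest() in FLAGGED_HASHES:
        report_to_authorities(plaintext)
    encrypt_and_relay(plaintext)

send_message(b"hello")              # relayed silently
send_message(b"known-bad-content")  # triggers the report first
```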

Andy:
[1:10:55] So Apple's crazy scheme, if they want to bring it back, would definitely be possible and they could do that under the new legislation. But the mandatory part has been removed and that doesn't mean it won't come

Andy:
[1:11:07] back. These things have a way of coming back every couple of years. But I consider, knocking on wood here, the chat control issue to actually be almost resolved in Europe, and we're not going to have mandatory breaking of encrypted internet communications for this purpose. And

Andy:
[1:11:25] that, I think, is a huge win for Europe. It's something that is, yeah, it's...

Ryan:
[1:11:30] We were hoping this would be the outcome,

Andy:
[1:11:33] And it seems like we've finally gotten there. So I think it shows that human rights still survive in Europe today.

Ryan:
[1:11:41] How do we harden this a bit more? Because, as you point out, these types of issues are a zombie that comes back from the dead and keeps haunting us. We're always playing whack-a-mole. If it's not in one jurisdiction, it's in another. If it's not one bill, it's another. If it's not one take on cracking encryption or breaching privacy, it's another. Is there some way to enshrine these civil digital liberties in some sort of bill of rights? You know, the U.S. has the Bill of Rights, and these are things that are baked into the Constitution. But they don't really address privacy. I mean, that's something you have to read into it. And furthermore, they're not as adapted to the digital world. For example, if I were to think about a modern-day digital bill of rights, the number one thing would be: you can't outlaw encryption. Every single citizen should always have the ability to encrypt their data, and the government shall have no ability to breach that, interfere with that, or make it illegal. Is that kind of what we need in order to have the zombies stay dead? And is anybody working on that project?

Andy:
[1:12:49] Yeah. If you look closely at the law,

Andy:
[1:12:52] this is enshrined in certain laws. Like, there are people that say encryption is speech. And if that's the case, then the First Amendment does protect you. And if you look at European law, mass surveillance is illegal in Europe, because the European Court of Human Rights has interpreted some of the EU statutes as saying that. So there is sort of a legal basis for a lot of this. But I do think it has to be strengthened, because even if there's a legal basis, you can always find creative ways to go around it. And for data retention, the argument is always, oh, we're just going to do it in certain situations, when there's a state of emergency, whatever, right? But then you have countries like France, which are perpetually in a state of emergency. They're like, oh, we've been in an emergency for 10 years, right? And so that's bullshit. So I do think we need new legislation that enshrines these protections. But legislation is done by legislators. And most of the people in government today are tech illiterate. They don't know anything about tech. So I hate to say it, but I think the solution is that some people need to die. And let me qualify the statement, right? Not kill them; I mean they need to die of old age.

Ryan:
[1:14:16] Progress moves one funeral at a time. That's sort of the idea.

Andy:
[1:14:19] Exactly. Of natural causes, of course. Yes. And then new people need to come in from the more tech-savvy generation, who understand things better, understand the issues better, and can put proper legislation in place. For example, if I were to go today to the European Parliament, or even the Swiss Parliament, and ask them to write new legislation for privacy, encryption, and security, I probably wouldn't do it. I wouldn't do it because I would be more worried about

Ryan:
[1:14:48] Them screwing it up. That they would do a bad job of it.

Andy:
[1:14:50] Yeah, about them screwing it up. So it's like, you know, maybe you just don't do it at all, because you could actually make it worse. And the saying is, they will kill us with their good intentions, right? So yeah, I think it needs time. We need a new generation of legislators who understand better, who are more tech native, and who you can have this conversation with.

Ryan:
[1:15:11] But Andy, it's not just being tech native. That is an essential and important component, but you also have to have these kind of classical liberal values of enshrining ideals like privacy as an individual right. Because somebody might push back and say, Andy, no, privacy is great, this is fantastic, all we're asking for is for governments to have a special key that they can use, with court approval, to unlock the data. And you don't support, you know, child abusers or terrorism, do you, Andy? Can you talk about that type of objection? Because that's a common objection that even people who, quote unquote, understand the tech might make.

Andy:
[1:15:54] Yeah, and the answer I would give is, I've never seen a backdoor that only let the good guys in, because it doesn't exist. I wish it did, but the reality is it doesn't. And then the question I would ask is, which government? Because are you talking about the government that is in power in your country today, or the one that could be in power five, ten years from now? The point of civil liberties, of fundamental human rights, the reason they're called fundamental, is because they're also here to protect us from the tyranny of the government. And the government, even in a democratic society, is often just one election away from changing.

Andy:
[1:16:32] And I see this here in Europe, but in the US, what I saw was, after the last election, half the country became terrified that their civil liberties were being infringed. But the election before that, the other half was terrified that theirs were going to be infringed, right? And that's a perfect example. You need to have these things in place so that no matter how the election goes, you are not terrified, because your fundamental rights are going to be protected. So you're not doing it to protect the present; you're doing it to protect against all possible futures. That's a forward-looking notion of why fundamental rights are required, and why, even if you don't feel an imminent threat from your government today, you should still fight for this right. And by the way, this is also not saying that we are going to allow criminality to run rampant on the internet with no checks or balances whatsoever. That's not what we're saying either. We're saying it has to be proportional. And the reason people object to mass surveillance, and the reason people object to chat control, is that it is essentially saying everybody is under surveillance by default, even if they're not under criminal suspicion. And that is undermining the fundamental presumption of innocence, which is a cornerstone of democracy. Because we as a society say that you're innocent until proven guilty. If you're guilty until proven innocent, then actually that's fascism, right? That's not democracy anymore.

Andy:
[1:18:01] And that's a value that we need to defend. So it's fundamental to democracy. Without this, you don't have a democratic society that survives in the 21st century. And this is the argument that I always give people who bring this up. Because yes, I hear this often, but it's simply not true when you think about it.

Ryan:
[1:18:17] Well, here's another related objection that they might make, and one we certainly hear in crypto with peer-to-peer transactions: bad guys use this stuff to do bad things. And I'm sure that there are bad guys who have used Proton to protect their privacy. In fact, maybe that's the only place they can get it, or one of the few places that they can get it. And so they might say, Andy, just like cryptocurrency, what you're doing with privacy is, you might have good intentions, but you are empowering criminals and terrorists and child abusers, and you're equipping them with privacy technologies. Over at Gmail, if they used those tools,

Ryan:
[1:18:56] a government subpoena could go access their emails; with Proton, the government doesn't have that ability. So what you're doing is kind of a net bad. What do you say to that?

Andy:
[1:19:03] Well, the funny thing is, this is something that can be proven with data. If you look at the number of law enforcement requests that Proton gets, and you look at the ratio of that compared to the number of users, it's not worse than Google, in fact. This is very surprising, but the data is clearly there. It's public and it's there. So the notion that criminals more strongly prefer platforms like Proton is kind of a false one. And the reason for this is kind of simple. Let's say you're using Proton to send a bomb threat. Well, actually, you don't care that your message is encrypted. In fact, you want the other side to read it. So, simply, the data...

Ryan:
[1:19:47] But, Andy, to steelman this a bit more: I might do this on a Proton VPN so that authorities couldn't track my traffic. And I might do it using other privacy tools that allow me to do this kind of nefarious thing. Yeah, yeah.

Andy:
[1:20:00] Yeah, so I agree. But the first point is the stats don't show it's the case, right? Okay. But let's assume the stats were to show the opposite.

Ryan:
[1:20:10] Yeah, let's assume that.

Andy:
[1:20:11] And actually there were more criminal users on encrypted platforms. Well, I'll give you another stat. I'm willing to bet that the rate of cybercriminality in a fully surveilled society is lower. I'm pretty sure the rate of criminality is lower in China and in North Korea compared to the US and Europe. But you definitely pay a very high price for that. And if you were to ask the people in North Korea, do they feel more secure because of the total surveillance their society imposes on them? Well, they would tell you in public, yes. But in private, when you can ask them and actually give them real privacy to express their true feelings, they would probably say no. And this is the core concept. In a democratic society, in a society that gives privacy to its citizens, there is always a negative externality. There is a cost to that. But it's a cost that we should be willing to bear, because the cost of the alternative, a society without privacy, is actually so much higher. And that is all there is to it, right? We can never get to a world where we can prevent all crimes that occur online. But we don't want to get to that world, because that's actually a much worse world than the one we live in today.

Ryan:
[1:21:38] Well said. And I think our politicians also need to understand that aspect of it, in addition to understanding the technology, to pass some good privacy legislation and regulation. Andy, we've been talking mostly about the Proton stack of tools, which is around what I would call communication types of protocols. So it's email, or it's a chat back and forth with an AI, that sort of thing. In the crypto world of things, and I mean cryptocurrency here, not general cryptography, we think a lot about financial privacy. And this is, I would argue, a subset of communication, right? You're communicating economic value, basically. But I'm wondering if you would go that far, because there are some people in the privacy space, in some jurisdictions, that treat financial privacy as different from communication privacy. So, for example, they might respect citizens' rights to communication privacy, but when it gets to financial privacy territory: we can't have that. You must have AML/KYC. We're not necessarily comfortable with the whole peer-to-peer type thing. We need to know who the person sending the money is, and who the receiver is, at all times. We need the ability to blacklist, whitelist, and pull the plug.

Ryan:
[1:22:55] In the crypto project,

Ryan:
[1:22:56] We say, no, I mean, financial transactions should be peer-to-peer. And by the way.

Ryan:
[1:23:02] They can be and should be private.

Ryan:
[1:23:04] Some jurisdictions also don't like that aspect of it. We've had many court cases around that. What's your take on this, on financial privacy specifically?

Andy:
[1:23:12] It's a matter of perspective. Yeah. Today, we see a lot in the news about Venezuela. And one of the countries with the highest Bitcoin adoption in the world, in fact, is Venezuela. And there are three reasons for that. One, massive inflation running out of control. Two, the government suppressing all dissent, including through controlling financial institutions. And three, just privacy. You need to be able to move money in and out of the country without being detected, because otherwise the government may, well, they may tax it, they may steal it, they may use it to target you; a bunch of things can happen. There is really no difference between freedom and financial freedom. If you don't have financial freedom, I would argue you don't have actual freedom either. And so I see it as kind of a similar concept. You need both. So my view, actually, is we must have financial freedom. This is something that we should fight for. And it wasn't so long ago that we had this. We had cash for many years, and cash was...

Andy:
[1:24:18] Well, maybe this is not popular in the crypto crowd, but I think cash is one of the best privacy technologies out there.

Ryan:
[1:24:23] It's actually extremely popular in the crypto crowd, and we would 100% agree.

Andy:
[1:24:27] Yes, yes. So banning crypto is a bit like saying, I'm going to ban cash. And that's the analogy that we give. You would never ban cash. It would be unacceptable. So I think that's an argument that we should advance in this space. And we should advance it because it's the correct argument, actually.

Ryan:
[1:24:45] I agree with that argument. And I'm curious, your take on this.

Ryan:
[1:24:50] So you're definitely a privacy advocate, with many shared values with people in crypto in general. What's your take on crypto right now? I imagine that you probably, like me and many listeners, see some of the benefits here, but have also seen some of the scams and the downsides as well. What's your take as you look at crypto right now?

Andy:
[1:25:12] I think the biggest challenge in this space, and in our space, because we're also in this space as well, in fact, is that the ratio between the legitimate and the illegitimate-slash-scammy is off.

Ryan:
[1:25:28] Yeah.

Andy:
[1:25:28] And at Proton, that ratio is something we also guard very carefully. We know there will always be illicit uses of our platform, but we need to keep that ratio as low as possible. In our case, we're talking about a small fraction of a percent. Because above that, you actually get tainted in a way that is not conducive to the future success of the movement. And in crypto today, when we talk about illegitimate uses, scams, et cetera, it's not a fraction of 1%. It's unfortunately probably, I don't know, 30%, 40%. It's substantial. And I think crypto is always going to have a limit to its influence, its growth, its scalability, and how mainstream it can be if we as a community do not tackle that problem. I don't have the answer, actually, for the best way to address it. I think that probably requires someone smarter than me who has thought longer about this problem. But I think we need to do it. If we don't, we don't move crypto into the mainstream. And we need to do it in a way that preserves our values as well.

Ryan:
[1:26:40] How do you do it at Proton? So is it a matter of kind of attracting the good guys, more good guys, because then the good guys outweigh the bad guys if you're able to bring them to the platform?

Andy:
[1:26:50] It's attracting the good guys, but it's also making it crystal clear that we are not here to serve the bad guys. And so it's as much positioning as it is, you know, technology. And of course, you do everything you can to try to block abuse. You try to, you know, find the suspicious user patterns, things that don't look...

Ryan:
[1:27:13] Without breaking your principles.

Andy:
[1:27:15] Yes, without breaking encryption, you do what you can. And also it's reacting quickly. When I'm made aware of users who are using it for illicit purposes, there's no tolerance. They're gone. I don't care if they're paying me or not paying me. That's a breach of terms of service. It's a breach of the law. They're banned from the platform. That's it. And I do think if you look at crypto, there are too many platforms that probably became aware of illicit activities that were happening on their platform.

Andy:
[1:27:46] These activities were profitable for them. So they probably tolerated the activities for far too long. And that made it sort of an environment that welcomed other illicit actors because

Andy:
[1:27:59] they felt safe within the space. So it's about actually creating an environment that is hostile to actors who are not going to be good for the long-term reputation and long-term brand of the crypto space as a whole. Maybe we need to call out scammers for being scammers instead of fêting them at crypto conferences. Maybe they should be blacklisted and not allowed to occupy our public spaces and the public imagination. I always say some of the most famous figures in crypto all have convictions. That is not a good thing, right? Some of them say it's a badge of honor, that we fought the system, but no, you're just criminals. And that's not positive overall if you want crypto to become mainstream.

Ryan:
[1:28:48] Are you planning to do more, Andy, with your crypto wallet in particular? I believe it's a Bitcoin wallet now. Obviously, you could expand that to, say, Ethereum. You could get into decentralized finance. One of the interesting things about Bitcoin is there's no privacy on Bitcoin. There is pseudonymity, of course, but if you send from one Bitcoin address to another, it's all on chain. There are various organizations that can data mine that and figure out the underlying wallet identity. So it's not truly private. It is peer-to-peer. But in general, when you think about crypto products at Proton, what's the idea here?
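[Editor's note: the data mining Ryan refers to typically starts with address clustering. Below is a toy sketch of the best-known heuristic, with made-up data; real chain-analysis pipelines stack many more heuristics on top of this one.]

```python
# Toy sketch of the "common-input-ownership" heuristic that chain-analysis
# firms apply to Bitcoin: addresses spent together as inputs to a single
# transaction are presumed to belong to one wallet, and union-find merges
# them into clusters. All transaction data here is made up.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# Each (hypothetical) transaction lists the input addresses it spends from.
transactions = [
    {"inputs": ["addr1", "addr2"]},  # addr1 and addr2 co-spend...
    {"inputs": ["addr2", "addr3"]},  # ...and addr2 later co-spends with addr3
    {"inputs": ["addr9"]},           # unrelated
]

clusters = UnionFind()
for tx in transactions:
    first, *rest = tx["inputs"]
    for addr in rest:
        clusters.union(first, addr)

# addr1, addr2, addr3 collapse into one presumed owner. A single KYC'd
# exchange deposit from any of them can then deanonymize the whole cluster.
print(clusters.find("addr1") == clusters.find("addr3"))  # True
```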

Andy:
[1:29:24] We also believe very strongly in focus within the company. And I would want to be, let's say, the world's best Bitcoin wallet before I go off and add other things. And the other things that we add would also depend on the demand from the community: what are people looking for, what are the key things. Bitcoin today is actually the most commonly used coin within our user community, so it makes sense to support it. Everything else is a really, really distant second or third. So we think about our mission of best serving our community, and the best thing to do right now is to take Bitcoin and make it as good as possible within our wallet, because that is what the vast majority of people on Proton today are actively using. And this could change with time. Blockchains come and go. There's nothing to say that something new couldn't come out in a few years and become very big because it has better qualities than Bitcoin. And then maybe a big proportion of our community starts to use this new thing, at which point we would also be obliged to adopt it, because we're here to serve the community, ultimately. So that's how I look at it. I look at what work brings the biggest benefit and biggest value to our user community, and what they are asking for. It's very simple, community-driven decision-making.

Ryan:
[1:30:47] So Andy, let's get practical now as we maybe bring this to a close. Somebody listening to all of this so far might be thinking: I agree with the principle, but I'm busy. Perfect privacy in today's day and age? It's impossible. It's incredibly difficult, very time-consuming, too much mental overhead. Why bother? What do you say to that? Is there any advice you'd give to a normal person, like three to five things they can do to improve their privacy posture right now that aren't overly burdensome?

Andy:
[1:31:20] Well, people are lazy. So I think three to five is already too much, right? Maybe we leave them with one. And the one that I would give is this. Actually, a lot of people have asked me, you know, Proton always had a vision to build a whole ecosystem. Why did you start with email? Intuitively, it makes no sense, because email is sort of a dying medium of communication. Well, the demise of email has been predicted continuously for 30 years.

Ryan:
[1:31:46] It's still around. I'm still an email maxi myself, but I get that people aren't.

Ryan:
[1:31:51] Yes.

Andy:
[1:31:52] And you know what? Email will still be here 30 years from now, in fact, I predict. Because email is not a means of communication. Email is actually identity. It is your digital identity, which I would argue, in the 21st century, is the only identity that matters. And when you switch from Gmail to ProtonMail, what are you actually doing? You're not finding a new method to communicate, because nobody communicates on email anymore. Even I don't communicate so much on email. But making that switch is incredibly powerful, because Gmail isn't email, right? Gmail is identity. It's a login state into your account. It is the thing that allows Google to collect your data from across the entire web: all the sites you visit that run Google Analytics, all the cookies that are dropped across the web, all the files you upload, all the communication you have, everything you do on Google services, every video that you watch on YouTube. It's all linked to your Google account. And you know how you can prevent Google from having that information? Just log out.

Andy:
[1:32:59] So switching from Gmail to ProtonMail is simply saying, I'm going to erase my identity from Google. I can still go on YouTube and watch videos and whatever, right? But it is no longer having all the information on the internet tied to a single profile of who I am. And you have effectively opted out of the Google system by moving your identity to a different provider who you trust more. And thanks to GDPR, this is pretty easy now. There's an Easy Switch functionality. Google is required to let you export. So you can go to a Proton account, link your Gmail account, and move all your data over; it's just a few clicks. So I think-

Ryan:
[1:33:35] And Google will truly delete your data?

Andy:
[1:33:38] Well, Google will let you transfer everything. And they're also obliged under European law to delete your data as well.

Ryan:
[1:33:43] How about US law? Would that be for US listeners as well?

Andy:
[1:33:47] I think in the US it is maybe not obliged in the same way, but there are many state laws which do require it. So Google does do it in the US as well, because they kind of have to.

Andy:
[1:33:57] So moving from Gmail to ProtonMail is sort of opting out of the Google ecosystem. It's logging out of Google. It's preventing them from having a profile. It doesn't mean you can't use any Google services, but if you're not logged in, it's completely different. The amount of visibility they have on you is different. And it's easy now. It's a couple of clicks, and then you're done. So that's actually how you start. Then, of course, there are all these other things you can do, right? But that's the main one. I think the main one is to protect your identity and separate your identity from a big tech ecosystem. You can still have, like I still have, for example, an old Yahoo account, right? When I'm at McDonald's and they ask me for an email to use the free Wi-Fi, I'm not giving them my Proton. I'm going to give them that Yahoo to collect the spam,

Ryan:
[1:34:37] Right?

Andy:
[1:34:37] But there are things like this. I think that's the one thing you can do that probably makes a big immediate impact. And that's why we started with email in 2014, which everybody said was already dead back then. But it's not, because it's your ID.
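[Editor's note: Easy Switch is the few-clicks path Andy describes. For the curious, the underlying move is plain IMAP-to-IMAP copying. The sketch below assumes a Gmail app password and a locally running Proton Mail Bridge; the 127.0.0.1:1143 address, STARTTLS setup, and credentials are assumptions to check against your own Bridge settings.]

```python
# Rough sketch of what an email migration does under the hood. In practice
# you'd just use Proton's built-in Easy Switch; this shows the IMAP idea.
import imaplib

GMAIL_USER, GMAIL_APP_PASSWORD = "you@gmail.com", "app-password"
BRIDGE_USER, BRIDGE_PASSWORD = "you@proton.me", "bridge-password"

src = imaplib.IMAP4_SSL("imap.gmail.com", 993)
src.login(GMAIL_USER, GMAIL_APP_PASSWORD)
src.select("INBOX", readonly=True)

dst = imaplib.IMAP4("127.0.0.1", 1143)  # Proton Mail Bridge, local only
dst.starttls()
dst.login(BRIDGE_USER, BRIDGE_PASSWORD)

_, data = src.search(None, "ALL")
for num in data[0].split():
    _, msg_data = src.fetch(num, "(RFC822)")
    raw_message = msg_data[0][1]
    # Append the raw message into the Proton mailbox; Bridge encrypts it
    # client-side before anything reaches Proton's servers.
    dst.append("INBOX", None, None, raw_message)

src.logout()
dst.logout()
```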

Ryan:
[1:34:50] I think you are absolutely right. I think that is golden advice for all Bankless listeners, actually. So I've done this. I haven't fully ported my Google information over, but what's fantastic about Proton is you can also create different aliases for different services. So if you go to McDonald's Wi-Fi, you can just spin up a different alias in Proton, a throwaway, so it's not tied to your main identity, and you can spin up any number of these. Also, when I was setting up Proton, the emphasis on security was really important. We have a lot of email accounts that get hacked in crypto, so that people can go get your identity, get access to your exchange, get your recovery password, right? If they don't know your Proton email, they can't get that. And if you are two-factor authenticating with passkeys and YubiKeys and completely locking your email down, and Proton kind of makes that easy to do, you lock it down that way too. So locking down your identity is pretty key. I think that's great advice, Andy.

Andy:
[1:35:47] And there's something that we have called Proton Sentinel, which is pretty unique to Proton. We do it because we have a lot of activists, journalists, and high-profile crypto people who use Proton.

Andy:
[1:35:55] And it's a way that, even if your YubiKey is stolen, or your 2FA is stolen and your password is stolen, if you enable this feature, we will still detect logins that we find suspicious and block them. So it practically secures your account, even in the case that you have been fully compromised. And that's something we actually added because we got demand from crypto users on Proton, who said, hey, this is happening a lot in our space, we need more security. So we built that. It's called Proton Sentinel. And you can read up on it. It's kind of interesting as well.
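[Editor's note: Proton describes Sentinel only at a high level, so the following is a hypothetical mental model rather than how Sentinel actually works: score every login for risk even after the credentials pass. Every field, weight, and threshold below is invented for illustration.]

```python
# Toy model of post-authentication login risk scoring, in the spirit of
# what Andy describes. Proton has not published Sentinel's internals.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    country: str
    device_id: str
    via_tor: bool

@dataclass
class AccountHistory:
    usual_countries: set
    known_devices: set

def risk_score(attempt: LoginAttempt, history: AccountHistory) -> int:
    score = 0
    if attempt.country not in history.usual_countries:
        score += 2  # geography the account has never logged in from
    if attempt.device_id not in history.known_devices:
        score += 2  # brand-new device fingerprint
    if attempt.via_tor:
        score += 1  # anonymizing exit node
    return score

def allow_login(attempt: LoginAttempt, history: AccountHistory) -> bool:
    # Correct password plus valid 2FA is necessary but, under this model,
    # not sufficient: a high-risk login is still challenged or blocked.
    return risk_score(attempt, history) < 3

history = AccountHistory({"CH", "US"}, {"laptop-1"})
print(allow_login(LoginAttempt("CH", "laptop-1", False), history))  # True
print(allow_login(LoginAttempt("KP", "burner-7", True), history))   # False
```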

Ryan:
[1:36:25] That's fantastic. I love the stack. And thank you so much for your time today. It's been great. I guess as we close, one last question. You've been doing this for 10 years, right? And the internet has come

Ryan:
[1:36:35] a long way since then. Now we have AI and everything that it will bring. So paint a picture of the future in 2030. Maybe one where privacy wins and we're doing okay. And another, maybe darker, future where we kind of lose this fight and it continues on the trajectory that it's been on. What do the two different worlds look like?

Andy:
[1:36:56] Well, I think losing the fight would be if big tech companies decided to engage in anti-competitive practices, which regulators don't block, and they do that to wipe out companies like Proton and other privacy companies. To give you kind of the very basic example, they could say, we're not going to allow privacy companies on the app stores. And then if they did that, because they're not declared as monopolies right now, there's nothing that prevents them from doing that. And then companies like Proton would not be able to exist in such an outcome. So I think that is the risk, is that big tech is so emboldened by a lack of regulation and lack of government oversight that they just go off and do completely blatantly unfair things to kill off the space.

Ryan:
[1:37:39] I suppose they could even buy the regulators at that point. I mean, they could get involved in lobby groups. They're already doing that.

Andy:
[1:37:45] If you look at the US, I think they've already bought a couple of regulators, and maybe even a couple of people high up in government, right? Definitely a few senators, let's say. So that's what that future looks like: a future where government is subservient to big tech. Big tech controls our government and our democracy, and democracy effectively ceases to exist, because governments don't work for the people anymore. The governments work for big tech companies. And we are, to be frank, pretty dangerously close to that, at least in certain countries. So that's the dystopian view. The alternative is that companies like Proton, and not just us, but the entire space of privacy-protecting services, the entire crypto and Bitcoin space that is working on financial freedom, that this space survives. It continues to develop and grow. It provides a viable alternative. Because again, it's not enough to exist.

Andy:
[1:38:45] You need to be an alternative that is viable. You need to have the feature set such that someone can credibly switch over and not be so burdened by the lack of features and the poor user experience that they cannot stay on your platform. So it means that we create a user experience that is good enough across our entire ecosystem, not just Proton, but also the whole crypto and Bitcoin ecosystem, that it is a viable replacement for traditional finance and traditional big tech companies. And we win the argument in the public mind, in the public space, where people understand that this is the better future, and at that point we would probably achieve meaningful market share. So crypto could go from maybe less than a percent to perhaps 20, 30% of finance overall. Maybe Proton, instead of having 1% of the market, has 20 or 30%. And at that point, that is scale. That is a viable fraction of the world population, where you have enough of a base that you can actually win in the long term, right? If we get to that scale, getting past the 50% tipping point becomes conceivable. So these are sort of the two paths. And the path that we end up on really depends on us as individuals, because we live in capitalism. And even China, which claims to be communist, is today capitalist.

Andy:
[1:40:03] And the most powerful force in capitalism is you. It is the individual consumer making the right choices, steering the economic, and also the technical and political, future of our societies through the choices that we make every single day in our daily consumption of services. And if we make the right choices now, in the next five years,

Andy:
[1:40:22] Then we take the world on a different path. And, you know, so I suppose the positive note that we can end on is, yes, it seems depressing. It seems scary. It seems very difficult, but actually we have the power and we can do this if we want to.

Ryan:
[1:40:37] I love that. Perfect way to end. Andy, thank you so much for joining us today.

Andy:
[1:40:41] Yeah, thanks for having me. It's really been a pleasure and hope to be back sometime to share some more thoughts.

Ryan:
[1:40:46] Absolutely. Guys, got to let you know, of course, none of this has been financial advice, although it did come with some fantastic privacy advice. We are headed west. This is the frontier. It's not for everyone, but we're glad you're with us on the bankless journey. Thanks a lot.

Not financial or tax advice. This newsletter is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This newsletter is not tax advice. Talk to your accountant. Do your own research.

Disclosure. From time-to-time I may add links in this newsletter to products I use. I may receive commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here.