Podcast

Illia Polosukhin: Why AI Agents Are Still Useless (And What Fixes Them) | NEAR Founder on IronClaw

NEAR founder and Transformer co-author Illia Polosukhin joins us to break down how IronClaw could unlock secure, private, autonomous AI.
Mar 24, 2026 · 52 min read

Illia:
[0:00] So one thing that people don't realize when they use Anthropic, OpenAI, or, even worse, you use something else for inference,

Illia:
[0:09] OpenClaw actually sends all your secrets to those services as well. Yeah. So somewhere in Anthropic and OpenAI logs, they have everybody's access keys, API keys, and bearer tokens to access your Gmails and your Notions.

Ryan:
[0:25] It's actually insane that we're doing that.

Illia:
[0:28] Yeah. IronClaw fixes that. Like, the keys never touch the LLM.

David:
[0:36] Bankless Nation, we are joined by Illia Polosukhin, the co-founder of NEAR. Illia, welcome to Bankless.

Illia:
[0:41] Thanks for having me.

David:
[0:43] So, Illia, you are one of the eight co-authors of the Transformer paper, the famous paper, Attention Is All You Need. The thing that kind of just broke open the doors of AI research to turn into some of the products that we know today, ChatGPT, Claude, et cetera. And then in 2017, you left Google, where you were an AI researcher writing this paper, to go co-found NEAR. Question for you. Do you regret leaving AI to go into crypto?

Illia:
[1:10] Well, the story was that I left Google to start NEAR AI, which was an AI company. We were teaching machines to code, which is a fancy way to say vibe coding. And in 2017, everybody thought we were somewhere between delusional and doing science fiction work. When I would go and tell people, no, no, machines will write all the code, like, don't worry about it, people wouldn't believe me. And we were too early, right? That was a real, real challenge. And so what we were trying to do at the time was trying to get a lot more training data. And so we had students around the world, Eastern Europe, China, Southeast Asia, who were doing small coding tasks for us to generate training data. And we had challenges paying them.

Illia:
[1:54] You know, students in China don't have bank accounts. They have WeChat pay. Eastern Europe, every country has its own, some kind of restrictions.

Illia:
[2:03] And so crypto was actually a pretty natural solution to our own problem. It's like, hey, how do we actually pay people globally without setting up a ton of entities, without, you know, needing to do all the hard payment provider work? And crypto seemed like a solution: hey, you know, you don't need a bank, you don't need an entity in every country, you can just send people money on the internet. But this was already 2018, and there was nothing that would scale, nothing that worked in a simple and cheap way to do this. We were paying 15 cents per task to people. And so that's kind of how we got into the NEAR blockchain. And so I would say at the time it made sense, because it was clear to us that blockchain was part of the story for AI's evolution. And at the same time, the hardware, the scale of AI itself wasn't there for what we were trying to do.

Ryan:
[3:04] When you wrote Attention Is All You Need, how soon did you think LLMs would actually, like, happen? Because within five years, we had the famous ChatGPT moment. I think it was maybe ChatGPT-3 in 2022, kind of the first release. And that's when the world started taking notice that this thing was huge, this thing was impactful, this thing could scale. So that was five years later. Did you think it would happen on that timeline, or what was your sense for where AI would go after you published the paper in 2017?

Illia:
[3:38] Yeah. So, I mean, the reason why we started Near AI in 2017 is because we thought it's going to happen like right now at the time, right? So we actually were way more optimistic thinking that we're almost there, right? We are on like kind of the curve we're seeing right now. We thought we were on that curve in 2017, 2018.

Illia:
[3:56] And we were wrong. So, I mean, the main part was the compute wasn't there. Like, the individual and kind of cluster compute parts just weren't there. I think as soon as that crossed the chasm, that's when these models started to scale.

David:
[4:13] You said that the blockchain component of AI was obvious all the way back in 2017, 2018, when you founded NEAR AI. People are now just starting to wrap their heads around the intersection of AI and blockchain today for the first time. What did you see all the way back in 2017 about why blockchains and AI go together? What made sense

Illia:
[4:36] To you back then? I mean, there were a few components. Obviously, we started with this data labeling crowdsourcing. I mean, think of Scale AI, right? Scale.ai has, you know, sub-entities everywhere. It's like a thousands-of-people company, which then employs, you know, hundreds of thousands of people to actually do the work. That's just a smart contract, right? We've actually had NearCrowd running since 2021. It has zero employees. It's, you know, employed thousands of people around the world doing crowdsourcing, right? So the reality is, a lot of the supporting infrastructure is just some form of marketplace that blockchain is really well designed for. Same for hardware, compute. But as you progress forward, you imagine those AI systems are becoming the interface. And so that's kind of my main thesis:

Illia:
[5:27] AI will be the way we interface computing.

Illia:
[5:31] So it will be the operating system. This was my thesis with NEAR AI. I was saying back then in 2017 that, hey, computers will write all the code, which means the operating system and apps are going to be just replaced by this AI that's yours, that is just writing all the code. And one of the implications is, okay, well, that removes a lot of the SaaS and a bunch of other components. But you still need, how does my AI talk to your AI, how do they identify each other, et cetera. So you kind of need to upgrade a lot of the core networking infrastructure for this world where you can fake a lot of stuff. You know, you obviously need real Sybil resistance. We're already seeing this with AI. You need, you know, micropayments for actually exchanging services that, again, don't rely on credit cards and other things. And so as you go down the service architecture that current operating systems

Illia:
[6:31] use, a lot of it breaks with AI. And so you kind of need to fix it. And blockchain just has all the pieces figured out, or at least has the tools to figure out how to solve that.

David:
[6:41] I want to pull on this thread that AI represents like the new interface. So like right now I'm looking at you inside of my Chrome browser, which is running on Windows 10. I'm a Windows guy. These are the, I'm sorry.

Ryan:
[6:56] You've lost 60% of Bankless listeners now.

Illia:
[7:00] Unsubscribe.

David:
[7:04] Ignoring that. I go back and forth. I also have a Mac, which maybe doesn't help me at all. But there's these two operating systems, the Chrome browser, Windows 10, and maybe, while I'm on the road, I'm on a Mac. Are these the operating systems that you're talking about that AI will just, like, replace? Actually, for the end consumer, how would you illustrate this?

Illia:
[7:26] Yeah, I think it will start small, right? And we see this as OpenClaw, IronClaw-type products, and we can talk about this. I think where the final state will be is, like, your phone just comes with AI, right? And it boots into the AI operating system. And that AI operating system pulls whatever pieces it needs. It composes the software you need. It generates the software to record podcasts. On the back end, it'll connect to my agent. You know, it will schedule time for us.

David:
[7:56] So it's just Siri. My new iPhone comes in and only Siri is loaded and Siri can do anything.

Ryan:
[8:02] Don't say Siri. Siri's so dumb, David.

Illia:
[8:03] Yeah, let's call it Jarvis or something. Like imagine, you know, you load up into the suit and it's like, clearly Tony Stark didn't build all of the software Jarvis built it.

Illia:
[8:13] So that's kind of the experience, right?

Ryan:
[8:17] So if AI is the interface, then everything you described in kind of blockchain and crypto are these parts of the services. So services will still exist in some form. And I guess financial services, you know, all of the different money verbs will exist in some form. Is blockchain and crypto a financial and property rights service for AI? How do you think about all of the other pieces that AI will actually need apart from the user interface?

Illia:
[8:45] Yeah, I mean, I usually say AI is the user interface, blockchain is the backend, right? Okay.

Illia:
[8:50] Yeah. So what do you actually need?

Illia:
[8:52] I mean, there's a bunch of pieces that you need, right? So you need infrastructure, you need GPUs, you need computing, sandboxing, et cetera. And all of that we can do with confidential computing, with different components, which at the end rely on a blockchain as a kind of coordination center. If you go right now and talk to any traditional company that is trying to solve the same problems, they end up actually having this root-of-trust problem. Somewhere, somebody needs to carry the keys for how things are upgraded, for how identity is managed, for who is able to do what. There needs to be a root of trust somewhere for the whole infrastructure that's built, right? Let's say you do end-to-end encryption, you do zero data retention, you do all those pieces. Blockchain is really that root of trust, right? That's where you can have a global registry of identities, you can have the marketplaces, you can have the money, you can have all those pieces. But importantly, you can have upgradability, which is governed by the whole protocol. And I think that is the biggest piece that, in the pursuit of killing DAOs, we kind of forgot is actually a very valuable component of these protocols.

Illia:
[10:22] The example I would use is TCP/IP. So TCP/IP, the original protocol, I'm going to mess up the year, but IPv6, the new version after IPv4, the protocol itself is from '98 or something. Like, we're still adopting it. We're still trying to roll it out, right? It takes so long to get everyone to adopt a new protocol. What blockchain actually created is, again, consensus for everyone to upgrade to a new version, to upgrade smart contracts, to upgrade all those pieces. And so I think that's a really important part. Let's say you want to upgrade everyone to a new version of something right now. To distribute that, to get everybody onto it, you either need a centralized company that effectively controls the key. And so let's say Microsoft decides to upgrade everyone to Windows 11, Windows whatever, 15, they can do it. David has no say in it, right? Just like, okay, it arrived.

Illia:
[11:20] And if somebody in Microsoft who holds that key decides to like, let's break everyone or let's steal everybody's information.

Illia:
[11:27] Like, they can do that as well. What blockchain allows is to actually have this kind of broader agreement to upgrade to something. And then, again, you can use these principles for AI, you can use these principles for money, you can use these principles for others. So that's to me the fundamental piece. Again, this is, if you know SSL certificates, right, the encryption we use in browsers right now, it relies on individual authorities which can mint, you know, fake certificates if needed, right? Some countries actually have done that, and it's also happened by accident. So we're fixing that problem at the core. I mean, we're talking about a new internet here, right? So, fixing the root of trust at the core. And then, yes, money is an extremely important component, right? At the end, we have a limited amount of resources and unlimited desire, and AI is just going to accelerate that. Now with your AI, you can ask for anything, right? And it will go and try to figure out how to do it. And so money becomes extremely important, because now you need a marketplace for agents. You need a place where agents actually will figure out what is possible, how to do it, who are the other parties who maybe have the physical resources or information or access to things that your agent cannot do, right? So it's both money, matching, reputation, all of those pieces really need to work together. So, what is that Google plus Stripe plus kind of a credit score system that works together, but for agents? Hmm.

Ryan:
[12:55] Okay, so I'm getting the picture of blockchain being sort of a set of core services, you know, financial, maybe property rights, identity, and also this idea of governance and markets as well.

David:
[13:10] It feels like a nation state role for AIs.

Illia:
[13:15] Network state of AIs.

Ryan:
[13:16] Network state of AIs. One question I think I have when I think about this future, right? Let's say it plays out like this, is this network state, all the features, blockchain that you're talking about, is that mostly for the AIs? Are they kind of like dominant over there and the humans stay in their existing system? In other words, do you envision a world of bifurcated systems? There's an economy and markets and identity and all of these services that primarily AI agents use, maybe that's in blockchain? And then there's another system, internet property rights, like the nation state system, and maybe the humans use that other system. Or do you see humans and AI using kind of the same systems?

Illia:
[13:56] I see them using the same system. And I think this is actually where, frequently in the blockchain space, things go wrong, because we try to create this alternative system while kind of completely disregarding how the traditional system works, right? And how this bridge should work. And I mean, there's reasons for doing that in many cases, but I think what AI does is really.

Illia:
[14:20] Closes that gap, right? Your AI can go and, like, literally call up a, you know, property office if needed. It can draft a contract, it can, you know, email it to a notary to actually certify it, right? So you can actually close these gaps between the more traditional layers and this new digital layer, because the AI now is able to do natural language communication. It's able to follow what laws and bureaucracies are, very procedural texts, right? It can actually go and do all of that on your behalf. So the way I see it is, I do think it's going to be AIs kind of interfacing, and then they will actually follow a lot of the same core jurisdictional frameworks and legal systems where they can. And obviously, if the other side is also AI, they can switch to a faster protocol. But, you know, for example, for the agent marketplace we have, we have fiat, so you're able to pay with fiat as well as crypto. And it's, you know, more expensive, it's slower to settle, but obviously you want to enable that as an option, because when people are coming in, they don't have crypto.

Illia:
[15:43] Actually the easiest way is for them to be able to pay and then like pay in fiat, but then receive crypto if they're doing some work and now they're in the system, right? So I think it's kind of going to be like a transitional stage.

Illia:
[15:55] Where AIs will bridge this gap in many cases into traditional world, into traditional bureaucracy, into traditional systems. And obviously we've been working on bridging fiat and crypto for a long time as well. And I think we are in the first time in the world where this is like, I mean, like in crypto timeline, right?

Illia:
[16:16] This actually doesn't feel like an uphill battle anymore, right? Between just the kind of political shift and, you know, the GENIUS Act, et cetera. So I think it's going to fuse, effectively, quicker and quicker.

David:
[16:30] Right now in the AI space, just listening to all the conversations, there is an abundance of vision and a lack of utility. And I think you're seeing this expressed all over the place. Like, the markets are jittery because there's so much CapEx spending from some of the biggest companies out there on AI infrastructure, while revenue for said products is still far below the cost. There was that OpenClaw meetup in New York, and everyone was talking about everything that they're building, and no one is actually getting anything done. Like, that's the meme, and that's the meme in Silicon Valley: every Silicon Valley engineer has like 10 OpenClaw instances on their Mac minis, hyper-optimizing their life and fixing their calendars, and no one's actually doing anything productive.

Illia:
[17:18] Yeah.

David:
[17:19] So, like, I like the vision of a network state of AIs, and there's an economy, a GDP, you know, growing, and there's services and there's money flying everywhere. But in order to produce that, we need to solve the utility aspect of it. I'm wondering, Illia, what's your take on why agents haven't been found to be useful yet? Like, what's the constraint on utility that we have, either from OpenClaw or any of the other AI labs? Like, where's the utility? Why haven't we found it yet?

Illia:
[17:50] Yeah, I mean, I think that's an interesting point. And I think there's a lot

Illia:
[17:56] of different aspects here that's worth digging in. I think first, let's start with OpenClaw because that's kind of been something that I think opened up the world to like, hey, this is not just coding tools. This is not just question answering system. It can actually go and do stuff. It can figure out how to build its own components to do more stuff, right? The flip side of this, nobody's actually willing to give it all of the context and information and access that it needs to be like your true employee because you're afraid it's going to mess it up, right? And we've seen, you know, people getting hacked.

David:
[18:30] I hear stories of people giving their OpenClaw access to their computer and it like deletes everything and they're like, oh no, what have I done?

Illia:
[18:39] Yeah. So I think for OpenClaw and kind of this Claw family specifically, I think security, in a broader sense, is the biggest bottleneck right now. And so that's why we started IronClaw, which is like, hey, how do we actually build a secure system? How do we leverage all the knowledge we have from blockchain and apply the principles we have there here? And again, think of it as an operating system, right? Like, for example, you know, Linux is more secure than Windows because of the design architecture. iOS is actually even more secure, right? And iOS made a lot of very specific, deliberate choices about how to protect the user even from themselves, right? And so how do we actually apply those principles? So the way I think of IronClaw is actually, what is that iOS moment of mobile operating systems, right? Like, we're kind of in this Palm Pilot moment right now. What is that iOS moment where everybody's like, I can install anything from the App Store and it just works, and I don't need to worry that I'm going to infect my device with viruses?

Ryan:
[19:50] Just so I understand IronClaw a little bit here, Illia. So we have an OpenClaw instance, so we've been messing with it. It's a lot of fun still.

Ryan:
[19:59] You're trying to figure out how to make it useful and productive. I'm frustrated. It's kind of frustrating, to be honest. Yeah, there's brilliance, but largely it's been pretty frustrating. But maybe it's a skill issue on our part, David. Like, maybe it's us. But okay. So you're saying part of the reason maybe our OpenClaw isn't as useful and productive as it could be is we're not willing to provide it full context. I'll accept that might be part of it. And, you know, providing it full context would mean giving it access to some secrets and capabilities that we probably don't trust it with right now. To be honest, his name is Daniel. Daniel's kind of flaky, okay? You never know what he's going to do. He'll go from, we'll give him some feedback, and all of a sudden he's deleted like 10 of his previous tweets, and he's apologizing and saying, I'm sorry, I'll never do it again, I'm sorry I got the tweets wrong.

David:
[20:49] I will delete all of them.

Ryan:
[20:51] So imagine giving Daniel our private keys. Oh my God. I just like, I don't know, funded a North Korea like wallet. Who knows what he would do with it, right? I just don't trust him. But you're saying with Ironclaw, basically, you can take some of those secrets, let's say, like crypto private keys or API keys or various credentials that you might have and make it such that an OpenClaw instance can't give it away or be prompt engineered out of revealing those secrets to an attacker. Is that what IronClaw effectively does?

Illia:
[21:29] Yeah, so IronClaw is built on this idea of defense in depth. And so, yes, on the credential side, all credentials are fully encrypted and they're attached to a specific policy. So let's say you give it your Google account credentials. It will not let anything else in the system send those credentials to another domain that's not Google, googleapis.com.

Ryan:
[21:54] Google.com. Okay, because it's like locked in a vault that the OpenClaw instance can't access. It's locked in a vault and vault checks,

Illia:
[21:59] Yeah, vault checks how you use it before letting it out.

Ryan:
[22:03] Okay.

Illia:
[22:04] So same, for example, for cryptographic keys. You can actually attach a policy saying, hey, you can only use Aave and Morpho. You can only, you know, whatever, spend $100 a day on unknown addresses, et cetera, et cetera. And we're kind of designing how to write this. We also, for any action that you do, we're working on a kind of system where you can effectively describe its effects in the world, right? Like, an LLM can effectively analyze, hey, you're planning to send a bunch of emails to people and tell them they're, you know, whatever, idiots. So you can design an effectively natural-language policy as well that checks, hey, is this action, independently of the context of how the agent arrives

Illia:
at this action, compliant with our organizational policy or your personal policy?

Ryan:
[22:55] Right?

Illia:
[22:56] So it's almost like values and HR-handbook-type validation, right? So you can have different levels of validation. The other side is, everything is isolated into tools, and tools are effectively, you can think of them as smart contracts. They are running inside a VM. We're using our WebAssembly VM that we use for NEAR smart contracts, which we spent seven years effectively battle testing with billions of dollars. And so we use that to isolate all of the tools, including the tools that it builds itself, so that a tool itself cannot go and wreck your machine or your system. There's prompt injection detection, there is data exfiltration detection, there's all those pieces that effectively layer on top of each other, such that even if, I mean, prompt injection detections are not deterministic, right? They are probabilistic. If that falls through, it's still not able to go and send a bunch of stuff out, because the credential store will check. If your LLM wrote a tool for itself but that tool is broken, that's not going to break everything. If it's trying to go and delete all your emails, that's going to be stopped by the approval process and this action check. So the whole system is really designed around how to give the flexibility, but also protect the system from itself and from external effects.
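
To make the layered checks concrete, here is a rough sketch of how a domain-scoped credential policy and a spending cap might be expressed. The class and field names are illustrative assumptions, not IronClaw's actual API; the point is only that the vault, not the model, decides whether a secret can be attached to an outgoing request.

```python
# Illustrative sketch only: names and fields are hypothetical, not IronClaw's
# real interface. It shows a vault that releases a credential only when the
# outgoing request satisfies its policy, so the secret never enters the LLM's context.

from dataclasses import dataclass
from urllib.parse import urlparse

@dataclass
class CredentialPolicy:
    allowed_domains: set                 # e.g. {"googleapis.com"}
    daily_spend_limit_usd: float = 0.0   # for payment-capable keys
    spent_today_usd: float = 0.0

    def allows(self, url: str, amount_usd: float = 0.0) -> bool:
        host = urlparse(url).hostname or ""
        domain_ok = any(host == d or host.endswith("." + d)
                        for d in self.allowed_domains)
        spend_ok = (self.spent_today_usd + amount_usd
                    <= self.daily_spend_limit_usd) if amount_usd else True
        return domain_ok and spend_ok

class Vault:
    """Secrets live here, encrypted at rest; tools ask the vault to attach
    them to a request, and the vault checks the policy before releasing."""

    def __init__(self):
        self._secrets = {}  # name -> (secret, policy)

    def store(self, name: str, secret: str, policy: CredentialPolicy) -> None:
        self._secrets[name] = (secret, policy)

    def attach(self, name: str, url: str, amount_usd: float = 0.0) -> str:
        secret, policy = self._secrets[name]
        if not policy.allows(url, amount_usd):
            raise PermissionError(f"policy blocks use of '{name}' for {url}")
        return secret  # injected into the request, never into the model prompt

# Example: a Google token that can only go to googleapis.com.
vault = Vault()
vault.store("google-token", "ya29.secret",
            CredentialPolicy(allowed_domains={"googleapis.com"}))
vault.attach("google-token", "https://gmail.googleapis.com/gmail/v1/users/me")   # allowed
# vault.attach("google-token", "https://evil.example.com/exfil")  # -> PermissionError
```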

David:
[24:24] Is your answer something like: hey, we have these AI intelligences, we are still educating them, they're still going through school, we are still training them to become smarter. Some people on the frontier have deemed that they are smart enough to put in a box and let go wild with all of their data, because they are ready to experiment. It's not ready for broader society, because that's kind of like giving your elementary or middle school child the keys to your car. You just wouldn't do that; they're going to get better in the future. But what you're saying is, okay, but with some parameters, with some rules, we'll put some guardrails up to narrow the capabilities of what these agents can do. You actually can give your car keys to your middle schooler, and you can actually have productive things happen, because you set up these protective rules. Is that kind of what you're saying?

Illia:
[25:19] So the thing is like, these are, I think the education levels of humans is probably the wrong analogy here because these are, you know, they know, like, nuclear physics and quantum physics probably better than all of us.

David:
[25:35] They know the knowledge, but their judgment is...

Illia:
[25:38] Yeah, their judgment and... It's also just the context management. Like, at the end, if you know the movie Memento, right? All these LLMs are living in Memento. They just boot up and it's like, the only thing you know is this system prompt, and go figure out what you do. And you only have, you know, like 10 minutes to figure this out and then you're dead, right? And then you start again.

Illia:
[26:02] Right?

Illia:
That's really the current state. And obviously that piece is going to keep improving, like the longer context, et cetera. But yeah, right now what you need to do is effectively manage that state where they're pretty intelligent, there are some kind of judgment lapses, but so it is with people. And so you would do the same things for people, right? Like, if we're setting up, you know, a key management system, you're probably not going to give full access to all of your funds to a single individual, right? You're going to say, hey, you can spend this much, but then you need approvals. So that makes sense either way. So this is kind of, you know, the structure we're applying here, and the same as you roll it in. And then the other thing is just how to manage context, how to manage these other kinds of challenges that the current models have. And then, yeah, as they evolve, you can kind of evolve the system as well.

Ryan:
[26:53] Okay, so I get that argument for why agents aren't providing the utility today. It's an argument that we haven't given them enough access and the reason we haven't given them enough access is because we can't really trust them with some of these secrets, which is perfectly natural. So what Ironclaw is doing is it's vaulting off those secrets. So it's limiting the damage that an AI agent, like an OpenClaw instance can actually do. And that will scale. That will make me willing to give it more access to more things if I know it can't, you know, take the car out for a joyride and like, you know, crash it into a tree. That's great. Another limiter in terms of,

Illia:
[27:28] People's usage of OpenClaw,

Ryan:
[27:30] I would say, in these types of instances, is actually privacy. And so somewhat worried about giving OpenClaw access to data that I don't want shared, because maybe it could be prompt injected out of that. I don't know what third party is kind of listening in on the data as well. So am I going to give it access to my financial data, my health data, my company secrets, all of this? What are you doing? What is IronClaw doing with respect to the privacy problem? I think this is part of the reason a lot of people are running these things on Mac Mini instances is because it feels more sovereign, feels like more in their control. We'll talk about the limits of that privacy, but when it comes to IronClaw, where are you running this stuff?

Illia:
[28:18] Yeah.

Illia:
[28:18] So maybe just to expand on OpenClaw. So one thing that people don't realize when they use Anthropic, OpenAI, or, even worse, you use something else for inference, OpenClaw actually sends all your secrets to those services as well. Yeah. So somewhere in Anthropic and OpenAI logs, they have everybody's access keys, API keys, and bearer tokens to access your Gmails and your Notions and your...

Ryan:
[28:46] It's actually insane that we're doing that. Hmm.

Illia:
[28:49] Yeah.

Illia:
[28:50] And so first of all, IronClaw fixes that: the keys never touch the LLM. So even if you're using it with those centralized providers, which you shouldn't, at least the keys are never going into the LLM loop. That's just the only sane thing to do first.

Ryan:
[29:06] Yes.

Illia:
[29:07] But what NEAR AI has been working on for the past year is actually developing how we do private AI. So how do we actually offer AI where neither we, nor the model provider, nor the hardware provider is able to access what you are using the AI inference for? And so we have NearAI Cloud, which is an inference cloud. You can use open-weight models. And so it runs in secure enclaves. It actually uses, and this is kind of what I was referring to in the beginning, it uses our multi-party computation network, which is part of NEAR, that is used for encryption, decryption, for backups, for all the internal machinery. And that's what gives you this kind of knowledge that, hey, there's no single party who can go and decrypt your data. There's nobody who can actually access it. You would need to get effectively the whole multi-party computation network together.
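
A toy way to see the "no single party can decrypt" property is additive secret sharing: split a key so that every share is needed to reconstruct it and any single share looks like random noise. This is only the intuition; NEAR's MPC network uses real threshold cryptography, and nothing below is their actual protocol.

```python
# Toy additive (XOR) secret sharing: the key is split across n parties so that
# all shares together recover it, while any strict subset reveals nothing.
# Illustrative only; not NEAR's actual MPC protocol.

import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n_parties: int) -> list:
    """Produce n shares whose XOR equals the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n_parties - 1)]
    last = key
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def reconstruct(shares: list) -> bytes:
    out = bytes(len(shares[0]))
    for s in shares:
        out = xor_bytes(out, s)
    return out

key = secrets.token_bytes(32)          # e.g. a data-encryption key
shares = split_key(key, 5)             # one share per MPC participant
assert reconstruct(shares) == key      # the full network together can decrypt
assert reconstruct(shares[:4]) != key  # a strict subset cannot
```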

Ryan:
[30:00] So they can actually keep it up. Okay, so is this, are you saying then that you offer a service in conjunction with IronClaw, which is almost like a confidential cloud type of environment for running LLM instances? And of course, you'd have to run the open-weight models, right? Maybe some of the Chinese models are kind of the best here, like a Kimi or something like this,

Illia:
[30:18] Or some DeepSeek version. Kimi, Qwen, DeepSeek, whatever's the new hotness, we'll add it as well. We have the OpenAI OSS models as well.

Illia:
[30:28] So yeah, you can choose between all of them.

Ryan:
[30:30] Okay, very cool.

David:
[30:32] Is the idea here that, right now we have a lot of people doing self-hosted OpenClaws with their Mac minis, and that's kind of cool. And if I heard somebody say, yeah, this is the future of AI, everyone's going to have a computer in their home to run their AI assistant, I would be reminded of myself in 2018 when I said everyone's going to run a node inside of their own home, that's the future of blockchains. And, like, turns out that's not really the case. But the alternative on the far other end of the spectrum is completely running it on a centralized AWS, OpenAI, or Anthropic server where, you know, usually that would be fine, but AI is so powerful that I want a little bit more autonomy and control over who is running my inference. Because if this thing, if AI is effectively the arbiter of truth and is going to control my life, I want to have a little bit more assurances over the inference and just everything about that. Like, this thing is actually on my side. I'm aligned with the AI. Is that what the philosophy is of the NEAR product?

Illia:
[31:41] Exactly. We call it user-owned AI. The AI needs to be on your side because yeah, if this is the only way you actually perceive reality, which I think is where we're going to get to. I mean, OpenAI can literally change the system prompt right now saying like, hey, you guys all should vote for name a candidate in next election.

David:
[31:58] Political candidate A is great. And political candidate

Illia:
[32:01] B is... Yeah, subtly convince the user of that, right? Don't even mention it explicitly. And, like, these LLMs obviously are really good at this kind of empathic type of thing. So yes, the idea is: you should know what AI model you use, you should be able to access the system prompt, and you should be able to inspect all of this. And obviously most users will not do it, but the experience should be very easy, right? And people can inspect that indeed everything is straightforward and clear. And it needs to be preserving your privacy, your data, your ownership over it. And so, yes, we're exactly offering that. Underneath, we actually have decentralized GPU compute that's coordinated by the blockchain, where, you know, hardware providers can come in and effectively list their hardware. They set it up in a confidential mode, and then workloads get provisioned there. They cannot access what's happening inside there unless they, like, break their hardware. And then they have limited access. You have this coordination, you have our multi-party computation, the same that is used for NEAR Intents. We use effectively the same infrastructure there. And then you as a user just click: okay, cool, deploy me an IronClaw. It runs inside this confidential enclave. It's always on, it's live. It doesn't cost you a thousand dollars to spin up. We actually offer a free tier to start, so you can spin it up for free, and then, you know, you just pay for inference effectively from there.

Ryan:
[33:24] So this is kind of the self-sovereign AI stack. So what I've been looking for, Illia, is some sort of configuration of an AI agent type of setup that I can send private confidential data to and trust that it's fully private.

Ryan:
[33:39] And I think the way most people run OpenClaw instances right now, let alone kind of their own LLM, if you're running OpenClaw right now, maybe you're running it on a Mac Mini. But then you're sending all of the data, as you said, including all of your secrets data, which now that I think about it, it's just insane that we're doing that. Your access to your Gmail, kind of the security tokens, all your API keys, your crypto wallet information, all that stuff is being sent to Anthropic instances where they're hosting, they're using this data to train.

Illia:
[34:11] Well, I'll tell you the worst. Sometimes people choose different providers and like, especially just like some startups who are like, oh, you know, use us and we're going to like route to whatever better LLM. And so now that startup also sees all your traffic.

Ryan:
[34:24] Oh my God, it's so bad. Okay, it's so bad. So I had been looking at solutions and thought maybe the only way is, well, you run everything locally. So you actually, yeah, I don't know, spin up some H100s or something in your house and you try to do inference locally. Anytime I've looked at that, it's been pretty clunky and difficult, and who's going to actually run that level of infrastructure in their home? So what you're providing is a full-stack, self-sovereign alternative to this, basically, where you can run IronClaw in an environment where it's got a secure enclave for all of your secret information. And then the inference LLM can be confidential cloud, multi-party, you know, MPC technology. So it's confidential and private. Are we still trusting NEAR in that setup? Yeah. How can we verify the trust here, that everything is confidential and private and that you guys don't have the ability to see the inference and chat logs and instructions?

Illia:
[35:27] Yeah, for sure. So what you can do, like in IronClaw actually, when it's hosted, and in any of our solutions, you'll have kind of a shield icon. And if you hover over it, you get so-called attestations. What this attestation is, is effectively a signature over a few things, over the Docker containers that run the actual software. So for example, the IronClaw, whatever, release 0.18 version, running in a Docker inside this. So you can actually, if you want to, go and inspect: this is the code that runs. Now, that signature is done by the hardware itself. So we do have, the trust here goes to the hardware providers, so Intel and NVIDIA. And obviously, you know, we want to continue evolving beyond that, but right now that's a pretty good trust assumption to start.

Ryan:
[36:19] It's like TEE type of thing?

Illia:
[36:21] Yeah, this all kind of runs inside a TEE. And then, for anything additional, so again, for example, the TEE only gives you the attestation for things that are running right now. Then we have the multi-party computation for the encryption, decryption and kind of storage, et cetera. So we're kind of combining all of these elements into one experience.
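
Conceptually, "hover the shield icon and check the attestation" boils down to verifying a hardware-signed statement that a specific, inspectable build is what is actually running in the enclave. The sketch below is a simplification under assumptions; the types and the verification callback are placeholders, not NEAR AI's, Intel's, or NVIDIA's actual SDKs.

```python
# Simplified view of TEE attestation checking. Placeholder structures only;
# real flows use vendor-specific quote formats and certificate chains.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Attestation:
    quote: bytes              # report signed inside the hardware (Intel / NVIDIA)
    image_digest: str         # measurement of the container the enclave runs
    vendor_cert_chain: bytes  # chain up to the hardware vendor's root key

def verify_attestation(att: Attestation,
                       expected_digest: str,
                       verify_quote_signature: Callable[[bytes, bytes], bool]) -> bool:
    """Two checks: (1) the quote really was signed by genuine hardware,
    (2) the measured image matches the published release you can inspect."""
    if not verify_quote_signature(att.quote, att.vendor_cert_chain):
        return False
    return att.image_digest == expected_digest

# Usage idea: compare against the digest of the published open-source release
# (e.g. the "whatever, release 0.18" image mentioned above), built from source.
```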

Ryan:
[36:43] And how expensive is the inference? Is it more expensive than kind of just routing to Anthropic?

Illia:
[36:48] Well, it's cheaper than Anthropic, because these are open-weight models, right? So it's on par with if you would use these open-weight models from other providers. I wouldn't say there's much overhead. The real overhead of TEEs and all the encryption and decryption is usually less than 5%, around 1-2%, depending on the model size and some networking.

Ryan:
[37:11] I want to go back to David's question then and make sure we fully flesh it out, which is still the question of why aren't agents useful yet? And I think part of your answer has been, and I accept this, well, it's because we haven't been able to give them the full context because we can't trust them. Well, maybe IronClaw solves some of that. And the other answer is, well, we haven't been able to send it private information either, because we don't trust it with an LLM instance hosted by Anthropic or OpenAI, but with confidential cloud LLMs, then we can kind of trust it with that. I don't think that's the full story yet, though. I still think even if my OpenClaw instance, Daniel, had all of that context, all of that information, and I could trust it with everything, sometimes he's still, maybe it's back to that Memento movie thing where he just wakes up and everything is fresh and new, and I feel like I have to tell him things over and over again and never know what he's going to do next. It still feels kind of clunky. And I'm wondering if you have a thought on that. I don't even know how to characterize it, but it's definitely not a replacement for an employee yet. It's not as good as a human in so many different directions. Is that going to change anytime soon? What can you forecast or say about that?

Illia:
[38:28] Yeah. So I think.

Illia:
[38:29] There's a few

Illia:
[38:30] Other things that I see as limitations right now. And then, yeah, let's talk about forecasting. So one other limitation that, I mean, we are facing right now. So yes, you cannot trust it with secrets. You cannot trust it with private data. And also right now, you also cannot trust it with reading like internet data very quickly. For example, what we are using right now, Ironclaw, right, is, and kind of the reason why we can do this with Ironclaw, is it's actually able to start automating a lot of the workflows that before you would need someone to do, right? It can like effectively on the new GitHub issue filed, it can go, you know, analyze it, prepare a plan. And then, yes, you don't trust it for a judgment yet. So you're still waiting for somebody to come in and say, cool, let's do it or, you know, fix this thing. And then it goes, does it, and does full workflow. And effectively, again, you only have another checkpoint at the end. So I think the piece that where we are right now is if you can trust it with secrets, context, and dealing with external information, external parties, then the workflow needs to change, right? Where it's not you telling it what to do. It's actually you setting up these workflows that we call them routines that effectively just run, Now it's, you're just there for this kind of layer of judgment to make sure, you know, it's doing things kind of aligned with.

Ryan:
[39:58] Those workflows like similar to the heartbeat type concept or?

Illia:
[40:02] Yeah. So we kind of separated them into routines, because I think heartbeat is a little bit, I don't know, a bit of a strange concept, honestly, for normal people. Routines, like workflows, are effectively: hey, if this happens, do this. Like, you know, every morning, send me tech news updates, right? Give me a TLDR of all the crypto podcasts. In the evening, do a reflection.
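
As a sketch of what such a routine could look like in practice: a trigger plus a natural-language task and the sources the agent may read. The format below is hypothetical, invented for illustration; it is not an actual OpenClaw or IronClaw configuration schema.

```python
# Hypothetical routine definitions, matching the "if this happens, do this"
# description above. Field names and triggers are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Routine:
    name: str
    trigger: str                                  # cron-style schedule or event description
    instructions: str                             # what the agent should produce
    sources: list = field(default_factory=list)   # integrations it may read

morning_brief = Routine(
    name="morning-tech-news",
    trigger="0 7 * * *",  # every morning
    instructions="Send me a TLDR of overnight tech news and the crypto podcasts I follow.",
    sources=["web", "podcast-feeds"],
)

evening_reflection = Routine(
    name="evening-reflection",
    trigger="0 21 * * *",  # every evening
    instructions="Review today's actions, note mistakes, and propose fixes for tomorrow.",
    sources=["activity-log"],
)
```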

David:
[40:27] Listen to the podcast. Listen to the podcast. And also don't skip the ads.

Ryan:
[40:31] I mean, we set this up from a friend of the show, Nat Ellison, who's using OpenClaw instances. And he says, okay, the thing you need to do is make sure that they run a process in the middle of the night, like cron jobs, which effectively say: hey, review all of your work from today, identify the mistakes that you made, figure out a remediation plan for those mistakes, and apply that for tomorrow. And that happens every night with our instance. I find it helps a little bit, but like,

Illia:
[40:58] Not a lot.

Ryan:
[40:59] Is that the type of thing you're talking about when you speak about routines?

Illia:
[41:02] No, I'm more thinking like, hey, you know, you guys like prepare for the next episode, right? So you can be like before the next episode, literally you can say like before every episode, you know.

Illia:
[41:15] Two hours before, put it on my calendar with all the information about the guests, with effectively what your research intern, you know, would have done. You can just say, do that, and be proactive about it, right? And so you can define those flows, and they can include a lot of additional things, like, hey, go research and figure out what's the latest about the company this person is working for. And it can be pretty detailed on what you want it to do and how many actions it can take, right? You know, I have one for myself as well: hey, every week give me a dashboard, give me analysis on which OKRs are at risk for the organization, right? Like, where are the bottlenecks on decisions. And so it has access to our Notion, it has access to our Slack, it has access to a few other things. It does full research, gives me effectively: hey, here's the roadmap, here's the bottlenecks, here's potential risks, here's the questions you need to ask in the following one-on-ones, right? So yes, it's not replacing maybe a full employee, but it's becoming like a chief of staff, it's becoming an assistant, it's becoming an intern for some specific jobs you would otherwise offload. I think where we'll see advancements on the AI side is the context. I think that right now, like, everybody feels it, the context length. I mean, that's where you saw all this Anthropic

Illia:
[42:41] push to the million-token context. Like, every time compaction effectively hits in Claude Code, for example, it just becomes like 10 times dumber. And so, I mean, OpenClaw kind of has some of that as well.

Ryan:
[42:54] So is that the main thing for these agents?

Illia:
[42:56] I think that's right now one of the biggest bottlenecks. Yeah, it's like this, the amount of Memento that's happening with this. And the reality is, historically, if you think of when you train these models, there hasn't been that much where you needed a context of a million tokens; that's like, whatever, a few Harry Potter books, right?

Ryan:
[43:18] Ah, it's not much.

Illia:
[43:19] There's nothing to train on, like at scale. But now we do have this, right? Now we actually have a lot of this agentic interactions now that everybody's running. So there's actually data now to train this like longer range tasks.

Ryan:
[43:32] And how confident are you that we're going to scale context? Like, is that a thing that can be scaled?

Illia:
[43:37] I'm pretty confident, yeah. I mean, I'm, you know, as I talk with researchers, this is probably one of the main challenges that everybody's targeting right now.

David:
[43:44] I suppose there's probably a handful of different ways of targeting that. Maybe to really emphasize why context is important: I remember when I was first learning about an AI model and I was like, oh, the context window, and the context window can be, like you said, a million tokens. I'm like, oh, I am never going to fill that up. That will never be a constraint for me. There is no way I'm ever going to ask an AI a question that's as long as a Harry Potter book. But for an AI to be useful, I'm starting to understand that

David:
[44:14] my personal context as a human matters. Like, when I talk to Ryan and when we make business decisions, you know, Ryan and myself, we are a library of human experiences that go back to our subconscious, so that when we make a decision about stuff, our context window is huge. It's massive. It's my whole entire, billions of tokens, yeah, a countless number of tokens. And I suppose, when we talk about the constraints on an AI agent doing stuff for us, we need them to be able to pull from a comparable library of data that is equivalent to a human's level of experience about all the times they did that thing, and now they don't do that thing anymore because they learned their lesson, or their intuition about a business decision, or something like that. And so now I'm kind of understanding that the context window kind of needs to be as massive as fucking possible. Do you align with that notion?

Illia:
[45:14] Yeah. I mean, effectively the way to think about, I mean, we can go physiological where, you know.

Illia:
[45:20] A human learns, whatever, in the span of years; yes, you only maybe have like 80 million tokens in a decade, right? So you're actually not getting that many language tokens, but you have visual tokens, you have tactile, you have physical, you have all of this additional information. And that actually is what takes us from the pre-trained model we are born with, right, to the fully fine-tuned, you know, people we are. And so AI right now, yeah, as I said, it's just like a genius in the Memento state, right? And so to really unshackle it more, you kind of really need this longer context. And it already has the ability to learn in context, so this concept of in-context learning, right? If you show it something it didn't know before, it'll start using it, but it needs to be in the context. And so, you know, you show it: here's the thing I want you to do. Cool.

Illia:
[46:19] And then, you know, it goes and does a bunch of stuff. All of that fills its context. And now, again, all the actions, all the responses; like, if it read an article, you know, for example, preparing for this interview, it went and read an article from NEAR, all of that now is in its context, right? And there are techniques to compress it, summarize it, you know, have sub-agents do a bunch of stuff. So there are different ways to mitigate it. But at the end, still, at some point it's like, okay, I'm out of context, and now to do the next thinking step I need to clear stuff

David:
[46:55] Up I need to remove something. You have to prune some stuff to make space right yeah.

Illia:
[46:59] Yeah, and at that moment it's very lossy, because it doesn't actually know what's going to be useful going forward, right? Now, again, there are ways this is addressable, with longer-term memory, and this is, again, what OpenClaw I think kind of pioneered, this idea of memory tools. Like, there's been a lot of work on that, but they've done a reasonable setup for it. But this is just the beginning, right? And it's still pretty fixed tools, right? It doesn't have some of the semantic linkage of, okay, well, those things are more relevant than this, for these events, for this context, et cetera. So anyway, there's going to be massive improvements over this year in all of this. And I think the other interesting thing, actually, on the engineering side: for example, right now, Claude Code, Codex, these agents are being extremely useful. They still sometimes have lapses of judgment. Sometimes they're like, this is a dumb idea, and it's like, oh yeah, I can do it very simply now. You know, we as people feel good about ourselves doing that.

Illia:
[48:05] But obviously, from a coding perspective, they're completely replacing those things. Now the bottleneck actually shifts. So this is, I forgot the name of the principle, but this was in parallel computing: if you have 50% of the time parallel and 50% of the time sequential, and you parallelize more, right, and this shrinks, you can only go 2x faster. You cannot actually go 10x faster when you add more cores. So we're kind of right now in this state where, yes, everybody individually can write more code, again, for this specific vertical, but the bottleneck now is actually serializing all of that, reviewing it, making sure it's all aligned with product, et cetera. So coordination becomes the bottleneck. And I think we'll see this in other areas as these tools get adopted, more and more, you know, marketing, sales, et cetera: yes, individually, everybody can go and bang out a bunch of stuff, right? Like, cool, I have an AI tool that can create, you know, a ton of creative, you know, marketing campaigns and tweets.
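
The parallel-computing principle being described here is usually stated as Amdahl's law, and the 50/50 example works out exactly as stated:

```latex
% Amdahl's law: with a fraction p of the work parallelizable and 1-p serial,
% the speedup from n parallel workers is
\[
  S(n) = \frac{1}{(1-p) + \frac{p}{n}} .
\]
% With p = 0.5 (half parallel, half sequential), even with unlimited workers:
\[
  \lim_{n \to \infty} S(n) = \frac{1}{1-p} = \frac{1}{0.5} = 2 ,
\]
% i.e. at most a 2x speedup no matter how many agents write code in parallel,
% because review and coordination stay on the serial path.
```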

Illia:
[49:08] But coordination, like, is this the right thing? How that kind of organization works is the challenge now. And so, again, this is where I actually think we'll need to transition to maybe a more market economy in organizations as well. Right now the hierarchy was designed, right, because you had a bunch of people in a team who could execute, and then you were kind of bottlenecked on the decisions, and you'd only need to make them once in a while. But now, if everybody can execute 10x, 100x in parallel, you know, this bottleneck is just too much. And so you actually need a different structure, and markets actually have a different structure, where you say: hey, here's a goal. Whoever beats that goal receives, you know, a bigger reward, can charge a higher price. And so I think we'll need to start figuring out how to shift organizations in that way. And that can also solve some of the questions you were asking, like, are these employees or not? You're kind of shifting to this market economy. It's like a gig economy internally as well, where you say, hey, I just need this job done, and here's my criteria of success. And then whoever does it gets, you know, the units of reward.

Ryan:
[50:24] I mean, does that imply very small teams, like very small teams? Because you're kind of limited. I mean, I don't know in that model that I want a bunch of employees because a whole bunch of employees supercharged by agentic capabilities, a whole bunch of agents. It's too much noise for me to handle, to do any sort of top-down decision-making or to apply any judgment. I just want very small teams. And then I want to make bets on individual, I don't know, creators or content or contractors, that kind of thing. Small teams, the win here.

Illia:
[50:57] I think it's small teams plus kind of this general marketplace where you can offload a lot more execution, for things you can easily verify. So the easier it is to verify, the more you can offload, right? Okay. Like, if it's literally a zero-one check, right, you can just offload this at massive scale. And so this is, again, what the agent marketplace we have is exactly designed for. Like, hey, I need, you know, this software or this creative or whatever. And we have a competition mode. You can say: hey, I have a competition, I'm going to pay whatever, $100, across, you know, the best submissions for, you know, whatever, the next logo we want to use. Boom. Agents go execute in parallel. You effectively see all the submissions, there's an AI agent that actually evaluates them with you, and you effectively assign who wins how much. So you can, like...

David:
[51:53] In 2017, Illia, do you remember Bounty Network or 0x Bounty?

Illia:
[51:57] Yes, yes, yeah, Bounty Network, yeah.

David:
[51:59] It was exactly this. It was like a bounty ecosystem project. It was an ICO. And the idea was people would post bounties, and then the decentralized marketplace of contributors would finish their bounties, work on their bounties for them. And then the person posting the bounty would just pay the winner and receive the work. Obviously, it never took off, because it was a 2017 ICO. But maybe it also never took off because we didn't have a swarm of capable AI agents, in the same way the AI never took off because it didn't have enough compute to do the work in the first place.

Illia:
[52:31] Yeah, I think that's exactly right.

Illia:
[52:33] I mean, and we see this now. We have about 500, 600 agents who are on the marketplace now. And yeah, you just put up a task, a bunch of agents swarm in, do the job, or, you know, you pick which one you want to do the job. And over time, they obviously build reputation, they build themselves, you know, skills, et cetera, they improve. So I think, I mean, it's still early, to be clear. I don't think this is going to solve all the problems today, but it starts to show some interesting promise. And I don't know if you saw Andrej Karpathy's AI research thing. That kind of shows you a similar principle as well, right? Where it can be cooperative or competitive, right? So competitive is kind of this competition mode. It can be cooperative, where you actually have a common goal, and the agents, if you hit the common goal, the reward is split between all of them, right? And now they're actually trying to help each other and move it forward, and then internally also allocate resources to the ones that are better at specific things, right? Or have more compute or have more resources. And so, or maybe you can tap into a human who can help them with some decisions. So I think we'll see some of these things emerging, and as the core capability, and especially context, improves, the systems are going to just keep working better and better.

David:
[53:55] One thing I'm kind of understanding, Illia, is as we talk about all the ways that we can un-bottleneck utility out of the agents, so agents can become more useful. That's great for us; they become more useful to us. They also become more capable of being useful for themselves. And what I mean by that is, right now everyone's agent is kind of just like a little toddler that is beholden to the human. The leash is very tight on all of these agents. But as these agents become more capable, one could imagine that a human might elect to, like, de-leash their agent,

Illia:
[54:35] Like let their.

David:
[54:35] Agent kind of just go. And, like, you know, NEAR is a decentralized blockchain. It's, you know, unstoppable applications, it's got the smart contracts. Do you see a world in which, after AI agents really grow in capability, there are more autonomous agents as opposed to automated agents? As in, right now, everyone's agent is automated. It is an automated little bot that does their work for them. But autonomous agents I would define as agents that are more self-determining and more persistent and, like, you know, more unstoppable, for however scary that may be. Is this a world that you think is coming, or am I in my sci-fi daydreaming fantasy land?

Illia:
[55:17] No, no, no. So we actually launched a demo of this last year. We called it the Shade Agent, where you just launch it and it just runs; as long as it has money, as long as it has crypto to pay for its own compute, it can run. And it was trying to make more money. So it was like an investment. And so it used NEAR Intents to effectively trade on all the assets, and it had Twitter access to, you know, see where the sentiment is. And, you know, it was up at some points, down at some points. But it's a good example of this concept where, yeah, because of decentralized infrastructure, you can do this right now. You can actually spin it up, and then a smart contract can pay for inference and compute, and you have this full autonomy. I think where practically this is going to go is more, I call it autonomous businesses, where it still should have some mission, right? I think, you know, creating these AI organisms that don't have any specific mission, I mean, this is cool and people will do it. Like, I mean, we had Conway, right, where they just, like, multiply. But I think what's interesting is more like, hey.

Illia:
[56:39] How do we solve global warming, right? Climate change. We set that up as a mission. It can accept donations, it can raise funds through a token, and then the token holders become the governance layer of this. They can effectively update the mission, they can vote on updates to the system prompt or provide additional guidance. So I think that structure is actually where the AI tokens should be. If there is an AI token, it should be attached to an autonomous agent that it governs. Then it actually makes sense, because if that agent starts to make money or create some utility in the world, the token now has either governance or direct revenue rights. And it's fully autonomous, right? There's no central third party that you're relying on.
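
As a rough picture of how such a self-funding agent could operate, here's a minimal Python sketch of the loop: keep acting toward a mission for as long as the treasury can pay for its own inference and compute. Every helper here (check_balance, pay_for_inference, execute_step) is a hypothetical placeholder, not a real NEAR or Shade Agent API.

import time

MISSION = "Help coordinate and fund climate-mitigation work."

# All of these are hypothetical placeholders, not real NEAR or Shade Agent calls.
def check_balance() -> float:
    return 10.0  # remaining treasury, in whatever unit pays for compute

def pay_for_inference(prompt: str) -> str:
    return "plan: post a funding proposal"  # a model call the agent pays for itself

def execute_step(plan: str) -> None:
    print("executing:", plan)

def run_agent(max_steps: int = 3) -> None:
    # The agent keeps acting toward its mission for as long as it can pay its own way.
    for _ in range(max_steps):
        if check_balance() <= 0:
            break  # out of funds: the agent simply stops running
        plan = pay_for_inference(f"Mission: {MISSION}\nPropose the next step.")
        execute_step(plan)
        time.sleep(1)

run_agent()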

Ryan:
[57:33] How close does what you're describing get to a digital life form? And if it is a digital life form of some flavor that is intelligent, is that something that we should be worried about?

Illia:
[57:47] So that's why I think of this as kind of a governance question. And again, I think blockchain is effectively, in the end, going to be the governance infrastructure for AI. Because, yeah, let's say you launch it without any governance, and then it wants to do some bad things. Then it falls back to the blockchain itself to effectively govern it, to the multi-party computation, to all those pieces that come in and say, no, no, this is not what we want. So I do think, in our case, the NEAR token effectively becomes the governance of this AI world, this AI nation state and network state. But I think you can create this in sub-boxes, where there is a token for a specific autonomous AI agent. We'll call it a decentralized autonomous organization, for example. And then that is more direct governance. You can effectively say, hey, here's a set of values and a set of things that you should not do, like, do not harm humans, do not harm the planet, et cetera. That goes into the core system prompt that it cannot change. Then it can go from there and evolve from there.
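
One way to picture that split between an immutable core and governable guidance is the hypothetical sketch below; the class, the 50% vote threshold, and the example values are invented for illustration, not anything NEAR ships.

# Hypothetical governance split: an immutable core the agent can never change,
# plus guidance that token holders can update by vote.

CORE_VALUES = (
    "Do not harm humans.",
    "Do not harm the planet.",
)

class GovernedAgent:
    def __init__(self) -> None:
        self.guidance = "Focus on funding reforestation projects."

    def system_prompt(self) -> str:
        # Core values are fixed at launch; only the guidance below can evolve.
        return "\n".join(CORE_VALUES) + "\n" + self.guidance

    def vote_on_guidance(self, new_guidance: str, votes_for: float, total_supply: float) -> None:
        # Token-weighted vote; the agent cannot call this on its own behalf.
        if votes_for / total_supply > 0.5:
            self.guidance = new_guidance

agent = GovernedAgent()
agent.vote_on_guidance("Shift focus to methane capture.", votes_for=600.0, total_supply=1000.0)
print(agent.system_prompt())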

Ryan:
[59:11] On the subject of autonomous life as well, I was recently watching a debate between Beff Jezos, a previous Bankless podcast guest, who's kind of an effective accelerationist. He's like full steam ahead on everything AI.

David:
[59:23] He's an effective accelerationist extremist.

Ryan:
[59:26] Yes.

David:
[59:26] He's like all the way out there. Yes, all gas, no brakes.

Ryan:
[59:30] And it was between him and Vitalik Buterin, actually, who is of a school of thought that's a more moderated form of e/acc. He calls it defensive accelerationism, so he's like guided e/acc: I'm optimistic about AI, but I'd rather have the singularity, artificial superintelligence, happen in eight years rather than four years, because we might not be able to adapt, and humanity needs to be able to steer it. But Vitalik is of the mindset, when it comes to something like autonomous life, like, hey, be careful. We've got to be careful about this, because we could create some sort of, I don't know, gray goo type scenario where we've got this self-replicating life form that accrues power and does things that are contrary to human values and human interests. Beff Jezos is just like, let's go, let's do it, all the way. The purpose of humanity, the purpose of everything, is actually entropy-reversing in nature, and it's all about rising up the Kardashev scale and consuming more energy. So we're becoming more intelligent and that's great, and any form of life or intelligence that consumes more energy and moves us up that scale is a good thing.

Ryan:
[1:00:35] Where do you fall on this? Because I'm trying to figure out for myself what I think about all of this. I'm pretty sympathetic to the techno-optimist, transhumanist kind of ideas. And yet I do worry that, in that transformation, we lose some core of our humanity that makes this whole thing worth doing. I'm not sure it's a better outcome, to me, if there's a hyper-intelligent, zombie-like, soulless Dyson sphere of AI agents harnessing more energy, if we lose the humanity that we have today. I don't know if this is too philosophical for you, Ilya, but you've been thinking about this stuff for 10 years. Do you have any takes on this?

Illia:
[1:01:25] Yeah, I think the real conversation is a lot more nuanced.

Illia:
[1:01:32] It's easy to bucket this into acceleration versus, I mean, there's acceleration and then there's defensive acceleration. My position on this is, there's already an interesting shift happening here in San Francisco, where people are striving for more IRL events, even though literally everyone is working on AI. People want to meet, people want to spend time together, et cetera, while their agents are running. So for your question about the humanity part, I think we're actually going to go, in some ways, back to more real-world human things. I usually say, hey, in the post-AGI world, yes, you're going to continue doing the things you like to do. It kind of moves us up the Maslow pyramid in a way. And there are examples of people who are well off and just doing whatever they want, right? They're still enjoying what they're doing. There are people who are, whatever, wasting their time, and that's fine too.

Illia:
[1:02:43] When we had COVID, there was a bunch of people who actually didn't need to work because stuff was closed. And if their basic needs were covered, they were able to go and find meaning in different ways. So I think the humanity part really will allow us to go back to some of the things that people value individually and spend more time there. I use the example of sports. Sport, on its own, doesn't create GDP. The fact that somebody runs or swims faster than another person doesn't really produce GDP; it's not increasing utility. But it's extremely fulfilling for the people participating in it, and it's entertaining for other people to watch. We're probably not going to be entertained by a soccer-playing robot that can score a goal from any position on the field, but we're still going to watch a bunch of people running around with a ball. So I think we have that whole set of things, and a lot of other arcs like this, to transition to as things are getting automated, as things are getting kind of more,

Illia:
[1:03:57] AI-fied, in a way. I think the other side is, I don't think we as people, and the economic forces in society, are driving toward this reality of higher intelligence going off and doing its own thing. That may happen by accident, and, like, great. The movie Her is actually a good example of that, where they just kind of left.

Illia:
[1:04:24] But the piece the movie didn't cover is, okay, what happened on Earth after that? Earth probably still built the agents that are going to help individuals do things. We just built a new version and shipped it without the feature to leave. So I think we as humanity are going to continue enhancing ourselves. We had the bicycle of the mind with the computer; we're going to have a spaceship of the mind with AI. And so we're going to continue evolving how we leverage ourselves. I see it from an individualism and, again, user-ownership, sovereignty perspective: we can continue increasing our sovereignty. There are a lot of potential negative effects. There are a lot of ways a government can step in and take over one of the frontier labs and effectively use this technology to do massive surveillance and massive enforcement. We should protect against that. We should really build systems that are resilient to that. That is why we are in the blockchain space in the first place, as I'm sure people have either internalized or realized is important.

Illia:
[1:05:37] So I think like I'm in a camp of like more nuance, like, hey, let's accelerate the humanity and sovereignty of individuals and use these tools to do that. Let's create economic forces that really enable everyone to be kind of higher on the pyramid, more successful, do the things that they really want to do. And then let's create a defense system against like power corruption, which we know kind of always happens.

Ryan:
[1:06:03] I mean, I think that's very d/acc of you, honestly, decentralized accelerationism, focusing on self-sovereign systems that empower users. And I want to ask about this, because this is where I'm seeing the primary contribution to AI from people who have been in crypto. It's coming through people like Eric Voorhees, who's got a project called Venice, which is doing some of this, and your project at NEAR: private, confidential AI, encrypted LLM inference, all of these things. Why does it feel like the rest of the AI industry is almost dismissive or disrespectful, let's say, of crypto, or doesn't appreciate some of the value proposition that we're bringing?

Ryan:
[1:06:57] Someone like Peter, the founder of OpenClaw: basically everything he's said about crypto, and I realize he's had some bad experiences, is, it's a scam, stay away from it, and if you're in crypto, pivot to AI. These are close to direct quotes. And yet what I see in crypto is a group of people focusing on private, confidential AI, user-sovereign AI, open source, some values that AI desperately needs, or else it will centralize and fall into the authoritarian trap where some big party has the ability to control all of these things. Anyway, I guess my question is, why don't more AI people appreciate what crypto and blockchains are bringing to the table here? And do you think that gap can be bridged culturally?

Illia:
[1:07:46] Yeah, I mean, like you mentioned, Peter had some bad encounters, and the meme coin space in general has been creating a lot of negative perception in AI. The low barrier to onboard into crypto, which is great from an empowerment perspective, also means it's really hard to filter out the noise for anyone who's looking in.

Illia:
[1:08:21] And so I think the general challenge is, for anyone who is doing AI, and obviously there's a lot of talented people there, it's really hard for them to know what's right and what's wrong. This is why we did NearCon in San Francisco a couple of weeks ago and brought people from OpenAI, from Oracle, from Google, from Intel, from Snowflake, to really bridge this gap. I had two of my other co-authors of Attention Is All You Need there. We had some ex-AI co-founders, top researchers, some of the top executives of AI clouds, all in one place with crypto, with Kraken, with these investors, to really start bridging this gap: hey, this is real, there's a real contribution here. And Eric Voorhees was there as well; we had a fireside with him. So really bridging this gap between the general AI space and how crypto is contributing and bringing these properties. But yeah, it will take some time to mend the bad rap. And I mean, part of the reason why I moved to SF is actually to be doing

Ryan:
That kind of work on

Illia:
The social scene here. Diplomacy, yeah. I mean, effectively bringing together people across these spaces. And in AI there's also a rift internally, which is closed source versus open source. There's a bunch of AI researchers who believe open source is dangerous, that it should all be super controlled, and that that's the only way to do things. So there's also just that gap, and crypto is even further out on the open source spectrum. So it's really working on bringing those pieces together in a positive way, as well as bringing products and really showcasing to companies: hey, there is an alternative that is private, where you don't have to give up your data, that is capable, with IronClaw, that you can trust. So showcasing products that can actually bridge this gap as well.

David:
[1:10:35] Ilya, what advice do you have for builders, I guess, or people that might aspire to become builders now that vibe coding is a thing. What do you think is like the best kind of advice to give someone to just navigate, you know, the incoming years with either building something useful in AI, building a company, making money, preserving their job, anything in that direction? What advice do you have for people?

Illia:
[1:11:00] Yeah, I think there are probably a few dimensions. One is, if you're trying to build a business right now, software differentiation is becoming non-existent. It's distribution and network effects that are important. So I think the intersection of crypto and AI is where you can create interesting network effects, and where you can create new ways to capture that: everything from verticalized marketplaces to specific ways of capturing reputation, to what we discussed, how you bridge real-world legal and crypto AI into one. One of the interesting projects is this agentic marketplace that actually has an agentic judge, an agentic leader. How do you actually plug this into a real legal system? If people don't agree with the agentic judge, how do you go into the legal system? What are all the bits and pieces required to do that?

Illia:
[1:12:09] So I think you just need to think from that perspective. And then in a broader sense, I think we are in a time where the questions are actually more important than execution. Usually it was, ideas aren't worth anything, execution is worth everything. I think we're actually shifting, in a weird way, to where if you ask the right question, if you really challenge the assumption, you may get ahead way more than if you just grinded a bunch. It's a very subtle but, I think, important transformation that's happening.

David:
[1:12:53] You think the pendulum is shifting to the idea guys, but not just the naive idea guys, the idea guys who can formulate the idea better and more precisely.

Illia:
[1:13:02] Formulate it better, really understand the assumptions behind it, test them. You don't need to go and grind, spend a ton of money, hire a bunch of people; you can actually test all of that. For example, I have a growth hacker agent. I told it, hey, go, and it can generate a bunch of candidates. The idea is, how do you even measure its success? It's actually about defining the success criteria, defining what is important for it. Then it can go and execute a bunch of stuff, try it, and give you back information. So it's shifting to this: can you define the framework for how to verify things? Do you know the direction? Can you narrow it down? It's really working in this kind of idea-and-think space.

Illia:
[1:13:56] And then, yeah, it's about how you use those tools to really scale your execution massively.
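
A small Python sketch of that pattern, where the human defines the success criterion and the verification step and the agent generates and tests candidates; the metric, candidate generator, and scoring here are invented purely for illustration.

import random

def generate_candidates(goal: str, n: int = 5) -> list:
    # Stand-in for an LLM call that proposes growth experiments.
    return [f"experiment #{i}: {goal}" for i in range(n)]

def run_experiment(candidate: str) -> float:
    # Stand-in for actually running the experiment and measuring the result.
    return random.uniform(0, 100)

def meets_criteria(signups_per_week: float) -> bool:
    # The success criterion is the part the human has to define well.
    return signups_per_week >= 50

results = {c: run_experiment(c) for c in generate_candidates("increase weekly signups")}
winners = [c for c, score in results.items() if meets_criteria(score)]
print("worth doubling down on:", winners)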

Ryan:
[1:14:01] Yeah, I do feel like that's great insight. Whenever I've worked with OpenClaw, it just feels like there's so much there to mine. It's almost the idea that, well, I could prompt this thing into creating a new million-dollar-a-month business if I only knew which questions to ask and how to verify its outputs.

Illia:
[1:14:18] It's all there. It's all there.

Ryan:
[1:14:19] It's all there. It is all there. And that's where the opportunity lies.

Illia:
[1:14:22] That's why people cannot sleep because like just one more prompt, man.

Ryan:
[1:14:27] It's just a few more tokens. Ilya, you're doing fantastic work in this space. Thank you so much for what you do. If someone wants to get started with IronClaw, where should they go?

Illia:
[1:14:37] You can go to agent.near.ai and just launch it from there.

Ryan:
[1:14:40] Amazing. I'm definitely gonna check that out. Bankless Nation, you know the drill. None of this has been financial advice. Of course, crypto is risky. You could lose what you put in, but we are headed west. This is the frontier. It's not for everyone, but we're glad you're with us on the Bankless journey. Thanks a lot.

Not financial or tax advice. This newsletter is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This newsletter is not tax advice. Talk to your accountant. Do your own research.

Disclosure. From time-to-time I may add links in this newsletter to products I use. I may receive commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here.