James (DrMiaow)
Hello, Cruel World!
*a progress update from the progressing-guild*
I’ve been refining the cryptographic foundation of my self-evolving system.
Each node now operates within layered credentials like an onion or a parfait.
Every layer has its own keys:
Public-only layers can read and submit, but not modify or decrypt.
Authoritative layers hold private keys and actually change data.
The root layer anchors the whole trust chain.
When a node boots, it carries its credentials and those of its ancestors.
If it’s authoritative, it’s a full node. If not, it proxies upstream.
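To make the layering concrete, here is a minimal Go sketch of how a credential chain like this could hang together. Every name and type here is an illustrative assumption, not the system's actual code:

```go
package layers

import "crypto/ed25519"

// Layer is one ring of the onion (or one stratum of the parfait).
// Illustrative sketch only; the real types differ.
type Layer struct {
	Name    string
	Public  ed25519.PublicKey  // anyone at this layer can read and submit
	Private ed25519.PrivateKey // nil on public-only layers
	Parent  *Layer             // nil at the root, which anchors the chain
}

// Authoritative reports whether this layer holds a private key and
// can actually change data.
func (l *Layer) Authoritative() bool { return l.Private != nil }

// BootChain returns the credentials a node carries at boot: its own
// layer plus those of all its ancestors, up to the root.
func (l *Layer) BootChain() []*Layer {
	var chain []*Layer
	for cur := l; cur != nil; cur = cur.Parent {
		chain = append(chain, cur)
	}
	return chain
}
```

A node whose layer is Authoritative() runs as a full node; otherwise it proxies upstream.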
Inside and outside these nodes live sessions (think Git forks, but focused on differences).
Sessions track changes to the object model, can inherit or derive credentials, and merge upward when their deltas prove useful.
Light clients just call into a node remotely (like the CLI does). Heavier clients run their own sessions. The heaviest run a node in userspace and call into it directly or via local RPC (which the CLI can also do).
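As a sketch of that spectrum (assumed names again, in Go): the same interface can be satisfied by a remote proxy or by a node embedded in the client process.

```go
package client

import "context"

// Node is the one surface every client tier talks to. Hypothetical
// interface for illustration only.
type Node interface {
	Query(ctx context.Context, key string) ([]byte, error)
}

// RemoteNode is the light-client path: every call goes over the
// wire to a node running elsewhere.
type RemoteNode struct{ Endpoint string }

func (r *RemoteNode) Query(ctx context.Context, key string) ([]byte, error) {
	// RPC to r.Endpoint elided in this sketch.
	return nil, nil
}

// EmbeddedNode is the heaviest path: a full node running in
// userspace, called directly with no network hop.
type EmbeddedNode struct{ /* in-process node state */ }

func (n *EmbeddedNode) Query(ctx context.Context, key string) ([]byte, error) {
	// Direct in-process lookup elided in this sketch.
	return nil, nil
}
```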
Training/Program Search is treated as a first-class entity. If a session proves it made a verifiable improvement, it can push changes upstream as a simple “proof of useful work.”
Maybe, eventually, that becomes the economic loop: nodes trade training work for compute.
Right now, I’m focused on getting the whole stack booting: layers, sessions, caching, and merge logic all flowing cleanly up and down the hierarchy.
In short:
Layers form the trust chain.
Sessions form the version chain.
Nodes and clients form the compute mesh.
It’s an insane idea… but it’s coming together.
Here is a rougher but longer version.
x.com/DrMiaow/status/1983869214868840529
1 week ago
James (DrMiaow)
Hello, Cruel World!
When I started my project many years ago, I had quite a few people tell me that what I had envisioned was crazy and that it was impossible.
Now it seems everyone is starting to follow in the same direction.
I can't compete with Elon's network effect or DeepMind's deep pockets, but I'm not going to stop. Even if I come last, I'm going to finish it.
Onward to the next stage of The Object Model.
(This will be the basis of the script for my next video update)
The Object Model
I'm aiming to get a minimal CRDT system in place that the system can explore self-modifying in the future.
No, I'm not using something off the shelf. I need to write everything in a way that can eventually be substituted by the evolvable base code, and then the code that invokes it is ported into a similarly self-modifiable orchestration/scaffolding. All in the same language and the same codebase, structured in a way where its code and architecture can be optimally reasoned with by LLMs.
It's the best clear bootstrap path that scales, other than GPT-9000 farting it out fully formed as an afterthought.
Even that path is covered, though, by the same strategy.
While the system as a whole is more inspired by @DavidBrin's Practice Effect (as per my last video), I kind of feel it will initially run for a long time without an impressively effective mutation before we see something small, but exciting, like the cell simulation in @gregeganSF's Permutation City. Like a work of art ticking over, where we all "Monitor the Situation" :D
My project is one part serious, three parts "The Throne of the Third Heaven of the Nations’ Millennium General Assembly", and a tiny pinch of TempleOS, because to attempt something like this, you have to be a little bit touched... even though now it seems that all the serious players are heading in the same direction.
What I have right now in the OM is a teeny, tiny minimalist CRDT. I need to extend it just a little further to allow horizontal merges. In this scenario, multiple nodes mutate a common object, and as they push the changes back up to the root, the changes are merged deterministically.
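As a sketch of what "merged deterministically" can mean here (illustrative Go, not the actual OM code): give every change a position in a total order, such as (logical clock, node ID), and any node that sees the same set of changes folds them into the same state, regardless of arrival order.

```go
package om

import "sort"

// Change is one mutation to a shared object. Illustrative only.
type Change struct {
	Clock  uint64 // logical timestamp
	NodeID string // tie-breaker, makes the order total
	Field  string
	Value  []byte
}

// Merge folds concurrent changes deterministically: sort by
// (Clock, NodeID) and apply last-writer-wins per field. Any two
// nodes given the same set of changes converge on the same state.
func Merge(changes []Change) map[string][]byte {
	sort.Slice(changes, func(i, j int) bool {
		if changes[i].Clock != changes[j].Clock {
			return changes[i].Clock < changes[j].Clock
		}
		return changes[i].NodeID < changes[j].NodeID
	})
	state := make(map[string][]byte)
	for _, c := range changes {
		state[c.Field] = c.Value
	}
	return state
}
```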
It's perilous because it's easy to make a change with unforeseen consequences. Locks, knots and two smoking servers.
Both it and my orchestration system need to be kept as a diagrammatic, local-rewrite system with small, commuting steps guaranteeing a global invariant, or convergence to the same.
That will make it safe and deterministic.
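A toy illustration of why commuting steps buy convergence (assumed names, nothing from the real codebase): if the state is a set and every step is an insertion, any two steps commute, and you can check that property directly.

```go
package rewrite

// Step is one small, local rewrite over the state. Toy example: the
// state is a set and each step inserts one element — insertions
// commute, so any interleaving converges on the same set.
type Step func(state map[string]bool)

func Insert(x string) Step {
	return func(s map[string]bool) { s[x] = true }
}

// Commutes verifies that applying a and b in either order yields the
// same state — the local property that buys global convergence.
func Commutes(a, b Step) bool {
	s1 := map[string]bool{}
	a(s1)
	b(s1)
	s2 := map[string]bool{}
	b(s2)
	a(s2)
	if len(s1) != len(s2) {
		return false
	}
	for k := range s1 {
		if !s2[k] {
			return false
		}
	}
	return true
}
```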
Interaction nets and CRDTs share a common foundation with knot theory in local, confluent rewrite systems. I have a highly speculative/fringe hunch that every NP problem has a reduction to something knot-theory-ish, so a minimal framework rooted in knot-theoretic moves might offer new insights into P vs NP.
If I had infinite money, I would snap up @VictorTaelin's @higherordercomp now, hire a busload of knot theory PhDs, and steer them to build what I wanted.
3 months ago (edited)
James (DrMiaow)
As requested in chat #3... some Rick and Morty startup art
5 months ago
James (DrMiaow)
Almost there. A slight delay because a new shipment of furniture to build arrived, and there were some bugs to be fixed for the demos. :D
Now recording the demos!
5 months ago
James (DrMiaow)
OSINT Challenge!
Where am I?
6 months ago
James (DrMiaow)
Duty calls. No video editing for a few days.
6 months ago
James (DrMiaow)
I'm all tentacles editing right about now.
Thank you to everyone who shared their opinion about the audio in my test videos.
Hoping to finish the video by the end of the month, but with kids, etc. Additionally, I am sometimes called upon to re-engineer someone else's problem at short notice, and I have heard that I might be needed soon.
6 months ago
James (DrMiaow)
I would have had a video out this week, but I have been at home babysitting sick children for the last two weeks.
All three are now back at school as of today.
I will again attempt to record on this coming Monday and start editing, barring any of the usual disasters.
6 months ago
James (DrMiaow)
Hello, Cruel World!
I have emerged at the end of moving house. Or is it just the beginning? So many boxes to unpack still. My office is the general accumulator register for this process of moving, categorizing, and unpacking, so it's swamped and in disarray.
During the move, between traumatic deliveries and bouts of construction, I started refactoring my video encoder to make it a general media encoder. A lot of generic/template abuse going on. I'm just about to complete the distillation of the encoding essence and implement a concrete system for audio.
I've been doing that on my cheap, gutless 2023 AMD laptop. It's somewhere in the house. I can ping it. I have no idea where, though.
As for my walking-about coding but expendable 2014 MacBook Air... during the move, I upgraded its storage from 128GB to 1TB for peanuts because I need to generate serious training data for my latest model. Positive side-effect: it's much more responsive and speedy now.
I will be assembling my "studio" over the next few weeks. Time to brush off that script that I started working on such a long time ago that I have rewritten it twice. Third time's the charm.
9 months ago
James (DrMiaow)
Here is a big-ish update that I will edit and expand into my next video. Please think of this as a down payment on my next video update. :D
Diplomatic status is still being sorted out. It's in the mail... but... Christmas... so I may need to duck out of the Schengen region for a day to reset my stay for another 90 days until it is complete.
Two of the three kids are in school. The youngest cries in class, and they have an extreme form of "gentle integration praxis" here, so I have to go in each day, take him for 30 minutes, and then get told to take him home. It's been going on for the last week and a half. Due to Christmas, there is no continuity, and everything seems chaotic.
I've only been able to work 15 minutes here and there during the day, but I have been making progress.
As I can't do anything significant, I've been working on *small* components that will allow the nodes in my system to act like a single entity.
*Low-Latency Video App Streaming*
I have video-only, low-latency UI streaming integrated into my node server. Think of it like cloud gaming, but for apps, and it runs in a browser. I also need to incorporate audio from my proof of concept, but that will probably be pushed back because it is optional for where I want to get with the first working version.
At the moment, you need to run a separate client web app to access an Application. The web app negotiates with the API to start a UI stream to the Application running in the system.
I want to serve the client web app from the server itself, route requests between nodes, and have a point of ingress via a domain name in the app's URL. So I need to respond to DNS requests dynamically: which node has the network layer that has the app you want?
*DNS Server*
So, I've written my own server for DoH (DNS over HTTPS). This way, I can respond *immediately* to network changes. I've tried building similar systems with third-party DNS servers before, and the publishing lag and lack of fine TLS control are killers. This way I can block on some requests until I have safe answers. I have also implemented DNSSEC and use the same system network layer chain of trust for certificates.
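For flavour, the core of a DoH endpoint is small. This is a bare-bones sketch using Go's golang.org/x/net/dns/dnsmessage package, not my actual server: accept an application/dns-message POST, answer from a live routing table, and keep TTLs at zero so clients always re-ask and pick up network changes immediately.

```go
package main

import (
	"io"
	"net/http"

	"golang.org/x/net/dns/dnsmessage"
)

// lookup is a stand-in for the live routing table: name -> IPv4.
func lookup(name string) ([4]byte, bool) {
	if name == "app.example.internal." { // hypothetical name
		return [4]byte{10, 0, 0, 1}, true
	}
	return [4]byte{}, false
}

func doh(w http.ResponseWriter, r *http.Request) {
	body, _ := io.ReadAll(r.Body)
	var m dnsmessage.Message
	if err := m.Unpack(body); err != nil || len(m.Questions) == 0 {
		http.Error(w, "bad query", http.StatusBadRequest)
		return
	}
	q := m.Questions[0]
	m.Header.Response = true
	m.Header.Authoritative = true
	if ip, ok := lookup(q.Name.String()); ok && q.Type == dnsmessage.TypeA {
		m.Answers = append(m.Answers, dnsmessage.Resource{
			Header: dnsmessage.ResourceHeader{
				Name: q.Name, Type: dnsmessage.TypeA,
				Class: dnsmessage.ClassINET, TTL: 0, // TTL 0: always re-ask
			},
			Body: &dnsmessage.AResource{A: ip},
		})
	}
	out, _ := m.Pack()
	w.Header().Set("Content-Type", "application/dns-message")
	w.Write(out)
}

func main() {
	http.HandleFunc("/dns-query", doh)
	http.ListenAndServe(":8053", nil) // a real DoH server sits behind TLS
}
```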
*TLS Chain Of Trust*
These same core network layer certificates also drive the TLS chain of trust for HTTPS and SSH in the system.
I managed to get this tested and working again. Something rotted with browser updates over the last year, but I'm now back to being able to serve a static Hello, Cruel World! with my own certs, chain of trust, and root cert in all the browsers I could get my hands on.
If you add the system's root cert into your OS's or OpenSSL's "Root Key Chain", then HTTPS works transparently. Similarly for DNSSEC: you can add a local "trust anchor" for that.
This means the system can operate with or without the core internet, but I want to work in a hybrid mode: public-facing traffic uses the standard root services, and traffic between nodes uses its own private root services.
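Programmatically, the private half of that hybrid is the same trick. An illustrative Go client (the file name and URL are made up) that trusts the system's root cert for its TLS verification:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// rootCA.pem is the system's root cert; the path is illustrative.
	pem, err := os.ReadFile("rootCA.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(pem) {
		log.Fatal("could not parse root cert")
	}
	// A client that trusts the system's private chain of trust —
	// the in-process equivalent of adding the root cert to the
	// OS key chain.
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}
	resp, err := client.Get("https://node.example.internal/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println(resp.Status) // Hello, Cruel World!
}
```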
Everything is published into a distributed object model that exists in various forms on different network layers. It would be great to be able to get parts of the object model, like an entire app or a single entity, as local files, make edits to those files, and push the changes back, like you can with Git. Why not use Git? So I am doing that.
*Git as an API*
Using the traditional approach of piping requests and responses to the git executables on the server to respond to Git requests would suck, because I would need a git file-system mirror of all the objects in the entire system for each "view" of the object model checked out. I want to pass the data directly to the object model, potentially over the network, and not marshal it to and from disk with all the added latency. The system will have a lot of training data, programs, program execution history, and models. Too much data for that approach.
What would be better is if I could perform git clone, commit, etc., and when I pushed changes, the objects streamed down and were translated into CRUD operations with streams on the object model.
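The target shape is something like this assumed interface (not the real OM API): pushes decode straight into CRUD calls that take streams, so object bodies never need to touch the disk on the way through.

```go
package om

import (
	"context"
	"io"
)

// Store is the surface a Git push gets translated onto. The names
// are assumptions for illustration; the real object model differs.
type Store interface {
	// Create and Update stream the object body straight off the
	// wire — no marshalling to a working tree on disk first.
	Create(ctx context.Context, path string, body io.Reader) error
	Update(ctx context.Context, path string, body io.Reader) error
	Delete(ctx context.Context, path string) error
}
```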
So, I have written a nice event-driven parser/streamer that can extract the essential elements from the Git "smart protocol" operations and pack files.
I'm keeping it minimal because my needs are simple. I negotiate the smallest set of capabilities I need with the Git client, so I will only be sent pack files that I can handle.
I want to detect CRUD operations on objects (files). Also, depending on the network layer I am on, I want to be able to push changes between network layers. (More about this in my next video.)
For anyone wanting to know what pack files are like, here is a good primer by someone who also wrote their own pack-file parser.
codewords.recurse.com/issues/three/unpacking-git-p…
Each Git operation my server can handle has a parser that provides a writer; as we go deeper, sub-parsers expose their own writers. We route received data to the top-level writers, and it filters down until the request is complete.
Each parser emits events; some of these events contain a Reader that allows the parsed data to be read out. Once again, as data arrives, it can be parsed by other sub-parsers with the same execution pattern. Sub-parsers send events back up to their parent parsers, so it forms a nice, efficient system where each layer handles, and is informed of, exactly what is essential within its context.
You shift a block of bytes off the network and write it to the nested parser streams. It filters through immediately, and data starts pouring into the events. So as the changes are being pushed, they are immediately streaming into the server objects.
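The outermost layer of that stack is the pkt-line framing the smart protocol uses: each packet starts with a 4-hex-digit length that includes the four length bytes themselves, and "0000" is a flush packet. A minimal event-emitting reader for that framing looks roughly like this (a sketch, not my production parser):

```go
package pkt

import (
	"fmt"
	"io"
	"strconv"
)

// Event is what the pkt-line layer hands to the layer above it.
type Event struct {
	Flush   bool   // a "0000" flush packet
	Payload []byte // nil on flush
}

// Scan reads pkt-lines from r and emits one Event per packet: the
// same shift-bytes-in, events-pour-out pattern described above.
func Scan(r io.Reader, emit func(Event) error) error {
	head := make([]byte, 4)
	for {
		if _, err := io.ReadFull(r, head); err != nil {
			if err == io.EOF {
				return nil // clean end of stream
			}
			return err
		}
		n, err := strconv.ParseUint(string(head), 16, 16)
		if err != nil {
			return fmt.Errorf("bad pkt-line length %q", head)
		}
		switch {
		case n == 0: // flush-pkt
			if err := emit(Event{Flush: true}); err != nil {
				return err
			}
		case n < 4: // protocol v2 delimiters (0001, 0002); skipped here
			continue
		default:
			payload := make([]byte, n-4) // length includes the 4 header bytes
			if _, err := io.ReadFull(r, payload); err != nil {
				return err
			}
			if err := emit(Event{Payload: payload}); err != nil {
				return err
			}
		}
	}
}
```

In the real thing, each payload would then be routed into the sub-parser for whatever operation is in flight.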
I can generate a pack file on the server that represents a set of files. That will need to be rewritten eventually, but for now, that part is just to prove that I can do it. Once I have these current CRUD events being captured, I will stop, and by then, hopefully, I will have some larger chunks of clear time to integrate and complete the distributed object model at the center of all of this. Then, once I am able to get access to that running on the network, I will look at generating pack files from it and mutating it from pushes.
That part will require attention to detail and some larger-scale effort I can only do with large blocks of clear-headed time.
What I will likely work on next is pivoting back to serving the video UI streaming client web app for an Application directly from a node via "egress points", using the DNS and TLS work I have done to pull it all together.
This data is all mocked in memory for now, so after that, I can't find any more small things that need to be done and will have to tackle a big job... the object model. I've not looked at it since the COVID lockdown, so it will need some love.
- James
10 months ago (edited)