James (DrMiaow)

Here is a big-ish update that I will edit and expand into my next video. Please think of this as a down payment on my next video update. :D

Diplomatic status is still being sorted out. It's in the mail, ...but... Christmas... so I may need to duck out of the Schengen region for a day to reset my stay for another 90 days until it is complete.

Two of the three kids are in school. The youngest cries in class, and they have an extreme form of "gentle integration praxis" here, so I have to go in each day, take him out for 30 minutes, and then get told to take him home. It's been going on for the last week and a half. Due to Christmas, there is no continuity, and everything seems chaotic.

I've only been able to work 15 minutes here and there during the day, but I have been making progress.

As I can't do anything significant, I've been working on *small* components of the system that will allow the nodes in my system to act like a single entity.

*Low Latency Video App Streaming*

I have video-only low-latency UI streaming integrated into my node server. Think of it like cloud gaming, but for apps, and it runs in a browser. I still need to incorporate audio from my proof of concept, but that will probably be pushed back because it is optional for where I want to get with the first working version.

At the moment, you need to run a separate client web app to access an Application. The web app negotiates with the API to start a UI stream to the Application running in the system.

I want to serve the client web app from the server itself, route requests between nodes, and have a point of ingress via a domain name in the URL of the app. So I need to respond to DNS requests dynamically: which node has the network layer that has the app you want?

*DNS Server*

So, I've written my own server for DoH (DNS over HTTPS). This way, I can respond *immediately* to network changes. I've tried building similar systems with third-party DNS servers before, and the publishing lag and lack of fine TLS control are killers. This way, I can block on some requests until I have safe answers. I have also implemented DNSSEC, using the same system network layer chain of trust for certificates.
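For the curious, the core of a DoH GET handler is small. Here is a rough sketch (not the system's actual code, and the `ROUTES` table and names are made-up placeholders): it decodes the RFC 8484 `dns` query parameter and answers an A-record query straight from an in-memory routing table, which is what makes "immediate" responses to network changes possible.

```python
import base64
import struct

# Hypothetical routing table: which node currently serves which name.
# In a real system this would be updated live as network layers change.
ROUTES = {"app.node.example": "10.0.0.7"}

def decode_doh_param(dns_b64: str) -> bytes:
    """Decode the RFC 8484 'dns' GET parameter (base64url, padding stripped)."""
    pad = "=" * (-len(dns_b64) % 4)
    return base64.urlsafe_b64decode(dns_b64 + pad)

def parse_question(msg: bytes):
    """Return (query_id, qname, offset just past the question section)."""
    query_id, flags, qdcount, *_ = struct.unpack("!6H", msg[:12])
    labels, i = [], 12
    while msg[i] != 0:                          # name labels end with a zero byte
        n = msg[i]
        labels.append(msg[i + 1:i + 1 + n].decode("ascii"))
        i += 1 + n
    return query_id, ".".join(labels), i + 5    # skip zero byte + QTYPE/QCLASS

def answer(msg: bytes) -> bytes:
    """Build a minimal A-record response straight from the routing table."""
    query_id, qname, qend = parse_question(msg)
    ip = ROUTES[qname]
    header = struct.pack("!6H", query_id, 0x8180, 1, 1, 0, 0)
    rdata = bytes(int(p) for p in ip.split("."))
    # 0xC00C = compression pointer back to the name in the question section
    rr = struct.pack("!HHHLH", 0xC00C, 1, 1, 30, 4) + rdata
    return header + msg[12:qend] + rr
```

The response body would go back with `Content-Type: application/dns-message`; the HTTP plumbing is omitted here.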

*TLS Chain Of Trust*

These same core network layer certificates also drive the TLS chain of trust for HTTPS and SSH in the system.

I managed to get this tested and working again. Something rotted with browser updates over the last year, but I am now back to being able to serve a static "Hello, Cruel World!" with my own certs, chain of trust, and root cert in all the browsers I could get my hands on.

If you add the system's root cert to your OS's or OpenSSL's trust store, then HTTPS works transparently. Similarly for DNSSEC: you can add a local "trust anchor" for that.

This means the system can operate with or without the core internet, but I want to work in a hybrid mode: public-facing traffic uses the standard root services, while traffic between nodes uses its own private root services.

Everything is published into a distributed object model that exists in various forms on different network layers. It would be great to be able to get parts of the object model, like an entire app or a single entity, as local files, make edits to those files, and push the changes, like you can with Git. Why not use Git? So I am doing that.

*Git as an API*

Using the traditional approach of piping requests and responses to Git executables on the server would suck, because I would need a Git file-system mirror of all the objects in the entire system for each "view" of the object model checked out. I want to pass the data directly to the object model, potentially over the network, and not marshal it to and from disk with all the added latency. The system will have a lot of training data, programs, program execution history, and models: too much data for that approach.

It would be better if I could perform git clone, commit, etc., and, when I push changes, stream the objects down and translate them into CRUD operations with streams on the object model.

So, I have written a nice event-driven parser/streamer that can extract the essential elements from Git "smart protocol" operations and pack files.

I'm keeping it minimal because my needs are simple. I negotiate the smallest set of capabilities I need with the Git client, so I will only be sent pack files that I can handle.
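Underneath those operations, everything in the smart protocol is framed as pkt-lines. A minimal sketch of that framing (illustrative, not the system's actual parser): each packet starts with a 4-hex-digit length that includes the four length bytes themselves, and "0000" is a flush packet. (Protocol v2 adds a "0001" delimiter packet; this sketch ignores it.)

```python
def iter_pkt_lines(data: bytes):
    """Yield payloads from a buffer of pkt-lines; None marks a flush packet."""
    i = 0
    while i < len(data):
        size = int(data[i:i + 4], 16)
        if size == 0:                   # "0000" flush-pkt
            yield None
            i += 4
        else:
            yield data[i + 4:i + size]  # payload sits after the 4 length bytes
            i += size

def pkt_line(payload: bytes) -> bytes:
    """Frame a payload as a pkt-line."""
    return f"{len(payload) + 4:04x}".encode() + payload
```

So a `want` line such as `want abc\n` goes over the wire as `000dwant abc\n`.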

I want to detect CRUD operations on objects (files). Also, depending on the network layer I am on, I want to be able to push changes between network layers. (More about this in my next video.)
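As a toy illustration of the CRUD-detection idea (made-up names, nothing like the real object model): once a push has been unpacked, the incoming tree can be diffed against the current one to produce create/update/delete events.

```python
def crud_events(old: dict, new: dict):
    """Diff two {path: blob_id} snapshots into (operation, path) events."""
    events = []
    for path in sorted(old.keys() | new.keys()):
        if path not in old:
            events.append(("create", path))     # file appeared in the push
        elif path not in new:
            events.append(("delete", path))     # file removed by the push
        elif old[path] != new[path]:
            events.append(("update", path))     # blob id changed
    return events
```

In a streaming design these events would be emitted as each object arrives rather than computed over complete snapshots, but the mapping onto CRUD is the same.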

For anyone wanting to know what pack files are like, here is a good primer by someone who also wrote their own pack-file parser:

codewords.recurse.com/issues/three/unpacking-git-p…
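To give a flavour of what the primer covers, here is a rough sketch (not my production code) of the two fixed-format pieces of a pack file: the 12-byte header, and the per-object varint that encodes each object's type and uncompressed size.

```python
import struct

OBJ_TYPES = {1: "commit", 2: "tree", 3: "blob", 4: "tag",
             6: "ofs_delta", 7: "ref_delta"}

def parse_pack_header(data: bytes):
    """Return (version, object_count) from the 12-byte 'PACK' header."""
    magic, version, count = struct.unpack("!4sLL", data[:12])
    assert magic == b"PACK"
    return version, count

def parse_object_header(data: bytes, i: int):
    """Decode one object's type and uncompressed size starting at offset i.

    First byte: 1 continuation bit, 3 type bits, 4 low size bits.
    Each following byte: 1 continuation bit, 7 more size bits.
    Returns (type_name, size, offset of the zlib stream that follows)."""
    byte = data[i]
    obj_type = (byte >> 4) & 0x7
    size = byte & 0x0F
    shift = 4
    while byte & 0x80:              # continuation bit set
        i += 1
        byte = data[i]
        size |= (byte & 0x7F) << shift
        shift += 7
    return OBJ_TYPES[obj_type], size, i + 1
```

After each object header comes the zlib-compressed object data, which is where the actual streaming work happens.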

Each Git operation my server can handle has a parser which provides a writer; as we go deeper, sub-parsers expose their own writers. We route received data to the top-level writers, and it filters down until the request is complete.

Each parser emits events; some of these events contain a Reader that allows the parsed data to be read out. Once again, as data arrives, it can be parsed by other sub-parsers with the same execution pattern. Sub-parsers send events back up to parent parsers, so it forms a nice, efficient system where each layer handles, and is informed of, what is essential within its context.

You shift a block of bytes off the network and write it to the nested parser streams. It filters through immediately, and data starts pouring into the events. So, as changes are being pushed, they stream immediately into the server objects.
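In miniature, the pattern looks something like this (toy names and a single nesting level, Python rather than the real implementation): raw bytes written at the top filter down to a sub-parser, which emits events back up as soon as it has something complete.

```python
class LineParser:
    """Sub-parser: buffers bytes and emits one event per complete line."""
    def __init__(self, on_event):
        self.on_event = on_event
        self.buf = b""

    def write(self, chunk: bytes):
        self.buf += chunk
        while b"\n" in self.buf:
            line, self.buf = self.buf.split(b"\n", 1)
            self.on_event(("line", line))       # event goes back up

class RequestParser:
    """Top-level parser: owns a sub-parser and collects its events."""
    def __init__(self):
        self.events = []
        self.sub = LineParser(self.events.append)

    def write(self, chunk: bytes):
        # Route raw network bytes straight down; nothing touches disk.
        self.sub.write(chunk)
```

Writes can split mid-line: nothing is emitted until the sub-parser has a complete unit, and the partial tail just waits in the buffer for the next chunk.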

I can generate a pack file on the server that represents a set of files. That will need to be rewritten eventually, but for now it just proves that I can do it. Once I have the current CRUD events being captured, I will stop; by then, hopefully, I will have some larger chunks of clear time to integrate and complete the distributed object model at the center of all of this. Then, once I can access that running on the network, I will look at generating pack files from it and mutating it from pushes.

That part will require attention to detail and some larger scale effort I can only do with large blocks of clear-headed time.

What I will likely work on next is pivoting back to serving the video UI streaming client web app for an Application directly from a node via "egress points", using the DNS and TLS work I have done to pull it all together.

This data is all mocked in memory for now, so after that I can't find any more small things that need to be done and will have to tackle a big job... the object model. I've not looked at it since the COVID lockdown, so it will need some love.

- James

10 months ago (edited) | [YT] | 17