AI. Robotics. Code. Engineering.

GOD BLESS THE SUPPORTERS!


FlowState

just posted a sneak peek of the new video over on the patreon and in the discord.

if you're not on the patreon, what are you even doing?



you get:

- early access to content, workflows, code, and other assets

- your name in the credits of my videos (and a link of your choice)

- free FlowState stickers, coffee mugs, shirts, hoodies... and all kinds of shizzzz!


plus it helps me put more time into creating the goods.

if you can't support right now, please make sure you're subbed and liking and sharing the videos.

that stuff helps a ton too!

for everyone not on the patreon, the new video should be published by the end of the weekend.


hope everyone is having a great summer!

7 months ago | [YT] | 0

FlowState

This is something I've pondered for a lonnng time. Figuring out how to get nanometer precision on a shoestring budget is... challenging, to say the least.

BUT... I think I've finally wrangled it. Tax refunds are coming soon and I've got a materials list together.

The overall project will happen in stages. First, I'll build a single nanopositioner - essentially a linear actuator with nanometer precision.

Next will come two more of those. With three total, I can build a 3-axis nanopositioning bed. Think of a 3D printer bed that only moves a few nanometers in the X, Y, and Z directions.

Adding a cantilever and probe will give me nano-scale imaging capabilities. At that point, the device is officially an Atomic Force Microscope. We can take images of individual atoms with it.
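Just for intuition, the imaging step is a raster scan: step the bed through an X-Y grid and record the probe reading at each point. Here's a toy simulation with a made-up surface (nothing like the real cantilever feedback loop, just the scan pattern):

```python
import numpy as np

def raster_scan(probe_height, xs, ys):
    """Toy AFM raster: step the bed through an X-Y grid and record the
    probe reading at each point. `probe_height(x, y)` stands in for the
    real cantilever-deflection feedback loop."""
    image = np.zeros((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            image[i, j] = probe_height(x, y)
    return image

def surface(x, y):
    # fake sample: one atom-sized bump centered at (5, 5)
    return np.exp(-((x - 5) ** 2 + (y - 5) ** 2))

img = raster_scan(surface, np.linspace(0, 10, 32), np.linspace(0, 10, 32))
```

The real trick, of course, is that each "step" of the grid is only a few nanometers - that's what the positioners are for.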

Adding a second probe without an oscillating cantilever will then allow me to do the basic operations that Wolkow demonstrates in this video.

So, stage one coming soon. Taxes are filed. I used Cash App to file this year. (Completely free btw!) My taxes are pretty simple, so I should get a refund in a week or so, and stage one of this project kicks off!

Just as a reminder to everyone, I am taking a full course load this semester and I have a part-time job now. I'll also be taking on more serious projects like this one, which take time and money to build. So content will come a little more slowly, BUT it will be so much cooler than previous content!

THANK YOU SO MUCH to all the Patrons here who have stuck with me through the winter hiatus! This year is going to be a crazy good year and I'm very excited to share and discuss with everyone!

11 months ago | [YT] | 0

FlowState

Crazy Good LTX Results in About 15 SECONDS!!!

LTX runs in seconds on consumer hardware and generates cohesive, high quality, high frame rate video.

Video is a .webp for YouTube but even better when saved as .mp4

Serious game changer for video!

Video on LTX & Mochi coming soon!

1 year ago (edited) | [YT] | 12

FlowState

Image Building with FlowState Preprocessor

By masking a couple of "from" and "to" images, you can bring elements from other images into an image, to get exactly what you want in the output. Now you have some of the key clipping, cropping, and layering features you'd use in Inkscape or Illustrator, right in Comfy. No need to export/import and edit images in different programs. New update coming very soon!
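For anyone curious, the core of that masked compositing step can be sketched in a few lines of NumPy (a hypothetical helper with names of my choosing, not the node's actual code):

```python
import numpy as np

def composite(to_img, from_img, mask):
    """Bring the masked pixels of `from_img` into `to_img`.

    to_img, from_img: float arrays of shape (H, W, C) in [0, 1]
    mask: float array of shape (H, W); 1.0 where `from_img` wins
    """
    m = mask[..., None]  # add a channel axis so it broadcasts over C
    return m * from_img + (1.0 - m) * to_img

# example: paste the left half of a white image into a black one
to_img = np.zeros((4, 4, 3))
from_img = np.ones((4, 4, 3))
mask = np.zeros((4, 4))
mask[:, :2] = 1.0
out = composite(to_img, from_img, mask)
```

With a soft (feathered) mask instead of a hard 0/1 one, the same blend gives smooth edges between the two sources.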

1 year ago | [YT] | 7

FlowState

WE ARE SO BACK!

So close guys! Demo stream tonight, in about 12 hrs!

The FS Image Batcher is the new Latent Chooser. Here's the flow:

The prompt node generates a list of prompts based on your settings (multiple prompts, using an LLM, CLIP preset values, etc.)

Then the Image Batcher creates one latent image, or a batch of latent images, for the sampler - one per prompt. So with each prompt you can use a variety of noisy latent images, an empty latent/black square, or an all-white square. You can set custom sizes or select a preset resolution/orientation. Or you can select an input image to use as a latent for img2img... all the same as before with the Latent Chooser, except for a few new options for noisy latents.
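As a rough sketch of what "one latent per prompt" means, here's a toy NumPy helper (not the node's actual code - I'm assuming SD-style latents at 1/8 the pixel resolution with 4 channels):

```python
import numpy as np

def make_latent(kind, height, width, channels=4, seed=0):
    """One starting latent per prompt: noisy, black (zeros), or white (ones).

    Hypothetical helper. SD-style latents are 1/8 the pixel resolution.
    """
    h, w = height // 8, width // 8
    if kind == "noise":
        return np.random.default_rng(seed).standard_normal((channels, h, w))
    if kind == "black":
        return np.zeros((channels, h, w))
    if kind == "white":
        return np.ones((channels, h, w))
    raise ValueError(f"unknown latent kind: {kind}")

# one noisy latent per prompt, each with its own seed
prompts = ["a castle", "a forest", "a city"]
batch = np.stack([make_latent("noise", 720, 1280, seed=i)
                  for i, _ in enumerate(prompts)])
```

The sampler then denoises each latent in the batch against its matching prompt.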

BUT THEN... you can also choose to inpaint/outpaint your input image as well.

For outpainting, you can scale down or crop down (say, 10% around the edges) and paint in the surrounding area, or you can use the custom sizes or resolution presets to add more area to a smaller image, say going from 1280x720 to 1920x1080. The surrounding area to be painted in can also be noisy, black, or white (maybe more colors coming soon).
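The outpainting geometry boils down to centering the image on a bigger canvas and building a mask that marks the border as "paint here." A toy NumPy sketch (illustrative only - the real node works on latents, not raw pixels):

```python
import numpy as np

def expand_canvas(img, new_h, new_w, fill="black"):
    """Center `img` on a larger canvas; the border is what gets outpainted.

    fill: "black", "white", or "noise" for the to-be-painted border.
    Returns (canvas, mask) where mask is 1 where the sampler should paint.
    """
    h, w, c = img.shape
    top = (new_h - h) // 2
    left = (new_w - w) // 2
    if fill == "noise":
        canvas = np.random.default_rng(0).random((new_h, new_w, c))
    else:
        canvas = np.full((new_h, new_w, c), 1.0 if fill == "white" else 0.0)
    canvas[top:top + h, left:left + w] = img
    mask = np.ones((new_h, new_w))        # 1 = paint this pixel
    mask[top:top + h, left:left + w] = 0  # 0 = keep the original pixels
    return canvas, mask

# the 1280x720 -> 1920x1080 example from above
img = np.full((720, 1280, 3), 0.5)
canvas, mask = expand_canvas(img, 1080, 1920)
```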

For inpainting, you can crop out a section in the middle (again say 10% or whatever) and paint in the cropped out area, again noisy, black, or white.

Then there's masked painting, where you can mask certain parts of the image, fill with black/white/noise, and paint in that area. That can work as object removal, or, with the right prompting, it can replace objects in the image based on your prompt.
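That fill step before repainting can be sketched like this (again a hypothetical helper, not the node's code):

```python
import numpy as np

def fill_masked(img, mask, fill="noise", seed=0):
    """Replace masked pixels with noise/black/white before repainting.

    With an object mask, this is the first half of object removal: wipe
    the region, then let the sampler paint over it from the prompt.
    """
    if fill == "noise":
        values = np.random.default_rng(seed).random(img.shape)
    else:
        values = np.full(img.shape, 1.0 if fill == "white" else 0.0)
    m = mask[..., None].astype(bool)  # broadcast mask over channels
    return np.where(m, values, img)

# wipe a 4x4 region in the middle of a gray image
img = np.full((8, 8, 3), 0.5)
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1
removed = fill_masked(img, mask, fill="black")
```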

AND THEN there's image transfer/object transfer, where you can transfer people, items, buildings, whatever from one image to another, and then use that new image as your latent input. That can mean moving things from one image to another, or moving items from one place in an image to another place in the same image.

So that's all built into the standard workflow. In a few mins I will also add an optional input to the Image Batcher to accept a batch of images from a Preprocessor node I'm building, which can do all of these same things but with lots of images. Which lets you transfer many things from many pictures into one image, or move many things around in a single image, or do large batches of inpainting/outpainting, etc.

Between this new node and the updated Styler node with the control nets and loras, anyone should have no problem getting exactly the content, structure and quality of the image they want. You can decide what goes in the image & where it is, move things around, choose your characters/models, get them in the right position/pose, etc... all super easy.

1 year ago | [YT] | 5

FlowState

New LoRA Chaining Support

This is a mixture of a 1980s, a Cyberpunk and a Horror LoRA. At first, the Cyberpunk was the highest strength. Check what happens as I turn the 1980s LoRA up. Pretty awesome.
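For anyone curious what "chaining" means numerically: each LoRA is a low-rank update to the base weights, scaled by its strength, and chaining just sums the updates into the model. A rough NumPy sketch with toy shapes (my own names, not the actual ComfyUI patching code):

```python
import numpy as np

def apply_loras(W, loras):
    """Patch a base weight matrix with a chain of LoRAs.

    W: base weight of shape (out, in)
    loras: list of (strength, A, B) where A is (rank, in), B is (out, rank).
    Each adapter adds strength * (B @ A) to the base weight.
    """
    W = W.copy()
    for strength, A, B in loras:
        W += strength * (B @ A)
    return W

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 8))
# two rank-4 adapters; turning the first strength up is the "1980s up" knob
eighties = (1.2, rng.standard_normal((4, 8)), rng.standard_normal((16, 4)))
cyberpunk = (0.6, rng.standard_normal((4, 8)), rng.standard_normal((16, 4)))
W_patched = apply_loras(W, [eighties, cyberpunk])
```

Because the updates are just summed, the relative strengths control how much each style dominates the mix.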

1 year ago | [YT] | 9

FlowState

Evening Stream

Stream in T-60 mins. Hopefully 20-ish, but just in case lol


Agenda:

- custom message instructions and presets for llm prompt models, including for negative prompts

- changing LLM node name since it's more than just an LLM node now

- loading video models (svd, etc.) in the unified model loader

- updating FVD name since it's not just for Flux anymore

- new node for control nets, ipadapters, and loras

- changing node repo status to experimental

- adding collapse feature to nodes where space can be conserved

- adding option for combinations of models, clips, and vaes in the model loader

1 year ago | [YT] | 2

FlowState

Stream in T-5 mins... See you on the inside

1 year ago | [YT] | 3

FlowState

Just Thought of a GREAT New Feature.

This may be a little hard to explain in text (very quickly because I'm prepping for the stream in a few mins) but I want to generate images, choose the parts I like and position them where I want them, then reprocess them for a final output image that is what I want it to be. Will explain more on stream and in an upcoming video as well. Basically, it's what I'm already doing with Comfy/Illustrator, but all in Comfy with simple selections.

1 year ago | [YT] | 9

FlowState

FVD Stream Pt 3

Hopefully wrapping this node up this evening and moving on to the next node that will house all of our control net, ip adapter and lora functionality. In any case, the stream will be starting a little later today, around 9pm EST. Hope to see you all there!

1 year ago | [YT] | 3