HTMX is great, but I don't think it's what OP needs, since the input and desired output aren't hypermedia in the first place.
hosaka
Honestly not sure about Swagger; I've only ever used swagger-ui to show API docs on a webpage. OpenAPI as a standard and openapi-generator are not abandoned and are quite active. I'll give you an example of how I use them.
I have a FastAPI server in Python that defines some endpoints and the data models it works with; it exports an openapi.json definition. I also have a common schemas library, defined with pydantic, that exports its own openapi.json (Python was chosen to make it easier for other team members to make quick changes). This schemas library is imported by the FastAPI app, so basically only the data models are shared.
I use the FastAPI openapi.json to generate C++ code for one application (the end-user app) with openapi-generator-cli; serialization/deserialization is handled by the generated code. Since the pydantic schemas library is a dependency of the FastAPI server, both the endpoints and the data models get generated. The pydantic openapi.json is also used by our frontend, written in TypeScript, to generate data models only, since the frontend doesn't need to call FastAPI directly; it has the option to do so in the future by generating from the FastAPI openapi.json instead.
This ensures that we're using the same schema across all codebases. When I make changes to the schema, the code gets re-generated and included in the new C++ and web app builds. There are multiple ways to go about versioning, but for a data-only schema I'd just keep it backwards compatible forever: add new props as optional fields rather than required ones, and slowly deprecate/remove props that are no longer used.
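The backwards-compatible evolution above can be sketched with a minimal pydantic model (the model and field names here are made up for illustration, not from the actual schemas library):

```python
# Hypothetical pydantic model showing backwards-compatible schema evolution:
# new props are added as optional with a default, so payloads produced by
# older generated clients (which don't know about "album") still validate.
from typing import Optional

from pydantic import BaseModel


class Track(BaseModel):
    title: str
    duration_s: int
    # Added later: optional with a default, never required
    album: Optional[str] = None


# An "old" payload without the new field still parses fine
old = Track(title="Intro", duration_s=90)
```

Making the new field required instead would break every client generated from the previous schema, which is exactly what the optional-with-default convention avoids.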
I found this to be more convoluted than just using something like gRPC/Protobuf (which can also be serialized to/from JSON); I've used that before and it was great. But for other devs who only need to change a few lines of Python and don't want to deal with the protobuf compiler, it's a more frictionless solution, at the cost of more moving parts and some CI/CD setup on my side.
Use an OpenAPI schema. You can define data models and endpoints, or just the models; I do this at work. Then generate your code using openapi-generator.
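As a sketch, a typical openapi-generator-cli invocation looks something like this (the input path, generator name, and output directory are placeholders, not from my actual setup):

```shell
# Generate a TypeScript client from an OpenAPI definition;
# swap -g for another generator (e.g. cpp-restsdk) as needed.
openapi-generator-cli generate \
  -i openapi.json \
  -g typescript-fetch \
  -o ./generated/client
```

Run `openapi-generator-cli list` to see the available generators.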
Glad you figured it out! A separate network for a set of services that need to talk to each other is the way I do it for my selfhosted tools. If you want some more ideas on setting up the *arr apps using docker compose, this is my current setup: https://github.com/hosaka/selfhosted/blob/main/servarr.yml
I think you're using Docker internal IPs, which are not static and can change between docker compose runs. You can instead address services by name if you connect them to the same virtual network: https://docs.docker.com/compose/networking/#specify-custom-networks
This allows two services to "see each other". For example, "calibre:8081" will resolve to an internal IP address. In general, this is a better approach when you need to connect apps to each other.
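A minimal compose sketch of the idea (images, service names, and the network name are just examples):

```yaml
# Both services join the same user-defined network, so Docker's embedded
# DNS lets them resolve each other by service name (e.g. http://calibre:8081).
services:
  calibre:
    image: lscr.io/linuxserver/calibre
    networks: [media]
  readarr:
    image: lscr.io/linuxserver/readarr
    networks: [media]

networks:
  media:
    driver: bridge
```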
When setting up nvim-treesitter, neither clang nor msvc worked for me. More precisely, they compiled the necessary parser libraries, but the treesitter plugin then failed to load the resulting shared libs. The common troubleshooting steps (setting clang as the preferred compiler, etc.) didn't help, so I just ended up installing zig, which got it working.
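For reference, nvim-treesitter lets you pin the compiler it uses; forcing zig is a one-liner in your config:

```lua
-- Prefer zig for compiling treesitter parsers (list order sets preference)
require('nvim-treesitter.install').compilers = { 'zig' }
```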
It also allows you to use hardware acceleration for inference. Quite a comprehensive set of tools, actually, and the revamped UI is on the horizon with version 0.14.
In a production-ready game you would go through individual assets with the person who designed them and establish when to spawn and despawn them. Since designers tend to go wild and not worry about memory at all, I try to guide them to think about the memory available in a particular scene. It really depends on the game you're making, though.
If the goal is to automate PRs: you can push-mirror your fork back to GitHub whenever you deem necessary (e.g. when it's in good shape) and create a PR to the parent repo automatically using a forgejo runner script; you'd just need to create an API token. If the goal is to avoid GitHub for your forks but still make PRs, I don't think you can work around that. Unless there's a way to submit a PR as a bunch of patch files, perhaps?
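The runner step that opens the PR could be sketched as a single call to GitHub's pulls endpoint (OWNER/REPO, the branch names, and the token variable are placeholders):

```shell
# Hypothetical forgejo runner step: open a PR on the parent repo
# after the push-mirror has synced the fork.
curl -X POST \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/OWNER/REPO/pulls \
  -d '{"title":"Sync from fork","head":"myfork:main","base":"main"}'
```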
Yeah, I could make a list of features I'd like to have and annoyances that exist now, but I wouldn't call it stuck in a single-monitor paradigm either. Depends on what your needs are, I guess!
If only it wasn't paywalled