BehindTheBarrier

joined 1 year ago
[–] [email protected] 3 points 8 months ago

AI could already generate working 3D models in just seconds. Not sure about editing, though, but this has that. Sure beats waiting a few hours if it's nearly instant.

[–] [email protected] 7 points 8 months ago

So you say, but it does make me flinch when it suddenly hits.

[–] [email protected] 3 points 8 months ago

Just learning Rust for fun, but I decided I wanted to make a simple website. I don't like web stuff that much, but I'd seen htmx, so I gave that a shot. Found the popular actix for the server side, and set out to make a simple blog.

Making a page is simple, and using htmx is also simple. Setting out to create a blog that is all in a single evolving page? Not so much. Either you don't get the essential back and forward navigation, or you add it but a site refresh will hit just the partial endpoint and screw things up. There are some quite nice workarounds, but the end result is that sometimes going back will leave me on a blank page after one step.
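
The usual workaround, as far as I understand it, relies on htmx sending an HX-Request header on its requests, so one endpoint can serve both the partial fragment and the full page. A minimal sketch of that in actix (untested, all names are my own, not from my actual code):

```rust
use actix_web::{get, web, App, HttpRequest, HttpResponse, HttpServer};

fn render_post_partial(id: &str) -> String {
    format!("<article><h1>Post {id}</h1><p>…</p></article>")
}

fn render_full_page(id: &str) -> String {
    // Wraps the same fragment in the full page shell.
    format!(
        "<!doctype html><html><body>{}</body></html>",
        render_post_partial(id)
    )
}

#[get("/post/{id}")]
async fn post(req: HttpRequest, path: web::Path<String>) -> HttpResponse {
    let id = path.into_inner();
    // htmx sets "HX-Request: true" on its AJAX calls; a plain browser
    // load (refresh, back/forward deep link) won't have it.
    let body = if req.headers().get("HX-Request").is_some() {
        render_post_partial(&id) // swapped into the page by htmx
    } else {
        render_full_page(&id) // direct load gets the whole document
    };
    HttpResponse::Ok().content_type("text/html").body(body)
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(post))
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}
```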

I'm probably going to settle for each blog entry being a separate page if I make the site public. Or just let the small flaws be, because I hate how slow sites are these days, and loading literally only the text/HTML that's supposed to change is very cool.

Next steps are to remove any chance of path traversal, i.e. reading literally any file on disk by modifying URLs..., pull in some markdown-to-HTML crate, and see how image loading works. If I ever get around to any of it.
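
For the path traversal part, the check I have in mind is roughly this (just a sketch, assuming the blog serves files out of a single content directory; the names are hypothetical):

```rust
use std::path::{Path, PathBuf};

/// Join a user-supplied path onto the content dir, and only return it if
/// the resolved file actually lives under that dir.
fn safe_join(base: &Path, requested: &str) -> Option<PathBuf> {
    let candidate = base.join(requested);
    // canonicalize resolves "..", symlinks, etc. (and fails if the file
    // doesn't exist, which is fine here).
    let resolved = candidate.canonicalize().ok()?;
    let base = base.canonicalize().ok()?;
    // Rejects ../../etc/passwd style escapes.
    resolved.starts_with(&base).then_some(resolved)
}
```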

[–] [email protected] 13 points 8 months ago (1 children)

I made do with my IDE, even after getting a developer job. Outside of shenanigans involving a committed password, and the occasional empty commit to trigger a build job on GitHub without requiring a new review to be approved, I still don't use the command line a lot.
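
For reference, the empty-commit trick is just this (standard git flags, nothing exotic):

```
git commit --allow-empty -m "retrigger CI"
git push
```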

But it's true: if you managed to commit and push, you're OK. Even the IDE makes fixing most merges simple.

[–] [email protected] 4 points 8 months ago* (last edited 8 months ago)

It's probably more common that scientific notation is used, so 3.2 * 10^4 or simply 3.2e4. From the little physics I had, you often used kilometers instead of something like megameters, or just light-years once you got to a big enough scale.
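
A tiny aside: the e-notation is even a valid float literal in most programming languages, e.g. in Rust:

```rust
fn main() {
    let x = 3.2e4_f64; // scientific notation literal, same as 3.2 * 10^4
    assert_eq!(x, 32_000.0);
}
```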

[–] [email protected] 4 points 8 months ago (1 children)

I got a Peugeot 208. It's small, and OK in all aspects except the software. Typical bad car UI. It works with wired Android Auto, so for long drives that's more than fine. But the touch screen is still dated, and the app/site hasn't let me log in for a few weeks now... so I can't remote-start the heating.

But it's a great car that I bought used, for driving to and from work. Looks good, yellow color, parking sensors and a rear camera for my blind ass. It's also probably not available in America for all I know; I live in Europe.

[–] [email protected] 4 points 8 months ago* (last edited 8 months ago)

Already been explained a few times, but GPU encoders are hardware with fixed options, with some leeway in presets and such. They are specialized to handle a set of profiles.

They use methods that work well in the specialized hardware. They don't have the memory a software encoder can use, for example, to comb through a large number of frames, but they can specialize the encoding flow and hardware for the calculations. Hardware encoders cannot do everything software encoders do, nor can they be as thorough, because of those constraints.
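
As a rough illustration (assuming ffmpeg and an NVIDIA card with NVENC; exact flags vary by build), the same source encoded in software vs. hardware:

```
# software x264: slow, but free to search far more thoroughly
ffmpeg -i input.mkv -c:v libx264 -preset slow -crf 20 out_sw.mp4

# NVENC hardware: near-instant, limited to the presets the chip exposes
ffmpeg -i input.mkv -c:v h264_nvenc -preset p5 -cq 20 out_hw.mp4
```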

Even the decoders are like that. For example, my player will crash trying to hardware-decode AV1 encoded with super-resolution frames, i.e. frames that have a lower resolution and are supposed to be upscaled by the decoder (a feature in AV1 that hardware decoder profiles do not support, afaik).

[–] [email protected] 16 points 8 months ago

Our company did a thing like this, focusing on managers and above. They got passwords and authenticator codes out of them, and admin access to the Slack...

Good way to get users to learn some critical thinking.

[–] [email protected] 3 points 9 months ago

I don't know what's required for KLWP support in a launcher, but KLWP is a live wallpaper with touch interaction. Don't know if stock supports that on my phone. But it gives me a cool setup, with media player functionality, time and such, in the font and size I want, plus some other disguised shortcuts.

But for me, in Nova itself, it's being able to style app icons individually, swipe actions on app icons, and the dock at the bottom sliding to show more apps. And, pretty standard now, swipe up for the app drawer and down for notifications. It works especially well with the OnePlus gestures that I'm still holding on to.

[–] [email protected] 2 points 9 months ago

I stopped auto-updating the third time my goddamn app was force-closed while I was using it. Either an update for the app itself or the damn WebView. It's been many years since then, so not sure if things have changed, but man, it was frustrating having things just go poof in the middle of something.

[–] [email protected] 3 points 9 months ago* (last edited 9 months ago)

I do think the idea is pretty neat, although it's pretty close to returning structured data like JSON.

A slight disclaimer that these people are smarter than me and know this area better, so I may be wrong on some assumptions. But I do get the feeling they are trying to solve a trivial problem, at least in their use case. Ultimately there are only so many lecturers, and only so many lectures at a given time. The total amount of data wouldn't be much, and you can easily group and sort on the client side to reproduce the original table, which is shown on a per-lecturer basis. A little redundancy is, in my opinion, preferable to a query that returns 3 tables that then need additional complicated work.

I also find the argument about overlapping names to be something the data owners/managers should handle, not the database. Academia is a wild west at times, but either this table is presentation-only, or it links to a lecturer or lecture page; and in the latter case you'll already be including the IDs so they can be used in a URL to some other site.
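
A toy sketch of the "just group on the client" point, in Rust (names are made up, not from the article): flat rows with a little redundancy, folded back into the per-lecturer table in a few lines.

```rust
use std::collections::BTreeMap;

struct Row {
    lecturer: String,
    lecture: String,
    time: String,
}

/// Group flat rows by lecturer; BTreeMap keeps the lecturers sorted for free.
fn by_lecturer(rows: Vec<Row>) -> BTreeMap<String, Vec<(String, String)>> {
    let mut grouped: BTreeMap<String, Vec<(String, String)>> = BTreeMap::new();
    for r in rows {
        grouped.entry(r.lecturer).or_default().push((r.lecture, r.time));
    }
    grouped
}
```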

While this can use significantly less bandwidth, it also risks failing as soon as more data is introduced, since you're putting the large join operations on the client when you could get free optimizations from the SQL engine you use. I know de-duplicating data could be a win for something where I work, where we essentially have hourly breakdowns but fetch at least the entire day for a single set of parameters. That means 24x the data for a surprisingly high number of columns, when we only need 2 of them at the hourly level! But in that case the data doesn't strictly need many joins, since it carries most of the information itself, and there's too much data to join on the client side anyway for this to feel ideal. I also feel you'd increase the complexity a bit too much. A big advantage of SQL is how easy it is to understand what you are getting.

It's somewhat of a solved problem anyway, if performance ever becomes an issue, since we can already return nested data. So we could technically return a row today where the hour (I think; never tried a date before) and value columns hold arrays instead of a single value. We just haven't done it, because it's not a big enough problem yet.
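
Roughly the nested shape I mean, as a Rust struct (hypothetical field names): the shared parameters appear once per day instead of being repeated on all 24 rows.

```rust
struct DailySeries {
    date: String,       // the day, instead of 24 copies of it
    parameters: String, // the wide, duplicated columns, stored once
    hours: Vec<u8>,     // 0..=23
    values: Vec<f64>,   // one value per hour
}
```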

[–] [email protected] 2 points 9 months ago

There's ongoing work on using generative AI to introduce "new" detail during upscaling. But I assume many won't be interested in something that doesn't recreate the real details.
