vidarh

joined 1 year ago
[–] [email protected] 41 points 4 months ago (7 children)

The age matters less than the power-dynamics of her being his nanny.

[–] [email protected] 2 points 9 months ago

I'm not sure "optimized" is the right word for my stack at the moment. Optimized in the sense that it is small, sure, but that does come at a performance cost. As much as I love Ruby, doing font rendering in Ruby is only viable because you only need to render each glyph once per size you use. But I feel the performance tradeoff is acceptable. For me, at least.

The terminal is also nothing "special" yet, other than the fact that it's written in Ruby and uses that Ruby font renderer. It needs some serious bug fixes and cleanups, and then it too will go on GitHub.

For me the tradeoff is that I get full control, and there are a few things I want to experiment with:

  • Since it can parse escape codes, there's nothing preventing a thin IO wrapper so the backend can output to an X11 window. The benefit would be being able to e.g. use part of a window for text output while rendering other things in the rest of the window, or plugging in your own code to augment the rendering in various ways.

  • But if you do that, you can strip out the escape-code parsing, or bypass it, and use the underlying terminal buffer for the same purpose. E.g. my text editor already renders to a terminal-like buffer, so when running under X it'd save going through the terminal pipeline, and I'd have the option of "upgrading" its rendering while keeping most of it pure text.

  • I'd like to play with ways to do filtering and post-processing of content. E.g. highlighting based on running Ruby code over the output.
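A sketch of what that last bullet could look like: user-supplied rules run over each line of output before it reaches the screen, wrapping matches in SGR color codes. The class and method names here are hypothetical, not actual project code:

```ruby
# Illustrative post-processing filter: each rule pairs a pattern with an
# SGR attribute number, and apply() rewrites a line of terminal output,
# wrapping every match in the corresponding escape sequence.
class OutputFilter
  def initialize
    @rules = []
  end

  # e.g. highlight(/ERROR/, 31) to color "ERROR" red (SGR 31)
  def highlight(pattern, sgr)
    @rules << [pattern, sgr]
  end

  def apply(line)
    @rules.reduce(line) do |text, (pattern, sgr)|
      text.gsub(pattern) { |m| "\e[#{sgr}m#{m}\e[0m" }
    end
  end
end
```

Since the rules are just Ruby, "highlighting based on running Ruby code over the output" falls out naturally: a rule's pattern and action can be arbitrary code, not just a regex.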

Especially since a large part of my use is my editor, it would be interesting to augment the backend with small GUI "upgrades": letting Ruby apps that pull in the backend control and respond to a scrollbar in the terminal, "replacing" the scrollback buffer with control over the editor's buffer, or adding plugins for things like a minimap. The goal is to make it really easy to write Ruby apps that work in any terminal, but that gain extra features when they can open their own windows.
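For a sense of scale, the escape-code parsing the first bullet relies on is essentially a small state machine. A stripped-down illustrative sketch (hypothetical names, nothing like the terminal's actual code) that handles plain text and CSI sequences:

```ruby
# Minimal escape-code tokenizer: turns a byte stream into
# [:text, str] and [:csi, params, final] events.
class EscapeParser
  CSI_FINAL = /[@-~]/ # final bytes that terminate a CSI sequence

  def initialize
    @state = :text
    @buf = +""
  end

  def feed(data)
    events = []
    data.each_char do |ch|
      case @state
      when :text
        if ch == "\e"
          events << [:text, @buf] unless @buf.empty?
          @buf = +""
          @state = :escape
        else
          @buf << ch
        end
      when :escape
        # Only CSI ("\e[") handled here; other escapes dropped for brevity.
        @state = ch == "[" ? :csi : :text
      when :csi
        if ch =~ CSI_FINAL
          params = @buf.split(";").map { |p| p.empty? ? 0 : p.to_i }
          events << [:csi, params, ch]
          @buf = +""
          @state = :text
        else
          @buf << ch
        end
      end
    end
    # Flush trailing plain text; an unfinished escape stays buffered.
    if @state == :text && !@buf.empty?
      events << [:text, @buf]
      @buf = +""
    end
    events
  end
end
```

With events in that shape, pointing the backend at an X11 window instead of a tty really is just a matter of swapping what consumes the event stream.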

[–] [email protected] 2 points 10 months ago (2 children)

At this point my window manager, terminal, file manager, and text editor, as well as a bunch of utilities like contextual popup menus for my file manager (similar to 9menu, fed by a script), are written in pure Ruby, including the X11 client bindings and the TrueType font renderer.

I really would love to see Ruby get more use outside of Rails, as I have no interest in Rails, and Ruby has a lot to offer elsewhere. E.g. you might think it'd be too slow for a font renderer, and while it's slow-ish, you only need to render each glyph once per size as you use it, so it works just fine, and the whole font renderer is currently only 588 lines...

Extend this across many of your main tools and you gain a system far easier to understand and modify to your own needs. E.g. my terminal is about 1,800 lines of code. Xterm is about 88,000. Of course xterm does more, but most of it is things I don't need. Trying to add the features I want to xterm would be a massive pain; adding them to 1,800 lines of Ruby is comparatively easy.
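That glyph-caching point is the whole trick: rasterization is expensive, but it runs once per (glyph, size) pair, and every subsequent draw is a hash lookup. Schematically (hypothetical names, not the renderer's real API):

```ruby
# Per-(glyph, size) memoization: the expensive rasterizer block runs only
# on a cache miss; repeat draws of the same glyph at the same size are
# just hash lookups, which is why a pure-Ruby renderer is fast enough.
class GlyphCache
  attr_reader :misses

  def initialize(&rasterizer)
    @rasterizer = rasterizer # expensive: outline -> bitmap
    @cache = {}
    @misses = 0
  end

  def glyph(char, size)
    @cache[[char, size]] ||= begin
      @misses += 1
      @rasterizer.call(char, size)
    end
  end
end
```

A steady-state screen redraw therefore costs no rasterization at all; only the first appearance of a new glyph or a new font size pays the full price.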

I'm slowly packaging up more of these tools, but the big caveat is that I'm not really writing these "for users" but for my own use, and I have peculiar preferences (e.g. very minimalist), so these would not be pleasant for others to actually use, hence the over-the-top warnings :)

It's surprisingly easy to get an absolutely minimal wm working, though. E.g. this was my very first version (based on a C example called TinyWM): https://gist.github.com/vidarh/1cdbfcdf3cfd8d25a247243963e55a66

That is in fact all you need for a minimalist wm (that one is floating-only, with just a single desktop).

99% of the pain past that is learning all the quirks of how X11 works, more so than the rest of the logic. E.g. after restarting it last night, for some reason the grab of the Windows key + mouse button "broke" without a single code change on my end. I'm clearly doing something wrong, but last time I ran into this it eventually "resolved itself", so it's hard to debug...

But to use this at this point you really need to actually enjoy chasing down those things. Hopefully it'll get closer to something usable for other people at some point down the line.

 

What the title says. It's <1k lines of Ruby, and provides a basic tiling WM w/some support for floating windows. It's minimalist, likely still buggy and definitely lacking in features, but some might find it interesting.

It is actually the WM I use day to day.

 

It never ceases to amaze me how trivial it is to get temporary control over a phone number, or that anyone trusts one for any kind of verification given how trivial that is. And as hilarious as it is that the SEC didn't have 2FA set up, it's rather rich for X to claim it's nothing to do with them when they chose to trust a demonstrably unreliable method of proving ownership...

[–] [email protected] 0 points 10 months ago

Except if they were, it'd be well known, and no startup at this kind of early stage typically has contracts that don't require approvals for secondary sales, because increasing the number of people on the cap table enough triggers nearly the same reporting requirements as being public, and that's a massive burden. It just doesn't work that way.

It's also hilarious that you think posting an article that is at best neutral, with a message of doom and gloom about risks to their business, on Lemmy is something OpenAI would have any interest in. If I wanted to pump OpenAI there are better places to do it, and more positive spins to put on it.

[–] [email protected] 0 points 10 months ago (2 children)

Lol, what. OpenAI shares aren't available - there'd be no benefit to anyone trying to pump them.

[–] [email protected] 1 points 10 months ago

You can't really trust anything a human says either: we're frequently wrong yet convinced we're right, or not nearly as competent as we think. Yet we manage, because in a whole lot of endeavours being right often enough, and being able to verify answers, is sufficient.

There are plenty of situations where they are "right enough" and/or where checking the output is trivial enough. E.g. software development, where I can easily tell whether the output is "right enough", where humans are often wrong, and where we rely on tests to verify correctness anyway.

Having to cross-check results is a nuisance, but when I can e.g. run things past it on subjects I know well enough to tell whether the answers are bullshit, and where it can often produce answers better than a lot of actual software developers, it's worth it. E.g. I recently had it give me a refresher on the algorithm to convert a non-deterministic finite automaton (NFA) to a deterministic finite automaton (DFA), and it explained it perfectly (no surprise; there is plenty of material on that subject). But unlike if I'd just looked it up on Google, I could also construct examples to test that I remembered it right and have it produce the expected output (which, yes, I verified was correct).
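For reference, that conversion is the textbook subset construction: each DFA state is the epsilon-closure of a set of NFA states. A compact Ruby sketch (my own illustration of the standard algorithm, not the model's output); the NFA is a hash of `state => { symbol => [states] }`, with `nil` as the epsilon symbol:

```ruby
require "set"

# All NFA states reachable from `states` via epsilon (nil) transitions.
def eps_closure(nfa, states)
  stack = states.to_a
  seen = states.dup
  until stack.empty?
    s = stack.pop
    (nfa.dig(s, nil) || []).each do |t|
      next if seen.include?(t)
      seen << t
      stack << t
    end
  end
  seen
end

# Subset construction: returns [start_state_set, dfa], where the DFA maps
# each Set of NFA states to its per-symbol successor Sets.
def nfa_to_dfa(nfa, start, alphabet)
  start_set = eps_closure(nfa, Set[start])
  dfa = {}
  work = [start_set]
  until work.empty?
    cur = work.pop
    next if dfa.key?(cur)
    dfa[cur] = {}
    alphabet.each do |sym|
      move = cur.flat_map { |s| nfa.dig(s, sym) || [] }.to_set
      nxt = eps_closure(nfa, move)
      next if nxt.empty?
      dfa[cur][sym] = nxt
      work << nxt
    end
  end
  [start_set, dfa]
end
```

A DFA state is accepting iff its Set contains an accepting NFA state; in the worst case the construction is exponential in the number of NFA states, which is exactly the kind of thing it's easy to verify by constructing small examples.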

I also regularly have it write full functions. I have a web application where it has written ca. 80% of the code without intervention from me. Plenty of my libraries now contain functions it has written.

I use it regularly. It's saving me more than enough time to justify both the subscription to ChatGPT and API fees for other use.

As such, it is "actually useful" for me, and for many others.

[–] [email protected] 1 points 10 months ago (2 children)

Bubble in the sense that "many companies will fail" we can agree on. Companies like OpenAI will survive, lawsuits or not, and even if they were to fail due to the lawsuits, the algorithms are known, and e.g. Microsoft, which has a license to the tech, would just hire the team, start over, and let the corporate entity go bankrupt.

But all of the "ChatGPT for field X" companies that are just razor-thin layers on top of OpenAI's API, sure, they will almost all fail, and the only ones that won't will be those that leverage their initial investment into an opportunity to quickly pivot into something more substantial.

A lot of people talk about AI as a bubble in the sense of believing the tech will go away, though, and that will never happen, because it's useful enough.

Regarding OpenAI's market cap, I don't agree; I think it'll increase far more, unless they massively misstep. Even though it's riding high on hype, they still have a big lead, down not to hype but to actually being significantly ahead of even competitors like Google. And given the high P/E ratios in tech, they don't need to be the backend behind all that many big deployments, even just fielding really stupid-simple uses that don't really need GPT's capabilities, before they'll justify that valuation.

 

The manchild strikes again.

 

The world's fastest supercomputer blasts through one trillion parameter model with only 8 percent of its MI250X GPUs

[–] [email protected] 2 points 10 months ago

Whenever I see them described as "plagiarism machines", odds are about 99% that the person using the term has no idea how these models work. Like humans, they can overfit, but most of what they output will have far less in common with any individual work than the levels of imitation people engage in all the time without being accused of plagiarism.

As for the environmental effects, it's a totally ridiculous claim: the GPUs used to train even the top-of-the-line ChatGPT models add up to a tiny rounding error compared to the power use of even middling online games, and training has only gotten more efficient since.

E.g. researchers at Oak Ridge National Labs published a paper in December after having trained a GPT-4-scale model with only 3k GPUs on the Frontier supercomputer using Megatron-DeepSpeed. 3k GPUs is about 8% of Frontier's capacity, and while Frontier is currently the fastest, there are hundreds of publicly known supercomputers at that kind of scale, and many more that are not public. Never mind the many millions of GPUs not part of any supercomputer.

[–] [email protected] 4 points 10 months ago

You can't. The cat is out of the bag. The algorithms are well understood, and new papers on ways to improve the output of far smaller models come out every day. It's just a question of time before training competitive models is doable for companies in a whole range of jurisdictions entirely unlikely to care.

[–] [email protected] 0 points 10 months ago (4 children)

Why do you think anything will "burst"? If anything, if licensing requirements for content make training expensive, that's likely to make the biggest existing players far more valuable.

[–] [email protected] 5 points 10 months ago (10 children)

Possibly. On the other hand, OpenAI's market cap is bigger than the ten largest publishers combined; despite their whining, they can afford it. It's not OpenAI that will be prevented from getting training data; the biggest impact is that it might stop smaller competitors and prevent open-source models.

 

I would still manage to break it with ease one week after purchase (and so will stick with my cheap Android phones).

 

No shit. As if we didn't know. Doesn't make it less depressing.

[–] [email protected] 1 points 1 year ago

Seems better behaved than many commuters I've run into.
