I disabled my adblock for Twitter to see an update about game server maintenance. It showed me random posts, nothing from this year. The site is literally unusable when you can't even see the latest tweets. I had to have other people tell me the maintenance was extended...
Speed is 90% of the reason for me. It's just so much smoother; I can find things faster and do things faster. But I also use a lot of the development features, like creating/converting properties, functions and such. They probably exist in VS too, but I'm not that used to them there. I'm also used to quickly jumping around in the code by going to definitions. Rider is nicer here, because VS is clunky, and it feels like there are two competing systems for that in VS with ReSharper. Not to mention the stutters and general slowness.
I like the git integration better in Rider. I think VS has fixed it now, but selecting a remote branch didn't actually get you that branch until you pulled the changes manually. To the point where I pushed a brand new branch, someone else selected it in VS, and when they ran the build it didn't work at all because it didn't have all the changes?? It also did not auto-fetch, so it showed you as up to date with the remote when you weren't... Apart from that, Rider makes swapping branches a lot easier; VS gets angry about uncommitted changes. And while I wasn't a huge fan of the new diff view, diffs that ignore newline changes and the like are a killer feature, especially for someone who got a new editorconfig but not a refactor of the entire codebase... (because we're too busy to do such a large change)
The biggest downside to Rider is hot reloading of XAML, which it doesn't support, at least for .NET. It's a bit of a bummer, since VS allows some very rapid iteration when solving layout issues.
Just this last week, I have had memory issues where Rider eats up to 10 GB of RAM and then starts stuttering after being open for more than a day. I just installed the latest update, which hopefully fixes that. Rider also sometimes just decides not to run one or more programs in a multi-launch config, particularly the first time after starting. That's a bit annoying.
I don't really like the database integration either, but we also have a stupid Oracle database, and how to handle that is a whole other story.
Assuming digital button here.
Like how a lot of sites link to Facebook, Insta, X, etc. at the bottom of their web page. Just the fact that it was an option meant something.
Our main motivator was, and is, that manual testing is very time consuming and uninteresting for devs. Spending upwards of a week before a release, because the team has to set up, pick and perform all the feature tests again on the release candidate, costs both time and money. And we still saw things slip through now and then.
Our application is time critical, about 30 years of legacy code spread between C# and database code, running in different variants with different requirements. So a single page may display differently depending on where it's running. Changing one thing can often affect others, so for us it is sometimes very tiresome to verify even the smallest changes, since they may affect different variants. Since there are no automated tests, especially for the GUI (which we also don't unit test much, because that is complicated and prone to breaking), we have to not only test changes but often check for regressions by comparing to the old version by hand.
We have a complicated system with a few integrations, so setting up all the test scenarios not only takes time during testing, but also time for the dev to prepare the instructions for. And as I mentioned with calculations: going through all the motions to verify that a calculated result is the same between two versions is an awfully boring experience, when that is exactly something automated tests can completely take over for you.
As our application is projected to grow, so does all the manual testing required for a single change. So the effort that goes into manual testing and preparation can instead often be put into writing tests that check requirements. And once our coverage is good enough, we can manually test only the interfaces, and leave a lot of the complicated edge cases and calculation tests to automated tests. It's a bit idealistic to say automated tests can do everything, but they can certainly remove the most boring parts.
I'm on a similarly sized team, and we have put more effort into automated testing lately. We got an experienced person on the team who knows his shit and is engaged in improving our testing. It's definitely worth it. Manual testing tests the code now; automated testing checks the code later. That's very important, because when 5 people test things, they aren't going to test everything every time as well as all the new stuff. It's too boring.
So yes, you really REALLY should have automated testing. If you have 20 people, I'd guess you're developing something that is too large for a single person to have in-depth knowledge of all its parts.
Any team should have automated tests. More specifically, you should have/write tests that test "business functionality", not just that your function does exactly what it is supposed to do. Our test expert made a test for something like "ThisComponentsDisplayValueShouldBeZeroWhenUndefined". (Here a component is something the users see and always expect to have a value. There are other components that might not show a value.)
Then I had to touch the data processing because another "component" did not show zero in an edge case. I fixed the edge case, but I also broke the test for that other component. Now it was very clear to me that I had also broken something that worked. A manual tester might have noticed, but these were separate components, and they might still have seen 0 on the thing that broke, because its value happened to be 0. Or they simply didn't know that was a requirement!
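To make that concrete, here's a minimal sketch of what such a business-functionality test could look like. The Component type and display_value method are made-up names for illustration, not anything from our actual codebase.

    // Made-up sketch of a "business functionality" test like the one
    // described above; Component and display_value are hypothetical.
    struct Component {
        value: Option<f64>,
    }

    impl Component {
        fn display_value(&self) -> String {
            // Users always expect this component to show a value,
            // so an undefined value is displayed as zero.
            match self.value {
                Some(v) => v.to_string(),
                None => "0".to_string(),
            }
        }
    }

    #[test]
    fn this_components_display_value_should_be_zero_when_undefined() {
        let component = Component { value: None };
        assert_eq!(component.display_value(), "0");
    }

The point is that the test name and assertion encode the user-facing requirement, so breaking it tells you exactly which expectation you violated.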
We just recently started requiring unit tests to be green before merging features. It brings a lot more comfort, especially since you can put more trust in changing systems that deal with calculations when you know tests check that the results are unchanged.
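For the calculation side, the idea is roughly this kind of pinning test: capture known-good results and let the build fail if a change alters them. The function and values below are invented for illustration.

    // Invented example: pin a calculation against known-good results
    // so any behavioural change fails the build.
    fn monthly_rate(principal: f64, annual_rate: f64) -> f64 {
        principal * annual_rate / 12.0
    }

    #[test]
    fn monthly_rate_matches_known_good_values() {
        // Values captured from the current, manually verified behaviour.
        assert!((monthly_rate(1200.0, 0.05) - 5.0).abs() < 1e-9);
        assert!(monthly_rate(0.0, 0.05).abs() < 1e-9);
    }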
Python, C#, Rust
Used a bit of C++ and Matlab, but saying I know them is a stretch really.
Not to spring completely to IKEA's defense here, but I heard they really were affected by production and shipping problems during covid. It's reasonable that prices went up, and at least good that they are coming down again.
At least on LCD it's mostly temporary, according to that article.
But pixels do degrade over time, not sure if it counts as burn-in though.
Given how little you drive, sticking with the old one is a sound decision. But it's worth looking into a used electric car if you do need to upgrade. Especially since you say you don't use the car much, an older one with some battery degradation might still be fully usable, and closer to a price range where it's not a large monetary loss.
Ah, so I was actually cheating with the pointer read: I was effectively making a clone of the Arc without using clone()... and then dropping it, which killed the data. I had assumed it just gave me the object so I could use it. I saw other double-buffer implementations (i.e. write to one place, read from another, then swap them safely) use arrays holding two values, but I wasn't much of a fan of that. There are some other ideas for lock-free swapping, using indices and Options, but they seemed less clean. So RwLock is simplest.
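Roughly what I mean by the RwLock approach, as a minimal sketch (the DoubleBuffer name and shape are just for illustration, not my actual code):

    use std::sync::{Arc, RwLock};

    // Readers clone the current Arc cheaply; a writer builds new data
    // and swaps it in. The old data is only dropped once the last
    // reader releases its clone, so nothing dies mid-read.
    struct DoubleBuffer<T> {
        current: RwLock<Arc<T>>,
    }

    impl<T> DoubleBuffer<T> {
        fn new(initial: T) -> Self {
            Self { current: RwLock::new(Arc::new(initial)) }
        }

        fn read(&self) -> Arc<T> {
            self.current.read().unwrap().clone()
        }

        fn swap(&self, new_value: T) {
            *self.current.write().unwrap() = Arc::new(new_value);
        }
    }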
And yeah, if I wanted a simple blog, single files or const strings would do. But that is boring! As I mentioned in the other reply, it's purely for fun and learning. And then it needs all the bells and whistles. Writing HTML is awful, so I write markdown files and use a crate to convert them to HTML, and along the way replace image links with lazy-loading versions that don't load until you scroll down to them. Why? Because I can! Right now it just loads from files, but if I bother later I'll cache them in memory and add file watching to replace the cached version. So that's an idea of the issue here.
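As a rough sketch of that conversion step (assuming the pulldown-cmark crate; I'm not saying that's the crate actually used here, and the lazy-loading rewrite below is a simplistic stand-in):

    use pulldown_cmark::{html, Parser};

    // Convert markdown to HTML, then tag every image with the
    // browser-native loading="lazy" attribute so it isn't fetched
    // until scrolled into view.
    fn render_markdown(markdown: &str) -> String {
        let parser = Parser::new(markdown);
        let mut out = String::new();
        html::push_html(&mut out, parser);
        // Crude string-level pass; a fancier version would rewrite
        // the image events in the parser stream instead.
        out.replace("<img ", "<img loading=\"lazy\" ")
    }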
Thanks for the great reply! (And sorry for that other complicated question... )
Knowing that &str is just a reference makes sense when they are limited to compile time. The compiler naturally knows in that case when the string is no longer used and can drop it at the appropriate time. Or never drop it, in my case, since it's const.
Since I'm reading files to serve web pages, I will need Strings. I just hadn't gotten far enough to learn that yet... and with that, Cow might be a good solution for having both. Just for a bit of extra performance when some const pages are used a lot.
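Something like this, as a sketch of the Cow idea (the endpoints, file path and page constant are made up):

    use std::borrow::Cow;

    // Hot const pages are served borrowed (no allocation); everything
    // else is read from disk and served owned.
    const HOME_PAGE: &str = "<html><body>Home</body></html>";

    fn load_page(endpoint: &str) -> Cow<'static, str> {
        match endpoint {
            "/" => Cow::Borrowed(HOME_PAGE),
            path => {
                // Hypothetical layout: "/about" -> "pages/about.html".
                let file = format!("pages{}.html", path);
                Cow::Owned(std::fs::read_to_string(&file)
                    .unwrap_or_else(|err| err.to_string()))
            }
        }
    }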
For example code, here's a function. It simply takes a page and constructs the HTML from a template, with my endpoint inserted into it.
    pub fn get_full_page(&self, page: &Page) -> String {
        // Render the root template, injecting this page's endpoint
        // as the content target for the template to fill in.
        self.handler
            .render(
                PageType::Root.as_str(),
                &json!({"content-target": &page.endpoint}),
            )
            // On a template error, serve the error text instead of panicking.
            .unwrap_or_else(|err| err.to_string())
    }
Extra redundant context: all of this is part of a blog I'm making from scratch, for fun and for learning Rust, plus Htmx on the browser side. It's been fun figuring out how to lazy load images; my site is essentially a single-page application until you use "back" or refresh the page. The main content part of the page is just replaced when you click a "link". So the function above is a "full serve" of my page. Partial serving isn't implemented using the Page structs yet; it just serves files at the moment. When the body is included, which would be the case for partial serves, I'll run into that &str issue.
The simplicity of Google Photos has me still rolling with that.
But for all my music, Syncthing is the best. In my case it's synced to my phone, and also backed up from there to the cloud.