Would you have a link to that? I know there are many third-party garbage collectors for Rust, but if there’s something semi-official being proposed or prototyped I’d be most curious :)
Cool, that was an informative read!
If we were willing to leak memory, then we could write […]
Box::leak(Box::new(0))
In this example, you could have just made a constant with value 0 and returned a reference to that. It would also have a 'static lifetime and there would be no leaking.
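A minimal sketch of that suggestion (the function names here are mine, just for illustration):

fn leaked() -> &'static i32 {
    // Allocates on the heap and deliberately never frees the allocation.
    Box::leak(Box::new(0))
}

fn constant() -> &'static i32 {
    // A reference to a constant is promoted to the 'static lifetime,
    // so nothing is allocated and nothing leaks.
    const ZERO: i32 = 0;
    &ZERO
}

fn main() {
    assert_eq!(leaked(), constant());
}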
Why does nobody seem to be talking about this?
My guess is that the overlap in use cases between Rust and C# isn’t very large. Many places where Rust is making inroads (kernel and low-level libraries) are places where C# would be automatically disqualified because of the requirements for a runtime and garbage collection.
I think that’s very team/project dependent. I’ve seen it done before indeed, but I’ve never been on a team where it was considered idiomatic.
I don’t know about your workplace, but if at all possible I would try to find time between tasks to spend on learning. If your company doesn’t have a policy that makes it clear employees have the freedom to learn during company time, try to underestimate your own velocity even more and use the time that frees up for learning.
About 10 years ago I worked for a company where I was performing quite well. Since that meant I finished my tasks early, I could have taken on even more tasks. But I didn’t really tell our scrum master when I finished early. Instead I spent the time learning, and also refactoring code to help me become more productive. This added up, and my efficiency only increased more, until at some point I only needed one or two days to complete a week’s sprint. I didn’t waste my time, but I used it to pick up more architectural stuff on the side, while always learning on the job.
I’ll admit that when I started this route, I already had a bunch of experience under my belt, and this may not be feasible if you have managers breathing down your neck all the time. But the point is, if you play it smart you can use company time to improve yourself and they may even appreciate you for it.
If we’re looking at it from a Rust angle anyway, I think there’s a second reason that OOP often becomes messy, but less so in Rust: Unlimited interior mutability. Rust’s borrow checker may be annoying at times, but it forces you to think about ownership and prevents you from stuffing statefulness where it shouldn’t be.
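A rough sketch of what I mean (a toy example of my own, not tied to any particular codebase): in Rust you can’t mutate through a shared reference unless you explicitly opt in, for instance with RefCell, so the statefulness is visible in the types.

use std::cell::RefCell;

struct Counter {
    // Without the RefCell, `increment(&self)` below would not compile:
    // a &self method cannot mutate its fields.
    count: RefCell<u32>,
}

impl Counter {
    fn increment(&self) {
        *self.count.borrow_mut() += 1;
    }
}

fn main() {
    let counter = Counter { count: RefCell::new(0) };
    counter.increment();
    println!("{}", counter.count.borrow());
}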
You can use the regular data structures in Java and run into issues with concurrency, but you can also use unsafe in Rust, so it’s a bit of a moot point.
In Java it isn’t always clear when something crosses a thread boundary and when it doesn’t. In Rust, it is very explicit when you’re opting into using unsafe, so I think that’s a very clear distinction.
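To make the “explicit” part concrete, here’s a minimal sketch of my own (not from the original discussion): the compiler forces you to acknowledge the thread boundary, because only Send types may cross it.

use std::sync::Arc;
use std::thread;

fn main() {
    // Crossing a thread boundary has to be explicit: this closure only
    // compiles because everything it captures is Send. An Rc<i32> here,
    // for example, would be rejected at compile time; Arc is the explicit,
    // thread-safe alternative.
    let shared = Arc::new(0);
    let handle = thread::spawn(move || println!("{shared}"));
    handle.join().unwrap();
}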
Java provides classes for thread-safe programming, but the language itself isn’t thread safe. Just like C++ provides containers for improved memory safety, and yet the language isn’t memory safe.
The distinction lies between what’s available in the standard library, and what the language enforces.
Modern C++ does use references, which can also reference memory that is no longer available. Avoiding raw pointers isn’t enough to be memory safe.
Try browsing the list of somewhat recent CVEs rated critical, as I just did to verify. A majority of them are not related to any memory errors. Will you tell all of them “just use a different programming language”?
I’m sorry, but this has been repeatedly refuted:
And yes, they are telling their engineers to use a different programming language. In fact, even the NSA is saying exactly that: https://www.nsa.gov/Press-Room/News-Highlights/Article/Article/3215760/nsa-releases-guidance-on-how-to-protect-against-software-memory-safety-issues/
It didn’t come out today; it’s been there for a long time, and it’s standardized, proven and stable.
This seems like an extremely short-sighted red herring. C has so many gaps in its specification, because it has no problem defining things as “undefined behavior” or “implementation defined”, that the standard is essentially useless for kernel-level programming. The Linux kernel is written in C and used to only build with GCC. Now it builds with GCC and LLVM, and it relies on many non-standard compiler extensions for each. The effort to add support for LLVM took them 10 years. That’s 10 years for a migration from C to C. Ask yourself: how is that possible if the language is so well standardized?
Great suggestions! One nitpick:
But in principle I find this quite workable, as you get to write your CI code in Rust.
Having used xtask in the past, I’d say this is a downside. CI code is tedious enough to debug as it is, and slowing down the cycle by adding Rust compilation into the mix was a horrible experience. On top of that, CI is a unique environment where Rust’s focus on correctness isn’t very valuable, since it’s isolated and project-specific anyway.
I’d rather use Deno or indeed just for that.
No, OP asked for a black and white winner. I was elaborating because I don’t think it’s that black and white, but if you want a singular answer I think it should be clear: Rust.
I would say at this point in time it’s clearly decided that Rust will be part of the future. Maybe there’s a meaningful place for Zig too, but that’s the only part that’s too early to tell.
If you think Zig still has a chance at overtaking Rust though, that’s very much wishful thinking. Zig isn’t memory safe, so any areas where security is paramount are out of reach for it. The industry isn’t going back in that direction.
I actually think Zig might still have a chance in game development, and maybe in specialized areas where Rust’s borrow checker cannot really help anyway, such as JIT compilers.
Ah yes, exactly.
Runtime performance is entirely unaffected by the use of macros. It can have a negative impact on compile-time performance though, if you overdo it.
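As a quick illustration (a toy example of my own): a declarative macro is expanded during compilation, so the generated code is identical to what you would have written by hand.

macro_rules! square {
    ($x:expr) => {
        $x * $x
    };
}

fn main() {
    // After expansion this line is simply `println!("{}", 3 * 3);`.
    // The macro costs the compiler a little extra work, not the program.
    println!("{}", square!(3));
}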
While I can get behind most of the advice here, I don’t actually like the conditions array. The reason is that each condition function now needs additional checks to make sure it doesn’t overlap with the other condition functions. This was handled much more elegantly by the else clauses, since adding another condition to the array has now become a puzzle of verifying that the conditions remain non-overlapping.
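A hypothetical sketch of what I mean (my own toy example, not the code from the article): in the chain, each branch implicitly handles “everything the earlier branches didn’t”, while in the array every predicate has to be made mutually exclusive by hand.

// With else-if, the order of the branches does the disambiguation for us.
fn describe_chain(n: i32) -> &'static str {
    if n < 0 {
        "negative"
    } else if n == 0 {
        "zero"
    } else {
        "positive"
    }
}

// With an array of independent predicates, each one must be written so it
// cannot overlap with the others; adding an entry means re-checking all of them.
fn describe_array(n: i32) -> &'static str {
    let conditions: [(fn(i32) -> bool, &'static str); 3] = [
        (|n| n < 0, "negative"),
        (|n| n == 0, "zero"),
        (|n| n > 0, "positive"), // has to exclude both other cases explicitly
    ];
    conditions
        .iter()
        .find(|(check, _)| check(n))
        .map(|(_, label)| *label)
        .unwrap_or("unreachable")
}

fn main() {
    assert_eq!(describe_chain(-5), describe_array(-5));
    assert_eq!(describe_chain(0), describe_array(0));
    assert_eq!(describe_chain(7), describe_array(7));
}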
I find Linear to be reasonably pleasant.
Issue resolved
I mentioned it in the first comment:
the reason I tend to recommend B-Tree maps over hash maps for ordinary programming is consistent iteration order. It is simply too easy to run into a situation where you think iteration order doesn’t matter, but then it turns out it does in some subtle unforeseen way.
I’m not talking about bugs in the implementation of the map itself; I’m talking about unforeseen consequences in the user’s code, since they may not properly account for the randomness in iteration order.
Oh, I agree, they both have their use cases. But that doesn’t mean there aren’t plenty of situations where the performance is effectively irrelevant, but where people tend to default to using a hash map because they heard it’s faster (probably because lookups are O(1) indeed). So that’s where I would say: as long as performance doesn’t matter, it’s better to default to B-Tree maps than to hash maps, because the chance of avoiding bugs is more valuable than immeasurable performance benefits (not to mention that for smaller data sets B-Tree maps can often outperform hash maps due to better cache locality, but again that’s hardly relevant since the data set is small anyway).
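To illustrate the iteration-order point, a small sketch of my own:

use std::collections::{BTreeMap, HashMap};

fn main() {
    let pairs = [("b", 2), ("a", 1), ("c", 3)];

    let btree: BTreeMap<_, _> = pairs.into_iter().collect();
    let hash: HashMap<_, _> = pairs.into_iter().collect();

    // Always prints a, b, c: iteration follows the key order.
    for (k, v) in &btree {
        println!("btree: {k} = {v}");
    }

    // Iteration order here is unspecified and can differ between runs,
    // because the hasher is randomly seeded.
    for (k, v) in &hash {
        println!("hash: {k} = {v}");
    }
}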
Hehe, yeah, I actually agree in principle, although in the context of web tooling I think it’s at least understandable. For many years, web tooling was almost exclusively written in JavaScript itself, which was hailed as a feature, since it allowed JS developers to easily jump in and help improve their own tooling. And it made the stack relatively simple: All you needed was Node.js and you were good to go.
Something like the Google Closure Compiler, written in Java, was for many years better than comparable tooling written in JS, but remained in obscurity, partially because it was cumbersome to set up and people didn’t want to deal with Java.
Then the JS ecosystem ran into a wall. JS projects were becoming bigger and bigger, and the performance overhead of their homegrown tooling started frustrating more and more. That just happened to be the time that Rust came around, and it happened to tick all the boxes:
I think these things combined helped the language to quickly win the hearts and minds of many in the web community. So now we’re in a position where just name-dropping “Rust” can be a way to quickly resonate with those developers, because they associate it with fast and reliable and portable. In principle you’re right, it should just be an implementation detail. But through circumstance it seems to have also become an expression of mindshare – i.e. a marketing tool.
Using smart pointers doesn’t eliminate the memory safety issue; it merely addresses one aspect of it. Even with smart pointers, nothing prevents you from passing references around and using them after the memory they point to has been freed.