One has to add that of the 218 UBs in ISO C23, 87 are in the core language. Of those we have already removed 26, and we are in the process of removing many more. You can find my latest update here (there has been further progress since then): https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3529.pdf
tialaramex 4 hours ago [-]
A lot of that work is basically fixing documentation bugs, labelled "ghosts" in your text. Places where the ISO document is so bad as a description of C that you would think there's Undefined Behaviour but it's actually just poorly written.
Fixing the document is worthwhile, and certainly a reminder that WG21's equivalent effort needs to make the list before it can even begin that process on its even longer document, but practical C programmers don't read the document and since this UB was a "ghost" they weren't tripped by it. Removing items from the list this way does not translate to the meaningful safety improvement you might imagine.
There's not a whole lot of movement there towards actually fixing the problem. Maybe it will come later?
taneq 20 minutes ago [-]
> practical C programmers don't read the document and since this UB was a "ghost" they weren't tripped by it
I would strongly suspect that C compiler implementers very much do read the document, though. Which, as far as I can see, means "ghosts" could easily become actual UB (and worse, sneaky UB that you wouldn't expect.)
tialaramex 8 minutes ago [-]
The previous language might leave a C compiler developer very confused, because it seems as though they can choose something else, but what that something is isn't specified; almost invariably they'll eventually realise: oh, it's just badly worded and didn't mean "should" there.
It's like one of those tricky self-referential parlor box statements. "The statement on this box is not true"? Thanks I guess. But that's a game, the puzzles are supposed to be like that, whereas the mission of the ISO document was not to confuse people, so it's good that it is being improved.
uecker 3 hours ago [-]
Fixing the actual problems is work-in-progress (as my document also indicates), but naturally it is harder.
But the original article also complains about the number of trivial UB.
ncruces 4 hours ago [-]
And yet, I see P1434R0 seemingly trying to introduce new undefined behavior, around integer-to-pointer conversions, where previously you had reasonably sensible implementation-defined behavior (the conversions "are intended to be consistent with the addressing structure of the execution environment").
Pointer provenance already existed before, but the standards were contradictory and incomplete. This is an effort to more rigorously nail down the semantics.
i.e., the UB already existed, but it was not explicit: it had to be inferred from the whole text, and the boundaries were fuzzy. Remember that anything not explicitly defined by the standard is implicitly undefined.
Also remember, just because you can legally construct a pointer it doesn't mean it is safe to dereference.
ncruces 1 hours ago [-]
The current standard still says integer-to-pointer conversions are implementation defined (not undefined) and furthermore "intended to be consistent with the addressing structure of the execution environment" (that's a direct quote).
I have an execution environment, Wasm, where doing this is pretty well defined, in fact. So if I want to read the memory at address 12345, which is within bounds of the linear memory (and there's a builtin to make sure), why should it be undefined behavior?
And regarding pointer provenance, why should going through a pointer-to-integer and integer-to-pointer conversions try to preserve provenance at all, and be undefined behavior in situations where that provenance is ambiguous?
The reason I'm using integer (rather than pointer) arithmetic is precisely so I don't have to be bound by pointer arithmetic rules. What good purpose does it serve for this to be undefined (rather than implementation defined), beyond preventing certain programs from being meaningfully written at all?
I'm genuinely curious.
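For illustration, a minimal sketch of the kind of access being described (the fixed address and the assumption that it lies inside the module's linear memory are mine, not from the comment):

  #include <stdint.h>

  /* On wasm32: convert a known in-bounds linear-memory address to a pointer
     and read it. C makes the conversion implementation-defined; the question
     above is why the subsequent access should be UB rather than
     implementation-defined as well. */
  uint8_t peek_linear_memory(void) {
      uintptr_t addr = 12345;                  /* assumed to be within linear memory bounds */
      const uint8_t *p = (const uint8_t *)addr;
      return *p;
  }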
JonChesterfield 2 hours ago [-]
Pointer provenance was certainly not here in the 80s. That's a more modern creation seeking to extract better performance from some applications at a cost of making others broken/unimplementable.
It's not something that exists in the hardware. It's also not a good idea, though trying to steer people away from it proved beyond my politics.
jcranmer 3 minutes ago [-]
Pointer provenance probably dates back to the 70s, although not under that name.
The essential idea of pointer provenance is that it is somehow possible to enumerate all of the uses of a memory location (in a potentially very limited scope). The moment you need to introduce something like "volatile" to tell the compiler that there are unknown uses of a variable, you have conceded that the compiler needs to be able to track all the known uses--and that process, of figuring out the known uses, is pointer provenance.
As for optimizations, the primary optimization impacted by pointer provenance is... moving variables from stack memory to registers. It's basically a prerequisite for doing any optimization.
The thing is that traditionally, the pointer provenance model of compilers is generally a hand-wavey "trace dataflow back to the object address's source", which breaks down in that optimizers haven't maintained source-level data dependency for a few decades now. This hasn't been much of a problem in practice, because breaking data dependencies largely requires you to have pointers that have the same address, and you don't really run into a situation where you have two objects at the same address and you're playing around with pointers to their objects in a way that might cause the compiler to break the dependency, at least outside of contrived examples.
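A toy illustration of that point (mine, not jcranmer's): tracking every use of a local is exactly what lets the compiler keep it in a register.

  /* If the address of x never escapes, the compiler can prove it knows every
     use of x and keep it in a register, or fold it away entirely. */
  int no_escape(void) {
      int x = 1;
      x += 41;
      return x;            /* typically compiled to `return 42;` */
  }

  void sink(int *p);       /* unknown external code */

  int escapes(void) {
      int x = 1;
      sink(&x);            /* address escapes: unknown uses must now be assumed,
                              so x has to live in memory across the call */
      return x;
  }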
tialaramex 1 hours ago [-]
> It's not something that exists in the hardware
This is sort of on the one hand not a meaningful claim, and then on the other hand not even really true if you squint anyway?
Firstly, the hardware does not have pointers. It has addresses, and those really are integers. Rust's addr() method on pointers gets you just an address, for whatever that's worth to you; you could write it to a log, maybe, if you like?
But the Morello hardware demonstrates CHERI, an ARM feature in which a pointer has some associated information that's not the address, a sort of hardware provenance.
lmkg 1 hours ago [-]
It very much is something that exists in hardware. One of the major reasons why people finally discovered the provenance UB lurking in the standard is because of the CHERI architecture.
gpderetta 2 hours ago [-]
I'm not a compiler writer, but I don't know how you would be able to implement any optimization while allowing arbitrary pointer forging and without whole-program analysis.
ncruces 1 hours ago [-]
Why? What specific optimization do you have in mind that prevents me from doing an aligned 16/32/64-byte vector load that covers the address pointed to by a valid char*?
ncruces 47 minutes ago [-]
Can't reply to the sibling comment, for some reason.
If you don't know the extents of the object pointed to by the char*, using an aligned vector load can reach outside the bounds of the object. Keeping provenance makes that undefined behavior.
Using integer arithmetic, and pointer-to-integer/integer-to-pointer conversions would make this implementation defined, and well defined in all of the hardware platforms where an aligned vector load can never possibly fail.
So you can't do some optimizations to functions where this happens? Great. Do it. What else?
As for why you'd want to do this. C makes strings null-terminated, and you can't know their extents without strlen first. So how do you implement strlen? Similarly your example. Seems great until you're the one implementing malloc.
But I'm sure "let's create undefined behavior for a libc implemented in C" is a fine goal.
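For concreteness, a sketch of the strlen-style trick being referred to (my code, not the commenter's): aligned word-sized loads that may read bytes past the end of the string's object, which is harmless on typical hardware but falls foul of strict bounds/provenance rules.

  #include <stddef.h>
  #include <stdint.h>

  size_t strlen_wordwise(const char *s) {
      const char *p = s;
      /* advance byte by byte until p is word-aligned */
      while ((uintptr_t)p % sizeof(uintptr_t) != 0) {
          if (*p == '\0') return (size_t)(p - s);
          p++;
      }
      const uintptr_t ones  = (uintptr_t)-1 / 0xFF;   /* 0x0101...01 */
      const uintptr_t highs = ones << 7;              /* 0x8080...80 */
      /* reading char data through a word pointer is the "works on real
         hardware, questionable per the standard" territory discussed above */
      const uintptr_t *w = (const uintptr_t *)p;
      for (;;) {
          uintptr_t v = *w;               /* aligned load; may cover bytes past
                                             the end of the string's object */
          if ((v - ones) & ~v & highs) {  /* some byte in v may be zero */
              p = (const char *)w;
              while (*p) p++;
              return (size_t)(p - s);
          }
          w++;
      }
  }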
gpderetta 35 minutes ago [-]
[when there is no reply button, you need to click on the date (i.e. N minutes ago) to get the reply box]
I think your example would fall foul of reading beyond the end of an object in addition to pointer provenance. In your case the oob read is harmless as you do not expect any meaningful values for the extra bytes, but generally the compiler would not be able to give any guarantees about the content of the additional memory (or that the memory exists in the first place).
This specific use case could be addressed by the standard, but vectors are already outside the standard, so in practice you use whatever extension you have to use and abide by whatever additional rules the compiler requires (of course this is often underspecified). For example, on GCC the SIMD primitives already have carve-outs for TBAA.
FWIW, a libc implementation in practice already has to rely on compiler-specific, beyond-the-standard behaviour anyway.
gpderetta 1 hours ago [-]
Casting a char pointer to a vector pointer and doing vector loads doesn't violate provenance, although it might violate TBAA.
Regarding provenance, consider this:
  #include <stdlib.h>

  void bar();

  int foo() {
      int *ptr = malloc(sizeof(int));
      *ptr = 10;
      bar();
      int result = *ptr;
      free(ptr);
      return result;
  }

If the compiler can track the lifetime of the dynamically allocated int, it can remove the allocation and convert this function to simply:

  int foo() {
      bar();
      return 10;
  }
It can't if arbitrary code (for example inside bar()) can forge pointers to that memory location. The code can seem silly, but you could end up with something similar after inlining.
torstenvl 44 minutes ago [-]
> It can't if arbitrary code (for example inside bar()) can forge pointers to that memory location.
Yes. It absolutely can. What are you even talking about?
C is not the Windows Start Menu. This habit of thinking it needs to do what it thinks I might expect instead of what I told it is deeply psychotic.
gpderetta 33 minutes ago [-]
I literally have no idea what you are trying to say. Do you mean that bar should be allowed to access *ptr with impunity or not?
safercplusplus 4 hours ago [-]
A couple of solutions in development (but already usable) that more effectively address UB:
i) "Fil-C is a fanatically compatible memory-safe implementation of C and C++. Lots of software compiles and runs with Fil-C with zero or minimal changes. All memory safety errors are caught as Fil-C panics."
"Fil-C only works on Linux/X86_64."
ii) "scpptool is a command line tool to help enforce a memory and data race safe subset of C++. It's designed to work with the SaferCPlusPlus library. It analyzes the specified C++ file(s) and reports places in the code that it cannot verify to be safe. By design, the tool and the library should be able to fully ensure "lifetime", bounds and data race safety."
"This tool also has some ability to convert C source files to the memory safe subset of C++ it enforces"
tialaramex 3 hours ago [-]
Fil-C is interesting because, as you'd expect, it takes a significant performance penalty to deliver this property. If it's broadly adopted, that would suggest that - at least in this regard - C programmers genuinely do prioritise their simpler language over mundane ideas like platform support or performance.
The resulting language doesn't make sense for commercial purposes but there's no reason it couldn't be popular with hobbyists.
eru 3 hours ago [-]
Well, you could also treat Fil-C as a sanitiser, like memory-san or ub-san:
Run your test suite and some other workloads under Fil-C for a while, fix any problems reported, and if it doesn't report any problems after a while, compile the whole thing with GCC afterwards for your release version.
safercplusplus 1 hours ago [-]
Right. And of course there are still less-performance-sensitive C/C++ applications (curl, postfix, git, etc.) that could have memory-safe release versions.
But the point is also to dispel the conventional wisdom that C/C++ is necessarily intrinsically unsafe. It's a tradeoff between safety, performance and flexibility/compatibility. And you don't necessarily need to jump to a completely different language to get a different tradeoff.
Fil-C sacrifices some performance for safety and compatibility. The traditional compilers sacrifice some safety for performance and flexibility/compatibility. And scpptool aims to provide the option of sacrificing some flexibility for safety and performance. (Along with the other two tradeoffs available in the same program). The claim is that C++ turns out to be expressive enough to accommodate the various tradeoffs. (Though I'm not saying it's always gonna be pretty :)
kazinator 5 hours ago [-]
Undefined behavior only means that ISO C doesn't give requirements, not that nobody gives requirements. Many useful extensions are instances where undefined behavior is documented by an implementation.
Including a header that is not in the program, and not in ISO C, is undefined behavior. So is calling a function that is not in ISO C and not in the program. (If the function is not anywhere, the program won't link. But if it is somewhere, then ISO C has nothing to say about its behavior.)
Correct, portable POSIX C programs have undefined behavior in ISO C; only if we interpret them via IEEE 1003 are they defined by that document.
If you invent a new platform with a C compiler, you can have it such that #include <windows.h> reformats all the attached storage devices. ISO C allows this because it doesn't specify what happens if #include <windows.h> successfully resolves to a file and includes its contents. Those contents could be anything, including some compile-time instruction to do harm.
Even if a compiler's documentation doesn't grant that a certain instance of undefined behavior is a documented extension, the existence of a de facto extension can be inferred empirically through numerous experiments: compiling test code and reverse engineering the object code.
Moreover, the source code for a compiler may be available; the behavior of something can be inferred from studying the code. The code could change in the next version. But so could the documentation; documentation can take away a documented extension the same way as a compiler code change can take away a de facto extension.
Speaking of object code: if you follow a programming paradigm of verifying the object code, then undefined behavior becomes moot, to an extent. You don't trust the compiler anyway. If the machine code has the behavior which implements the requirements that your project expects of the source code, then the necessary thing has been somehow obtained.
throw-qqqqq 4 hours ago [-]
> Undefined behavior only means that ISO C doesn't give requirements, not that nobody gives requirements. Many useful extensions are instances where undefined behavior is documented by an implementation.
True, most compilers have sane defaults in many cases for things that are technically undefined (like taking sizeof(void) or doing pointer arithmetic on something other than char). But not all of these cases can be saved by sane defaults.
Undefined behavior means the compiler can replace the code with whatever. So if you e.g. compile optimizing for size, the compiler will rip out the offending code, as replacing it with nothing yields the greatest size optimization.
See also John Regehr's collection of UB-canaries (https://github.com/regehr/ub-canaries): snippets of software exhibiting undefined behavior, executing e.g. both the true and the false branch of an if-statement, or neither, etc. UB should not be taken lightly IMO...
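A tiny example of that effect (mine, not from the canaries collection): because signed overflow is undefined, the compiler may assume it cannot happen and delete the check entirely.

  int will_wrap(int x) {
      return x + 1 < x;   /* UB when x == INT_MAX; with optimizations on,
                             GCC and Clang typically compile this to `return 0;` */
  }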
eru 3 hours ago [-]
> [...] undefined behavior, executing e.g. both the true and the false branch of an if-statement or none etc.
Or replacing all your mp3s with a Rick Roll. Technically legal.
(Some old version of GHC had a hilarious bug where it would delete any source code with a compiler error in it. Something like this would technically be legal for most compiler errors a C compiler could spot.)
pjmlp 4 hours ago [-]
Unfortunately it also means that when the programmer fails to understand what undefined behaviour is exposed in their code, the compiler is free to take advantage of that to do the ultimate performance optimizations as a means to beat compiler benchmarks.
The code change might come in something as innocent as a bug fix to the compiler.
quietbritishjim 2 hours ago [-]
> Including a header that is not in the program, and not in ISO C, is undefined behavior.
What is this supposed to mean? I can't think of any interpretation that makes sense.
I think ISO C defines the executable program to be something like the compiled translation units linked together. But header files do not have to have any particular correspondence to translation units. For example, a header might declare functions whose definitions are spread across multiple translation units, or define things that don't need any definitions in particular translation units (e.g. enum or struct definitions). It could even play macro tricks which means it declares or defines different things each time you include it.
Maybe you mean it's undefined behaviour to include a header file that declares functions that are not defined in any translation unit. I'm not sure even that is true, so long as you don't use those functions. It's definitely not true in C++, where it's only a problem (not sure if it's undefined exactly) if you ODR-rule use a function that has been declared but not defined anywhere. (Examples of ODR-rule use are calling or taking the address of the function, but not, for example, using sizeof on an expression that includes it.)
kazinator 2 hours ago [-]
> I can't think of any interpretation that makes sense
Start with a concrete example. A header that is not in our program, or described in ISO C. How about:
  #include <winkle.h>
Defined behavior or not? How can an implementation respond to this #include while remaining conforming? What are the limits on that response?
> But header files do not have to have any particular correspondence to translation units.
A header inclusion is just a mechanism that brings preprocessor tokens into a translation unit. So, what does the standard tell us about the tokens coming from #include <winkle.h> into whatever translation unit we put it into?
Say we have a single file program and we made that the first line. Without that include, it's a standard-conforming Hello World.
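Concretely, the single-file program being described would look something like this:

  #include <winkle.h>   /* not part of the program, not described by ISO C */
  #include <stdio.h>

  int main(void) {
      printf("Hello, world\n");
      return 0;
  }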
quietbritishjim 45 minutes ago [-]
Oh I see, you just meant an attempt to include a file path that couldn't be found. That's not a correct usage of the term "program" – that refers to the binary output of the compilation process, whereas you're talking about the source files that are the input to the compilation. That sounds a bit pedantic but I really didn't understand what you meant.
I just checked, and if you attempt to include a file that cannot be found (in the include path, though it doesn't use that exact term) then that's a constraint violation and the compiler is required to stop compilation and issue a diagnostic. Not undefined behaviour.
im3w1l 33 minutes ago [-]
I think we are slowly getting closer to the crux of the matter. Are you saying that it's a problem to include files from a library since they are "not in our program"? What does that phrase actually mean? What is the bounds of "our program" anyway? Couldn't it be the set {main.c, winkle.h}
fattah25 6 hours ago [-]
Rust here, Rust there. We are just talking about C, not Rust. Why do we have to use Rust? If you're talking about memory safety, why does no one recommend the Ada language instead of Rust?
We have Zig, Hare, Odin, and V too.
the__alchemist 4 minutes ago [-]
Even within the Rust OSS community it's irritating. They will try to cancel people for writing libs using `unsafe`, and make APIs difficult to use by wrapping things in multiple layers of traits, then claim using other patterns is unsafe/unsound/UB. They make claims that things like DMA are "advanced topics", and "We haven't figured it out yet/found a good solution yet". Love Rust/hate the Safety Inquisition. Or say things like "Why use rust if you don't use all the safety-features and traits"... which belittles Rust as a one-trick lang!
ViewTrick1002 5 hours ago [-]
> Ada language instead of rust
Because it never achieved mainstream success?
And Zig for example is very much not memory safe. Which a cursory search for ”segfault” in the Bun repo quickly tells you.
And with this attitude it never will. With Rust's hype, it would.
lifthrasiir 5 hours ago [-]
More accurately speaking, Zig helps spatial memory safety (e.g. out-of-bound access) but doesn't help temporal memory safety (e.g. use-after-free) which Rust excels at.
ViewTrick1002 5 hours ago [-]
As long as you are using the "releasesafe" build mode and not "releasefast" or "releasesmall".
pjmlp 4 hours ago [-]
Which is something that even PL/I predating C already had.
pjmlp 4 hours ago [-]
None of them solve use after free, for example.
Ada would be rather a nice choice, but most hackers love their curly brackets.
laauraa 5 hours ago [-]
>Uninitialized data
They at least fixed this in C++26.
No longer UB, but "erroneous behavior".
It's still some random garbage value (so an uninitialized pointer will likely still lead to disastrous results), but the compiler isn't allowed to fuck up your code; it has to generate code as if the variable had some value.
tialaramex 3 hours ago [-]
It won't be a "random garbage value" but is instead a value the compiler chose.
In effect, if you don't opt out, your value will always be initialized, but not to a useful value you chose. You can think of this as similar to the (current, defanged and deprecated as well as unsafe) Rust std::mem::uninitialized()
There were earlier attempts to make this value zero, or rather, as many 0x00 bytes as needed, because on most platforms that's markedly cheaper to do, but unfortunately some C++ would actually have worse bugs if the "forgot to initialize" case was reliably zero instead.
eru 2 hours ago [-]
What are these worse bugs?
tialaramex 2 hours ago [-]
The classic thing is, we're granting user credentials - maybe we're a login process, or a remote execution helper - and we're on Unix. In some corner case we forget to fill out the user ID. So it's "random noise". Maybe in the executable distributed to your users it was 0x4C6F6769 because the word "Login" was in that memory in some other code and we never initialized it so...
Bad guys find the corner case and they can now authenticate as user 0x4C6F6769 which doesn't exist and so that's useless. But - when we upgrade to C++ 26 with the hypothetical zero "fix" now they're root instead!
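A sketch of that corner case (the names and numbers here are made up, but the shape is as described above):

  struct cred { unsigned uid; };

  /* hypothetical helper: fill in credentials for the authenticated user */
  void make_cred(struct cred *out, int via_password) {
      if (via_password) {
          out->uid = 1000;   /* normal path: look up and set the real uid */
      }
      /* corner case: the other path forgets to set out->uid. Today the stale
         value is garbage like 0x4C6F6769, which matches no real user; if
         uninitialized memory were defined to read as zero, the same bug would
         silently hand out uid 0, i.e. root. */
  }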
kazinator 4 hours ago [-]
C also fixed it in its way.
Access to an uninitialized object defined in automatic storage, whose address is not taken, is UB.
Access to any uninitialized object whose bit pattern is a non-value, likewise.
Otherwise, it's good: the value implied by the bit pattern is obtained and computation goes on its merry way.
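As I read those rules, the distinction looks like this (a small sketch, not normative wording):

  #include <stdio.h>

  int main(void) {
      int a;              /* automatic, address never taken */
      int b;              /* automatic, but its address is taken below */
      int *p = &b;

      /* printf("%d\n", a);    reading `a` here would be UB */
      printf("%d\n", *p);     /* indeterminate value; UB only if the bit pattern
                                 were a non-value (trap) representation, which
                                 int doesn't have on typical targets */
      return 0;
  }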
VivaTechnics 4 days ago [-]
We switched to Rust.
Generally, are there specific domains or applications where C/C++ remain preferable? Many exist—but are there tasks Rust fundamentally cannot handle or is a weak choice?
pjmlp 4 hours ago [-]
Yes, all the industries where C and C++ are the industry standards like Khronos APIs, POSIX, CUDA, DirectX, Metal, console devkits, LLVM and GCC implementation,....
Not only are you faced with creating your own wrappers, if no one else has done it already.
The tooling, for IDEs and graphical debuggers, assumes either C or C++, so it won't be there for Rust.
Ideally the day will come where those ecosystems might also embrace Rust, but that is still decades away maybe.
bluetomcat 5 hours ago [-]
Rust encourages a rather different "high-level" programming style that doesn't suit the domains where C excels. Pattern matching, traits, annotations, generics and functional idioms make the language verbose and semantically-complex. When you follow their best practices, the code ends up more complex than it really needs to be.
C is a different kind of animal that encourages terseness and economy of expression. When you know what you are doing with C pointers, the compiler just doesn't get in the way.
eru 2 hours ago [-]
Pattern matching should make the language less verbose, not more. (Similar for many of the other things you mentioned.)
> When you know what you are doing with C pointers, the compiler just doesn't get in the way.
Alas, it doesn't get in the way of you shooting your own foot off, too.
Rust allows unsafe and other shenanigans, if you want that.
bluetomcat 57 minutes ago [-]
> Pattern matching should make the language less verbose, not more.
In the most basic cases, yes. It can be used as a more polished switch statement.
It's the whole paradigm of "define an ad-hoc Enum here and there", encoding rigid semantic assumptions about a function's behaviour with ADTs, and pattern matching for control-flow. This feels like a very academic approach, and modifying such code to alter its opinionated assumptions isn't fun.
uecker 5 hours ago [-]
Advantages of C are short compilation time, portability, long-term stability, widely available expertise and training materials, less complexity.
IMHO you can deal with UB just fine in C today if you want to, by following best practices, and the reasons given when those are not followed would also rule out the use of most other, safer languages.
simonask 3 hours ago [-]
This is a pet peeve, so forgive me: C is not portable in practice. Almost every C program and library that does anything interesting has to be manually ported to every platform.
C is portable in the least interesting way, namely that compilers exist for all architectures. But that's where it stops.
pjmlp 1 hours ago [-]
Back in the 2000s I had lots of fun porting code across several UNIX systems: AIX, Solaris, HP-UX, Red Hat Linux.
A decade earlier I also used Xenix and DG/UX.
That is a nice way to learn how "portable" C happens to be, even between UNIX systems, its birthplace.
uecker 2 hours ago [-]
Compilers existing is essential and not trivial (and also usually what other languages then build on). The conformance model of C also allows you to write programs that are portable without change to different platforms. This is possible; my software runs on 20 different architectures without change. That one can then also adapt it to make use of specific features of different platforms is quite natural in my opinion.
lifthrasiir 5 hours ago [-]
> short compilation time
> IMHO you can today deal with UB just fine in C if you want to by following best practices
In other words, short compilation time has been traded off against wetware brainwashing... well, adjustment time, which makes the supposed advantage much less desirable. It is still an advantage, I reckon though.
uecker 2 hours ago [-]
I do not understand what you are trying to say, but it seems to be some hostile rambling.
lifthrasiir 2 hours ago [-]
Never meant to be hostile (if I indeed were, I would have questioned every single word), but sorry for that.
I mean to say that best practices do help a lot, but learning those best practices takes a lot of time as well. So short compilation time is easily offset by learning time, and C was not even designed to optimize compilation time anyway (C headers can take a lot to parse and discard even when unused!). Your other points do make much more sense, and it's unfortunate that the first points destructively interfere with each other, hence my comment.
pizza234 5 hours ago [-]
Yes, based on a few attempts chronicled in articles from different sources, Rust is a weak choice for game development, because it's too time-consuming to refactor.
Basically all of those problems originate with the tradition of conflating pointers and object identity, which is a problem in Rust as soon as you have ambiguous ownership or incongruent access patterns.
It's also very often not the best way to identify objects, for many reasons, including performance (spatial locality is a big deal).
These problems go away almost completely by simply using `EntityID` and going through `&mut World` for modifications, rather than passing around `EntityPtr`. This pattern gives you a lot of interesting things for free.
bakugo 3 hours ago [-]
The video I linked to is long but goes through all of this.
Pretty much nobody writing games in C++ uses raw pointers in entities to hold references to other related entities, because entities can be destroyed at any time and there's no simple way for a referring entity to know when a referenced entity is destroyed.
Using some sort of entity ID or entity handle is very common in C++, the problem is that when implementing this sort of system in Rust, developers often end up having to effectively "work around" the borrow checker, and they end up not really gaining anything in terms of correctness over C++, ultimately defeating the purpose of using Rust in the first place, at least for that particular system.
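For readers unfamiliar with the pattern, a minimal sketch of such an entity handle (mine, not taken from any particular engine):

  #include <stdbool.h>
  #include <stdint.h>

  #define MAX_ENTITIES 1024

  /* An index plus a generation counter: when a slot is reused the generation
     is bumped, so a stale handle held by another entity is detected instead
     of dereferencing a dangling pointer. */
  typedef struct { uint32_t index; uint32_t generation; } EntityHandle;

  typedef struct {
      uint32_t generation[MAX_ENTITIES];
      bool     alive[MAX_ENTITIES];
  } World;

  static bool entity_alive(const World *w, EntityHandle h) {
      return h.index < MAX_ENTITIES
          && w->alive[h.index]
          && w->generation[h.index] == h.generation;
  }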
ramon156 5 hours ago [-]
We've only had 6-7 years of game dev in Rust. Bevy is coming along nicely and will hopefully remove these pain points
flohofwoe 3 hours ago [-]
"Mit dem Angriff Steiner's wird das alles in Ordnung kommen" ;)
As shitty as C++ is from today's PoV, the entire gaming industry switched over within around 3 years towards the end of the 90s. 6..7 years is a long time, and a single engine (especially when it's more or less just a runtime without editor and robust asset pipeline) won't change the bigger picture that Rust is a pretty poor choice for gamedev.
eru 2 hours ago [-]
> As shitty as C++ is from today's PoV, the entire gaming industry switched over within around 3 years towards the end of the 90s.
Did they? What's your evidence? Are you including consoles?
Btw, the alternatives in the 1990s were worse than they are now, so the bar to clear for e.g. C or C++ was lower.
flohofwoe 2 hours ago [-]
I was there Gandalf... ;) Console SDKs offering C or C++ APIs doesn't really matter, because you can call C APIs from C++ just fine. So the language choice was a team and engine developer decision, not a platform owner decision (as it should be).
From what I've seen, around the mid-to-late 90s C++ usage was still rare; right before 2000 it was already common and most middleware didn't even offer C APIs anymore.
Of course a couple of years later Unity arrived and made the gamedev language choice more complicated again.
pjmlp 1 hours ago [-]
As another Gandalf, the PlayStation 2 was the very first console to actually offer proper C++ tooling.
That would be 2000; until then Sega, Nintendo and PlayStation only had C and Assembly SDKs, and even the PlayStation Yaroze for hobbyists was released with only C and Assembly support.
PC was naturally another matter, especially with Watcom C/C++.
eru 2 hours ago [-]
> I was there Gandalf... ;)
You were at most in one place. My question was rather, which corners of the industry are you counting?
However you are right that one of the killer features of C++ was that it provided a pretty simple upgrade path from C to (bad) C++.
It's not just API calls. You can call C APIs from most languages just fine.
flohofwoe 2 hours ago [-]
My corner of the industry back then was mostly PC gamedev with occasional exploration of game consoles (but only starting with the OG Xbox). But that doesn't really matter much, since it was obvious that the entire industry was very quickly moving to C++ (we had internet back then after all in my corner of the woods, as well as gamedev conferences to feel the general vibe).
id Software was kinda famous for being the last big C holdout, having only switched to C++ with Doom 3, and development of Doom 3 started in late 2000.
mgaunard 4 hours ago [-]
Rust forces you to code in the Rust way, while C or C++ let you do whatever you want.
nicoburns 4 hours ago [-]
> C or C++ let you do whatever you want.
C and C++ force you to code in the C and C++ ways. It may be that that's what you want, but they certainly don't let me code how I want to code!
mgaunard 27 minutes ago [-]
There is no C or C++ ways. It's widely known that every codebase is its own dialect.
eru 2 hours ago [-]
> Generally, are there specific domains or applications where C/C++ remain preferable?
Well, anything where your people have more experience in the other language or the libraries are a lot better.
m-schuetz 3 hours ago [-]
Prototyping in any domain. It's nice to have some quick & dirty way to rapidly evaluate ideas and solutions.
eru 2 hours ago [-]
I don't think C nor C++ were ever great languages for prototyping? (And definitely not better than Rust.)
imadr 4 days ago [-]
I haven't used Rust extensively so I can't make any criticism besides that I find compilation times to be slower than C
ost-ing 6 hours ago [-]
I find with C/++ I have to compile to find warnings and errors, while with Rust I get more information automatically due to the modern type and linking systems. As a result I compile Rust significantly fewer times, which is a massive speed increase.
Rust's tooling is hands down better than C/++'s, which makes for a more streamlined and efficient development experience
bch 5 hours ago [-]
> Rust's tooling is hands down better than C/++'s, which makes for a more streamlined and efficient development experience
Would you expand on this? What was your C tooling/workflow that was inferior to your new Rust experience?
simonask 3 hours ago [-]
Not the GP, but the biggest one is dependency management. Cargo is just extremely good.
As for the language tooling itself, static and runtime analyzers in C and C++ (and these are table stakes at this point) do not come close to the level of accuracy of the Rust compiler. If you care about writing unsafe code, Miri is orders of magnitude better at detecting UB than any runtime analyzer I've seen for C and C++.
johnisgood 3 hours ago [-]
Pacman is extremely good, too, for C. :)
kazinator 4 hours ago [-]
The popular C compilers are seriously slow, too. Orders of magnitude slower than C compilers of yesteryear.
ykonstant 5 hours ago [-]
I also hear that Async Rust is very bad. I have no idea; if anyone knows, how does async in Rust compare to async in C++?
ViewTrick1002 5 hours ago [-]
> I also hear that Async Rust is very bad.
Not sure where this is coming from.
Async rust is amazing as long as you only mix in one more hard concept. Be it traits, generics or whatever. You can confidently write and refactor heavily multithreaded code without being deathly afraid of race conditions etc. and it is extremely empowering.
The problem comes when trying to write async generic traits in a multithreaded environment.
Then just throwing stuff at the wall and hoping something sticks will quickly lead you into despair.
01HNNWZ0MV43FF 5 hours ago [-]
I have yet to use async in C++, but I did work on a multi-threaded C++ project for a few years.
Rust is nicer for async and MT than C++ in every way. I am pretty sure.
But it's still mid. If you use Rust async aggressively you will struggle with the borrow checker, and the architecture tends to end up in channel hell.
If you follow the "one control thread that does everything and never blocks" you can get far, but the language does not give you much help in doing that style neatly.
I have never used Go. I love a lot of Go projects like Forgejo and SyncThing. Maybe Go solved async. Rust did not. C++ did not even add good tagged unions yet.
eru 2 hours ago [-]
Go (at least before generics) was really annoying to use.
Doing anything concurrent in Go is also really annoying (be that async or with threads), because everything is mutable. Not just by default but always. So anything shared is very dangerous.
ykonstant 5 hours ago [-]
Thanks for the info!
mrheosuper 5 hours ago [-]
Rust can do inline ASM, so finding a task Rust "fundamentally cannot handle" is almost impossible.
eru 2 hours ago [-]
That's almost as vacuous as saying that Rust can implement universal Turing machines, or that Rust can do FFI?
kazinator 5 hours ago [-]
In C, using uninitialized data is undefined behavior only if:
- it is an automatic variable whose address has not been taken; or
- the uninitialized object's bits are such that it takes on a non-value representation.
IshKebab 2 hours ago [-]
This asserts that UB was deliberately created for optimisation purposes; not to handle implementation differences. It doesn't provide any evidence though and that seems unlikely to me.
The spec even says:
> behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this International Standard imposes no requirements
No motivation is given that I could find, so the actual difference between undefined and implementation defined behaviour seems to be based on whether the behaviour needs to be documented.
flohofwoe 1 hours ago [-]
I'd say the original intent of UB was not the sort of "optimizer exploits" we see today, but to allow wiggle room for supporting vastly different CPUs without having to compromise runtime performance or increase compiler complexity to balance performance versus correctness. Basically an escape hatch for compilers. The difference from IB has also always been quite fuzzy.
Also the C spec has always been a pragmatic afterthought, created and maintained to establish at least a minimal common feature set expected of C compilers.
The really interesting stuff still only exists outside the spec in vendor language extensions.
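A concrete example of that kind of wiggle room (my example, not flohofwoe's): over-wide shifts are UB largely because the hardware disagrees about what they should do.

  unsigned shift_right(unsigned x, unsigned n) {
      return x >> n;   /* UB when n >= the width of unsigned. x86 masks the
                          shift count to 5 bits (so x >> 32 would give x back),
                          while other ISAs produce 0; leaving it undefined lets
                          every compiler emit the bare shift instruction. */
  }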
roman_soldier 2 hours ago [-]
Just use Zig, it fixes all this
pizlonator 4 hours ago [-]
I don’t buy the “it’s because of optimization” argument.
And I especially don’t buy that UB is there for register allocation.
First of all, that argument only explains UB of OOB memory accesses at best.
Second, you could define the meaning of OOB by just saying “pointers are integers” and then further state that nonescaping locals don’t get addresses. Many ways you could specify that, if you cared badly enough. My favorite way to do it involves saying that pointers to locals are lazy thunks that create addresses on demand.
j16sdiz 3 hours ago [-]
> First of all, that argument only explains UB of OOB memory accesses at best.
It explains many loop-unroll and integer overflow as well.
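For instance, something like this (my example, not j16sdiz's):

  long sum_upto(int n) {
      long s = 0;
      /* Because signed overflow is UB, the compiler may assume i++ never wraps,
         conclude the loop runs exactly n + 1 times, and unroll or vectorize it.
         If overflow were defined to wrap, `i <= n` could hold forever when
         n == INT_MAX, and that assumption would be invalid. */
      for (int i = 0; i <= n; i++)
          s += i;
      return s;
  }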
gpderetta 2 hours ago [-]
> nonescaping locals don’t get addresses
inlining, interprocedural optimizations.
For example, something as trivial as an accessor member function would be hard to optimize.
pjmlp 1 hours ago [-]
Safer languages manage similar optimizations without having to rely on UB.
gpderetta 58 minutes ago [-]
Well, yes, safer languages prevent pointer forging statically, so provenance is trivially enforced.
And I believe that provenance is an issue in unsafe rust.
tialaramex 2 hours ago [-]
> Second, you could define the meaning of OOB by just saying “pointers are integers"
This means losing a lot of optimisations, so in fact when you say you "don't buy" this argument you only mean that you don't care about optimisation. Which is fine, but this does mean the "improved" C isn't very useful in a lot of applications, might as well choose Java.
grougnax 2 hours ago [-]
Worst languages ever.
compiler-guy 1 hours ago [-]
Jack Sparrow: “… but you have heard of them.”
The dustbin of programming languages is jam packed with elegant, technically terrific, languages that never went anywhere.
OskarS 2 hours ago [-]
C and C++ are the languages that brought us UNIX, the Linux kernel, macOS and Windows, and the interpreters of virtually every other language in the world; they power virtually all software in the world as well as the vast majority of embedded devices.