

Fortunately our technology level is waaaaay off from these things actually existing like they’re trying to imply.
That is a very optimistic view! I decided to make a presentation in OpenOffice recently instead of Google Slides. It actually couldn’t even show my bullet points in the right order. It revealed them like 1, 3, 2.
I guess you can make it work, and the sovereignty & financial savings for a large number of users are maybe worth the pain, but let’s not pretend the Linux desktop is really close to Windows/Office in terms of quality and reliability.
Gotta agree on the name. Please choose meaningful names, especially for low-level components like drivers, libraries and CLI tools. It’s fine for end-user facing applications to have unique names like Blender, Krita, Inkscape, Chrome, etc. But nobody wants to have to look up what the names of random system packages are.
Corecursive easily. It’s actually properly produced and very well presented. Not one of those rambling unscripted chats.
This is deliberately not allowed in order to ensure that Linux remains exclusive for nerds.
WebP was the first widely supported format to support lossy transparency. It’s worth it for that alone.
It does kind of feel like they could just set up a Signal account?
They mean measure first, then optimize.
This is also bad advice. In fact I would bet money that nobody who says that actually always follows it.
Really there are two things that can happen:
1. You are trying to optimise performance. In this case you obviously measure using a profiler, because that’s by far the easiest way to find the places that are slow in a program. It’s not the only way though! Profiling only really works for micro-optimisations - you can’t profile your way to architectural improvements. Nicholas Nethercote’s posts about speeding up the Rust compiler are a great example of this.
2. Writing new code. Almost nobody measures code while they’re writing it. At best you’ll have a CI benchmark (the Rust compiler has this). But while you’re actually writing the code it’s mostly fine just to use your intuition: preallocate vectors, don’t write O(N^2) code, use HashSet, etc. There are plenty of things that good programmers can be sure enough are the right way to do it that you don’t need to constantly second-guess yourself (see the sketch below).
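To make that concrete, here’s a minimal Rust sketch (the function and scenario are invented for illustration) of the kind of intuition I mean: preallocate, and reach for a HashSet instead of a quadratic contains() scan.

    use std::collections::HashSet;

    // Hypothetical example: deduplicate IDs while keeping their order.
    // Preallocate both collections, and use a HashSet for membership
    // checks instead of an O(N^2) `out.contains(&id)` scan.
    fn dedup_ids(ids: &[u64]) -> Vec<u64> {
        let mut out = Vec::with_capacity(ids.len());
        let mut seen = HashSet::with_capacity(ids.len());
        for &id in ids {
            // `insert` returns false if the value was already present.
            if seen.insert(id) {
                out.push(id);
            }
        }
        out
    }

None of that needs a profiler to justify; it’s just the default way to write it.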
Do you realize how old assembly language is?
Do you? These instructions were created in 2011.
It predates hard disks by ten years and coincided with the invention of the transistor.
I’m not sure what the very first assembly language has to do with RISC-V assembly?
flawed tests are worse than no tests
I never said you should use flawed tests. You ask AI to write some tests. You READ THEM and probably tweak them a little. You think "this test is basic but better than nothing and it took me 30 seconds." You commit it.
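To illustrate the kind of “basic but better than nothing” test I mean, here’s a hypothetical example (the function and cases are made up):

    // Invented function and test, purely to show the shape of an
    // AI-drafted test after a quick human review and tweak.
    fn slugify(title: &str) -> String {
        title.trim().to_lowercase().replace(' ', "-")
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn slugify_trims_lowercases_and_hyphenates() {
            assert_eq!(slugify("  Hello World  "), "hello-world");
            assert_eq!(slugify("already-slugged"), "already-slugged");
        }
    }

It doesn’t cover every edge case, but it took well under a minute and it will catch obvious regressions.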
It absolutely is a challenge. Before AI there weren’t any other systems that could do crappy automated testing.
I dunno what you mean by “it’s not AI”. You write the tests using AI. It’s AI.
AI is good at more than just generating stubs, filling in enum fields, etc. I wouldn’t limit it to just “boilerplate” - it’s good at stuff that is not difficult, but also isn’t so regular that it’s possible to automate it with traditional tools like IDEs.
Writing tests is a good example. It’s not great at writing tests, but it is definitely better than the average developer once you take into account the probability of them writing tests in the first place.
Another example would be writing good error context messages (e.g. .with_context() in Rust). Again, I could write better ones than it does. But like most developers there’s a pretty high chance that I won’t bother at all. You also can’t automate this with an IDE.
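For anyone unfamiliar, .with_context() is the anyhow-style way of attaching a human-readable message to an error as it propagates. A minimal sketch (the config path and wording are invented) looks like this:

    use anyhow::{Context, Result};

    // Sketch using the anyhow crate's Context trait. The message is the
    // part that's easy to skip when writing by hand, and that AI will
    // happily fill in.
    fn load_config(path: &str) -> Result<String> {
        std::fs::read_to_string(path)
            .with_context(|| format!("failed to read config file `{path}`"))
    }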
I’m not saying you have to use AI, but if you don’t you’re pointlessly slowing yourself down. That probably won’t matter to lots of people - I mean I still see people wasting time searching for symbols instead of just using a proper IDE with go-to-definition.
Assembly is very simple (at least RISC-V assembly is, which is what I mostly work with) but also very tedious to read. It doesn’t help that the people who choose the instruction mnemonics have extremely poor taste - e.g. lb, lh, lw, ld instead of load8, load16, load32, load64. Or j instead of jump. Who needs to save characters that much?
The over-abbreviation is some kind of weird flaw that hardware guys all have. I wondered if it comes from labelling pins on PCB silkscreens (MISO, CLK etc)… Or maybe they just have bad taste.
I once worked on a chip that had nested acronyms.
I don’t think that’s a surprise to anyone that has actually used them for more than a few seconds.
The evidence is that I have tried writing Python/JavaScript with/without type hints and the difference was so stark that there’s really no doubt in my mind.
You can say “well I don’t believe you”… in which case I’d encourage you to try it yourself (using a proper IDE and Pyright, not Mypy)… But you can equally say “well I don’t believe you” to scientific studies, so it’s not fundamentally different. There are plenty of scientific studies I don’t believe and didn’t believe (e.g. power poses).
Maybe “open question” was too strong of a term.
Yeah I agree. Scientific studies are usually a higher standard of proof. (Though they can also be wrong - remember “power poses”?) So it’s more like we’re 80% sure instead of 90%.
There are plenty of videos that aren’t long-winded and rambling. Look up Applied Science for example. Good luck finding that content anywhere else.
then why isn’t it better to write instead everything in Haskell, which has a stronger type system than Rust?
Because that’s very far from the only difference between Haskell and Rust. It’s other things that make Haskell a worse choice than Rust most of the time.
You are right in that it’s a spectrum from dynamically typed, to simple static types (something like Java), to fancy static types (Haskell), then dependent types (Idris), and finally full-on formal verification (Lean). And I agree that at some point it can become not worth the effort. But that point is pretty clearly after any mainstream statically typed language (Rust, Go, Typescript, Dart, Swift, Python, etc.).
In those languages, any time you spend adding static types is easily paid back in time saved writing tests, debugging, writing docs, searching code, and recovering from screwed-up refactorings. Static types in these languages are a time saver overall.
Almost every good content creator is only on YouTube.
Windows 11 IoT LTSC is anything but bloated and clunky. Best OS I’ve used.