That's why I store everything in a void*, free memory.
`free((void*)mem);`
Remove all that void
When you look into the void, does the void look back at you?
If it’s not stored in write-only memory, it probably will
Where was it? I’ve seen it somewhere before.
what, you're too good for char?
Good idea! I'll use a character array to store the string "0123456789", and then use a pointer to track the value the counter is up to. Thanks for the suggestion :)
You've got the "do horrible things" part down for developing for embedded systems.
That qualifies as horrible even on full-powered systems where you can afford to waste both the cycles and the bytes.
I think that I'd fail or fire someone who had that in their code.
All of that goes without saying.
In my initial comment, I was making a joke about them using an int, which could be as few as 2 bytes but is typically 4 on modern systems. They didn't say an integer type, they said int. char is usually one byte and is perfectly acceptable for most applications, with the one caveat that if you go to print it with std::cout you may need to cast it to an integer type, or it will print as a character rather than a number.
Obviously pointers will also be 8 bytes on 64-bit systems, so using them at all would be ridiculous, let alone referencing an array of the ASCII characters that represent 0 through 9.
Yeah, I understood your code, mate. I was pointing out what a good job you did of making it horrible.
I'm not sure you're replying to the right comment here.
Damn, then you could get the value of it by subtracting the pointer to the start of that array from your value pointer instead of dereferencing it. Then it'll work for positive and negative values and even values > 10.
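For anyone who wants to see the crime committed, a minimal C++ sketch of the pointer-counter being described (names and output are made up for illustration):

```cpp
#include <cstddef>
#include <cstdio>

// The "storage" for the counter: the digits 0-9 as ASCII characters.
static const char digits[] = "0123456789";

int main() {
    // The counter is a pointer into the digit array instead of an integer.
    const char* counter = digits;                // counter "holds" 0

    for (int i = 0; i < 5; ++i) {
        ++counter;                               // increment by one (and only one!)
    }

    // Recover the numeric value by pointer subtraction, as suggested above,
    // instead of dereferencing.
    std::ptrdiff_t value = counter - digits;
    std::printf("counter is at %d\n", static_cast<int>(value));  // counter is at 5

    // Dereferencing also works, but only while the counter stays within 0-9.
    std::printf("as a digit: %c\n", *counter);                   // as a digit: 5
}
```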
If we go any farther, we're gonna reinvent signed ints but 100% less efficiently
Don’t tempt me with a good time
I just wanna say that all of you are my people.
make the pointer 64-bit in case of future expansion
Make sure you only increment the counter by one... you can use one of the many isEven functions in this sub to check against accidentally increasing by two.
Not good enough. You need to future-proof it in case you need more than 10 in the future. Better make it a dynamically sized vector or string. And declare it on the heap so you can pass around a reference to the object for future use anywhere.
If it is always 10, just copy-paste the function 10 times :)
I'm more of an uint8_t kinda guy
Which is just going to resolve to a char type, though in a way that more clearly demonstrates intent.
Kinda, unsigned char. But it's by far the most obvious with intent, agreed.
Honestly, when writing for embedded applications I'm a big fan of anything stdint. I try to avoid anything not explicitly size typed.
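For what it's worth, a tiny sketch of what that looks like for the 1-10 counter in question, assuming <cstdint> is available on the target:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // One byte is plenty for 1..10, and the width is explicit instead of
    // being whatever "int" happens to mean on this particular target.
    for (std::uint8_t counter = 1; counter <= 10; ++counter) {
        // Cast to int for printing so it isn't treated as a character.
        std::printf("%d\n", static_cast<int>(counter));
    }
}
```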
1-10? We only count to powers of 2. Sounds like a specifications problem.
Alternatively, wasting 4 whole bits when 3.17 bits would be enough isn't acceptable either.
Uhm akshully
3.16992500144
Close enough, π bit
You mean π.16992500144, right?
π + x ≈ π = 3 ; x ∈ ℝ
It's the law
As if hardware would give a shit lol. Oops we fucked up and put all the data lines in backwards and we already ordered 10,000 of these boards so you will reverse every bit in the bytes in software coming in and going out.
Every CPU cycle counts!
Months to lose, milliseconds to gain!
T-shirt Worthy quote
True if you use a modern 32-bit MCU, but now the project asks you to use some Padauk at 3 cents per unit. 1 kword of flash and 64 bytes of memory. Have fun.
Look at Mr Moneybags over here with 28 whole bits to waste!
Finally, some good fucking food programming humor
I had to do this once, scrounging unused bits to fit my sorting algorithm into the memory available. But there weren't quite enough, one shy to be exact.
I was a bit out of sorts.
Please marry me, right now
Low level programmer when you use a general purpose language instead of an assembly language made specifically for the BESM-6 Soviet computer
I actually spend much of my time converting uint8_t types into uint32_t to save on code space from 8051 software that's been haphazardly ported to these newfangled ARMs.
Is there not a 16-bit load? Code size should then be the same as for 32-bit loads.
Doesn't help when doing math on them. Compiler generates bitmask instructions after every operation to make it as if you're using a sub-register-width type.
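A rough illustration of the effect being described; what the compiler actually emits varies by target and optimization level, so treat the comments as the typical case rather than a guarantee:

```cpp
#include <cstdint>
#include <cstdio>

// On a 32-bit ARM, arithmetic happens in full-width registers, so the result
// of each operation on a uint8_t generally has to be truncated back to 8 bits
// (typically an extra UXTB or AND 0xFF after the add).
std::uint8_t sum8(std::uint8_t a, std::uint8_t b) {
    return static_cast<std::uint8_t>(a + b);   // add, then mask back down
}

// With a register-width type there is nothing to truncate.
std::uint32_t sum32(std::uint32_t a, std::uint32_t b) {
    return a + b;                              // just an add
}

int main() {
    std::printf("%u %u\n",
                static_cast<unsigned>(sum8(200, 100)),    // 44 (wrapped to 8 bits)
                static_cast<unsigned>(sum32(200, 100)));  // 300
}
```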
I do not miss that at all
We could save space up to 50% if we only stored the 1s.
Use the 0 of the 1 for 100% utilisation.
Sadly in most contexts this kind of fun micro-optimization is almost never appropriate. The compiler is usually much smarter than you, and performance tradeoffs can be very counterintuitive.
Funnily enough, this type of optimization is SUPER relevant on the GPU, where memory isn't the limiting factor but memory bandwidth is. You can save loading a full cache line if you can pack data this tightly.
Memory is the one thing compilers aren’t necessarily smarter than you at. Languages usually have strong definitions of the layout of things in memory, so compilers don’t have much room to shuffle things around. And good memory layouts enable code improvements that can make your code much faster.
I once worked on a project where I had to do realtime analysis of data on the fly as it was dumped in memory at a rate of tens of megabytes per second, and then do post processing on all of it when data collection was done.
First, I thought I would be smart, and program the thing in assembly, using what (I thought) I knew about CPU architecture, memory layout and assembly language. My second attempt was to implement the algorithm in the textbook manner, not skipping intermediate steps or trying to be smart. And then I compiled it with max optimization.
Turns out the 2nd attempt spanked the 1st attempt so badly it was funny. The actual compiler is better at understanding the CPU and the memory architecture than I am :) who knew :D
The compiler is usually much smarter than you
Imagine being usually dumber than a compiler.
Disgusting. Not only do you use at least 16 bits, but you didn't specify it as unsigned. Ugh!
A friend of mine once had a stint doing programming for pinball machines. He said that's when he learned the magic of bitwise operators.
Old-school pinball programmers optimized their machines by carefully specifying the lengths of the wires.
I think it was either MS Access or SQL Server that, when you had up to 8 Boolean fields defined in the same table, would store them together in the same byte. I always kind of liked that efficiency.
I'm no embedded programmer but I'll do the same for my binary blobs.
One of my recent projects ended with 31 bytes of FLASH left.
Each byte matters!
Not only embedded, the retro computer crowd wants to have a word(hehe) with you too.
Even in projects where I purposefully use ye olde BASIC as a challenge, I try to squeeze every bit that I can. And if I write machine code directly? Oh boy...
Does it run micro python?
Well, of course you're pathetic. Everyone is. None of the processors are good enough either. It's also a shame that compilers and the rest aren't clever enough to manage memory without being so wasteful. It's like a miracle anything works.
Really, the most abundant thing is my own magnanimity and gratefulness.
A half-byte (4 bits) is called a nibble.
Int? Better use a size_t for that counter, just in case 32 bits are suddenly not enough for the numbers one to ten.
I still reminisce about storing all of my boolean values in a char.
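For anyone who hasn't had the pleasure, a minimal sketch of that trick; the flag names are invented for the example:

```cpp
#include <cstdint>
#include <cstdio>

// Eight unrelated boolean states packed into a single byte.
enum : std::uint8_t {
    FLAG_MOTOR_ON  = 1u << 0,
    FLAG_LED_ON    = 1u << 1,
    FLAG_DOOR_OPEN = 1u << 2,
    // ...up to bit 7
};

int main() {
    std::uint8_t flags = 0;

    flags |= FLAG_LED_ON;          // set a flag
    flags &= ~FLAG_MOTOR_ON;       // clear a flag
    flags ^= FLAG_DOOR_OPEN;       // toggle a flag

    if (flags & FLAG_LED_ON) {     // test a flag
        std::printf("LED is on\n");
    }
    std::printf("all eight flags fit in %zu byte\n", sizeof(flags));  // 1
}
```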
It is when you use instructions as data.
My favourite bit of code seen on a 64K machine was COMB #0 (invert the immediate zero), followed by a branch if not equal. First time through, one way; next time, the other. Real self-modifying code.
Old folks were the best at it. Knowing your SW will run on a machine with 64 KB at best, you really get a feel for reusing all kinds of shit.
My boss works on embedded systems and some of our small table business logic has become unnecessarily complex in the DB for these reasons.
Four bits? And you'd waste it on a BCD when you could be using a hex value?
/s (in 99.9% of contexts, anyway)
Indeed. We have 64 KB on a uC. Your time is already up. No more offloading things to stack devs.
I mean, 1-10 requires only 4 bits, and you used at least 32 for an int
smartfridge firmware devs shitting their pants rn
Wait, you used an INT instead of a LONG??? Are you an idiot... If you are a C++ programmer, long and int are the same, so in that case you HAVE to use long double, obviously.
who even stores stuff now? Just do bit shift operations everywhere.
Honestly most of the time it’s probably better off if you’re not packing bits yourself.
Programmers flexing architecture chops in the comments
The real annoyance is that ints are 2 bytes long and only start on even bytes. I've had systems that wanted ints to start on an odd byte; having to repack the int into two separate byte variables was annoying.
Ints are at least two bytes. They can be longer, 4 bytes is popular.
That's a DInt in ladder logic. Much more popular, but takes up twice the amount of space.
I'm talking about int in C & C++.
Your code was likely faster than if you had packed those bits somewhere. Don’t let those embedded engineers and their tiny CPU/RAM constraints get to your head.
Embedded performance is rarely the issue, but consistently keeping memory use tight may allow you to go for a chip with less memory down the line.
If you're not programming for embedded yeah, just use unsigned int or size_t...
I liked that I could use the `bit` type in Keil 8051 C for booleans instead of one full byte.
I find I run out of code space before I run out of variable space, so it's fine to use chars for booleans; otherwise all that masking and unmasking creates bigger code.
That's not what happens in Keil 8051 C: the `bit` type maps to bit-addressable RAM, and the architecture allows you to individually set and clear bits in one instruction. There's no masking going on in software.
I'm quite sure ARM has individual bit manipulations in one instruction too.
ARM has one-cycle bit manipulation instructions, but to set a bit you need to read, set, then store the value back. On the platform the other person is talking about, there are 16 bytes whose bits can be individually accessed with special instructions, so to set a bit you only need to write once, without needing to read, modify, then write back.
Some older ARM architectures implement something like that; it's called bit-banding. It was implemented a little differently, but the idea is similar: to set a bit in a word you don't need to read, modify, then write, you can just do one write and it doesn't touch the other bits.
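For the curious, a rough sketch of the difference on a Cortex-M3/M4 class part. The addresses follow the standard SRAM bit-band mapping; treat it as illustrative rather than production code:

```cpp
#include <cstdint>

// Plain read-modify-write: read the word, OR in the bit, write it back.
// An interrupt between those steps can clobber the update unless protected.
inline void set_bit_rmw(volatile std::uint32_t* word, unsigned bit) {
    *word |= (1u << bit);
}

// Cortex-M bit-banding: every bit of the SRAM region 0x20000000-0x200FFFFF
// has its own alias word starting at 0x22000000, so one store sets or clears
// just that bit with no read-modify-write in software.
inline volatile std::uint32_t* bitband_alias(volatile std::uint32_t* word,
                                             unsigned bit) {
    std::uint32_t byte_offset =
        static_cast<std::uint32_t>(reinterpret_cast<std::uintptr_t>(word)) -
        0x20000000u;
    return reinterpret_cast<volatile std::uint32_t*>(
        0x22000000u + byte_offset * 32u + bit * 4u);
}

inline void set_bit_bitband(volatile std::uint32_t* word, unsigned bit) {
    *bitband_alias(word, bit) = 1u;   // single store, other bits untouched
}
```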
If you do it correctly (with a global function, obviously) it should be quite easy to implement in a handful of bytes. If you're storing dozens of Booleans or need to access lots of individual bits, it will pay off.
There is a pirate software joke here somewhere
Look, it’s programmer humor. In reality, the legacy code I’m using does have masked read and write functions written in assembly that are called frequently. The processor is embedded in an Ethernet IC so there are a ton of shared registers that have to be handled this way. If I really needed the code space, I’d chop out some of the CLI code.
This is why I like Zig's packed struct: bools are a u1 already, and then you can treat a struct with named fields as a set of flags like you would anything else. Plus there is a bitset type that adds all the functionality you would need while keeping things very streamlined. Not that bitflags are terribly hard or anything, but it's very nice that it is explicit and has a lot more safety. It's been great for embedded work.
A bit old, but it still holds: https://devlog.hexops.com/2022/packed-structs-in-zig/
that seems cool
One `char` packs 8 boolean values.
C++ `vector<bool>` peeking its head in the doorway.
Embedded programmer dies because of the inefficiency.
Actually, std::vector<bool> packs 8 true/false values in one byte. However, a bool declared on its own is still a full byte (8 bits)...
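A quick sketch of the difference; the exact sizes are implementation-defined, but this is the typical layout:

```cpp
#include <iostream>
#include <vector>

int main() {
    bool plain[64];                        // one byte per bool on typical ABIs
    std::vector<bool> packed(64, false);   // specialisation stores one bit each

    std::cout << "sizeof(bool):   " << sizeof(bool)  << '\n';   // usually 1
    std::cout << "64 plain bools: " << sizeof(plain) << '\n';   // usually 64

    packed[3] = true;                      // proxy object flips a single bit
    std::cout << "packed[3]:      " << packed[3] << '\n';       // prints 1
}
```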
Yes! That was the dream right there. Actually having a proper bit type instead of pretending a char is a boolean. Made you feel like you were really squeezing every last bit of performance out of that tiny chip
std::vector<bool>
KEIL? Omg. It's been so long since I heard that cursed name. PTSD intensifies. It was ASM though.
Oh man, that brings back memories. I used Keil 8051 C back in 1991. Fond memories.