|
Foxfire_ posted:Another way to think of it: Everywhere a malloc/free pair appears in the program, the compiler gets to pick an implementation to use. It doesn't have to make the same choice everywhere. I see. So then, if the if condition is always false when it uses the never-fails malloc(), would it always remove the whole if, no matter what's in the body?
|
# ? Jul 11, 2020 23:16 |
|
|
Yes. If you have some code to handle malloc failing, and the compiler picks an implementation that never does, that code is unreachable and can be removed.
|
# ? Jul 11, 2020 23:58 |
|
I believe when you declare space instead of begging the OS for it, you get that space when the program loads into memory, in pages marked as data. Your program, when it loads, will use several pages, most of them marked as code, but a few marked as data. If you declare a giant array that way that doesn't fit in memory, I guess most OSes will just fail to start the program. But a few may use a lot of virtual pages and let the virtual paging system deal with it.
|
# ? Jul 12, 2020 02:57 |
|
But it wouldn't want to do that unless it knew at compile time if sz would never be above a certain value, would it?
|
# ? Jul 12, 2020 02:59 |
|
zergstain posted:But it wouldn't want to do that unless it knew at compile time if sz would never be above a certain value, would it? I remember old applications having size limits for everything. "Hotel management software!! Serves up to 99 services and 999 rooms, with 128 workers." We don't do that anymore, but it was normal back in the 80's. There's probably a lot of software (maybe some video games?) where you can build a lot (almost everything) with a strict max limit.
|
# ? Jul 12, 2020 03:04 |
|
Tei posted:We don't do that anymore. But it was normal back in the 80's. These days it’s called licensing.
|
# ? Jul 12, 2020 03:17 |
|
Tei posted:I believe when you declare space instead of begging the OS for it, you get that space when the program loads into memory, in pages marked as data. Stack usage for a function happens when the function is called. Think about something recursive; maximum stack depth isn't known beforehand. If you can't map out the entire call stack at compile time (you usually can't; virtual calls make this hard even without recursion), you can't come up with a maximum stack usage. Ideally, the programmer's choice of stack vs heap for a variable is dictated by its lifetime. Annoyingly, sometimes practical concerns limit this and lead to rules like "big things in heap".
- Very old operating systems would reserve real memory for some maximum stack size for every process. Every process always uses at least that much RAM, so it has to be small.
- Better systems use virtual memory for it. When the program touches an address that's off the end of the stack, it page faults. The operating system allocates a physical memory page to those addresses and retries the faulted instruction. If a process never touches a lot of stack, no actual RAM is used. Big stacks are easy to do here.
- But if you add multiple threads with multiple stacks, it's hard again. Each stack's virtual addresses have to be contiguous, so you've got to figure out how to arrange them in the address space so they won't bump into each other as they grow. In practice, applications tell the threading routines a max stack size to separate them by. Detecting collisions is also hard, since the application code is just manipulating the stack pointer and doesn't know anything about the overall organization. If you do an access like old stack pointer + 50MB and that happens to collide with some other address allocation instead of landing in unmapped space, it will just run and smash something without triggering a page fault.
|
# ? Jul 12, 2020 05:14 |
|
Foxfire_ posted:Stack usage for a function happens when the function is called. Think about something recursive; maximum stack depth isn't known beforehand. If you can't map out the entire call stack at compile time (you usually can't, virtual calls make this hard even without recursion), you can't come up with a maximum stack usage. Oh, I was wrong, then. So you're saying that when I declare char ptr[100]; in a function, it will take 100 bytes from the stack?
|
# ? Jul 12, 2020 08:32 |
|
Yep https://godbolt.org/z/daesqG The stack starts at high addresses and grows down towards lower addresses. rsp is a register pointing at the top (lowest address) of the stack. At the start of the function, the sub rsp, 10000 moves the stack pointer down by 10000 bytes to make room for the local array. Then at the end, it adds 10000 back to pop the array off the stack.
|
# ? Jul 12, 2020 09:53 |
|
Related coding horror: a math/stats colleague who was writing some C routines for Julia complained that the ulimit for stack size wasn't set to unlimited on the HPC cluster. Apparently he thought stack-allocated arrays were magically faster and had giant stack-resident float[] everywhere.
|
# ? Jul 12, 2020 19:55 |
|
Beef posted:Related coding horror: A math/stats colleague that was coding some C routines for Julia complained that the ulimits for stack size wasn't set to unlimited on the HPC cluster. Apparently he thought stack allocated arrays were magically faster and had giant stack-resident float[] everywhere. How does one even reach that point? No don't tell me, they googled "how to make code faster" and read a stackoverflow post telling them to use stack allocation
|
# ? Jul 12, 2020 21:25 |
|
I'm going to assume that it won't optimize a malloc() call into a stack allocation unless the size is a constant, and it's below a few kB.
|
# ? Jul 12, 2020 22:17 |
QuarkJets posted:How does one even reach that point? Likely some advice that was about working inside the CPU cache by keeping data in the stack, and not understanding the causes and limitations.
|
|
# ? Jul 12, 2020 22:52 |
|
nielsm posted:Likely some advice that was about working inside the CPU cache by keeping data in the stack, and not understanding the causes and limitations. Ding ding ding, we have a winner. That and the usual C programmer lore such as 'inline makes a function faster'.
|
# ? Jul 12, 2020 23:08 |
|
Your colleague was right? Things with stack lifetime ought to be allocated on the stack. It avoids global locks, avoids fragmentation, can't leak, and is generally simpler. The downsides I can think of are:
- It won't run on systems with low stack limits, which are common for historical reasons.
- There's no interface for unmapping pages, so each thread will keep its high-water mark of usage, even if that peak only happened briefly.
If you're making a numerical program to run on a specific system, it's a perfectly reasonable choice. If you need a couple hundred MBs of temp space total, scattered across a bunch of functions, to compute a timestep, taking it from the stack makes more sense than either doing a bunch of heap calls every step (forcing every thread to contend on a lock) or building some thing to cache memory allocations across timesteps. System administration shouldn't care about anything besides how much RAM you're using, not whether you call it heap or stack. What problem is a low stack limit trying to solve?
|
# ? Jul 13, 2020 00:03 |
|
Embedded isn’t dead and C is by far its most popular language. Multiple kilobytes would be a very generous stack allocation for some systems I’ve worked on. Do compilers have tuning options for turning off this optimization that substitutes stack for heap?
|
# ? Jul 13, 2020 01:20 |
|
Whether to allocate something on the stack is an engineering decision. It has to be answered for each use case. I work on a product where if we add an extra local variable to certain hot paths for recursion then we end up causing customers to hit stackoverflow errors. And we are reluctant to try and make some of the code less recursive because it’s easier to make sense of what was happening in the crash dumps when things are on the call stack instead of an explicit stack. So we have resorted to moving some things away from being value types to reference types and so on.
|
# ? Jul 13, 2020 01:33 |
|
Even from just a performance perspective, making larger stack allocations tends to sacrifice some of the locality benefits of the stack. That's rarely going to have enough of an impact to cancel the overheads of actually allocating on the heap, but the choice isn't always between "always allocate on the heap" and "always allocate on the stack", and aggressively stack-allocating huge buffers in an effort to never heap-allocate can definitely be a loss relative to stack-allocating a small buffer and then falling over to a heap allocation when the size gets too high. Also, eager heap-allocation can be beneficial if you're going to move the array elsewhere anyway. Building an array in a stack allocation and then copying it to the heap is less efficient than just building it on the heap to begin with.
|
# ? Jul 13, 2020 03:27 |
|
Foxfire_ posted:Your colleague was right?
- You risk clobbering the stack of the next thread.
- You also end up with thread-local storage, which may be good or bad.
If your stack allocation is so large it requires changing the compiler flags or your pthread_attr_setstacksize equivalent, then it's a good situation for a (possibly thread-specific) custom linear allocator. If the allocation is not that large, then you might as well use the free linear allocator that the language provides for you instead.
|
# ? Jul 13, 2020 04:15 |
|
I like how the JVM explicitly allows stack frames to be allocated on the heap, so stack overflows are more an effect of the JVM getting sick of your poo poo than it actually running out of memory.
|
# ? Jul 13, 2020 05:53 |
|
With giant arrays, I meant on the order of 100 GBs. So yeah, you're kind of throwing away your locality around that giant stack frame. I'm not sure his intention was to eat a TLB miss every time he wanted to do something with those arrays.
|
# ? Jul 13, 2020 20:15 |
|
I make too many enums and their names all end in "type" or "category" or "kind" if i am feeling saucy
|
# ? Jul 13, 2020 22:25 |
|
Hammerite posted:I make too many enums and their names all end in "type" or "category" what language? java? it's java, isn't it
|
# ? Jul 14, 2020 02:19 |
|
"kind" is a pretty reasonable and common suffix for enums in many languages.
|
# ? Jul 14, 2020 02:24 |
|
brap posted:"kind" is a pretty reasonable and common suffix for enums in many languages. yeah, but three different suffixes combined with a "too many" complaint makes me think of java first
|
# ? Jul 14, 2020 02:28 |
|
brap posted:"kind" is a pretty reasonable and common suffix for enums in many languages. It's not a good description for the compiler, or really computers in general though, so it's not a great choice.
|
# ? Jul 14, 2020 05:40 |
|
brap posted:"kind" is a pretty reasonable and common suffix for enums in many languages. AbstractSingletonProxyFactoryBeanTypeCategoryKind
|
# ? Jul 14, 2020 06:05 |
|
brap posted:"kind" is a pretty reasonable and common suffix for enums in many languages. Congratulations, you've invented reverse Hungarian notation. It's just as useful as regular Hungarian notation.
|
# ? Jul 14, 2020 06:35 |
|
Hammerite posted:I make too many enums and their names all end in "type" or "category" This is not actually a problem. When you have a variable that represents what type or category or so on that something falls into, it's natural to model that as an enum. Other things do not at all fit being modelled by enums, so you will not use enums for them. Ipso facto, most enums will naturally be called "FooType" or whatever.
|
# ? Jul 14, 2020 06:39 |
|
And don't forget to name all your classes "FooClass". I fall into this trap a lot, and have a mental litmus test for whether the name in question is dumb: Remove the suffix "type" or "category" and if it doesn't make sense (given the overarching namespace), it's a bad name.
|
# ? Jul 14, 2020 09:14 |
|
DaTroof posted:what language? java? it's java, isn't it C#
|
# ? Jul 14, 2020 10:04 |
|
SAVE-LISP-AND-DIE posted:And don't forget to name all your classes "FooClass". The worst class name is FooManager. Manager classes are where the single responsibility principle goes to die.
|
# ? Jul 14, 2020 13:15 |
|
"kind" is a bad choice in certain languages.
|
# ? Jul 14, 2020 14:01 |
|
At my current company, there's a common pattern of putting common logic in classes that have a "Helper" suffix, and it REALLY bothers me. Just did a search, we have A LOT of classes like that in our main repo:
|
# ? Jul 14, 2020 14:04 |
|
sunaurus posted:At my current company, there's a common pattern of putting common logic in classes that have a "Helper" suffix, and it REALLY bothers me. Just did a search, we have A LOT of classes like that in our main repo: The horror, everyone knows you should always use the "Utils" suffix for this type of stuff.
|
# ? Jul 14, 2020 15:11 |
|
SAVE-LISP-AND-DIE posted:And don't forget to name all your classes "FooClass". Usually the "FooType"s and "FooCategory"s pop up because there's already a "Foo" class so you can't just call it that.
|
# ? Jul 14, 2020 16:31 |
|
SupSuper posted:Usually the "FooType"s and "FooCategory"s pop up because there's already a "Foo" class so you can't just call it that. That would be solved by calling it "FooClass" instead though. If you need two classes named Foo in the same namespace, you might need to rethink your taxonomy or get a thesaurus or something.
|
# ? Jul 14, 2020 16:39 |
|
More namespaces.
|
# ? Jul 14, 2020 16:43 |
|
Jabor posted:This is not actually a problem. Assuming we're still talking about Java (or C#, or any other modern OO language), using enums to represent types is a bit of a code smell. Your language already has a robust type system, and can easily represent "this type is a subtype of another type." If you're writing enums, then what you're dealing with isn't so dynamic that you have to handle it at runtime. Enums are still useful for more mutable things - state is probably the biggest example. But, if you're not interfacing with a crappy or missing type system (say, because you're deserializing something off the wire) then why not just use the tool you've already got?
|
# ? Jul 14, 2020 17:24 |
|
|
sunaurus posted:At my current company, there's a common pattern of putting common logic in classes that have a "Helper" suffix, and it REALLY bothers me. Just did a search, we have A LOT of classes like that in our main repo: Pffft, call me when every "Helper" class (which is every class that's not directly tied to a UI) has been separated into an interface and implementation class, but each class has the prefix "EF" (Entity Framework) whether it uses EF or not. EFBarHelper, EFFooHelper, EFBazHelper.
|
# ? Jul 14, 2020 18:08 |