|
Xarn posted:So apparently if I don't override base's method in derived type, then passing Derived::method into template that infers the class type gives me Base as the type, see https://godbolt.org/z/K87eExxa6 Would C++ code:
Incidentally, I also tried to see if I could SFINAE the compiler into using the desired type with C++ code:
Xerophyte fucked around with this message at 00:10 on Mar 7, 2022 |
# ? Mar 7, 2022 00:01 |
|
|
Xarn posted:I am not sure I understand what you mean. static_cast<void (Derived::*)()>(&Derived::method) If that’s not workable, then I think you’re stuck.
|
# ? Mar 7, 2022 01:17 |
|
Yeah, I am stuck then. The entry point to this mess looks like MAGIC_MACRO( Derived::method ) and I don't think there is a way to split out "derived" in preprocessor. Oh well, it isn't actually important, just interesting.
|
# ? Mar 8, 2022 13:19 |
Change the macro to take two parameters and do a regex search replace across the code base.
|
|
# ? Mar 8, 2022 13:33 |
|
Make sure to pass -funstable to clang to enable new fun stable features.
|
# ? Mar 8, 2022 18:12 |
|
Xarn posted:Yeah, I am stuck then. The entry point to this mess looks like MAGIC_MACRO( Derived::method ) and I don't think there is a way to split out "derived" in preprocessor. Would you be able to do something like this when creating your classes? https://godbolt.org/z/4x4aW3Y68 EDIT: Actually, that would break Base. Nalin fucked around with this message at 21:34 on Mar 8, 2022 |
# ? Mar 8, 2022 21:21 |
|
I stumbled upon an unlovely quirk of Microsoft's Windows C compiler (CL) at work recently. The /wonnnn compiler flag is supposed to "Reports the compiler warning that is specified by nnnn only once." The /wennnn compiler flag is supposed to "Treats the compiler warning that is specified by nnnn as an error." What's weird is that when I switched from /wo4189 to /we4189, I'm suddenly getting dozens of new warnings (errors) of the expected type.

Does anyone understand why months of clean builds with the /wo4189 flag never reported these "new" warnings (errors)? Some of the issues have been in the code base for years. It can't just be that people were ignoring warnings; I added the /WX compiler flag months ago to force all warnings to be treated as errors and we were still missing these warnings somehow. I've already found lots of new bugs in the product from hunting down these warnings, and I'm less than a third of the way through them.

The only thing I can think of is that /wo works differently than I expected. I expected it to report one instance of a warning per clean compile.
|
# ? Mar 16, 2022 16:47 |
|
Is it possible that the compiler is first encountering that warning in some code that isn't covered by WX, and then when it encounters it later the compiler says "I've already seen this one so I'm going to ignore it"? I'll be honest I'm not really sure why you'd use warn-once at the same time as you're using warnings-are-errors.
|
# ? Mar 16, 2022 23:41 |
|
LLSix posted:I stumbled upon an unlovely quirk of Microsoft's windows C compiler (CL) at work recently. I mean, was it reporting the warning only once? If so, why would it report the subsequent warnings? That should work like #pragma warning( once : 4189 ) and emit the warning once. As for /wx, I've gotten burned because that option is both for the linker and the compiler. You sure /wx was passed as a compiler arg, not a linker arg? Bruegels Fuckbooks fucked around with this message at 02:34 on Mar 17, 2022 |
# ? Mar 16, 2022 23:44 |
|
Jabor posted:Is it possible that the compiler is first encountering that warning in some code that isn't covered by WX, and then when it encounters it later the compiler says "I've already seen this one so I'm going to ignore it"?

That seems like a plausible theory. Unfortunately, the command line output from make was cl blah blah blah /wo4189 /WX filename filename filename. The makefile was set up as CFLAGS = blah blah blah /wo4189 /WX, so I'm reasonably confident the warning wasn't being applied somewhere without warnings-as-errors. I'm not 100% confident, because our makefiles are a rabbit warren of nested makefiles and I can't be certain someone didn't manage to slip something past me. I agree that combining warn-once and warnings-are-errors was a mistake and I'm in the process of fixing it (warn-once predated me and I wrongly assumed it was safe to leave as-is).

Bruegels Fuckbooks posted:I mean, was it reporting the warning only once? If so, why would it report the subsequent warnings? That should work like #pragma warning( once : 4189 ) and emit the warning once.

I expected warn-once to work like this:
- update makefile to treat warnings as errors
- make clean
- make
- get 1 instance of the warning
- warning becomes an error because of /WX
- build fails
- fix the warning
- make clean
- make
- get 1 instance of the warning (a different instance, because there are like 20 places that trigger it)
- warning becomes an error because of /WX
- repeat until the build passes and there are no warnings

What happened instead:
- update makefile to treat warnings as errors
- make clean
- make
- no warnings from /wo4189
- make clean
- make
- no warnings
- repeat make clean / make cycles with no new warnings for months
- change from warn-once (/wo) to warning-as-error (/we), and suddenly get 20 new instances of warning 4189

Bruegels Fuckbooks posted:As for /wx, I've gotten burned because that option is both for the linker and the compiler. You sure /wx was passed as a compiler arg, not a linker arg?
|
# ? Mar 17, 2022 16:53 |
|
I had asked some Linux kernel module coding stuff in the Linux thread and got nothing, so I think maybe I should ask in here instead. Suppose that I am in one kernel module, and I want to tell if another one is loaded. What would I do? I want to be able to test this to turn on some new code. As it stands, I'd just get a bunch of unresolved symbols if I try to load my module without the other one when it starts trying to do stuff with it. I want to try to do this more elegantly without just assuming the user has inserted the new module yet.
|
# ? Mar 23, 2022 00:14 |
|
modprobe loads the dependencies for you.
|
# ? Mar 23, 2022 00:16 |
|
Can you read /proc/modules?
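For a userspace check (not from inside a module), each line of /proc/modules starts with the module name, so an anchored grep works; "fuse" is just a placeholder module name here:

```shell
# Is the module loaded? (placeholder name; also visible via lsmod,
# which itself just formats /proc/modules)
mod=fuse
if grep -q "^${mod} " /proc/modules 2>/dev/null; then
    echo "${mod} is loaded"
else
    echo "${mod} is not loaded"
fi
```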
|
# ? Mar 23, 2022 00:19 |
|
pseudorandom name posted:modprobe loads the dependencies for you. This is the best answer you’ll get I think.
|
# ? Mar 23, 2022 00:23 |
|
This might not be relevant, but you might also want to express the dependency in your module's Kconfig.
|
# ? Mar 23, 2022 21:03 |
|
I'm having some trouble cross-compiling 32-bit C++ code and am not sure what the problem is. I'm using Docker for the build environment, with Rocky Linux as the base since that's what we need to target. Here's a streamlined Dockerfile which exhibits the problem: code:
code:
If I repeat the same test on a Rocky Linux VM instead of Docker, it compiles just fine. And I can't see any differences in terms of header locations or compiler versions. I was hoping to examine the default include path, so I found this: https://stackoverflow.com/questions/17939930/finding-out-what-the-gcc-include-path-is But this only shows me what the include path is for 64-bit targets. I'm not sure how to make it show the 32-bit include path. Any ideas? (Edit: Further streamlined the Dockerfile. Same result.) Olly the Otter fucked around with this message at 22:32 on Mar 23, 2022 |
# ? Mar 23, 2022 22:24 |
When using the methods in that SO answer, make sure you're calling the appropriate cross-compiler and not the system's general self-targeting compiler.
|
|
# ? Mar 24, 2022 11:15 |
|
Re detecting modules: There's apparently some black magic somebody else in the team knows about where I can test for certain exported symbols, but I haven't seen it yet so it could be a huge myth.
|
# ? Mar 24, 2022 16:37 |
|
Olly the Otter posted:I'm having some trouble cross-compiling 32-bit C++ code and am not sure what the problem is. Not sure why it's different. Adding "libstdc++-devel" to the Dockerfile works for me: code:
|
# ? Mar 27, 2022 17:21 |
|
Rocko Bonaparte posted:Re detecting modules: There's apparently some black magic somebody else in the team knows about where I can test for certain exported symbols, but I haven't seen it yet so it could be a huge myth. The black magic is kallsyms_lookup_name, which was disabled in kernels around ~5.7, but the stuff we run still has it enabled. It's far from the least secure thing, but it's for testing stuff where the testers all have physical access and could technically do even more invasive stuff, so I guess I get away with that.
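A sketch of what that black magic presumably looks like. Treat this as pseudocode: it only builds inside a kernel tree, it only works on kernels that still export kallsyms_lookup_name (pre-~5.7), and the probed symbol name is a placeholder:

```c
/* Kernel-module sketch, not buildable or testable standalone. */
#include <linux/kallsyms.h>
#include <linux/types.h>

static bool other_module_loaded(void)
{
        /* "other_mod_do_stuff" is a placeholder for some symbol the
         * other module is known to export; a nonzero address means
         * the symbol (and so presumably the module) is present. */
        return kallsyms_lookup_name("other_mod_do_stuff") != 0;
}
```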
|
# ? Mar 28, 2022 20:09 |
|
CmdrRiker posted:Adding "libstdc++-devel" to the Dockerfile works for me Yeah, you're right, that does fix it. I guess that means it needs both the 64-bit and 32-bit development libraries in order to build a 32-bit target. Seems odd that it would, but now I know. Thanks!
|
# ? Mar 28, 2022 21:41 |
|
I'm pulling my hair out here trying to figure out why an application won't build inside of conda, but builds just fine with system libraries. It's a brand new conda environment from the conda-forge channel with nothing in it but python (3.10 by default) and r-devtools. I cannot get the macros in <cinttypes> to work inside of the conda environment: https://www.cplusplus.com/reference/cinttypes/ code:
code:
I've been able to power through this by manually adding -D__STDC_FORMAT_MACROS to the Makevars, which feels wrong, but starts making the macros work when building in Conda. I guess it's also possible that something on the conda include path is including <inttypes.h> before this, and without __STDC_FORMAT_MACROS set then it's not applying the macros, which would explain why when I set it, it works. I didn't see anything including inttypes.h directly when I looked at all of the includes with g++ -H, though!
|
# ? Mar 31, 2022 17:22 |
|
I'm working on some python bindings for the CGAL library, which makes heavy use of boost named parameters. Does anyone know if it's possible to construct named parameters at runtime (i.e., translating python's kwargs), or is it an only-at-compile-time template magic sort of thing I'll never really understand?
|
# ? Mar 31, 2022 19:10 |
|
OK, I figured out some of my conda mess! Before #including <cinttypes> , another header in the include chain was including <inttypes.h>, the C version of this header. The system <inttypes.h> on Ubuntu 20.04 looks like this, matching a recent glibc: https://github.com/bminor/glibc/blob/master/stdlib/inttypes.h But the <inttypes.h> included with conda includes this! code:
But when I check conda info, I see that it self-reports that it's using glibc 2.31, which is pretty recent. Is it sane for an environment to have old headers and a new .so for glibc? Because it looks like that's what's happening. Also, I have questions about the C99 / C11 standards now, because C99 says that the macros in <inttypes.h> shouldn't work in C++ unless __STDC_FORMAT_MACROS is defined, but it looks like C11 reverted that change and now they always work. Wouldn't this make it impossible for a compiler to fully implement C99 now that the headers are not following the C99 spec?
|
# ? Mar 31, 2022 19:54 |
|
As to the last part, you can use the preprocessor to check which language version you are compiling against. For the other parts, you are on your own.
|
# ? Mar 31, 2022 20:18 |
|
Zoracle Zed posted:I'm working on some python bindings for the CGAL library, which makes heavy use of boost named parameters. Does anyone know if it's possible to construct named parameters at runtime (i.e., translating python's kwargs), or is it a only-at-compile time template magic sort of thing I'll never really understand? Not sure what you mean by named parameters in terms of C++. If you're talking about template parameters, then you pretty much have to pre-instantiate all possible templates on the C++ side such that the python bindings have something to call. You can get some delightfully large binaries that way, such as TensorFlow's 1.6Gigs.
|
# ? Mar 31, 2022 20:31 |
|
Zoracle Zed posted:I'm working on some python bindings for the CGAL library, which makes heavy use of boost named parameters. Does anyone know if it's possible to construct named parameters at runtime (i.e., translating python's kwargs), or is it a only-at-compile time template magic sort of thing I'll never really understand? Trying to do this automatically is not going to end well. Boost.parameter shouldn't really make writing bindings worse than if the functions just took normal arguments, but it's definitely not going to make it easier.
|
# ? Mar 31, 2022 21:23 |
|
Can someone help me understand how to tell qmake to use a particular C++ standard when compiling? I created a new project in Qt Creator using the Qt Console Application template, so it gave me this stock .pro file for use with qmake: code:
code:
code:
code:
code:
|
# ? Apr 5, 2022 21:28 |
|
We had to write a cmake project for the brief bit of code that needed to be C++98 in our codebase. I also couldn’t figure out how to make Qt stop using -O2 in all cases so a bunch of stuff wasn’t -O3 because using both flags results in -O2. ¯\_(ツ)_/¯ edit: which, I guess I should say, is probably the correct solution anyway since even Qt is moving away from qmake. vote_no fucked around with this message at 01:55 on Apr 6, 2022 |
# ? Apr 6, 2022 01:47 |
|
Olly the Otter posted:Can someone help me understand how to tell qmake to use a particular C++ standard when compiling? I don’t know what QT is doing, but it sounds like it’s not handling the -= behavior as you’d expect. Instead of fiddling around with that, why don’t you just set QMAKE_CXXFLAGS directly? code:
The other thing you could do is message the QT people directly. Back when I had a licensed copy of QT they were eager to help and responded within a few days both times I messaged them.
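For reference, a sketch of the direct-flag approach in the .pro file (qmake syntax; c++17 is just an example standard, and -std= assumes a gcc/clang-style compiler):

```
# Drop whatever standard the c++* CONFIG entries would pick,
# then pass the flag to the compiler explicitly:
CONFIG -= c++11 c++14 c++17
QMAKE_CXXFLAGS += -std=c++17
```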
|
# ? Apr 7, 2022 17:04 |
|
I have some gross poo poo in the Linux kernel with extern inline. I'm having to sanitize this, but let's say I am working against a header in the Linux kernel arch path with no associated .c file (and I don't think I can make one): code:
A wrinkle to this is that the Linux kernel has been built as C89 forever, but there is a push to switch to C11 in 5.18, which is what I'm toying with here. I'm too stupid to know how I could tell, but the confusion over an extern inline is giving off that smell. I understand that starting in C99, extern inline was treated differently and multiple definitions can result, but I have to admit I don't really understand the details, what may be applicable here, and how I could work around it. Then there's the effect __always_inline has on all this.
|
# ? Apr 8, 2022 05:14 |
|
Rocko Bonaparte posted:I have some gross poo poo in the Linux kernel with extern inline. I'm having to sanitize this, but let's say I am working against a header in the Linux kernel arch path with no associated .c file (and I don't think I can make one): extern inline in gnu89 doesn't generate an externally visible symbol, but it does in c99 and beyond. You probably want static inline or __attribute__((gnu_inline)) (or add __attribute__((gnu_inline)) to the __always_inline macro)
|
# ? Apr 8, 2022 05:25 |
|
Do you know what cascade of errors would lead to the multiple definition problem here? I'm going to try your stuff tomorrow but I'm trying to come to terms with what the hell happened. A speculative theory out of my own butt: A C89 build these two places would generate the symbol but it wouldn't be something exported in linkage. However, a C99 compiler rolls through and now they're effectively publicly visible (not the C++ kind of public, talking C here).
|
# ? Apr 8, 2022 06:57 |
|
Linux redefines inline to include __attribute__((gnu_inline))
|
# ? Apr 8, 2022 07:11 |
|
I thought I'd at least update that I actually see -std=gnu11 being set in the root Makefile these days for the kernel. I guess they actually did it.
|
# ? Apr 12, 2022 00:24 |
|
What is the correct way to tell vcpkg to reuse prebuilt libraries when building a project in manifest mode?

Background: I have a project that I'm building with cmake and using vcpkg.json, by specifying the -DCMAKE_TOOLCHAIN_FILE parameter to cmake. The project is built in a container that I prepare beforehand, which contains all the build tools and vcpkg, installed and bootstrapped. Since this container will only be used to build this particular project, I am also using vcpkg install to install the libraries that my project depends on, in an attempt to reduce my application's build time. However... it, of course, doesn't work that way.

From what I could see there are different options for caching:
- VCPKG_BINARY_SOURCES as an env variable before invoking cmake, paired with --binarysource=<source> at install time
- Pass in -DVCPKG_INSTALLED_DIR to cmake to point to the installed folder of the container-wide vcpkg

Does it matter which method is used? What's the difference between the 2? Would any of them work just fine? All that I want is for the prebuilt libraries to be used when my application is building.
|
# ? Apr 16, 2022 17:53 |
|
vcpkg install is completely distinct from manifest mode. If your filesystem is persistent, vcpkg should reuse locally built binaries on its own AS LONG AS THE ABI IS THE SAME. Note that this means vcpkg's ABI, which is a hash of the package version, package portfile, vcpkg's triplet, compiler binary, and a couple of other things. If it isn't, then you need either a remote cache, or some manual extra steps to prepare the cache before build.
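A sketch of the file-based binary cache approach (the cache path is a placeholder): export the variable in the image so both the warm-up install and the later manifest-mode install resolve against the same cache.

```shell
# Point vcpkg's binary cache at a directory baked into the image.
# /opt/vcpkg-cache is a placeholder path.
export VCPKG_BINARY_SOURCES="clear;files,/opt/vcpkg-cache,readwrite"

# At image build time, warm the cache (one source per flag on the CLI):
#   vcpkg install --binarysource=clear \
#                 --binarysource=files,/opt/vcpkg-cache,readwrite
# At application build time the exported variable is already in effect,
# so the manifest-mode install triggered through the CMake toolchain file
# restores from the cache instead of rebuilding, provided the ABI hash
# (triplet, compiler, portfile, version) is unchanged.
echo "$VCPKG_BINARY_SOURCES"
```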
|
# ? Apr 16, 2022 20:31 |
|
Xarn posted:vcpkg install is completely distinct from manifest mode The filesystem was persistent, as in: at the container's creation, vcpkg is cloned, bootstrapped, and told to install a set of libraries, then the container is committed and saved into the registry. That's what I thought those "manual extra steps to prepare the cache before build" would mean. Then, at the application's build time, the image is pulled from the registry, the git repo is cloned and the application is built; everything goes away after the build artifact is saved. The compiler, standard libs, everything is the same, so I would have assumed that the ABI hasn't changed. Why would it? However, without specifying VCPKG_BINARY_SOURCES and/or -DVCPKG_INSTALLED_DIR, it was happening that vcpkg would sometimes try to build the libraries that the application depended on, completely ignoring the prebuilt ones. I'm not sure what could cause this, as the underlying container/system did not change. At the moment I'm specifying both and it seems to work, as everything is reused.
|
# ? Apr 16, 2022 21:27 |
|
Did gcc's array bounds checker go nutso around the 10.x time frame? It's highlighting just about every kernel-level list_entry call I have here as being out of bounds. As far as I can tell, it thinks a list_head being initialized in global scope might be NULL. I infer this because I willy-nilly did NULL tests around the reference and that seemed to make it happy, but the code is grosser than putting it in a #pragma sandwich with an essay about what the hell is going on.
|
# ? Apr 21, 2022 19:35 |
|
|
Are there any objects in the C++20 std library that guarantee the functionality std::vector<bool> had in previous versions? It used to guarantee to specialize to one bit per element, but in C++20 it's left as an optional implementation-specific choice to do so. I'm aware of std::bitset, but that has a compile-time size; I need it to be runtime-sized. It probably wouldn't be too hard to make my own, but I'm not confident in my ability to write an operator[] that won't have noticeable slowdown when accessed three billion times or so.

e: I was hoping to avoid a dependency on Boost, but it does look like it has exactly what I want. Maybe I'll just take a peek at their implementation to make sure I don't do anything stupid in my own version. cheetah7071 fucked around with this message at 19:36 on Apr 22, 2022 |
# ? Apr 22, 2022 19:23 |