Xerophyte
Mar 17, 2008

This space intentionally left blank

Xarn posted:

So apparently if I don't override the base's method in the derived type, then passing Derived::method into a template that infers the class type gives me Base as the type, see https://godbolt.org/z/K87eExxa6

Is there a way to get this to infer Derived? Keep in mind that this is a very boiled-down reproducer; the actual code involves multiple layers of preprocessor :v: so it isn't practical to pass the Derived type separately.

Would
C++ code:
int main() {
    makeTestInvoker<Derived>(&Derived::vmethod);
    makeTestInvoker<Derived>(&Derived::another_vmethod);
}
be acceptable? E: I guess not if there's a bunch of preprocessor between you and the operating end of this.

Incidentally, I also tried to see if I could SFINAE the compiler into using the desired type with
C++ code:
template<typename C,
         typename std::enable_if<!std::is_base_of<C, Base>::value>::type* = nullptr>
void makeTestInvoker( void (C::*testAsMethod)() ) {
    std::cout << typeid(C).name() << '\n';
}
to ban the template from being instantiated if C is equal to Base. The conclusion to that was 1: nope and 2: I still hate SFINAE. (E: on rereading this post I realize this should use std::is_same, I just started thinking of the check in terms of base classes due to the initial problem statement. Oh well, no difference here, and I hate SFINAE in either case).
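E2: for reference, here's the is_same version of the check, sketched against a minimal Base/Derived pair standing in for the godbolt reproducer. It excludes the overload exactly like the is_base_of version does, with the same conclusion: the ban works, but it doesn't make the compiler deduce Derived.
C++ code:
#include <iostream>
#include <type_traits>
#include <typeinfo>

struct Base { void method() {} };
struct Derived : Base {};

// Excluded from overload resolution whenever C deduces as exactly Base.
template<typename C,
         typename std::enable_if<!std::is_same<C, Base>::value>::type* = nullptr>
void makeTestInvoker( void (C::*)() ) {
    std::cout << typeid(C).name() << '\n';
}

int main() {
    // &Derived::method is still a void (Base::*)(), so C deduces as Base and
    // the call below fails to compile rather than printing Derived:
    // makeTestInvoker(&Derived::method);
}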

Xerophyte fucked around with this message at 00:10 on Mar 7, 2022


rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

Xarn posted:

I am not sure I understand what you mean.

static_cast<void (Derived::*)()>(&Derived::method)

If that’s not workable, then I think you’re stuck.
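Roughly this, assuming a minimal Base/Derived pair shaped like the reproducer:
C++ code:
#include <iostream>
#include <typeinfo>

struct Base { void method() {} };
struct Derived : Base {};

template<typename C>
void makeTestInvoker( void (C::*)() ) {
    std::cout << typeid(C).name() << '\n';
}

int main() {
    makeTestInvoker(&Derived::method);  // deduces C = Base
    // The cast converts the inherited void (Base::*)() to void (Derived::*)(),
    // so deduction sees a Derived member pointer:
    makeTestInvoker(static_cast<void (Derived::*)()>(&Derived::method));  // C = Derived
}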

Xarn
Jun 26, 2015
Yeah, I am stuck then. The entry point to this mess looks like MAGIC_MACRO( Derived::method ) and I don't think there is a way to split out "Derived" in the preprocessor.

Oh well, it isn't actually important, just interesting.

nielsm
Jun 1, 2009



Change the macro to take two parameters and do a regex search-and-replace across the code base.
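Something like this, sketched against the MAGIC_MACRO entry point mentioned above (names are placeholders for whatever the real macro expands to):
C++ code:
template<typename C>
void makeTestInvoker( void (C::*)() );

// Hypothetical two-parameter form: the class is named explicitly instead of
// being deduced from the member pointer.
#define MAGIC_MACRO(CLASS, METHOD) makeTestInvoker<CLASS>(&CLASS::METHOD)

// After the regex pass, call sites change from MAGIC_MACRO( Derived::method )
// to MAGIC_MACRO( Derived, method ).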

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
Make sure to pass -funstable to clang to enable new fun stable features.

Nalin
Sep 29, 2007

Hair Elf

Xarn posted:

Yeah, I am stuck then. The entry point to this mess looks like MAGIC_MACRO( Derived::method ) and I don't think there is a way to split out "Derived" in the preprocessor.

Oh well, it isn't actually important, just interesting.

Would you be able to do something like this when creating your classes?

https://godbolt.org/z/4x4aW3Y68

EDIT: Actually, that would break Base.

Nalin fucked around with this message at 21:34 on Mar 8, 2022

LLSix
Jan 20, 2010

The real power behind countless overlords

I stumbled upon an unlovely quirk of Microsoft's Windows C compiler (CL) at work recently.

The /wonnnn compiler flag is supposed to "Reports the compiler warning that is specified by nnnn only once."

The /wennnn compiler flag is supposed to "Treats the compiler warning that is specified by nnnn as an error."

What's weird is that when I switched from
/wo4189
to
/we4189

I am suddenly getting dozens of new warnings (errors) of the expected type.

Does anyone understand why months of clean builds with the /wo4189 flag never reported these "new" warnings (errors)? Some of the issues have been in the code base for years. It can't just be that people were ignoring warnings; I added the /WX compiler flag months ago to force all warnings to be treated as errors, and we were still missing these warnings somehow. I've already found lots of new bugs in the product from hunting down these warnings, and I'm less than a third of the way through them.

The only thing I can think of is that /wo works differently than I expected. I expected it to report one instance of a warning per clean compile.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
Is it possible that the compiler is first encountering that warning in some code that isn't covered by WX, and then when it encounters it later the compiler says "I've already seen this one so I'm going to ignore it"?

I'll be honest I'm not really sure why you'd use warn-once at the same time as you're using warnings-are-errors.

Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

LLSix posted:

I stumbled upon an unlovely quirk of Microsoft's Windows C compiler (CL) at work recently.

The /wonnnn compiler flag is supposed to "Reports the compiler warning that is specified by nnnn only once."

The /wennnn compiler flag is supposed to "Treats the compiler warning that is specified by nnnn as an error."

What's weird is that when I switched from
/wo4189
to
/we4189

I am suddenly getting dozens of new warnings (errors) of the expected type.

Does anyone understand why months of clean builds with the /wo4189 flag never reported these "new" warnings (errors)? Some of the issues have been in the code base for years. It can't just be that people were ignoring warnings; I added the /WX compiler flag months ago to force all warnings to be treated as errors, and we were still missing these warnings somehow. I've already found lots of new bugs in the product from hunting down these warnings, and I'm less than a third of the way through them.

The only thing I can think of is that /wo works differently than I expected. I expected it to report one instance of a warning per clean compile.

I mean, was it reporting the warning only once? If so, why would it report the subsequent warnings? That should work like #pragma warning( once : 4189 ) and emit the warning once.

As for /WX, I've gotten burned because that option exists for both the linker and the compiler. You sure /WX was passed as a compiler arg, not a linker arg?
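If you want to poke at the difference in isolation, the source-level pragmas make for a quick experiment (a sketch; C4189 is a level-4 warning, so compile with /W4, and only one of the two pragmas should be active at a time):
C++ code:
// #pragma warning( once : 4189 )   // like /wo4189: report C4189 at most once
#pragma warning( error : 4189 )     // like /we4189: treat C4189 as an error

void f() {
    int unused = 42;  // C4189: local variable is initialized but not referenced
}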

Bruegels Fuckbooks fucked around with this message at 02:34 on Mar 17, 2022

LLSix
Jan 20, 2010

The real power behind countless overlords

Jabor posted:

Is it possible that the compiler is first encountering that warning in some code that isn't covered by WX, and then when it encounters it later the compiler says "I've already seen this one so I'm going to ignore it"?

I'll be honest I'm not really sure why you'd use warn-once at the same time as you're using warnings-are-errors.

That seems like a plausible theory. Unfortunately, the command line output from make was
cl blah blah blah /wo4189 /WX filename filename filename.

The makefile was set up as
CFLAGS = blah blah blah /wo4189 /WX

so I'm reasonably confident /wo4189 wasn't being applied anywhere without warnings-as-errors. I'm not 100% confident because our makefiles are a rabbit warren of nested makefiles, and I can't be certain someone didn't manage to slip something past me.

I agree that combining warn-once and warnings-are-errors was a mistake and I'm in the process of fixing it (warn-once predated me and I wrongly assumed it was safe to leave as-is).


Bruegels Fuckbooks posted:

I mean, was it reporting the warning only once? If so, why would it report the subsequent warnings? That should work like #pragma warning( once : 4189 ) and emit the warning once.
It isn't reporting the warning anywhere as far as I can tell.

I expected warn once to work like this:

update makefile to treat warnings as errors
make clean
make

get 1 instance of the warning
warning becomes an error because of /WX
build fails

fix the warning

make clean
make

get 1 instance of the warning (a different instance of the warning because there's like, 20 places that trigger the warning)
warning becomes an error because of /WX
repeat until build passes and then there are no warnings

What happened instead is

update makefile to treat warnings as errors
make clean
make

no warnings from /wo4189

make clean
make

no warnings

repeat make clean and make cycles with no new warnings for months. Change from warn once (/wo) to warning as error (/we), and suddenly get 20 new instances of warning 4189.

Bruegels Fuckbooks posted:

As for /WX, I've gotten burned because that option exists for both the linker and the compiler. You sure /WX was passed as a compiler arg, not a linker arg?
This is very good advice. I made exactly this mistake for our Linux build, applying -Werror to the linker instead of the compiler. This is the first time I've tried to make warnings errors by directly modifying makefiles, so I could be doing something (else) wrong, but I think CFLAGS is applied implicitly to the compile stage, not the link stage.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I asked about some Linux kernel module coding stuff in the Linux thread and got nothing, so I think maybe I should ask in here instead.

Suppose I am in one kernel module and I want to tell whether another one is loaded. What would I do? I want to be able to test for it to turn on some new code. As it stands, if I load my module without the other one, I just get a bunch of unresolved symbols once it starts trying to do stuff with the other module. I want to handle this more elegantly instead of just assuming the user has already inserted the other module.

pseudorandom name
May 6, 2007

modprobe loads the dependencies for you.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
Can you read /proc/modules?
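From userspace that's just a text file, one line per loaded module with the name as the first field; a quick sketch of checking it that way (from inside the kernel you'd want a real API rather than reading proc, so treat this as illustration only):
C++ code:
#include <fstream>
#include <sstream>
#include <string>

// Userspace sketch: /proc/modules has one line per loaded module, with the
// module name as the first whitespace-separated field.
bool module_loaded(const std::string& name) {
    std::ifstream mods("/proc/modules");
    std::string line;
    while (std::getline(mods, line)) {
        std::istringstream fields(line);
        std::string first;
        if (fields >> first && first == name)
            return true;
    }
    return false;
}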

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

pseudorandom name posted:

modprobe loads the dependencies for you.

This is the best answer you’ll get I think.

Beef
Jul 26, 2004
This might not be relevant, but you might also want to express the dependency in your module's Kconfig.

Olly the Otter
Jul 22, 2007
I'm having some trouble cross-compiling 32-bit C++ code and am not sure what the problem is.

I'm using Docker for the build environment, with Rocky Linux as the base since that's what we need to target. Here's a streamlined Dockerfile which exhibits the problem:

code:
FROM rockylinux:8

RUN yum makecache && yum update -y
RUN yum -y install gcc make gcc-c++ libstdc++-devel.i686 glibc-devel.i686

WORKDIR /src
RUN echo '#include <stdlib.h>' >test1.cpp
RUN g++ -m32 -c test1.cpp
When I try to build the docker image, I get this on the last step:

code:
Step 6/6 : RUN g++ -m32 -c test1.cpp
 ---> Running in 322b9e3072e0
In file included from /usr/include/c++/8/stdlib.h:36,
                 from test1.cpp:1:
/usr/include/c++/8/cstdlib:41:10: fatal error: bits/c++config.h: No such file or directory
 #include <bits/c++config.h>
          ^~~~~~~~~~~~~~~~~~
compilation terminated.
The command '/bin/sh -c g++ -m32 -c test1.cpp' returned a non-zero code: 1
The file it's looking for is indeed present at /usr/include/c++/8/i686-redhat-linux/bits/c++config.h, installed as part of libstdc++-devel.i686. So something must be out of whack with the default 32-bit include paths for g++.

If I repeat the same test on a Rocky Linux VM instead of Docker, it compiles just fine. And I can't see any differences in terms of header locations or compiler versions.

I was hoping to examine the default include path, so I found this: https://stackoverflow.com/questions/17939930/finding-out-what-the-gcc-include-path-is But this only shows me what the include path is for 64-bit targets. I'm not sure how to make it show the 32-bit include path.

Any ideas?

(Edit: Further streamlined the Dockerfile. Same result.)

Olly the Otter fucked around with this message at 22:32 on Mar 23, 2022

nielsm
Jun 1, 2009



When using the methods in that SO answer, make sure you're calling the appropriate cross-compiler and not the system's general self-targeting compiler.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
Re detecting modules: There's apparently some black magic somebody else on the team knows about where I can test for certain exported symbols, but I haven't seen it yet, so it could be a huge myth.

CmdrRiker
Apr 8, 2016

You dismally untalented little creep!

Olly the Otter posted:

I'm having some trouble cross-compiling 32-bit C++ code and am not sure what the problem is.

I'm using Docker for the build environment, with Rocky Linux as the base since that's what we need to target. Here's a streamlined Dockerfile which exhibits the problem:

code:
FROM rockylinux:8

RUN yum makecache && yum update -y
RUN yum -y install gcc make gcc-c++ libstdc++-devel.i686 glibc-devel.i686

WORKDIR /src
RUN echo '#include <stdlib.h>' >test1.cpp
RUN g++ -m32 -c test1.cpp
When I try to build the docker image, I get this on the last step:

code:
Step 6/6 : RUN g++ -m32 -c test1.cpp
 ---> Running in 322b9e3072e0
In file included from /usr/include/c++/8/stdlib.h:36,
                 from test1.cpp:1:
/usr/include/c++/8/cstdlib:41:10: fatal error: bits/c++config.h: No such file or directory
 #include <bits/c++config.h>
          ^~~~~~~~~~~~~~~~~~
compilation terminated.
The command '/bin/sh -c g++ -m32 -c test1.cpp' returned a non-zero code: 1
The file it's looking for is indeed present at /usr/include/c++/8/i686-redhat-linux/bits/c++config.h, installed as part of libstdc++-devel.i686. So something must be out of whack with the default 32-bit include paths for g++.

If I repeat the same test on a Rocky Linux VM instead of Docker, it compiles just fine. And I can't see any differences in terms of header locations or compiler versions.

I was hoping to examine the default include path, so I found this: https://stackoverflow.com/questions/17939930/finding-out-what-the-gcc-include-path-is But this only shows me what the include path is for 64-bit targets. I'm not sure how to make it show the 32-bit include path.

Any ideas?

(Edit: Further streamlined the Dockerfile. Same result.)

Not sure why it's different. Adding "libstdc++-devel" to the Dockerfile works for me:

code:
FROM rockylinux:8

RUN yum makecache && yum update -y
RUN yum -y install gcc make gcc-c++ libstdc++-devel.i686 glibc-devel.i686 libstdc++-devel

WORKDIR /src
RUN echo '#include <stdlib.h>' >test1.cpp
RUN g++ -m32 -c test1.cpp
Tried this because I noticed that "g++ -c test1.cpp" didn't work even without "-m32".

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

Rocko Bonaparte posted:

Re detecting modules: There's apparently some black magic somebody else on the team knows about where I can test for certain exported symbols, but I haven't seen it yet, so it could be a huge myth.

The black magic is kallsyms_lookup_name, which was unexported in kernels around ~5.7, but the stuff we run still has it enabled. It's far from the least secure thing in this setup: it's for testing in environments where the testers all have physical access and could technically do even more invasive stuff, so I guess I get away with it.
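The shape of it, for reference (hypothetical symbol name; any symbol the other module exports would do):
code:
#include <linux/kallsyms.h>
#include <linux/types.h>

/* Sketch for kernels where kallsyms_lookup_name is still usable (pre-~5.7, or
   ones like ours that keep it enabled). "other_module_some_symbol" is made up. */
static bool other_module_present(void)
{
   /* A nonzero address means the symbol, and therefore the module, is loaded. */
   return kallsyms_lookup_name("other_module_some_symbol") != 0;
}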

Olly the Otter
Jul 22, 2007

CmdrRiker posted:

Adding "libstdc++-devel" to the Dockerfile works for me

Yeah, you're right, that does fix it. I guess that means it needs both the 64-bit and the 32-bit development libraries in order to build a 32-bit target. Seems odd that it would, but now I know. Thanks!

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
I'm pulling my hair out here trying to figure out why an application won't build inside of conda, but builds just fine with system libraries. It's a brand new conda environment from the conda-forge channel with nothing in it but python (3.10 by default) and r-devtools.

I cannot get the macros in <cinttypes> to work inside of the conda environment: https://www.cplusplus.com/reference/cinttypes/

code:
#include <cinttypes>
...
error: expected ')' before 'PRId64'
out += std::sprintf(out, "%" PRId64, v);
It compiles just fine against system libraries, so I'm sitting here trying to figure out what's different about conda's set of libraries and headers. It ships its own complete C++ environment, including all of the standard headers. Conda's <cinttypes> looks like this, and so does the system one. When I compile with the system compiler and libraries on Ubuntu 20.04, the macros work and life is good.

code:
#ifndef _GLIBCXX_CINTTYPES
#define _GLIBCXX_CINTTYPES 1

#pragma GCC system_header

#if __cplusplus < 201103L
# include <bits/c++0x_warning.h>
#else

#include <cstdint>

// For 27.9.2/3 (see C99, Note 184)
#if _GLIBCXX_HAVE_INTTYPES_H
# ifndef __STDC_FORMAT_MACROS
#  define _UNDEF__STDC_FORMAT_MACROS
#  define __STDC_FORMAT_MACROS
# endif
# include <inttypes.h>
# ifdef _UNDEF__STDC_FORMAT_MACROS
#  undef __STDC_FORMAT_MACROS
#  undef _UNDEF__STDC_FORMAT_MACROS
# endif
#endif
I can see that _GLIBCXX_HAVE_INTTYPES_H should be defined in c++config.h, but when I examine my whole include path with g++ -H, I don't see c++config.h in it at any point. Should c++config.h be automatically included by the compiler? I haven't found much discussion around this, other than this: https://github.com/nfrechette/sjson-cpp/issues/15. I did find c++config.h way down inside the conda environment at x86_64-conda-linux-gnu/include/c++/9.4.0/x86_64-conda-linux-gnu/bits/c++config.h, and it has #define _GLIBCXX_HAVE_INTTYPES_H 1.

I've been able to power through this by manually adding -D__STDC_FORMAT_MACROS to the Makevars, which feels wrong, but it makes the macros work when building in conda.

I guess it's also possible that something on the conda include path is including <inttypes.h> before this, and without __STDC_FORMAT_MACROS set then it's not applying the macros, which would explain why when I set it, it works. I didn't see anything including inttypes.h directly when I looked at all of the includes with g++ -H, though!
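For reference, the minimal standalone shape of the workaround looks like this; defining the macro before any include is harmless on headers where the guard is gone, but required by the ones where it isn't:
C++ code:
// Workaround repro: with headers that still carry the C99-era guard, PRId64 is
// only visible in C++ if __STDC_FORMAT_MACROS is defined before the first
// include of <inttypes.h> anywhere in the chain. With newer headers the define
// is a no-op.
#define __STDC_FORMAT_MACROS
#include <cinttypes>
#include <cstdint>
#include <cstdio>

int main() {
    std::int64_t v = 42;
    std::printf("%" PRId64 "\n", v);
}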

Zoracle Zed
Jul 10, 2001
I'm working on some Python bindings for the CGAL library, which makes heavy use of boost named parameters. Does anyone know if it's possible to construct named parameters at runtime (i.e., translating Python's kwargs), or is it an only-at-compile-time template magic sort of thing I'll never really understand?

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
OK, I figured out some of my conda mess!

Before #including <cinttypes>, another header in the include chain was including <inttypes.h>, the C version of this header. The system <inttypes.h> on Ubuntu 20.04 looks like this, matching a recent glibc: https://github.com/bminor/glibc/blob/master/stdlib/inttypes.h

But the <inttypes.h> included with conda includes this!
code:
/* The ISO C99 standard specifies that these macros must only be
   defined if explicitly requested.  */
#if !defined __cplusplus || defined __STDC_FORMAT_MACROS
It looks like conda is shipping headers from before the change where that code was removed: https://sourceware.org/git/?p=glibc.git;a=commit;h=1ef74943ce2f114c78b215af57c2ccc72ccdb0b7

But when I check conda info, I see that it self-reports that it's using glibc 2.31, which is pretty recent. Is it sane for an environment to have old headers and a new .so for glibc? Because it looks like that's what's happening.

Also, I have questions about the C99 / C11 standards now, because C99 said that (for C++ compilations) the macros in <inttypes.h> shouldn't be visible unless __STDC_FORMAT_MACROS is defined, but it looks like C11 dropped that requirement and now they always work. Wouldn't this make it impossible for a compiler to fully implement C99 now that the headers are not following the C99 spec?

Xarn
Jun 26, 2015
As to the last part, you can use the preprocessor to check which language version you are compiling against.

For the other parts, you are on your own :v:
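Roughly:
code:
#if defined(__cplusplus)
  /* compiled as C++: check __cplusplus (199711L, 201103L, 201402L, ...) */
#elif defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L
  /* C11 or later */
#elif defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
  /* C99 */
#else
  /* C90: __STDC_VERSION__ may not be defined at all */
#endif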

Beef
Jul 26, 2004

Zoracle Zed posted:

I'm working on some Python bindings for the CGAL library, which makes heavy use of boost named parameters. Does anyone know if it's possible to construct named parameters at runtime (i.e., translating Python's kwargs), or is it an only-at-compile-time template magic sort of thing I'll never really understand?

Not sure what you mean by named parameters in terms of C++. If you're talking about template parameters, then you pretty much have to pre-instantiate all possible templates on the C++ side so that the Python bindings have something to call. You can get some delightfully large binaries that way, such as TensorFlow's 1.6 GB.
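i.e. something like this, with made-up names standing in for the CGAL kernels:
C++ code:
#include <cstddef>

// Hypothetical stand-ins for two CGAL kernel types and a templated entry point.
struct EpickKernel {};
struct EpecKernel {};

template<typename Kernel>
struct Mesh { std::size_t faces = 0; };

template<typename Kernel>
std::size_t face_count(const Mesh<Kernel>& m) { return m.faces; }

// Explicit instantiations: the library now exports a concrete symbol for each
// variant, so a runtime binding layer has something to dispatch to.
template std::size_t face_count<EpickKernel>(const Mesh<EpickKernel>&);
template std::size_t face_count<EpecKernel>(const Mesh<EpecKernel>&);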

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed

Zoracle Zed posted:

I'm working on some Python bindings for the CGAL library, which makes heavy use of boost named parameters. Does anyone know if it's possible to construct named parameters at runtime (i.e., translating Python's kwargs), or is it an only-at-compile-time template magic sort of thing I'll never really understand?

Trying to do this automatically is not going to end well. Boost.Parameter shouldn't really make writing bindings worse than if the functions just took normal arguments, but it's definitely not going to make it easier.

Olly the Otter
Jul 22, 2007
Can someone help me understand how to tell qmake to use a particular C++ standard when compiling?

I created a new project in Qt Creator using the Qt Console Application template, so it gave me this stock .pro file for use with qmake:

code:
QT -= gui

CONFIG += c++11 console
CONFIG -= app_bundle

# You can make your code fail to compile if it uses deprecated APIs.
# In order to do so, uncomment the following line.
#DEFINES += QT_DISABLE_DEPRECATED_BEFORE=0x060000    # disables all the APIs deprecated before Qt 6.0.0

SOURCES += \
        main.cpp

# Default rules for deployment.
qnx: target.path = /tmp/$${TARGET}/bin
else: unix:!android: target.path = /opt/$${TARGET}/bin
!isEmpty(target.path): INSTALLS += target
And here's how it runs the compiler when building:

code:
g++ -c -pipe -g -std=gnu++1z -Wall -Wextra -D_REENTRANT -fPIC -DQT_QML_DEBUG -DQT_CORE_LIB -I../test2 -I. -I../../Qt/6.0.2/gcc_64/include -I../../Qt/6.0.2/gcc_64/include/QtCore -I. -I../../Qt/6.0.2/gcc_64/mkspecs/linux-g++ -o main.o ../test2/main.cpp
But I don't want -std=gnu++1z, I want -std=c++11 for reasons. The internet tells me to try adding something like this to the .pro file:

code:
QMAKE_CXXFLAGS -= -std=gnu++1z
QMAKE_CXXFLAGS += -std=c++11
But that doesn't work, it just does this:

code:
g++ -c -pipe -std=c++11 -g -std=gnu++1z -Wall -Wextra -D_REENTRANT -fPIC -DQT_QML_DEBUG -DQT_CORE_LIB -I../test2 -I. -I../../Qt/6.0.2/gcc_64/include -I../../Qt/6.0.2/gcc_64/include/QtCore -I. -I../../Qt/6.0.2/gcc_64/mkspecs/linux-g++ -o main.o ../test2/main.cpp
I don't really understand where the "-std=gnu++1z" is coming from. I do see this in ~/Qt/6.0.2/gcc_64/mkspecs/common/g++-base.conf:

code:
...
QMAKE_CXXFLAGS_GNUCXX1Z = -std=gnu++1z
...
But that just raises the question... what's telling it to use QMAKE_CXXFLAGS_GNUCXX1Z?

vote_no
Nov 22, 2005

The rush is on.
We had to write a cmake project for the brief bit of code in our codebase that needed to be C++98. I also couldn’t figure out how to make Qt stop passing -O2 in all cases, so a bunch of stuff wasn’t built -O3, since passing both flags results in -O2. ¯\_(ツ)_/¯

edit: which, I guess I should say, is probably the correct solution anyway since even Qt is moving away from qmake.

vote_no fucked around with this message at 01:55 on Apr 6, 2022

LLSix
Jan 20, 2010

The real power behind countless overlords

Olly the Otter posted:

Can someone help me understand how to tell qmake to use a particular C++ standard when compiling?

I created a new project in Qt Creator using the Qt Console Application template, so it gave me this stock .pro file for use with qmake:

code:
QT -= gui

CONFIG += c++11 console
CONFIG -= app_bundle

# You can make your code fail to compile if it uses deprecated APIs.
# In order to do so, uncomment the following line.
#DEFINES += QT_DISABLE_DEPRECATED_BEFORE=0x060000    # disables all the APIs deprecated before Qt 6.0.0

SOURCES += \
        main.cpp

# Default rules for deployment.
qnx: target.path = /tmp/$${TARGET}/bin
else: unix:!android: target.path = /opt/$${TARGET}/bin
!isEmpty(target.path): INSTALLS += target
And here's how it runs the compiler when building:

code:
g++ -c -pipe -g -std=gnu++1z -Wall -Wextra -D_REENTRANT -fPIC -DQT_QML_DEBUG -DQT_CORE_LIB -I../test2 -I. -I../../Qt/6.0.2/gcc_64/include -I../../Qt/6.0.2/gcc_64/include/QtCore -I. -I../../Qt/6.0.2/gcc_64/mkspecs/linux-g++ -o main.o ../test2/main.cpp
But I don't want -std=gnu++1z, I want -std=c++11 for reasons. The internet tells me to try adding something like this to the .pro file:

code:
QMAKE_CXXFLAGS -= -std=gnu++1z
QMAKE_CXXFLAGS += -std=c++11
But that doesn't work, it just does this:

code:
g++ -c -pipe -std=c++11 -g -std=gnu++1z -Wall -Wextra -D_REENTRANT -fPIC -DQT_QML_DEBUG -DQT_CORE_LIB -I../test2 -I. -I../../Qt/6.0.2/gcc_64/include -I../../Qt/6.0.2/gcc_64/include/QtCore -I. -I../../Qt/6.0.2/gcc_64/mkspecs/linux-g++ -o main.o ../test2/main.cpp
I don't really understand where the "-std=gnu++1z" is coming from. I do see this in ~/Qt/6.0.2/gcc_64/mkspecs/common/g++-base.conf:

code:
...
QMAKE_CXXFLAGS_GNUCXX1Z = -std=gnu++1z
...
But that just raises the question... what's telling it to use QMAKE_CXXFLAGS_GNUCXX1Z?

I don’t know what Qt is doing, but it sounds like it’s not handling the -= behavior as you’d expect. Instead of fiddling around with that, why don’t you just set QMAKE_CXXFLAGS directly?

code:
QMAKE_CXXFLAGS = -std=c++11 <copy everything else in there right now but the std flag you don’t like>
One of the main reasons to use Qt is multi-platform support, and doing the direct assignment might break that, so I don’t love this approach, but any method of picking your own std flag is putting you on risky ground already.

The other thing you could do is message the Qt people directly. Back when I had a licensed copy of Qt they were eager to help and responded within a few days both times I messaged them.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I have some gross poo poo in the Linux kernel with extern inline. I'm having to sanitize this, but let's say I am working against a header in the Linux kernel arch path with no associated .c file (and I don't think I can make one):

code:
extern inline int butt(void);
extern __always_inline int butt(void)
{
   return 1;
}

I am getting a multiple definition error against butt in an unrelated .c file. I suspect a chain of includes gets me here but it's not a single degree of separation. I don't include this header directly.

A wrinkle to this is that the Linux kernel has been built as C89 forever, but there is a push to switch to C11 in 5.18, which is what I'm toying with here. I'm too stupid to know how I could tell, but the confusion over an extern inline is giving off that smell. I understand that, particularly starting in C99, extern inline is treated differently and multiple definitions can result, but I have to admit I don't really understand the details, what may be applicable here, and how I could work around it.

Then there's the effect __always_inline has to all this.

b0lt
Apr 29, 2005

Rocko Bonaparte posted:

I have some gross poo poo in the Linux kernel with extern inline. I'm having to sanitize this, but let's say I am working against a header in the Linux kernel arch path with no associated .c file (and I don't think I can make one):

code:
extern inline int butt(void);
extern __always_inline int butt(void)
{
   return 1;
}

I am getting a multiple definition error against butt in an unrelated .c file. I suspect a chain of includes gets me here but it's not a single degree of separation. I don't include this header directly.

A wrinkle to this is that the Linux kernel has been built as C89 forever, but there is a push to switch to C11 in 5.18, which is what I'm toying with here. I'm too stupid to know how I could tell, but the confusion over an extern inline is giving off that smell. I understand that, particularly starting in C99, extern inline is treated differently and multiple definitions can result, but I have to admit I don't really understand the details, what may be applicable here, and how I could work around it.

Then there's the effect __always_inline has to all this.

extern inline in gnu89 doesn't generate an externally visible symbol, but it does in c99 and beyond. You probably want static inline or __attribute__((gnu_inline)) (or add __attribute__((gnu_inline)) to the __always_inline macro).
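Sketched against your header (butt2 is only renamed so both variants fit in one snippet; in practice you'd pick one for the original function):
code:
/* Option 1: internal linkage. Every .c file that includes the header gets its
   own private copy, so there's no multiple-definition conflict at any -std=. */
static inline int butt(void)
{
   return 1;
}

/* Option 2: keep the gnu89 behavior under -std=gnu11. gnu_inline makes
   "extern inline" mean "for inlining only, never emit an external definition". */
extern inline __attribute__((gnu_inline)) int butt2(void)
{
   return 1;
}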

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
Do you know what cascade of errors would lead to the multiple definition problem here? I'm going to try your suggestions tomorrow, but I'm trying to come to terms with what the hell happened.

A speculative theory out of my own butt: a C89 build of these two places would generate the symbol, but it wouldn't be something exported in linkage. However, a C99 compiler rolls through and now they're effectively publicly visible (not the C++ kind of public, talking C here).

pseudorandom name
May 6, 2007

Linux redefines inline to include __attribute__((gnu_inline))

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I thought I'd at least update that I actually see -std=gnu11 being set in the root Makefile these days for the kernel. I guess they actually did it.

Volguus
Mar 3, 2009
What is the correct way to tell vcpkg to reuse prebuilt libraries when building a project in manifest mode?
Background:
I have a project that I'm building with cmake and using vcpkg.json, by specifying -DCMAKE_TOOLCHAIN_FILE parameter to cmake. The project is built in a container that I prepare beforehand, which contains all the build tools and vcpkg installed and bootstrapped.
Since this container will only be used to build this particular project, I am also using vcpkg install to install the libraries that my project depends on, in an attempt to reduce my application's build time. However ... it, of course, doesn't work that way. From what I could see there are different options for caching:

  • VCPKG_BINARY_SOURCES as an env variable before invoking cmake, paired with --binarysource=<source> at install time
  • Pass -DVCPKG_INSTALLED_DIR to cmake to point to the installed folder of the container-wide vcpkg

Does it matter which method is used? What's the difference between the two? Would either of them work just fine? All I want is for the prebuilt libraries to be used when my application is building.

Xarn
Jun 26, 2015
vcpkg install is completely distinct from manifest mode.

If your filesystem is persistent, vcpkg should reuse locally built binaries on its own AS LONG AS THE ABI IS THE SAME. Note that this means vcpkg's notion of ABI, which is a hash of the package version, the package portfile, vcpkg's triplet, the compiler binary, and a couple of other things. If it isn't the same, then you need either a remote cache or some manual extra steps to prepare the cache before the build.

Volguus
Mar 3, 2009

Xarn posted:

vcpkg install is completely distinct from manifest mode.

If your filesystem is persistent, vcpkg should reuse locally built binaries on its own AS LONG AS THE ABI IS THE SAME. Note that this means vcpkg's notion of ABI, which is a hash of the package version, the package portfile, vcpkg's triplet, the compiler binary, and a couple of other things. If it isn't the same, then you need either a remote cache or some manual extra steps to prepare the cache before the build.

The filesystem was persistent, in the sense that at the container's creation, vcpkg is cloned, bootstrapped, and told to install a set of libraries, and then the container is committed and saved into the registry. That's what I thought those "manual extra steps to prepare the cache before build" would mean. Then, at the application's build time, the image is pulled from the registry, the git repo is cloned, and the application is built; everything goes away after the build artifact is saved. The compiler, standard libs, everything is the same, so I would have assumed the ABI hasn't changed. Why would it?

However, without specifying VCPKG_BINARY_SOURCES and/or -DVCPKG_INSTALLED_DIR, vcpkg would sometimes try to build the libraries that the application depended on, completely ignoring the prebuilt ones. I'm not sure what could cause this, as the underlying container/system did not change.
At the moment I'm specifying both, and it seems to work: everything is reused.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
Did gcc's array-bounds checker go nutso around the 10.x time frame? It's highlighting just about every kernel-level list_entry call I have here as being out of bounds. As far as I can tell, it thinks a list_head initialized at global scope might be NULL. I infer this because I willy-nilly added NULL tests around the reference and that seemed to make it happy, but the result is grosser than putting it in a #pragma sandwich with an essay about what the hell is going on.


cheetah7071
Oct 20, 2010

honk honk
College Slice
Are there any objects in the C++20 std library that guarantee the functionality std::vector<bool> had in previous versions? It used to be guaranteed to specialize to one bit per element, but in C++20 that's left as an optional, implementation-specific choice.

I'm aware of std::bitset, but that has a compile-time size; I need it to be runtime-sized.

It probably wouldn't be too hard to make my own, but I'm not confident in my ability to write an operator[] that won't have noticeable slowdown when accessed three billion times or so.

e: I was hoping to avoid a dependency on Boost, but it does look like it has exactly what I want. Maybe I'll just take a peek at their implementation to make sure I don't do anything stupid in my own version.
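e2: for posterity, the packed-word scheme is small enough to sketch; this is roughly the shape of it (my own sketch, not boost's implementation), and a read is just a shift and a mask:
C++ code:
#include <cstddef>
#include <cstdint>
#include <vector>

// Minimal runtime-sized bit array: one bit per element, packed into 64-bit
// words. Unused high bits in the last word are simply ignored.
class DynamicBits {
public:
    explicit DynamicBits(std::size_t n, bool value = false)
        : size_(n), words_((n + 63) / 64, value ? ~0ull : 0ull) {}

    bool operator[](std::size_t i) const {
        return (words_[i >> 6] >> (i & 63)) & 1u;
    }
    void set(std::size_t i, bool value) {
        const std::uint64_t mask = std::uint64_t{1} << (i & 63);
        if (value) words_[i >> 6] |= mask;
        else       words_[i >> 6] &= ~mask;
    }
    std::size_t size() const { return size_; }

private:
    std::size_t size_;
    std::vector<std::uint64_t> words_;
};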

cheetah7071 fucked around with this message at 19:36 on Apr 22, 2022
