|
I guess the funny thing about aliasing is that the most technically correct solution to forcing the compiler to alias a block of memory without doing an actual copy is to memmove the data to the same address and pray it just deletes the memmove.
|
# ? Jan 26, 2018 04:04 |
|
|
Jabor posted:The fact that the "reinterpret the opaque sequence of bytes as a different type" function is called "memcpy" is an unfortunate historical oddity, but it still does what you're actually trying to accomplish in most cases.
|
# ? Jan 26, 2018 05:28 |
|
memcpy isn't even needed for most netcode. If you need both struct access and byte-array access, instead of:

C++ code:

C++ code:
Also, it seems a lot easier to verify the compiler is optimizing the relatively few places where a memcpy-to-avoid-aliasing occurs than to measure how much performance is lost throughout the program by killing a class of optimizations.

OneEightHundred posted:I guess the funny thing about aliasing is that the most technically correct solution to forcing the compiler to alias a block of memory without doing an actual copy is to memmove the data to the same address and pray it just deletes the memmove.

I don't think that works. It can't change the type of the object in memory, so it doesn't do anything to avoid strict aliasing violations. C++ can do something similar with placement new, which does work, but it's harder to ensure that's a no-op.
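To make the memcpy-instead-of-pointer-casting idiom concrete, here's a minimal sketch (the struct and function names are invented for illustration, not taken from the lost code blocks above): copy the bytes into a properly typed, properly aligned object rather than casting the buffer pointer, and let the compiler delete the copy.

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical wire-format header; field names are illustrative only.
struct PacketHeader {
    uint16_t type;
    uint16_t length;
};

// Copy the bytes into a real PacketHeader instead of casting buf to
// PacketHeader* (which would violate strict aliasing). Compilers routinely
// turn this memcpy into a plain load, so there is usually no runtime copy.
PacketHeader read_header(const unsigned char* buf) {
    PacketHeader h;
    std::memcpy(&h, buf, sizeof h);
    return h;
}
```

The same idiom works in the other direction (memcpy a struct into the outgoing byte buffer), and it sidesteps both the aliasing and the alignment questions raised later in the thread.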
|
# ? Jan 26, 2018 05:52 |
|
It is 2018, compilers will optimize out any redundant memcpy used to bypass aliasing.
|
# ? Jan 26, 2018 06:03 |
|
eth0.n posted:memcpy isn't even needed for most netcode. If you need both struct access and byte-array access, instead of:

Your second code block breaks my abstraction more than the first one does. My fast path reading bytes doesn't know what the underlying protocol is - it hands off a pointer to another module which handles the specific protocol, perhaps with one struct or another and perhaps some other way. The reasons for wanting to mix byte and struct access are odd perhaps but dictated by the protocols in question.

Fair point on it being easier to verify the impact - definitely not something I can take for granted.
|
# ? Jan 26, 2018 07:25 |
Jeffrey of YOSPOS posted:Your second code block breaks my abstraction more than the first one does. My fast path reading bytes doesn't know what the underlying protocol is - it hands off a pointer to another module which handles the specific protocol, perhaps with one struct or another and perhaps some other way. The reasons for wanting to mix byte and struct access are odd perhaps but dictated by the protocols in question.

Isn't this just asking for UB from alignment issues (assuming your structs are more strictly aligned than an array of chars)? Or do I still not understand alignment? I thought I had it understood this time...
|
|
# ? Jan 26, 2018 09:01 |
|
VikingofRock posted:Isn't this just asking for UB from alignment issues (assuming your structs are more-strictly aligned than an array of chars)? Or do I still not understand alignment? I thought I had it understood this time...
|
# ? Jan 26, 2018 09:21 |
|
In Visual Studio 2015, when I create a new project, the system creates Debug and Release configurations. From what I understand, I can customize these quite a bit through the project properties. Any recommendations for creating build targets that are good midpoints between those extremes? As in, it could be good to have "Release with debugging", or "Debug with math optimizations", or other exotic combinations. There are a lot of options in the project properties, and I'm sure many of them could be set to midpoints for a more gradual progression between builds, but there are so many that it's a bit overwhelming. On that note, is there a way to make a "super-fast" build that's even more aggressive in its optimization than Release? I hope I'm being clear. Maybe some people here have created such builds? Thanks!
|
# ? Jan 27, 2018 06:22 |
|
Colonel J posted:In Visual Studio 2015, when I create a new project, the system creates Debug and Release configurations. From what I understand, I can customize these quite a bit through the project properties.
|
# ? Jan 27, 2018 06:37 |
|
eth0.n posted:I don't think that works. It can't change the type of the object in memory, so it doesn't do anything to avoid strict aliasing violations.
|
# ? Jan 27, 2018 08:41 |
|
Colonel J posted:I hope I'm being clear. Maybe some people here have created such builds? Thanks!

I have, although not in Visual Studio. I have worked on projects that use lots of external libraries, compiled from source. Some of these libraries - think codecs and other compression libraries - are math-heavy and aren't designed to be compiled without optimizations, ever, so I personalized their debug configurations to enable optimizations, because they would run unacceptably slowly otherwise, and because it's extremely unlikely you'll actually step through their code in a debugger. In a scenario like this, though, you don't want to slow down the build, so you'll keep support for incremental builds enabled and support for global optimizations disabled: the configuration will still fundamentally be Debug, just with some (not all) optimizations enabled.

What I want you to understand from this example is not just what a hybrid configuration looks like, but also that you create such a configuration out of a specific need.

Colonel J posted:On that note, is there a way to make a "super-fast" build that's even more aggressive in its optimization than Release?

I don't think it's possible. The most aggressive optimization available is whole program optimization/link-time code generation, and it involves merging your entire program into a single "source" file and recompiling it all again a second time. That takes more time by definition.
|
# ? Jan 27, 2018 14:21 |
|
I assume they meant fast as in runtime rather than fast to compile. And yes, IIRC VS's release config doesn't enable all optimisations by default, so you probably want to go and enable /Ox, /GL and so on (do some profiling). For debug builds, I just enable /Ob1 (partial inlining), which gives a huge speed boost without hurting debuggability much. If anything, it helps not having to step into and back out of every little accessor.
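For reference, the hybrid configurations discussed above boil down to a handful of compiler switches. These command lines are illustrative sketches, not complete build settings (note that /GL requires /LTCG at link time):

```
REM "Release with debugging": full optimization plus symbols
cl /O2 /GL /Zi /DNDEBUG main.cpp /link /LTCG /DEBUG

REM "Debug with partial inlining": /Od implies /Ob0 (no inlining) by
REM default, but /Ob1 re-enables inlining of functions marked inline
cl /Od /Ob1 /Zi /MDd main.cpp
```

In the Visual Studio GUI these correspond to the Optimization, Inline Function Expansion, and Whole Program Optimization entries under C/C++ in the project properties.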
|
# ? Jan 27, 2018 17:59 |
|
Can someone explain LevelDB to me? It has 12,000 stars on Github, so it must be awesome, but I can't figure out why. Specifically what I don't get are the limitations:

quote:This is not a SQL database. It does not have a relational data model, it does not support SQL queries, and it has no support for indexes.

No index support? Why would you even use a database if it can't make indexes? I come from a Python background where we have Postgres, which has everything you'd ever want. My rule of thumb is that if I need an index, I'll use a database. If I don't need an index, then I'll just write the data to a file. A database that doesn't have index support seems very pointless to me.
|
# ? Jan 29, 2018 15:07 |
|
School of How posted:Can someone explain LevelDB to me? It has 12,000 stars on Github, so it must be awesome, but I can't figure out why. Specifically what I don't get are the limitations:

Except instead of writing that data to a file using your own protocol, you do it using LevelDB's protocol. What you get is a fast key-value storage. That is, when you look up key Foo, it is faster to retrieve the value Bar than your own hand-rolled algorithm when you have millions of keys in that file. If you only have 3 keys, writing them to a file of your choosing is definitely fine.

Edit: And what do you specifically need an index for?
|
# ? Jan 29, 2018 15:13 |
|
Volguus posted:Except instead of writing that data to a file using your own protocol, you do it using LevelDB's protocol. What you get is a fast key-value storage. That is, when you look up key Foo, it is faster to retrieve the value Bar than your own hand-rolled algorithm when you have millions of keys in that file. If you only have 3 keys, writing them to a file of your choosing is definitely fine.

Maybe I'm mistaken, but if there is no index, then in order to retrieve a key you have to do a full table scan. An index avoids the full table scan: all you need to do is scan the index, which is orders of magnitude smaller and so scans much faster.

quote:What you get is a fast key-value storage.

What makes it fast if there is no index?

School of How fucked around with this message at 16:02 on Jan 29, 2018 |
# ? Jan 29, 2018 16:00 |
|
You can structure storage such that you don’t need a separate index to make it fast. Consider an in-memory hashtable. I think the LevelDB whitepaper details how it accomplishes this.
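A tiny illustration of that point, using std::map as a stand-in for a key-ordered store (LevelDB keeps its on-disk tables sorted by key; this sketch is just the in-memory analogy, with invented names):

```cpp
#include <map>
#include <optional>
#include <string>

// Stand-in for a key-ordered store: the data itself is sorted by key,
// so a lookup is a binary search over the primary structure. There is
// no separate index, and no full scan of the values.
using Store = std::map<std::string, std::string>;

std::optional<std::string> get(const Store& store, const std::string& key) {
    auto it = store.find(key);  // O(log n) search of the key-ordered tree
    if (it == store.end()) return std::nullopt;
    return it->second;
}
```

A hashtable gets you the same effect with O(1) expected lookups; either way, the ordering or hashing of the store itself plays the role an index plays in a relational database.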
|
# ? Jan 29, 2018 16:20 |
|
School of How posted:What makes it fast if there is no index?
|
# ? Jan 29, 2018 16:28 |
|
Indexes are a potential solution to a problem, not a feature that is inherently useful that you would specifically use a DB for.
|
# ? Jan 29, 2018 18:14 |
|
In Qt, I'm trying to hook up a QLocalSocket to a local Win32 named pipe server running in PIPE_TYPE_MESSAGE mode. The pipe connects fine, but readyRead() never gets called, and it doesn't look like the messages I'm sending are making it through either. The MSDN docs are a little vague - they don't really say how messages are delimited, if at all, or whether any handshaking is required to make this work. Is there any more in-depth documentation on how PIPE_TYPE_MESSAGE works? QLocalSocket doesn't have it as an option - is that going to be difficult, and should I just write raw Win32 code?
|
# ? Jan 29, 2018 19:52 |
|
Plorkyeran posted:Indexes are a potential solution to a problem, not a feature that is inherently useful that you would specifically use a DB for. I disagree. In my experience querying a database by something other than the primary key is a very common operation.
|
# ? Jan 30, 2018 00:15 |
|
School of How posted:I disagree. In my experience querying a database by something other than the primary key is a very common operation.
|
# ? Jan 30, 2018 00:30 |
|
Don't let the "db" in the name fool you. LevelDB is not a relational database, and if you want a database you don't want leveldb. If you want a big blob of THINGS that you can look up by key though, it just might be right for you.
|
# ? Jan 30, 2018 00:35 |
|
You can argue all you want but you're not going to change the fact these things have been called databases since at least 1979 when dbm was first released.
|
# ? Jan 30, 2018 00:45 |
|
fankey posted:The pipe connects fine but readyRead() never gets called and it doesn't look like the messages I'm sending are making it either.

It's been a while since I wrote Qt code for Windows, but are you sending more data than can fit in the pipe's buffer? If so, QLocalSocket won't handle the error and won't see anything to read in the pipe.
|
# ? Jan 30, 2018 00:54 |
|
Nippashish posted:Don't let the "db" in the name fool you. LevelDB is not a relational database, and if you want a database you don't want leveldb. If you want a big blob of THINGS that you can look up by key though, it just might be right for you. Not all databases are relational. LevelDB is a database just like Cassandra and memcached. Databases existed before Codd. Don’t be fooled by Big SQL.
|
# ? Jan 30, 2018 01:27 |
|
I think that School of How just got confused about the "database" name. According to google:

quote:da·ta·base - a structured set of data held in a computer, especially one that is accessible in various ways.

A text file with records stored line by line fits that definition. It is a structured set of data (one record per line), held in a computer (a file on the disk), accessible in various ways (via my program). Anything else is just cherry on top: relations, tables, users, permissions, a server that listens on a network and can serve many users at the same time, indexes, primary keys, all kinds of other features of modern databases. As you can see, LevelDB is very much a database, as it fits the definition of one. It is a database that only provides two things: storing and retrieving key-value pairs, with access via a library. And it promises to do that very fast.
|
# ? Jan 30, 2018 03:54 |
|
School of How posted:I disagree. In my experience querying a database by something other than the primary key is a very common operation. Yes, exactly. The functionality you actually want is fast lookups on things other than primary keys. Indexes are a mechanism for achieving that, not an end in themselves.
|
# ? Jan 30, 2018 04:07 |
|
Embeddable key-value stores are great and have a long tradition. BerkeleyDB, LMDB, etc etc. If all you have are large lists of things that you want to access by just one key each, and you're only concerned about a single process, Postgres is a bloated nightmare by comparison. Think of it like just storing your data in a regular file, except with tons of great features like transactionality and fast random inserts.
|
# ? Jan 30, 2018 08:17 |
|
Subjunctive posted:Not all databases are relational. LevelDB is a database just like Cassandra and memcached. Databases existed before Codd. Don’t be fooled by Big SQL. Yes, sorry this is what I was trying to say (but not very clearly it seems). Don't expect that having DB in the name means it will do all the SQL things.
|
# ? Jan 30, 2018 10:17 |
|
How fast is levelDB compared to sqlite anyway? Because sqlite owns for embedding.
|
# ? Jan 30, 2018 10:55 |
|
Xarn posted:How fast is levelDB compared to sqlite anyway? Because sqlite owns for embedding. Depends on what you’re doing, and whether you need compression. http://www.lmdb.tech/bench/microbench/benchmark.html (but it’s old) Some kv stores are optimized for different media, like RocksDB for SSDs/flash.
|
# ? Jan 30, 2018 11:48 |
|
Plorkyeran posted:Yes, exactly. The functionality you actually want is fast lookups on things other than primary keys. Indexes are a mechanism for achieving that, not an end in themselves. What other ways are there to have fast lookups to non-primary key columns not using an index?
|
# ? Jan 30, 2018 18:04 |
|
School of How posted:What other ways are there to have fast lookups to non-primary key columns not using an index?

You could maintain a second table which maps that secondary key to the primary key of the main table. But that's beside the point of the post you're replying to. They're just saying that if you don't need fast lookups on non-primary keys, then perhaps indexes aren't needed for you, and using a lower-overhead DB that doesn't support them might be a good idea.
|
# ? Jan 30, 2018 18:25 |
An index is just another table, sorted on the index key, pointing to the primary key. You can construct a relational database using a key-value store.
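A hedged sketch of that idea, again with std::map standing in for the key-value store (the table layout and names are invented for illustration): the "index" is literally just a second table mapping the secondary key to the primary key, and writes must keep it in sync, exactly as a database maintains its indexes.

```cpp
#include <map>
#include <string>

// Primary table: user id -> email (the "row"), keyed on the primary key.
using Primary = std::map<int, std::string>;
// The "index" is just another sorted table: secondary key -> primary key.
using EmailIndex = std::map<std::string, int>;

// Writes update both tables, the way a database maintains its indexes.
void insert(Primary& users, EmailIndex& by_email,
            int id, const std::string& email) {
    users[id] = email;
    by_email[email] = id;
}

// A lookup on the secondary key is two primary-key lookups:
// index -> primary key, then primary table -> row. Returns -1 if absent.
int id_for_email(const EmailIndex& by_email, const std::string& email) {
    auto it = by_email.find(email);
    return it == by_email.end() ? -1 : it->second;
}
```

This is the same trick whether the store is an in-memory map or an on-disk key-value store like LevelDB, which is what makes building relational features on top of one possible.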
|
|
# ? Jan 30, 2018 18:27 |
|
Hi guys, Apologies if this is a dumb question. I'm a bit out of my depth here. I'm trying to compile SDL2_gfx for x64. I'm using MSVC. I'm having an issue because this library uses inline assembly, which causes errors. I can get it to build on x86 with no problems. It looks like my options are as follows:
My application is written in Rust, not C/C++, if that matters. I'm trying to use dynamic linking. I do have the rest of SDL2 working.
|
# ? Feb 5, 2018 18:10 |
Looking through the source, it looks like not defining USE_MMX when compiling will skip all the inline assembly, and just fall back to pure C implementations. Maybe you will need to edit the project files to avoid defining that, but I don't think you need to patch the code itself.
|
|
# ? Feb 5, 2018 18:35 |
|
nielsm posted:Looking through the source, it looks like not defining USE_MMX when compiling will skip all the inline assembly, and just fall back to pure C implementations. Maybe you will need to edit the project files to avoid defining that, but I don't think you need to patch the code itself. code:
But if you are using Rust.. why aren't you using one of the crates that already has SDL2, etc built for you with bindings setup? Or better yet something like: https://crates.io/crates/gfx xgalaxy fucked around with this message at 21:32 on Feb 5, 2018 |
# ? Feb 5, 2018 21:26 |
|
xgalaxy posted:
Fergus Mac Roich posted:I'm using MSVC.
|
# ? Feb 5, 2018 21:31 |
|
xgalaxy posted:
I'm using this: https://github.com/Rust-SDL2/rust-sdl2 It still requires you to have the .lib and .dll files. As I said the base SDL2 does work. I was also able to get the other extensions working. I'm going to explore my options again when I get home today. I don't suppose I could disable MMX if I just switched to the mingw toolchain?
|
# ? Feb 5, 2018 22:10 |
|
|
Just remove USE_MMX from the list of preprocessor defines in the project settings or the configuration header file? And maybe file a bug report for SDL_gfx?
|
# ? Feb 5, 2018 22:24 |