|
the dude who wrote that and his company are complete idiots, but you'd never know it if you didn't do this for a living, given how well the blog is presented.
|
# ? Oct 16, 2019 02:31 |
|
|
|
Ugh one of our critical software vendors is just starting to use Octopus to handle upgrades and deployments for their customers. This sure instills a lot of confidence!
|
# ? Oct 16, 2019 02:46 |
SQL code:
pre:
Msg 195, Level 15, State 10, Line 1
'CONCAT' is not a recognized built-in function name.

It even autocompletes CONCAT in the SSMS editor. MS SQL Server 10.50.6000.

Edit: Apparently it was only added in SQL Server 2012 and this is 2008 R2. I thought CONCAT was supposed to be ancient ANSI SQL.
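So on 2008 R2 it looks like you're stuck with the + operator, and you have to handle NULLs yourself, since string + NULL yields NULL while CONCAT treats NULL as ''. A rough equivalent with made-up names:

SQL code:
-- pre-2012 stand-in for CONCAT(a, b, c); table and columns are hypothetical
SELECT ISNULL(a, '') + ISNULL(b, '') + ISNULL(c, '') AS joined_up
FROM dbo.SomeTable;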
|
|
# ? Oct 16, 2019 12:12 |
|
nielsm posted:
let this be your first lesson on the true meaning of "ansi sql"
|
# ? Oct 16, 2019 12:55 |
|
Nth Doctor posted:It's this, and never making indices, or covering everything in them 'for speed'.
|
# ? Oct 16, 2019 22:08 |
|
Or their exposure to SQL is MySQL.
|
# ? Oct 16, 2019 22:40 |
|
PhantomOfTheCopier posted:Or their exposure to SQL is MySQL. This is stupid; there is no more reason to avoid joining tables in MySQL than there is with any other database product.
|
# ? Oct 16, 2019 23:21 |
Hammerite posted:This is stupid; there is no more reason to avoid joining tables in MySQL than there is with any other database product. MySQL had horrendous performance with joins for a long time.
|
|
# ? Oct 17, 2019 00:57 |
|
can mysql do hash joins yet?
|
# ? Oct 17, 2019 01:13 |
|
redleader posted:can mysql do hash joins yet? as of 8.0.18 (released october 2019) mysql supports hash joins. https://dev.mysql.com/doc/refman/8.0/en/nested-loop-joins.html prior to that, mysql only did variations on the nested loop join (block nested loop, then batched key access in 5.6), so joining large datasets without useful indexes could be really slow, though the optimizer got more and more sophisticated about joins over the 5.x releases. I think oracle could do hash joins in 1992.
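if you're on a new enough version you can watch it happen with EXPLAIN FORMAT=TREE (available from 8.0.16); hash join kicks in for equi-joins with no usable index. a quick sketch with made-up tables:

SQL code:
-- hypothetical tables t1/t2 with no index on c1
EXPLAIN FORMAT=TREE
SELECT * FROM t1 JOIN t2 ON t1.c1 = t2.c1;
-- expect a plan line like: -> Inner hash join (t2.c1 = t1.c1)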
|
# ? Oct 17, 2019 02:55 |
|
e: n/m, pedantic bs. But if you're avoiding joins because you think they are slow, you are dumb. McGlockenshire fucked around with this message at 19:45 on Oct 17, 2019 |
# ? Oct 17, 2019 19:42 |
|
I work in a place that avoids relational databases because they think they are slow. In point of actual fact, they just think that "infinite scalability" of cloud nosquirrel is a better operational model than stupid devs causing an outage by deploying relational databases with default settings. They offered no indication of how to maintain relational consistency in their improved operational model. Apparently devs that can't set up a database can design protocols and data safety features atop distributed keystores.
|
# ? Oct 17, 2019 22:24 |
The art of the data structure is being lost in mainstream development, I feel. If I were working on anything new and remotely cared about the consistency/safety of my data, I'd be using an RDBMS (personal preference: Postgres). The rest is just a crapshoot of which superfast thing you can use as a state bucket for a load balanced thing where it's not the end of the world if the data goes poof. While software engineering is a relatively young discipline compared to other engineering fields, there are still some things I think are now in a state of "correct" and don't need further messing with. A modern RDBMS as a fast, reliable and consistent data store when used properly is one of them.
|
|
# ? Oct 17, 2019 23:45 |
|
But how is that going to help our capture rate? It’s vital we leverage a scalable keystore solution to enable high-velocity workflows. This lets our devs work closer to the grain vis a vis data, giving us a better operational model and higher agility in our core market space. Do you understand?
|
# ? Oct 18, 2019 00:50 |
|
I have a MySQL variable issue that is driving me nuts. I have a DOS script that iterates over a set of files in a directory and runs a SQL script against each file (basically loading and post-processing of each file), and as part of that process the filename and timestamp are captured and inserted into a different table. @externalstring is the current file name. There are four steps to post-processing so I collect five timestamps: code:
The updates fail with: quote:Illegal mix of collations (latin1_swedish_ci,IMPLICIT) and (cp850_general_ci,IMPLICIT) for operation '=' All my tables are set to latin1_swedish_ci, so for whatever reason @externalstring is being recognized as cp850_general_ci during the updates. I tried setting the table to be cp850, but either I did it wrong (likely) or this wasn't the problem. Help, please?
|
# ? Oct 18, 2019 04:31 |
Agrikk posted:All my tables are set to latin1_swedish_ci so for whatever reason @externalstring is being recognized as cp850_general_ci during the updates. I tried setting the table to be cp850 but either I did it wrong (likely) or this wasn't the problem. Try the CONVERT(a USING b) function to convert your input string to the database's encoding.
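A sketch of what that might look like in one of the post-processing updates (the table and column names here are guesses, since the original script wasn't posted):

SQL code:
-- force the variable's string into the tables' character set before comparing
UPDATE load_log
SET step1_ts = NOW()
WHERE file_name = CONVERT(@externalstring USING latin1);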
|
|
# ? Oct 18, 2019 08:11 |
Looks like the client you're running in the script is defaulting to cp850 on the connection; if memory serves, you should configure the client charset and collation to match the tables.
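If it's the stock mysql command-line client, that would be something like this (option-file section or per-invocation flag; assuming latin1 is what you actually want):

code:
# my.ini / my.cnf
[client]
default-character-set=latin1

# or per invocation
mysql --default-character-set=latin1 ...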
|
|
# ? Oct 18, 2019 10:09 |
|
Agrikk posted:I have a MySQL variable issue that is driving me nuts. You don't just have to worry about the tables' (columns') character sets, you also have the character set used by the connection to worry about. Are you telling it to use latin1_swedish_ci for the connection character set as well? SET NAMES latin1 COLLATE latin1_swedish_ci; BTW it's probably better to use utf8mb4 than latin1, or ascii if you are certain you'll only store ascii data.
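A quick way to see what the connection actually negotiated versus what the tables use (standard MySQL introspection, nothing exotic):

SQL code:
SHOW VARIABLES LIKE 'character_set%';
SHOW VARIABLES LIKE 'collation%';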
|
# ? Oct 18, 2019 10:14 |
|
Ruggan posted:But how is that going to help our capture rate? It’s vital we leverage a scalable keystore solution to enable high-velocity workflows. This lets our devs work closer to the grain vis a vis data, giving us a better operational model and higher agility in our core market space. Do you understand?
|
# ? Oct 18, 2019 22:02 |
|
imagine still using mysql in the year of our lord jesus christ two thousand nineteen
|
# ? Oct 18, 2019 22:58 |
|
PhantomOfTheCopier posted:Imagine driving down the highway knowing where only 80% of the other drivers are at any given time. Now take that but randomly switch which 20% are missing every second. Now go read about eventual consistency. ButtcoinDB
|
# ? Oct 19, 2019 00:40 |
DELETE CASCADE posted:imagine still using mysql in the year of our lord jesus christ two thousand nineteen Yeah I'm stuck with it at work on this particular product but gently caress me I wouldn't put anything new on it. Not even latest/greatest MariaDB. Sure it works to a point but there's no compelling reason to use it over anything else, certainly not over Postgres.
|
|
# ? Oct 19, 2019 02:03 |
|
DELETE CASCADE posted:imagine still using mysql in the year of our lord jesus christ two thousand nineteen There's been a push at my company to force all new services to use MySQL to get everyone on the same database, even though our MySQL DBs are consistently the ones that eat poo poo all the time and piss off our customers.
|
# ? Oct 19, 2019 03:30 |
|
DELETE CASCADE posted:imagine still using mysql in the year of our lord jesus christ two thousand nineteen I’m curious about this sentiment. What should be used instead? Postgres? MariaDB? Hadoop? Does your feeling carry over to AWS Aurora MySQL vs Aurora Postgres?
|
# ? Oct 19, 2019 15:38 |
The problem is with MySQL and derivatives. While it is improving, a lot of decisions made early on still plague MySQL and lead to scenarios where unexpected things happen to data. Until recently, the default behaviour on a NOT NULL char column was to simply insert an empty string if you passed NULL. This is just one example of many. Only recently I logged a bug report against a stable version of Maria where you can make it reliably crash with one of the query optimizer features enabled. Up until very recently the table cache and its iterator were still single threaded, which runs into performance walls very quickly on a dbms with many tables.

Character set and collation handling remain a total mess. The query planner still makes incredibly odd decisions, and it along with InnoDB still has a googolplex of configuration options, none of whose defaults are particularly useful in anything outside a toy scenario. Point in time snapshots are still hugely time consuming with official tools and all rely on crash recovery to restore. The JSON data type in Maria is a thin veneer around longtext, and MySQL, Percona and Maria all use their own takes on InnoDB. InnoDB still has arbitrary row size restrictions, and it remains impossible to truly predict how much RAM it will consume based on config/data set.

It's a loving pig of a DBMS and there's no reason to pick it over Postgres in 2019, and I'm sure there are other better open source alternatives too these days.

e: Also because of weird defaults in the behaviour that were in place for literal decades, programmers became accustomed to such behaviour, which opens up a whole other can of worms when the defaults change/options become available to tighten that stuff up. It's nowhere near my list if I'm considering a database for any new project, and serious questions should be asked of anyone saying "we should use MySQL/Percona/MariaDB for this". NinpoEspiritoSanto fucked around with this message at 16:35 on Oct 19, 2019 |
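If you've never been bitten by the NOT NULL behaviour mentioned above, it's easy to reproduce by clearing sql_mode: multi-row inserts coerce the NULL to '' with only a warning (single-row inserts do error). A minimal demo:

SQL code:
-- demo of the old non-strict default behaviour
SET sql_mode = '';
CREATE TABLE t (name CHAR(10) NOT NULL);
INSERT INTO t (name) VALUES (NULL), ('ok');  -- warning 1048, not an error
SELECT * FROM t;                             -- returns '' and 'ok'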
|
# ? Oct 19, 2019 16:26 |
|
Bundy posted:
On the last point: developers, I swear. "But it's what everyone uses" is the extent of their knowledge, and then they ignore every reason you've given to choose a more appropriate database, and here I mean the specific reasons that a different DB solves their particular problem. The other great one is "but no one here is familiar with PostgreSQL". I hate to break it to you, but you aren't really familiar with MySQL either.
|
# ? Oct 19, 2019 16:50 |
|
Agrikk posted:I’m curious about this sentiment. Postgres or MS SQL Server. The latter is expensive though.
|
# ? Oct 20, 2019 12:59 |
|
"All the developers here have used MySQL before and are used to it, and none of them have used PostgreSQL before" is a very strong reason to use MySQL for a new project. Mere technical superiority in multiple aspects of the software is irrelevant.
|
# ? Oct 20, 2019 15:44 |
|
Bundy posted:It's a loving pig of a DBMS and there's no reason to pick it over Postgres in 2019 and I'm sure there's other better open source alternatives too these days. "AWS charges less for MySQL/Maria instances." Glad I don't work there anymore. Hammerite posted:"All the developers here have used MySQL before and are used to it, and none of them have used PostgreSQL before" is a very strong reason to use MySQL for a new project. Mere technical superiority in multiple aspects of the software is irrelevant. The level of familiarity most devs have with any particular relational database is superficial enough to transfer to any other, with the possible exception of old versions of MySQL where you'd have to do very silly things to work around its shortcomings, in which case they really need to unlearn those habits.
|
# ? Oct 21, 2019 15:05 |
|
I tell you right now that I’ve been doing A/B comparisons between Aurora Postgres and Aurora MySQL and insert performance on MySQL is garbage. I’m trying to get MySQL to work because I want to play with multi-region replication, but I can’t get 2200 CSV files to load in anything resembling a reasonable amount of time.
|
# ? Oct 21, 2019 15:47 |
Even with load data infile?
|
|
# ? Oct 21, 2019 16:36 |
|
Yeah. I’m loading 2200 CSV files [each with 1.5 million rows of data consisting of five columns with a total character width of no more than 86] into a single table with an auto-incrementing rowID and an index on three of the five columns. I recognize that I could drop the index and recreate it after, but that won’t help me in the long run since I’ll be adding data to this table every night and the table has to be usable as fresh data is added.
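A per-file load of that shape, with the usual bulk-load tuning, might look like the following; every name here is invented, and on Aurora MySQL there's also the managed LOAD DATA FROM S3 variant if LOCAL infile is a hassle:

SQL code:
-- rough sketch of one file's load; table/column names are hypothetical
SET unique_checks = 0;
SET foreign_key_checks = 0;
LOAD DATA LOCAL INFILE '/data/file_0001.csv'
INTO TABLE readings
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(col_a, col_b, col_c, col_d, col_e);
SET unique_checks = 1;
SET foreign_key_checks = 1;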
|
# ? Oct 22, 2019 00:06 |
|
a bit of a doozy here. i have a table A that contains a computer, its ip address, and various other fields. i have another table B that contains an ip range, in the form of a.b.c.d/e, and fields about that ip range. i am trying to join these two tables on the ip address: basically, if an ip address in table A falls into an ip range defined in table B, join the rows. so for instance, let's say i have comp1 at ip 1.2.3.4 in table A. if in table B there's an entry for ip range 1.2.3.0/24, these two should join.

the service i am using is called snowflake. it allows json objects within a normal sql row, so to speak. it also has a function called PARSE_IP which, when i run it on the ip ranges in table B, gives me an object with the start and end ip address for that range, among other information. i can also use a function GET to extract each of those fields within the object. so given those options, i thought i'd write a join like this, knowing full well it would likely be slow: code:
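-- (a sketch reconstructed from the description above; the column names are guesses)
SELECT a.*, b.*
FROM tableA a
JOIN tableB b
  ON GET(PARSE_IP(a.ipAddress, 'INET'), 'ipv4')::INTEGER
       BETWEEN GET(PARSE_IP(b.ipAddressRange, 'INET'), 'ipv4_range_start')::INTEGER
           AND GET(PARSE_IP(b.ipAddressRange, 'INET'), 'ipv4_range_end')::INTEGER;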
so the next thought was to flatten table B into, essentially, a lookup table. therefore, in the example i wrote above, where 1 row contained ipAddressRange = 1.2.3.0/24 and the other fields, you'd have 256 rows:

1.2.3.0 | other fields for that range
1.2.3.1 | other fields for that range
1.2.3.2 | other fields for that range
...

then perform this on the entire table B, producing a huge lookup table, which would then clean that JOIN up into something way simpler and more efficient. coming from straight sql land where i don't have to deal with embedded json objects (like the idea, but boy howdy is it slow in this case), this feels like the natural solution.

however, i'm wondering if any of you can think of a better way to join these given what i've specified. i'm not entirely sure how to 'flatten' the object like that, especially given i can't write functions in this drat thing. i think i would have to convert the addresses to binary or decimal, then tell it to fill rows from a to b somehow. i don't know, snowflake is a mess. like would it make more sense to separate the octets into their own columns, and create a join on all four columns? snowflake doesn't use indexing like normal sql. it uses its own clustering ability, which is loving godawful. so i'm trying to battle the inefficiency of their function and their lack of true indexing. any thoughts are appreciated.

abelwingnut fucked around with this message at 17:40 on Oct 23, 2019 |
# ? Oct 23, 2019 17:28 |
|
Have you tried treating the IPs as 32-bit integers? As in, to check if 1.2.3.4 is in the range 1.2.3.0/24 you mask off the lowest 8 bits by binary AND'ing the IP with a mask with the 24 highest bits set, and then compare that with the range's network address, so like (0x01020304 & 0xFFFFFF00) == 0x01020300 (C-like syntax; the mask is ~0 << (32 - 24)). May not actually be faster if you have to convert the addresses from strings, but if they're stored as 4-byte binary values it's probably worth a shot.
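In SQL terms the join might then sketch out like this (Snowflake does have a BITAND function; the integer/mask columns here are assumed to be precomputed, which is the part you'd have to solve):

code:
-- hypothetical precomputed integer columns: ip_int, net_int, mask_int
SELECT a.*, b.*
FROM tableA a
JOIN tableB b
  ON BITAND(a.ip_int, b.mask_int) = b.net_int;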
TheFluff fucked around with this message at 19:15 on Oct 23, 2019 |
# ? Oct 23, 2019 19:01 |
|
Who owns the schema? Do you have the ability to make structural changes to tables? Edit: Prob not, rereading your post. What database? Ruggan fucked around with this message at 21:36 on Oct 23, 2019 |
# ? Oct 23, 2019 21:29 |
|
i can create tables at will, and i might have some leeway in altering existing tables, but probably not much. yea, like when i mentioned flattening table B, i meant making a new table C that had one row for every ip within all ranges found in table B. flattening might not be the right word, but uhh...unnesting? squashing? not sure, but i hope you catch my drift. ideally i could add columns with the address stored in binary, decimal, or hex in those tables and just compare without having to call PARSE_IP. that would have to be so much faster. but yea, i don't think i can add those. too many other things feed into them and that'd be a massive rewrite of some scripts. abelwingnut fucked around with this message at 21:37 on Oct 23, 2019 |
# ? Oct 23, 2019 21:31 |
|
Yeah. What database system?
|
# ? Oct 23, 2019 21:36 |
|
it's on this online service snowflake. it's its own thing. https://www.snowflake.com/ its sql language is largely similar to t-sql or mysql, but you're VERY limited beyond basic dml and basic ddl. like i can't create functions or sprocs or anything like that. if i go the route of flattening/unnesting/squashing that i mentioned, i'm reading about doing a rowcount cte combined with a number table. i...think i'm wrapping my head around it, at least. more info here: https://dba.stackexchange.com/questions/59309/how-to-dynamically-generate-rows-based-on-value-in-a-column abelwingnut fucked around with this message at 21:52 on Oct 23, 2019 |
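fwiw, snowflake has a built-in row generator, so the number-table half of that stackexchange approach might be as simple as something like this (untested sketch; the names and the assumption that no range is bigger than a /24 are mine):

code:
-- expand each range into one row per address
WITH numbers AS (
    SELECT ROW_NUMBER() OVER (ORDER BY SEQ4()) - 1 AS n
    FROM TABLE(GENERATOR(ROWCOUNT => 256))
)
SELECT GET(PARSE_IP(b.ipAddressRange, 'INET'), 'ipv4_range_start')::INTEGER + n.n AS ip_int,
       b.*
FROM tableB b
JOIN numbers n
  ON n.n <= GET(PARSE_IP(b.ipAddressRange, 'INET'), 'ipv4_range_end')::INTEGER
          - GET(PARSE_IP(b.ipAddressRange, 'INET'), 'ipv4_range_start')::INTEGER;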
# ? Oct 23, 2019 21:40 |
|
abelwingnut posted:it's on this online service snowflake. it's its own thing. https://www.snowflake.com/ man that product name is loving on the nose
|
# ? Oct 23, 2019 21:46 |
|
|
Creating a whole range of rows for every combo sounds very messy though. You could instead take your example of 1.2.3.0/24 and represent it with three columns:

FirstThreeOctets: 1.2.3
FourthOctetFrom: 0
FourthOctetTo: 255

Can you create persisted calculated columns in your tables that represent it in this way? Or even fully separate the octets? This way you can continue to store it the way you do today but facilitate better joins.
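The join then collapses to an equality plus a range check, something like this (hypothetical column names, and assuming table A splits its address the same way):

code:
-- assumes persisted/derived columns on both sides
SELECT a.*, b.*
FROM tableA a
JOIN tableB b
  ON a.first_three_octets = b.first_three_octets
 AND a.fourth_octet BETWEEN b.fourth_octet_from AND b.fourth_octet_to;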
|
# ? Oct 23, 2019 21:55 |