Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Janitor Prime posted:

Maybe it's just the nerd in me being overly strict, but I have zero tolerance for MySQL's default behavior. I don't know if this has changed, but the last time I used it with a complicated GROUP BY query that included some columns that weren't part of the GROUP BY clause, it just happily returned some random data from the matching rows instead of throwing an error like Postgres and Oracle do. This led to a subtle bug in production that wasn't caught during testing because our test data always happened to return the same rows and nothing seemed amiss.

Another stupid behavior I ran into was where date columns that contained NULL values would return 0000-00-00 as the value instead of NULL, breaking our ORM mapping. You can fix it by setting a property on the connection string or turning on Strict mode in the server config, but this is something that's so rear end backwards it boggles my mind. Then there's all the historically broken Unicode support that it shipped with.

The final nail in the coffin was all the licensing hassle that Oracle tried to impose on us. Oracle licenses the JDBC drivers under the GPL and when we tried to purchase an enterprise license to ship the driver with our product their legal team went crazy claiming that the way we used the product would require us to be a 3rd party reseller and wanted us to have minimum commitments and other crap like that.

This makes it sound like the PHP of databases.


TheFluff
Dec 13, 2006

FRIENDS, LISTEN TO ME
I AM A SEAGULL
OF WEALTH AND TASTE
It pretty much is. It also used to have some pretty glaring holes in the query language until fairly recently (IIRC it lacked both CTEs and window functions until, like, last year?). It's been improved, but if you actually want a fully featured relational database with actual relational integrity and support for a wide variety of datatypes, you should really just use Postgres. Or, if you're in a pure Microsoft shop, MSSQL, which has pretty much the same feature set but is more expensive. You can actually run MSSQL on Linux these days, though.

Pardot
Jul 25, 2001




https://grimoire.ca/mysql/choose-something-else

quote:

Considering MySQL? Use something else. Already on MySQL? Migrate. For every successful project built on MySQL, you could uncover a history of time wasted mitigating MySQL's inadequacies, masked by a hard-won, but meaningless, sense of accomplishment over the effort spent making MySQL behave.

kiwid
Sep 30, 2013

Ruggan posted:

Doesn’t MSSQL integrate just fine with PHP and Python? I think there are drivers for both.

Yeah but then I'd have to run a Windows Server.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

kiwid posted:

Yeah but then I'd have to run a Windows Server.
I mean, technically, they have MSSQL on Linux now. But I wouldn't get started with that unless you're familiar with MSSQL and need to use Linux for some reason.

karms
Jan 22, 2006

by Nyc_Tattoo
Yam Slacker

Janitor Prime posted:

Maybe it's just the nerd in me being overly strict, but I have zero tolerance for MySQL's default behavior. I don't know if this has changed, but the last time I used it with a complicated GROUP BY query that included some columns that weren't part of the GROUP BY clause, it just happily returned some random data from the matching rows instead of throwing an error like Postgres and Oracle do. This led to a subtle bug in production that wasn't caught during testing because our test data always happened to return the same rows and nothing seemed amiss.

I don't mind it personally but version 8 adheres to the standard, finally.

Janitor Prime posted:

Another stupid behavior I ran into was where date columns that contained NULL values would return 0000-00-00 as the value instead of NULL, breaking our ORM mapping. You can fix it by setting a property on the connection string or turning on Strict mode in the server config, but this is something that's so rear end backwards it boggles my mind. Then there's all the historically broken Unicode support that it shipped with.

You've been able to turn this behaviour off with the NO_ZERO_DATE sql mode (and STRICT_ALL_TABLES / STRICT_TRANS_TABLES) since forever. NO_ZERO_DATE on its own is deprecated and will be rolled into strict mode at some point in the future.
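
Setting the modes looks roughly like this (session-level here; the same value goes in my.cnf as sql_mode= for the whole server, and the exact mode list depends on your version):

SQL code:
-- strict mode plus the zero-date checks
SET SESSION sql_mode = 'STRICT_TRANS_TABLES,NO_ZERO_DATE,NO_ZERO_IN_DATE';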

Janitor Prime posted:

The final nail in the coffin was all the licensing hassle that Oracle tried to impose on us. Oracle licenses the JDBC drivers under the GPL and when we tried to purchase an enterprise license to ship the driver with our product their legal team went crazy claiming that the way we used the product would require us to be a 3rd party reseller and wanted us to have minimum commitments and other crap like that.
Good thing we are a dumb web company then. :)

karms
Jan 22, 2006

by Nyc_Tattoo
Yam Slacker

8 has CTEs and window functions, so this list needs updating. I didn't read any further, since everything I know about 8 comes from a seminar I attended a few weeks ago. Kinda poo poo that he did not put a date on it, but oh well.
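
For example, both of these run fine on 8 now (made-up table, obviously):

SQL code:
-- a CTE feeding a window function, both new in MySQL 8
WITH recent_orders AS (
    SELECT customer_id, total
    FROM orders
    WHERE created_at > NOW() - INTERVAL 30 DAY
)
SELECT customer_id,
       total,
       SUM(total) OVER (PARTITION BY customer_id) AS customer_total
FROM recent_orders;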

I wish we ran on a better database but it's not the end of the world.

Busy Bee
Jul 13, 2004
Have been messing around with Google Big Query recently and have a quick question. There are two columns titled Unix 1 and Unix 2. When I pull the data, I want to create a third column titled Unix_Difference that subtracts Unix 1 from Unix 2. However, some of the time, both columns are null.

How should I go about this? Use some sort of safe cast?

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



Busy Bee posted:

Have been messing around with Google Big Query recently and have a quick question. There are two columns titled Unix 1 and Unix 2. When I pull the data, I want to create a third column titled Unix_Difference that subtracts Unix 1 from Unix 2. However, some of the time, both columns are null.

How should I go about this? Use some sort of safe cast?

You mean like https://cloud.google.com/dataprep/docs/html/COALESCE-Function_57344756 ?

kiwid
Sep 30, 2013

karms posted:

Kinda poo poo that he did not put a date on it, but oh well.

2013

https://github.com/grimoire-ca/bliki/blob/master/wiki/mysql/choose-something-else.md

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture





mysql is okayish for really basic relational data but if you have any queries that use like CTEs, `INSERT INTO ... SELECT ... FROM ...` or any kind of transactions postgres is about a million times better

Moonwolf
Jun 29, 2004

Flee from th' terrifyin' evil of "NHS"!



Nah, he wants https://cloud.google.com/dataprep/docs/html/IFNULL-Function_66194064 IFNULL on each column, which mirrors the more conventional SQL NVL.

So IFNULL(Unix1, 'Empty') - IFNULL(Unix2, 'Empty'), or however the actual transform works in BigQuery. COALESCE appears to be purely for columns and doesn't look like it lets you fake it with a string literal.

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



Moonwolf posted:

Nah, he wants https://cloud.google.com/dataprep/docs/html/IFNULL-Function_66194064 IFNULL on each column, which mirrors the more conventional SQL NVL.

So IFNULL(Unix1, 'Empty') - IFNULL(Unix2, 'Empty'), or however the actual transform works in BigQuery. COALESCE appears to be purely for columns and doesn't look like it lets you fake it with a string literal.

IFNULL sounds like ISNULL from T-SQL, which I do prefer over COALESCE so that's cool, but it sounds like the OP was doing math on the columns (version numbers?) so I'm not sure why you'd default to a string literal?
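
If it really is integer math, something like this seems more natural than string defaults (untested sketch in BigQuery standard SQL; I'm guessing the columns are actually named unix_1/unix_2, since BigQuery won't take spaces in column names, and the table name is made up):

SQL code:
SELECT
  unix_1,
  unix_2,
  unix_2 - unix_1                        AS unix_difference,         -- stays NULL if either side is NULL
  IFNULL(unix_2, 0) - IFNULL(unix_1, 0)  AS unix_difference_zeroed   -- or default the gaps to 0
FROM `my_project.my_dataset.my_table`;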

Moonwolf
Jun 29, 2004

Flee from th' terrifyin' evil of "NHS"!


You're right, I'm not sure why I picked that; going to blame the heat. Either way, from the looks of the BigQuery COALESCE it doesn't allow non-column values, where IFNULL does. Might just be a gap in my reading of the docs, though; going by their own examples, BigQuery does a bunch of stuff quite differently from classic SQL style.

Sad Panda
Sep 22, 2004

I'm a Sad Panda.
I've got data in MySQL 8.0.11 now. I've decided that as I'm going to use Django I would rather move to PostgreSQL. This is proving super fun.

I've tried using pgloader to directly move across but get the error documented here https://github.com/dimitri/pgloader/issues/782 even having changed the authentication method as the dev suggested.
My idea now is to use mysqldump, which works fine until I try to use the --compatible option, which is documented at https://dev.mysql.com/doc/refman/5.5/en/mysqldump.html#option_mysqldump_compatible and which, to my mind, is clearly lies. I decided to test them all out, and it turns out that only ansi works. Or I'm a moron and doing it wrong.

code:
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=ansi
Usage: mysqldump [OPTIONS] database [tables]
OR     mysqldump [OPTIONS] --databases [OPTIONS] DB1 [DB2 DB3...]
OR     mysqldump [OPTIONS] --all-databases [OPTIONS]
For more options, use mysqldump --help
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=mysql323
Invalid mode to --compatible: mysql323
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=mysq40
Invalid mode to --compatible: mysq40
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=postgresql
Invalid mode to --compatible: postgresql
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=oracle
Invalid mode to --compatible: oracle
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=mssql
Invalid mode to --compatible: mssql
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=db2
Invalid mode to --compatible: db2
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=maxdb
Invalid mode to --compatible: maxdb
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=no_key_options
Invalid mode to --compatible: no_key_options
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=no_table
Invalid mode to --compatible: no_table
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=no_table_options
Invalid mode to --compatible: no_table_options
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=no_field_options
Invalid mode to --compatible: no_field_options
Suggestions? The tables themselves are quite simple so I might just hack through them and use that as a way of learning about the differences between the two.

Mark Larson
Dec 27, 2003

Interesting...
Any reason to not just export to CSV and then import via pg_dump?

Sad Panda
Sep 22, 2004

I'm a Sad Panda.

Mark Larson posted:

Any reason to not just export to CSV and then import via pg_dump?

I didn't know it was possible.

I ended up resolving it by getting rid of MySQL 8, installing MySQL 5.7, re-creating the structure there and then pgloader worked perfectly.

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



Sad Panda posted:

I didn't know it was possible.

I ended up resolving it by getting rid of MySQL 8, installing MySQL 5.7, re-creating the structure there and then pgloader worked perfectly.

:wtc: lol MySQL

kiwid
Sep 30, 2013

What are the best GUIs for postgres out there?

Shy
Mar 20, 2010

DataGrip is nice.

kiwid
Sep 30, 2013

Shy posted:

DataGrip is nice.

I've been eyeing this for a while. Guess I'll check it out.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Also, at least some JetBrains IDEs have a lot (some? a little? I don't know) of DataGrip's functionality built-in. PyCharm, for example.

Janitor Prime
Jan 22, 2004

PC LOAD LETTER

What da fuck does that mean

Fun Shoe

kiwid posted:

What are the best GUIs for postgres out there?

Work pays for Navicat, which I like and which works with multiple DBs.

Sad Panda
Sep 22, 2004

I'm a Sad Panda.

Yup. It's like how the regular version of MySQLWorkbench still can't connect to a MySQL 8 database because they changed the encryption format and it doesn't support it yet, or something. All nonsense.

I've nearly finished migrating. PostgreSQL seems to treat my byte strings differently. Reading them in Python, they came back as bytes with MySQL, but with PostgreSQL they come back as memoryview, because ??.

b'gAAAAABbWL0EpMokI1IbCKcew==' is my byte string that I'm trying to put in the database. I'm storing it in a field of type bytes.
This SQL is failing although it worked just fine on MySQL. I've not quite worked out why.

SQL code:
UPDATE Banks SET LoginInformation = "b\\'gAAAAABbWL0EpMokI1IbCKceww==\\'" WHERE ScriptName = 'banks.paypal';
If I run that, it says the byte string doesn't exist as a column (of course) but then I've not worked out how exactly it wants it formatting.

If I try to use

SQL code:
UPDATE Banks SET LoginInformation = b'gAAAAABbWL0EpM_cew==' WHERE ScriptName = 'banks.paypal';
It doesn't work because it says that g is not a valid binary digit.

Sad Panda fucked around with this message at 19:36 on Jul 25, 2018

Gatac
Apr 22, 2008

Fifty Cent's next biopic.
Wild guess from an Oracle user, but use single quotes around the entire string and escape by doubling them up?

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

Sad Panda posted:

Yup. It's like how the regular version of MySQLWorkbench still can't connect to a MySQL 8 database because they changed the encryption format and it doesn't support it yet, or something. All nonsense.

I've nearly finished migrating. PostgreSQL seems to treat my byte strings differently. Reading them in Python, they came back as bytes with MySQL, but with PostgreSQL they come back as memoryview, because ??.

b'gAAAAABbWL0EpMokI1IbCKcew==' is my byte string that I'm trying to put in the database. I'm storing it in a field of type bytes.
This SQL is failing although it worked just fine on MySQL. I've not quite worked out why.

SQL code:
UPDATE Banks SET LoginInformation = "b\\'gAAAAABbWL0EpMokI1IbCKceww==\\'" WHERE ScriptName = 'banks.paypal';
If I run that, it says the byte string doesn't exist as a column (of course) but then I've not worked out how exactly it wants it formatting.

If I try to use

SQL code:
UPDATE Banks SET LoginInformation = b'gAAAAABbWL0EpM_cew==' WHERE ScriptName = 'banks.paypal';
It doesn't work because it says that g is not a valid binary digit.
Did you mean the datatype is bytea? You probably need to convert your base64 string into hex or binary first; b'string' notation is for bit-strings, and expects a binary string.
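
Something along these lines might do it (untested; assumes the column really is bytea and you drop the b'...' wrapper from the Python repr first):

SQL code:
-- decode() turns base64 text into bytea; note Fernet tokens are urlsafe base64,
-- so any - or _ characters would need swapping to + and / first
UPDATE Banks
SET LoginInformation = decode('gAAAAABbWL0EpMokI1IbCKceww==', 'base64')
WHERE ScriptName = 'banks.paypal';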

Sad Panda
Sep 22, 2004

I'm a Sad Panda.

anthonypants posted:

Did you mean the datatype is bytea? You probably need to convert your base64 string into hex or binary first; b'string' notation is for bit-strings, and expects a binary string.

Yes, my table has it storing in the bytea format. It was in Blob format under MySQL.

Python uses some crypto to generate a token and outputs in bytes format. The way that it needs it back is b'gAAAAABbWMemnqAyGm2nJqU-gBqmAxPc2wfKNw' else it won't decode and give me my password.

I managed to get the following SQL to execute and it stored in a way that could be decoded without problem.

SQL code:
UPDATE Banks SET LoginInformation = E'b\'gAAAAABbWMemnqAyGm2nJqU-gBqmAxPc2wfKNw==\'' WHERE ScriptName = 'banks.paypal';'
However, Python doesn't have those 2 escape characters in it so it's not working. Oh joy.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Sad Panda posted:

Yes, my table has it storing in the bytea format. It was in Blob format under MySQL.

Python uses some crypto to generate a token and outputs in bytes format. The way that it needs it back is b'gAAAAABbWMemnqAyGm2nJqU-gBqmAxPc2wfKNw' else it won't decode and give me my password.

I managed to get the following SQL to execute and it stored in a way that could be decoded without problem.

SQL code:
UPDATE Banks SET LoginInformation = E'b\'gAAAAABbWMemnqAyGm2nJqU-gBqmAxPc2wfKNw==\'' WHERE ScriptName = 'banks.paypal';'
However, Python doesn't have those 2 escape characters in it so it's not working. Oh joy.

I don't know the extent or complexity of your data, but I think I'd create your Django models for your data and then just write a script to read the data from your old database and create your Django models from that.

That way you circumvent all the stuff you're going through here.

(It's possible you'll have other, different issues on this path, but I don't think that's going to be the case)

Mark Larson
Dec 27, 2003

Interesting...

kiwid posted:

What are the best GUIs for postgres out there?

After wrestling with the abortion that is current pgAdmin, I decided to run a docker container of Metabase for querying needs. Redash works too.

Janitor Prime
Jan 22, 2004

PC LOAD LETTER

What da fuck does that mean

Fun Shoe
Thermopyle is right, you just need to store the thing as a BinaryField and then, if you need to represent that as a Base64-encoded string, you can do that in a separate getter method in your object.

e: You could also just change it to a TEXT type and store it as that Base64 encoded string from the start
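
Something like this, roughly (a sketch with made-up names, not your actual schema):

Python code:
from django.db import models


class Bank(models.Model):
    # BinaryField maps to bytea on PostgreSQL
    script_name = models.CharField(max_length=100)
    login_information = models.BinaryField()

    @property
    def login_information_str(self) -> str:
        # the Fernet token is already base64 text, so decoding the raw bytes as
        # ASCII hands back the 'gAAAA...' string the crypto library expects
        return bytes(self.login_information).decode('ascii')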

Sad Panda
Sep 22, 2004

I'm a Sad Panda.

Thermopyle posted:

I don't know the extent or complexity of your data, but I think I'd create your Django models for your data and then just write a script to read the data from your old database and create your Django models from that.

That way you circumvent all the stuff you're going through here.

(It's possible you'll have other, different issues on this path, but I don't think that's going to be the case)

The data all moved across fine and is being read into my Django models too. I've got a supremely ugly frontend that just displays a picture and balance from the database for each of my accounts. That part seems to be working well enough now. I need to write more functionality, but that's for the Django thread.

That UPDATE statement is used to update login information. This script only has ~15 logins, so I use it to work out the teething issues before I move the same logic over to my other project, which works the same way but has a lot more logins. As you and Janitor Prime are saying, I maybe need to consider storing it in a different format. It's just frustrating, because the current logic worked perfectly with MySQL before I decided to use PostgreSQL instead.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Sad Panda posted:

The data all moved across fine and is being read into my Django models too. I've got a supremely ugly frontend that just displays a picture and balance from the database for each of my accounts. That part seems to be working well enough now. I need to write more functionality, but that's for the Django thread.

That UPDATE statement is used to update login information. This script only has ~15 logins, so I use it to work out the teething issues before I move the same logic over to my other project, which works the same way but has a lot more logins. As you and Janitor Prime are saying, I maybe need to consider storing it in a different format. It's just frustrating, because the current logic worked perfectly with MySQL before I decided to use PostgreSQL instead.

I feel like I'm not understanding you completely (which isn't surprising...I'm feeling kinda thick today).

While I use raw SQL statements in Django fairly frequently, I don't see any reason you need to be writing any SQL here...you can just use the Django ORM via Python.

Sad Panda
Sep 22, 2004

I'm a Sad Panda.

Thermopyle posted:

I feel like I'm not understanding you completely (which isn't surprising...I'm feeling kinda thick today).

While I use raw SQL statements in Django fairly frequently, I don't see any reason you need to be writing any SQL here...you can just use the Django ORM via Python.

I'm probably just explaining myself badly.

A quick overview of the project might help. It's a piece of software that scrapes account balances and stores them in a database. There are 2 relevant tables in the database: one for information about each bank, and one for each account. Each account has its login information stored in the database in an encrypted form, using Python's cryptography library and a Fernet key that's stored in my computer's keyring (hashing not an option). Instead of encrypting each of the important items individually, I created a list which is formatted as such.
Python code:
['Account name', ('username', 'password'), 'script name']
I pickle that list and encrypt the pickle using f.encrypt from https://cryptography.io/en/latest/fernet/ which outputs a base64-encoded message that Python stores as b'whatever'.
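
Roughly this, as a sketch (not the real script; the key actually lives in the keyring rather than being generated inline):

Python code:
import pickle
from cryptography.fernet import Fernet

f = Fernet(Fernet.generate_key())

login = ['Account name', ('username', 'password'), 'script name']
token = f.encrypt(pickle.dumps(login))            # -> b'gAAAA...' (urlsafe base64 bytes)
assert pickle.loads(f.decrypt(token)) == login    # round-trips back to the original list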

Passwords change, so I have a method that passes that list through the process and writes the b'whatever' to my database. That is what the SQL command earlier was trying to do. This project exists outside Django, and I'd prefer for core functionality like that to work outside of the Django ORM. The method that I was trying to use is...

Python code:
def update_login(cur, login_information, script_name):
    """ Executes the SQL to insert logins """
    insert_SQL = f"UPDATE Banks SET LoginInformation = {login_information} WHERE ScriptName = '{script_name}';"
    logging.info('Updating new login')
    cur.execute(insert_SQL)
The problem PostgreSQL has that MySQL doesn't is that with MySQL I could use double quotes to encapsulate my b'whatever' and it would be stored just fine. PostgreSQL doesn't like double quotes for values (it treats them as column names).

That got a bit long, but hopefully it makes sense. I'm a coding novice, so there have almost certainly been a number of bad choices and baffling decisions along the way in the logic I just explained.


edit - Writing it out, I started looking at the docs and think that instead of storing b'whatever', I can store token.decode() (i.e. just 'whatever') and then call .encode() on it as an extra step when reading it back, which should resolve the issues.
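
If that pans out, the round trip might look something like this (untested sketch; assumes a psycopg2 cursor and that LoginInformation is now a TEXT column, with parameter binding sidestepping the quoting mess entirely):

Python code:
def store_token(cur, token: bytes, script_name: str) -> None:
    # store 'whatever' (plain text), not the b'whatever' repr
    cur.execute(
        "UPDATE Banks SET LoginInformation = %s WHERE ScriptName = %s;",
        (token.decode(), script_name),
    )

def load_token(cur, script_name: str) -> bytes:
    cur.execute(
        "SELECT LoginInformation FROM Banks WHERE ScriptName = %s;",
        (script_name,),
    )
    return cur.fetchone()[0].encode()   # back to bytes for Fernet to decrypt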

Sad Panda fucked around with this message at 22:12 on Jul 25, 2018

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

You'll find you have an easier time if you structure your database around Django.

If you want to be able to access your data outside of Django, you can read the tables and stuff that Django creates with raw SQL any time you want.

The way you're doing it now, you'll have a bunch of fragile SQL that breaks as soon as you take advantage of stuff like Django's migrations.

If you add a field or a model or reorganize your data model in any way, you have to go around and edit a bunch of SQL strings everywhere.

If you're using Django's ORM and you add a column, rename a model, change the type of a column...Django will automatically update your database schema with migrations, and your IDE will automatically refactor your code.

Scaramouche
Mar 26, 2001

SPACE FACE! SPACE FACE!

Long time no post; I wouldn't call this an urgent issue since I know I can get something out of it, but I thought I'd check how you guys would approach this. I basically have a bunch of denormalized data in three columns. The columns are:
code:
Product | Component | Quantity
The first column is a product, the second column is a component used to build that product, and the third column is how much of that component is required. So something like:
code:
Product | Component  | Quantity
Car     | Tires      | 4
Car     | Windshield | 1
Car     | GasCap     | 1
Car     | Battery    | 1
Except, further on down the table you might see one of the components also having its parts broken out:
code:
Product | Component | Quantity
Battery | Wires     | 5
Battery | Acid      | 10
Battery | Casing    | 1
And then even further down, one of those components (say, Casing) can also have sub-parts, and so on and so on. What I need to get out of the data is to explode out every unique entry in the Product column with all of its parts, including sub-parts. So like:
code:
Car
 -> Tires
 -> Windshield
 -> GasCap
 -> Battery
     -> Wires
     -> Acid
     -> Casing
          -> Walls
          -> Lid
          -> Terminals
             -> Red Terminal
             -> Black Terminal
I'm thinking either a crazy set of self joins or a CTE but figured I'd pick your guys' brains before going too far down any one path.

PhantomOfTheCopier
Aug 13, 2008

Pikabooze!

Scaramouche posted:

So like:
code:
Car
 -> Tires
 -> Windshield
 -> GasCap
 -> Battery
     -> Wires
     -> Acid
     -> Casing
          -> Walls
          -> Lid
          -> Terminals
             -> Red Terminal
             -> Black Terminal
I'm thinking either a crazy set of self joins or a CTE but figured I'd pick your guys' brains before going too far down any one path.

Recursive CTE. It takes a little bit of playing; eg you'll add a few fields to track the "overall path" and/or depth level depending on your aggregation/reporting needs.
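
Something in this shape, maybe (T-SQL; the table name is a guess, columns are from your post, and the important bit is seeding the anchor with one product instead of the whole table):

SQL code:
WITH Parts AS (
    -- anchor: the top-level product you want to explode
    SELECT Product, Component, Quantity,
           CAST(Product + '->' + Component AS VARCHAR(255)) AS Path,
           1 AS Depth
    FROM   BOM
    WHERE  Product = 'Car'
    UNION ALL
    -- recursive member: components of the components we already found
    SELECT b.Product, b.Component, b.Quantity,
           CAST(p.Path + '->' + b.Component AS VARCHAR(255)),
           p.Depth + 1
    FROM   BOM b
    INNER JOIN Parts p ON b.Product = p.Component
)
SELECT * FROM Parts ORDER BY Path;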

Data Graham
Dec 28, 2009

📈📊🍪😋



Sad Panda posted:

I've got data in MySQL 8.0.11 now. I've decided that as I'm going to use Django I would rather move to PostgreSQL. This is proving super fun.

I've tried using pgloader to directly move across but get the error documented here https://github.com/dimitri/pgloader/issues/782 even having changed the authentication method as the dev suggested.
My idea now is to use mysqldump, which works fine until I try to use the --compatible option, which is documented at https://dev.mysql.com/doc/refman/5.5/en/mysqldump.html#option_mysqldump_compatible and which, to my mind, is clearly lies. I decided to test them all out, and it turns out that only ansi works. Or I'm a moron and doing it wrong.

code:
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=ansi
Usage: mysqldump [OPTIONS] database [tables]
OR     mysqldump [OPTIONS] --databases [OPTIONS] DB1 [DB2 DB3...]
OR     mysqldump [OPTIONS] --all-databases [OPTIONS]
For more options, use mysqldump --help
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=mysql323
Invalid mode to --compatible: mysql323
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=mysq40
Invalid mode to --compatible: mysq40
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=postgresql
Invalid mode to --compatible: postgresql
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=oracle
Invalid mode to --compatible: oracle
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=mssql
Invalid mode to --compatible: mssql
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=db2
Invalid mode to --compatible: db2
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=maxdb
Invalid mode to --compatible: maxdb
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=no_key_options
Invalid mode to --compatible: no_key_options
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=no_table
Invalid mode to --compatible: no_table
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=no_table_options
Invalid mode to --compatible: no_table_options
My-MacBook-Pro:Downloads sadpanda$ mysqldump --compatible=no_field_options
Invalid mode to --compatible: no_field_options
Suggestions? The tables themselves are quite simple so I might just hack through them and use that as a way of learning about the differences between the two.

For future reference you should also look at the built-in dumpdata and loaddata scripts in Django. Their whole purpose is to export/import in json in a backend-agnostic way so you can move from one DB type to another seamlessly.
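
The basic dance is something like this, from memory (double-check the flags against the docs for your Django version):

code:
# with settings.DATABASES still pointing at the old MySQL database
python manage.py dumpdata --natural-foreign --natural-primary > data.json
# repoint DATABASES at Postgres, create the schema, then load it back in
python manage.py migrate
python manage.py loaddata data.json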

Sad Panda
Sep 22, 2004

I'm a Sad Panda.

Thermopyle posted:

You'll find you have an easier time if you structure your database around Django.

If you want to be able to access your data outside of Django, you can read the tables and stuff that Django creates with raw SQL any time you want.

The way you're doing it now, you'll have a bunch of fragile SQL that breaks as soon as you take advantage of stuff like Django's migrations.

If you add a field or a model or reorganize your data model in any way, you have to go around and edit a bunch of SQL strings everywhere.

If you're using Django's ORM and you add a column, rename a model, change the type of a column...Django will automatically update your database schema with migrations, and your IDE will automatically refactor your code.

What do you mean about structuring my database around Django? At the moment the way it is structured, I created a database called project_scrape and then there is a schema called banking which contains the imported data from the previous incarnation and all the things that Django created. I used inspectdb to auto-generate the models based on those tables.

As far as my earlier problem, I wrote a script to convert all my passwords from binary format to text which hopefully makes this better. My plan from here on is to integrate my original code into Django as at the moment all my Django does is read from the database and display some of the information prettily. Any advice on the best way to integrate it?

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Sad Panda posted:

What do you mean about structuring my database around Django? At the moment the way it is structured, I created a database called project_scrape and then there is a schema called banking which contains the imported data from the previous incarnation and all the things that Django created. I used inspectdb to auto-generate the models based on those tables.

As far as my earlier problem, I wrote a script to convert all my passwords from binary format to text which hopefully makes this better. My plan from here on is to integrate my original code into Django as at the moment all my Django does is read from the database and display some of the information prettily. Any advice on the best way to integrate it?

For some reason I thought you were not using the ORM. If that's not the case, there's no benefit for you to write raw SQL like you did in that function you shared.
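
e.g. the update_login function from a few posts back boils down to something like this with the ORM (model/field names guessed from what inspectdb would have generated):

Python code:
from banking.models import Banks   # hypothetical app/model names

def update_login(login_information: str, script_name: str) -> None:
    # same effect as the raw UPDATE, minus the quoting headaches
    Banks.objects.filter(scriptname=script_name).update(
        logininformation=login_information,
    )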


Scaramouche
Mar 26, 2001

SPACE FACE! SPACE FACE!

PhantomOfTheCopier posted:

Recursive CTE. It takes a little bit of playing; eg you'll add a few fields to track the "overall path" and/or depth level depending on your aggregation/reporting needs.

Hmm, I'm not having a lot of luck with this, but I'm kind of half-assing it between other calls/tasks. It seems like if there is only one layer of nesting it just returns the Parent -> Components, but if there are multiple layers (e.g. one of the components is also a parent further down the list) I run into Maximum Recursion problems. So this:
code:
;WITH CTE
AS
(SELECT  ParentSKU,ChildSKU,CAST('thing' AS VARCHAR(255)) as thing
    FROM    BOM
    UNION ALL
   SELECT  C.ParentSKU,C.ChildSKU,CAST(C.ParentSKU + '->' + C.ChildSKU AS VARCHAR(255)) as tthing
   FROM    CTE AS  C
        INNER JOIN  BOM  AS  T   ON  T.ChildSKU =   C.ParentSKU
)
SELECT * FROM CTE where parentsku='SIMPLEPRODUCT'
Works (but my path is only 'thing'), but if SIMPLEPRODUCT is changed to COMPLEXPRODUCT it quickly Maxes out Recursion. I've upped OPTION (MAXRECURSION xxxxxx) to as high as it'll go but still the same. I'm not sure if I'm messing up the CTE or there are infinite recursions present in the data.
