|
Paul MaudDib posted:Are there any open source/freeware schema modeling/migration tools available? I'd be looking at Postgres primarily. DBeaver has some schema modelling built into it and it's free. Are you looking for schema migration or data migration? For data migration I'd use an ETL tool like Pentaho Data Integration. I think Pentaho has some schema modelling tools as well.
|
# ¿ Feb 25, 2022 22:52 |
|
|
I would simply not have security holes in the first place, then my ids don't matter.
|
# ¿ Mar 10, 2022 05:51 |
|
Hate being forced to work with MySQL. It's such a garbage db. Has it improved? Sure. Is it good now? No.
|
# ¿ Mar 10, 2022 19:37 |
|
CmdrRiker posted:I'm still pissed at the previous versions of mysql that allowed you to store a null value in a NOT NULL column if you simply weren't running it in strict mode. Slightly better than the silent data truncation it used to do.
|
# ¿ Mar 11, 2022 01:33 |
|
abelwingnut posted:why postgres over mysql or any other rdbms? Reliability, performance, features, data types, extensibility and plugin availability, etc... One of the big features is that it can be used as a document store similar to MongoDB and others, along with all the standard functionality of a real relational database, which those others don't generally offer. e: I've worked with Postgres, MySQL, MSSQL, Oracle, Sybase and other older poo poo nobody cares about; MySQL is the only one of those that really irritates me for multiple reasons. RandomBlue fucked around with this message at 01:42 on Mar 11, 2022 |
# ¿ Mar 11, 2022 01:40 |
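The "document store" point above can be sketched quickly. This is a minimal illustration, not Postgres syntax: Postgres would use a JSONB column with the -> / ->> operators and GIN indexes, while here SQLite's JSON1 json_extract() stands in so the snippet runs anywhere Python does. The `video` table and its fields are made up for the example.

```python
import sqlite3

# Sketch of the "document store inside an RDBMS" idea. Postgres would use a
# JSONB column; SQLite's JSON1 json_extract() stands in here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE video (id INTEGER PRIMARY KEY, meta TEXT)")
conn.execute("INSERT INTO video (meta) VALUES (?)",
             ('{"title": "Alien", "tags": ["horror", "scifi"]}',))
conn.execute("INSERT INTO video (meta) VALUES (?)",
             ('{"title": "Heat", "tags": ["crime"]}',))

# Query into the document just like a relational column.
rows = conn.execute(
    "SELECT json_extract(meta, '$.title') FROM video "
    "WHERE json_extract(meta, '$.tags[0]') = 'horror'"
).fetchall()
print(rows)  # [('Alien',)]
```

You get schemaless storage where you want it, plus real joins, constraints and transactions around it.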
|
ORMs are fine until they aren't.
|
# ¿ Mar 11, 2022 14:47 |
|
22 years later and I'm still mad about the DBA who would not let us use surrogate keys AT ALL in a telephone billing system built to process millions of rows per day. At the deepest level of a given relationship the PK was 18 columns and ~250 chars in TYOOL 2000. It took about 8 hours to process the data and writing queries was SO loving PAINFUL. Didn't matter how many articles I sent her or how I pled my case; when I asked why, her response was "because of my 8 years of DBA experience". Closest I've come to just going ballistic at work; had to walk away before I blew up. I will never not be mad about it.
|
# ¿ Mar 11, 2022 21:42 |
|
Unless I misread, he said the larger result set was fast and the 77k result set is slow even though the field is indexed. e: though it may just be that the first chunk of results from the larger query comes back fast and the full result set was never actually loaded.
|
# ¿ Mar 27, 2022 14:08 |
|
Why are you trying to paginate through 8 million rows, 10 at a time (or even 77k)? Seems like you should have some other indexed column to filter that further, because that's an unusable amount of data in a UI.
RandomBlue fucked around with this message at 14:33 on Mar 27, 2022 |
# ¿ Mar 27, 2022 14:30 |
|
I take it you're ordering the result set properly so paginating works correctly. Do you have an index that includes the ordered columns first and any other filtered columns after, so it can just use the index for ordering? That should allow it to return the first page very quickly. e: Make sure the columns in the index are ordered (ASC/DESC) the way they are in your query. RandomBlue fucked around with this message at 14:46 on Mar 27, 2022 |
# ¿ Mar 27, 2022 14:42 |
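A minimal sketch of that index advice, using Python's sqlite3 and made-up `event`/`account_id` names. Following the post's approach, the index leads with the ORDER BY column and puts the filter column after it, so the engine can walk the index in order and stop after one page instead of sorting everything.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE event (id INTEGER PRIMARY KEY, created_at TEXT, account_id INTEGER)"
)
conn.executemany(
    "INSERT INTO event (created_at, account_id) VALUES (?, ?)",
    [(f"2022-03-{d:02d}", d % 3) for d in range(1, 31)],
)

# Index leads with the ORDER BY column (matching its direction), then the
# filter column, so the planner can walk the index in order and stop early.
conn.execute(
    "CREATE INDEX idx_event_created_account ON event (created_at DESC, account_id)"
)

page = conn.execute(
    "SELECT id, created_at FROM event WHERE account_id = ? "
    "ORDER BY created_at DESC LIMIT 10 OFFSET 0",
    (1,),
).fetchall()
print(page[0])  # newest matching row first
```

With the index direction matching the query's ORDER BY, the first page comes straight off the index.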
|
Hammerite posted:I don't like EXISTS and NOT EXISTS. I prefer to use a join, because it's less complicated. They don't serve the same purpose. Sure, you can use a join to work like EXISTS, but it's not the same, the performance isn't the same, and if the table you're joining to has multiple matching rows but you're not selecting from that table you'll have to DISTINCT the output of the query. e: assuming you weren't joking
|
# ¿ Jun 18, 2022 19:10 |
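The duplicate-row point is easy to see in a toy schema. A sketch with sqlite3 and invented `customer`/`orders` tables: the JOIN repeats the customer once per matching order, while EXISTS acts as a semi-join and never needs DISTINCT.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
INSERT INTO customer VALUES (1, 'Ada'), (2, 'Bob');
INSERT INTO orders VALUES (10, 1), (11, 1);  -- Ada has two orders, Bob none
""")

# JOIN duplicates the customer row once per matching order.
join_rows = conn.execute(
    "SELECT c.name FROM customer c JOIN orders o ON o.customer_id = c.id"
).fetchall()

# EXISTS is a semi-join: each customer appears at most once, no DISTINCT needed.
exists_rows = conn.execute(
    "SELECT c.name FROM customer c WHERE EXISTS "
    "(SELECT 1 FROM orders o WHERE o.customer_id = c.id)"
).fetchall()

print(join_rows)    # [('Ada',), ('Ada',)]
print(exists_rows)  # [('Ada',)]
```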
|
abelwingnut posted:exists also handles NULL fields better. loving NULL
|
# ¿ Jun 18, 2022 19:56 |
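One concrete way NULL bites here is NOT IN versus NOT EXISTS. A sketch with sqlite3 and made-up tables: a single NULL in the subquery makes NOT IN silently return nothing, because every comparison against NULL is UNKNOWN, while NOT EXISTS behaves as expected.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (id INTEGER PRIMARY KEY);
CREATE TABLE banned (person_id INTEGER);  -- nullable on purpose
INSERT INTO person VALUES (1), (2), (3);
INSERT INTO banned VALUES (2), (NULL);
""")

# NOT IN against a set containing NULL: every comparison is UNKNOWN,
# so the query silently returns nothing.
not_in = conn.execute(
    "SELECT id FROM person WHERE id NOT IN (SELECT person_id FROM banned)"
).fetchall()

# NOT EXISTS sidesteps three-valued logic and returns the expected rows.
not_exists = conn.execute(
    "SELECT id FROM person WHERE NOT EXISTS "
    "(SELECT 1 FROM banned b WHERE b.person_id = person.id)"
).fetchall()

print(not_in)      # []
print(not_exists)  # [(1,), (3,)]
```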
|
For your many to many relationship between video and genre you need a junction table, not a FK to the genre table, as that would only allow for one genre per video. Also, you're combining singular and plural table names; pick either singular or plural. Personally I prefer singular because every table can contain multiple rows/entities and saying "video table" already indicates plurality, so making the table name plural is just unnecessary IMO. "Video_Library" is also just another form of pluralizing the table name. I'd name them:

video
genre
video_genre
age_rating
rating (or review_score, review_rating, contact_rating)
contact

e: also to avoid age_rating.age_rating I'd change that column name to `name` or `description`. RandomBlue fucked around with this message at 16:27 on Jun 24, 2022 |
# ¿ Jun 24, 2022 16:23 |
|
Junction tables are a case where using a composite PK is generally acceptable. Typically you wouldn't have a separate PK there, you'd have a composite PK made up of `genre_id` and `video_id` that are also FK's to the other tables.
|
# ¿ Jun 24, 2022 17:59 |
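A sketch of that junction-table layout in sqlite3, reusing the video/genre names from the earlier post: the composite PK is just the pair of FKs, and it rejects duplicate pairings for free.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE video (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE genre (id INTEGER PRIMARY KEY, name TEXT);
-- Junction table: composite PK of the two FKs, no surrogate key needed.
CREATE TABLE video_genre (
    video_id INTEGER NOT NULL REFERENCES video(id),
    genre_id INTEGER NOT NULL REFERENCES genre(id),
    PRIMARY KEY (video_id, genre_id)
);
INSERT INTO video VALUES (1, 'Alien');
INSERT INTO genre VALUES (1, 'horror'), (2, 'scifi');
INSERT INTO video_genre VALUES (1, 1), (1, 2);  -- one video, many genres
""")

# The composite PK rejects duplicate pairings automatically.
try:
    conn.execute("INSERT INTO video_genre VALUES (1, 1)")
    dup_rejected = False
except sqlite3.IntegrityError:
    dup_rejected = True
print(dup_rejected)  # True
```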
|
Hughmoris posted:Question about SQL Server and Transactions, as I seek to better understand them... They're part of the same transaction, so that select query reads the new state. Everything between BEGIN TRANSACTION and COMMIT or ROLLBACK TRANSACTION shares that transaction and the associated changes.
|
# ¿ Aug 10, 2022 00:51 |
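The original question is about SQL Server, but the behavior is the same in any transactional database. A sketch with Python's sqlite3 (`isolation_level=None` for manual transaction control, table name made up): the SELECT inside the transaction sees the uncommitted UPDATE, and ROLLBACK discards it.

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode so we can
# issue BEGIN/ROLLBACK ourselves.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 100)")

# Inside the transaction, the SELECT sees the uncommitted UPDATE because
# statements in the same transaction share its view of the data.
conn.execute("BEGIN")
conn.execute("UPDATE account SET balance = balance - 40 WHERE id = 1")
inside = conn.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0]
conn.execute("ROLLBACK")

# After ROLLBACK the change is gone for everyone, including this connection.
after = conn.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0]
print(inside, after)  # 60 100
```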
|
Seventh Arrow posted:I could be totally wrong on this, but wouldn't you also have to do a nolock on the SELECT query? Nope.
|
# ¿ Aug 10, 2022 03:46 |
|
Condimentalist posted:Kind of random question: Has anyone taken the oracle database foundations 1z0-006 exam? I am trying to test out of some college classes; I have been using ANSI sql and t-sql for ten years but I haven’t touched oracle. Until you get to custom stored procs, functions and packages it's not drastically different from other SQL. Or wasn't when I worked with it for almost 10 years in the 90's and 00's and I doubt they've gotten much worse in that regard. You might take a look at their basic syntax for those and do a trial test if there is one. I haven't taken their exam though.
|
# ¿ Aug 10, 2022 04:37 |
|
Condimentalist posted:I don’t think I said it did? You are misunderstanding how COUNT(*) works vs COUNT(col). https://learnsql.com/blog/difference-between-count-distinct/ RandomBlue fucked around with this message at 05:06 on Feb 5, 2023 |
# ¿ Feb 5, 2023 05:02 |
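The COUNT(*) vs COUNT(col) difference from that link, shown in a few lines of sqlite3 with a made-up `contact` table: COUNT(*) counts rows, COUNT(email) skips NULLs, COUNT(DISTINCT email) also collapses duplicates.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE contact (id INTEGER PRIMARY KEY, email TEXT);
INSERT INTO contact (email) VALUES ('a@x.com'), (NULL), ('a@x.com'), (NULL);
""")

row = conn.execute(
    "SELECT COUNT(*), COUNT(email), COUNT(DISTINCT email) FROM contact"
).fetchone()
print(row)  # (4, 2, 1): all rows, non-NULL emails, distinct non-NULL emails
```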
|
Seventh Arrow posted:If you want to break into data engineering, the things you should prioritize the most are: python? you mean the money laundering language?
|
# ¿ Jul 31, 2023 20:35 |
|
redleader posted:"can you hop on a call? a client is having problems connecting to the api" only the best code for SA
|
# ¿ Nov 6, 2023 19:28 |
|
teen phone cutie posted:When I remove the last outer join, the query goes from 2 seconds to 100ms, so I'm 100% certain it's the problem. Unless, you're suggesting I try something else? How does that last query in the outer join perform by itself, for a single poster id or for all posters, since you're using it that way? Have you run an explain plan on it? e: Oh, just read what that lateral join does; that's likely the problem. I don't see why you can't just use a standard outer join here with grouping, that should still work with pagination. RandomBlue fucked around with this message at 02:45 on Jan 15, 2024 |
# ¿ Jan 15, 2024 02:40 |
|
kiwid posted:Can someone help me understand what is happening in this WHERE clause? Yes, though my guess is the left side of all those NULL comparisons is supposed to be a column name: some parameter that was supposed to be passed in is being passed as NULL instead of a string, and you get this. Either that or it's ORM-generated nonsense, though there aren't any of the mangled aliases you usually get with ORM-generated queries. RandomBlue fucked around with this message at 22:41 on Jan 31, 2024 |
# ¿ Jan 31, 2024 22:07 |
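If that really is a parameter coming in as NULL, the intentional version of the pattern is the optional filter below. A sketch in sqlite3 with an invented `item` table: when the parameter is NULL the predicate is always true and the filter is skipped, otherwise it applies normally.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE item (id INTEGER PRIMARY KEY, category TEXT);
INSERT INTO item (category) VALUES ('book'), ('toy'), ('book');
""")

# Optional-filter pattern: a NULL parameter makes the predicate always true,
# so the filter is skipped; a non-NULL value filters as usual.
sql = "SELECT COUNT(*) FROM item WHERE (:cat IS NULL OR category = :cat)"
all_rows = conn.execute(sql, {"cat": None}).fetchone()[0]
books = conn.execute(sql, {"cat": "book"}).fetchone()[0]
print(all_rows, books)  # 3 2
```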
|
mortarr posted:I just want to say I have done all this converting from datetime to datetimeoffset and feel your pain. I was the timezone "expert" for a couple of companies in the past, and I hope I never have to dick about with timezones ever again, they are by far the most complex and error-prone kind of calculations I've ever had to work with. I loving hate working with dates and times because there's always some new way poo poo gets hosed up. There are a shitload of formats even if timezones aren't an issue, but they always are. Storing it in UTC is the easiest part. If you control the input and can limit things there it's not so bad, but I work on a product that works with user provided CSVs, spreadsheets, DB sources, CRMs, etc., and of course we work with international customers so all that poo poo has to work together seamlessly.
|
# ¿ Mar 10, 2024 23:34 |
|
|
e: misread that post like an idiot and created 12 new datetime bugs e: One of the more fun ones I ran into was Java parsing a dd-mm-yyyy value as mm-dd-yyyy and when the month value was > 12, instead of throwing an error it would just wrap the year and subtract 12 from the month. RandomBlue fucked around with this message at 00:29 on Mar 12, 2024 |
# ¿ Mar 12, 2024 00:27 |
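That Java wraparound bug is a good argument for strict parsers. A sketch in Python's datetime: the same string parses fine under both formats when the day is 12 or less, which is how these mix-ups hide, but a strict parser fails loudly once the day can't be a month, instead of wrapping the year like the lenient parser described above.

```python
from datetime import datetime

# "05-03-2024" parses fine either way, which is how format mix-ups hide.
as_dmy = datetime.strptime("05-03-2024", "%d-%m-%Y")  # March 5
as_mdy = datetime.strptime("05-03-2024", "%m-%d-%Y")  # May 3

# A day > 12 makes the wrong format fail loudly instead of silently
# wrapping the month into the year.
try:
    datetime.strptime("25-03-2024", "%m-%d-%Y")
    strict = False
except ValueError:
    strict = True
print(as_dmy.month, as_mdy.month, strict)  # 3 5 True
```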