|
code:
SELECT user_id
FROM join_table a
LEFT JOIN user_table b ON a.user_id = b.user_id
WHERE b.user_id IS NULL
Is that the intent of what you want? All the IDs in the join table that are not in the original table? Add a DISTINCT if needed
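For anyone playing along at home, here's that anti-join as a runnable sqlite3 sketch. The table names follow the post above; the data is invented:

```python
import sqlite3

# LEFT JOIN ... WHERE right-side IS NULL = "rows in A with no match in B".
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user_table (user_id INTEGER);
    CREATE TABLE join_table (user_id INTEGER);
    INSERT INTO user_table VALUES (1), (2);
    INSERT INTO join_table VALUES (1), (2), (3), (3);
""")
orphans = [row[0] for row in conn.execute("""
    SELECT DISTINCT a.user_id
    FROM join_table a
    LEFT JOIN user_table b ON a.user_id = b.user_id
    WHERE b.user_id IS NULL
""")]
# orphans is [3]: the only id in join_table missing from user_table
```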
|
# ? Apr 25, 2017 11:59 |
|
|
# ? Jun 4, 2024 18:22 |
|
PhantomOfTheCopier posted:LIKE is not a regular expression operator at all; it's a wildcard-based, pattern literal equivalence operator. It always matches the entire string, supports matching any single character in a given position, or a sequence of any characters, and otherwise matches every character individually and exactly. Yeah LIKE to me was always "oh it's like DOS wildcards" and not "oh it's regex". Did that ever improve for MSSQL? I remember hacking in regex-like functionality by converting data types to xml so many times.
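The wildcard semantics described above are easy to poke at from Python's sqlite3, which uses the same `%` / `_` rules as standard SQL LIKE (and, like LIKE everywhere, matches the entire string):

```python
import sqlite3

# LIKE is wildcard matching, not regex: % matches any run of characters,
# _ matches exactly one, and the pattern must cover the whole string.
conn = sqlite3.connect(":memory:")

def like(value, pattern):
    return bool(conn.execute("SELECT ? LIKE ?", (value, pattern)).fetchone()[0])

whole_string = like("abc", "a")      # False: no implicit "starts with"
one_char     = like("abc", "a_c")    # True: _ is any single character
any_run      = like("abbbc", "a%c")  # True: % is any sequence, even empty
```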
|
# ? Apr 25, 2017 17:59 |
|
Scaramouche posted:Did that ever improve for MSSQL? I remember hacking in regex-like functionality by converting data types to xml so many times. Not to my knowledge.
|
# ? Apr 25, 2017 19:25 |
|
Roundboy posted:Select user_id from join_table a Sorta did this but instead of using a join I basically used CodeIgniter to make variables out of the SQL searches needed. It took far too long to figure out what was causing the PHP backend to spaz out (turns out CodeIgniter uses multidimensional arrays as its output, which causes no end of fun)
|
# ? Apr 25, 2017 20:42 |
|
Scaramouche posted:Yeah LIKE to me was always "oh it's like DOS wildcards" and not "oh it's regex". T-SQL still doesn't have native regexes, but a less hacky option would be to use the CLR integration to do regexes or whatever you need in your query. I imagine it would be several orders of magnitude slower than LIKE, of course.
|
# ? Apr 25, 2017 22:21 |
|
Student SQL project: Write a script that converts a regular expression into a collection of optimized SQL statements using Substr and Like.
PhantomOfTheCopier fucked around with this message at 00:03 on Apr 26, 2017 |
# ? Apr 26, 2017 00:00 |
|
NihilCredo posted:T-SQL still doesn't have native regexes, but a less hacky option would be to use the CLR integration to do regexes or whatever you need in your query. I imagine it would be several orders of magnitude slower than LIKE, of course. Yeah, been down that road. Unless it's improved the CLR mix-in was dog slow, though I haven't done it since MSSQL 2008R2.
|
# ? Apr 26, 2017 19:08 |
|
Oracle has regex support that I never had issue with until the other day when I needed to write a regex with a [] expression that matched a ]. []abc], that's how you match a character that is a,b,c or a ]. The ] has to be the very first character. And if you need to match a hyphen it must be the first or last character. I've never encountered that convention anywhere else and really didn't understand why.
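For what it's worth, the convention isn't Oracle-specific: Python's re module follows the same bracket-expression rules, so it's easy to experiment with outside the database:

```python
import re

# Same convention as described above: a literal ] must be the first
# character inside the class, and a literal - must be first or last.
matches_bracket = bool(re.fullmatch(r"[]abc]", "]"))   # ] first => literal
matches_letter  = bool(re.fullmatch(r"[]abc]", "b"))   # normal member
matches_hyphen  = bool(re.fullmatch(r"[abc-]", "-"))   # - last => literal
```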
|
# ? Apr 26, 2017 20:06 |
|
Sounds like POSIX regex, a kind of extension on the standard: http://www.regular-expressions.info/posixbrackets.html Pretty standard in most Unix-based implementations. You'll generally see non-POSIX in programming language regex implementations.
|
# ? Apr 26, 2017 20:38 |
|
Ha, I'm a big dummy.
|
# ? Apr 26, 2017 20:58 |
|
I'm trying to design a table that's a list of words, grouped by whether they are synonyms. So I have a field for the word itself, and a field for an integer that identifies the group. Words which are synonyms will have the same group number. The problem I'm having is that, when you need to add a word that isn't a synonym for any of the words already in the table, you need to get a new group number, and I don't know how to do this automatically. Obviously, I could get all the unique values from the group number field, look at the highest one, and just add 1 to it. But that seems like a lot of work for something that is so simple conceptually. Is there a much more efficient way to accomplish this?
|
# ? Apr 30, 2017 20:09 |
|
Snak posted:I'm trying to design a table that's a list of words, grouped by whether they are synonyms.
|
# ? Apr 30, 2017 20:23 |
|
Kuule hain nussivan posted:You could have a helper table for the group-ids, but that honestly seems like an even bigger waste to me. Yeah, that's what I thought, too. That might be the only good way to do it, though. edit: I mean, I guess it doesn't have to be an int or be human readable. I could just make it a varchar and make the group-id be the first word added to it. I don't want to do that though, because there is a possible case where a word later gets removed after synonyms were added. Then, if the word was re-added, it would end up automatically going back into the group, regardless of whether it was intended to. That's... really sloppy. edit2: It looks like I could just have an extra field... An auto-incrementing primary key, the word field, and the group-id. When a word is entered into an existing word-group, set the group-id to the existing group-id; if it's a new word-group, set the group-id to the newly-formed auto-increment (I could do this in a second query in my PHP logic). Now, in InnoDB, there could still be a problem if a high-number primary key gets deleted and a word with a lower ID somehow gets moved into that group. But that won't be an issue with MyISAM. And I can also just make it so that groups are never re-assigned, and re-assignment is only ever done by removing the entry and then making a new one. Snak fucked around with this message at 20:48 on Apr 30, 2017 |
# ? Apr 30, 2017 20:36 |
|
Snak posted:I'm trying to design a table that's a list of words, grouped by whether they are synonyms. If you don't already have a synonyms/groups/whatever table:
code:
CREATE TABLE dbo.Synonyms (
    ID INT PRIMARY KEY IDENTITY(1,1)
  , WordType nvarchar(50)
  , AddedDttm datetime
);
GO
Then somewhere in your sproc:
code:
IF NOT EXISTS (SELECT 1 FROM dbo.Synonyms WHERE WordType = @wordType)
BEGIN
    DECLARE @newIdentity int

    INSERT INTO dbo.Synonyms (WordType, AddedDttm)
    SELECT @wordType, current_timestamp

    SET @newIdentity = SCOPE_IDENTITY()
END

INSERT INTO dbo.Words (Word, synonymID, AddedDttm)
SELECT 'Word', @newIdentity, current_timestamp
Something like that is what I would do
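The same get-or-create-a-group-id idea, sketched in Python with sqlite3 for anyone not on MSSQL. Table and column names here are invented; a new group row is minted only when the word isn't a synonym of anything yet, and the fresh auto-increment id becomes the group-id:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE groups (id INTEGER PRIMARY KEY AUTOINCREMENT);
    CREATE TABLE words  (word TEXT PRIMARY KEY,
                         group_id INTEGER REFERENCES groups(id));
""")

def add_word(word, synonym_of=None):
    """Insert word; reuse the synonym's group, else mint a new group id."""
    if synonym_of is not None:
        group_id = conn.execute(
            "SELECT group_id FROM words WHERE word = ?", (synonym_of,)
        ).fetchone()[0]
    else:
        # New group: the auto-increment id of the helper row is the group id.
        group_id = conn.execute("INSERT INTO groups DEFAULT VALUES").lastrowid
    conn.execute("INSERT INTO words VALUES (?, ?)", (word, group_id))
    return group_id

g1 = add_word("big")
g2 = add_word("large", synonym_of="big")   # lands in the same group as "big"
g3 = add_word("small")                     # gets a fresh group
```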
|
# ? Apr 30, 2017 20:46 |
|
Snak posted:edit: I mean, I guess it doesn't have to be an int or be human readable. If you want to be lazy you could make the synonym key a UUID and generate a new one when you add a new non-synonym. NihilCredo fucked around with this message at 20:49 on Apr 30, 2017 |
# ? Apr 30, 2017 20:47 |
|
Hey all, hopefully a simple question: What's the best method to return two values from the same column from the same table, the second row related by a key pulled from the first? I'm dealing with Oracle SQL more and more and I'm sure this is simple but I haven't done it yet. Basically we have a table that has default object properties and then sub tables for object-specific info and we need to return the "name" of a user object related to another object "name" by the user who modified it.
|
# ? May 4, 2017 23:30 |
|
PierreTheMime posted:Hey all, hopefully a simple question: What's the best method to return two values from the same column from the same table, the second row related by a key pulled from the first? I'm dealing with Oracle SQL more and more and I'm sure this is simple but I haven't done it yet. I'm no Oracle expert, but if I were using SQL Server I'd probably use Lead or Lag. It looks like Oracle has something similar. https://oracle-base.com/articles/misc/lag-lead-analytic-functions.
|
# ? May 4, 2017 23:41 |
|
PierreTheMime posted:Hey all, hopefully a simple question: What's the best method to return two values from the same column from the same table, the second row related by a key pulled from the first? I'm dealing with Oracle SQL more and more and I'm sure this is simple but I haven't done it yet. I don't know precisely what you're looking to do here. If the second row is offset from the first by some fixed amount that can be worked out within an analytic function's window, then yeah, LEAD or LAG could do the trick. If there's a column in the first row that references a second row, you're probably looking for hierarchical queries.
|
# ? May 5, 2017 07:43 |
|
Gatac posted:I don't know precisely what you're looking to do here. If the second row is offset from the first by some fixed amount as can be worked out in the window of an analytic funtion's window, then yeah, LEAD or LAG could do the trick. If there's a column in the first row that references a second row, you're probably looking for hierarchical queries. This is for a system where all possible functions are objects. A script to execute is an object, a user is an object, a calendar is an object, etc. The main table contains all default object properties such as "object name" and an identifying number to use to cross reference with other tables for more specific information such as "user first name" which is stored in a separate user table. What I'm trying to do is search for an object type and find what user modified it last. The problem being that the object type and the user both need to get the name from the same table from the same column, just a different row based on the "moduserid" which is a value on the row of the object type. I don't know how to select the same column from the same table twice and get different results based on a value on a secondary column from the first object. I'll edit in an example query and the result when I get to my desk.
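If I'm reading the schema right, this is a plain self-join: alias the objects table twice, once for the object row and once for the user row that moduserid points at. A sqlite3 sketch with invented names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE objects (objectid INTEGER PRIMARY KEY,
                          name TEXT, moduserid INTEGER);
    INSERT INTO objects VALUES (1, 'alice',    NULL),  -- a user object
                               (2, 'myscript', 1);     -- script modified by alice
""")
# Two aliases of the same table: obj for the object, usr for the modifier.
rows = conn.execute("""
    SELECT obj.name AS object_name, usr.name AS modified_by
    FROM objects obj
    JOIN objects usr ON usr.objectid = obj.moduserid
""").fetchall()
# rows is [('myscript', 'alice')]
```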
|
# ? May 5, 2017 13:13 |
|
PierreTheMime posted:It's definitely not a specific offset, unfortunately. If I'm understanding correctly it sounds like you need something like (assuming there is some sort of modifiedtime field in your table) code:
|
# ? May 5, 2017 13:49 |
|
PierreTheMime posted:It's definitely not a specific offset, unfortunately. code:
|
# ? May 5, 2017 15:11 |
|
Here's an example of what I have and what else I'm trying to retrieve:code:
code:
code:
code:
code:
|
# ? May 5, 2017 15:49 |
|
Does Oracle SQL have CTEs? Sounds like you need to assign a rank to each object based on modifieddate, select the top 1 from each, and join it to your user table. Something like code:
e: missed a from clause, oops kumba fucked around with this message at 22:08 on May 5, 2017 |
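For reference, Oracle has supported CTEs (WITH clauses) and analytic functions for a long time. The rank-and-filter shape described above looks roughly like this, sketched with sqlite3 (3.25+ for window functions) and invented column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE changes (objectid INTEGER, moduserid INTEGER,
                          modifieddate TEXT);
    INSERT INTO changes VALUES
        (1, 10, '2017-05-01'),
        (1, 20, '2017-05-04'),
        (2, 10, '2017-05-02');
""")
# Rank each object's rows by modifieddate, newest first, then keep rank 1.
latest = conn.execute("""
    WITH ranked AS (
        SELECT objectid, moduserid,
               ROW_NUMBER() OVER (PARTITION BY objectid
                                  ORDER BY modifieddate DESC) AS rn
        FROM changes
    )
    SELECT objectid, moduserid FROM ranked WHERE rn = 1 ORDER BY objectid
""").fetchall()
# latest is [(1, 20), (2, 10)]: the most recent modifier of each object
```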
# ? May 5, 2017 16:22 |
|
PierreTheMime posted:[snip] code:
code:
code:
Jethro fucked around with this message at 22:01 on May 5, 2017 |
# ? May 5, 2017 21:57 |
|
I'm trying to set up a one-way one-to-one database replication process, but I don't really know what I'm doing and our preferred layout isn't totally standard. We have a live database on our production server, and we need to sync data from a remote server to a new database that'll be on the production server. Both the fresh remote and local syncing databases have no tables or data yet. When I save the configuration file on the production server with the intention for it to only affect the slave database, the live one appears like it's trying to connect to the remote database. A quick rollback undoes this, but of course there's no replication. I've followed guides like these and tried to search my issue but had no luck. The documentation mentions ignore filters, however it's vague. Which server's config should which option be in? In the context of each option, am I supposed to specify a database from the local or remote server? Each attempt I've tried has failed. To summarise: SQL code:
Is what I'm trying to do possible? Should I be trying to set this up some other way entirely? Edit: Decided to abandon this approach and try another method. All good now. Mush Man fucked around with this message at 05:41 on May 12, 2017 |
# ? May 11, 2017 12:06 |
|
Hey guys, I have a bit of an abstract question which I'm hoping you can help with. At my new job the DBA guy is saying that running Analyze/Optimize on all tables is impossible because it takes almost two full days. From what I can tell they have a relatively small architecture (12 schemas with fewer than 150 tables total). They do, however, have a few tables with billions of rows. Even with that said it strikes me as odd, especially knowing that the master server has 1 TB of RAM and lots of CPU. I'm thinking that they might be overdoing indexes, but other than that is there anything else I should explore?
|
# ? May 11, 2017 21:45 |
|
How do you know the indexes are overloaded? Are there a bunch that don't get hit often in queries?
|
# ? May 11, 2017 22:06 |
|
rt4 posted:How do you know the indexes are overloaded? Are there a bunch that don't get hit often in queries? I don't know yet, but it's what came to mind when he told me they never optimize the tables or update the table statistics because it takes too long. I'm wondering if there are other things I should potentially investigate, since I'll most likely be tasked with it.
|
# ? May 11, 2017 22:10 |
|
Tell them to move to a real database where this isn't a problem? If you've no option to update statistics, you'll have to check the performance tables, index reads, hits, cache reads, and so forth. If you can't find a cause, and maybe even if you can, you'll have to redesign some stuff. But yeah moving off grep is the first step.
|
# ? May 11, 2017 23:31 |
|
Are there any good free entity relationship diagramming programs out there? Specifically trying to improve / completely replace an existing schema that an ex-employee created. It is terrible.
|
# ? May 12, 2017 13:01 |
|
Does anyone have a suggestion for decent MySQL tutorials that don't have an insufferable sounding teacher?
|
# ? May 12, 2017 18:05 |
|
Condimentalist posted:Are there any good free entity relationship diagramming programs out there? Out of all these I've only used MySQL Workbench, which is p cool, but obviously limited to MySQL.
|
# ? May 12, 2017 18:07 |
Huge noob Postgres question inbound:SQL code:
|
|
# ? May 14, 2017 01:52 |
|
I don't know the technical details of VACUUM, but if it needs to write any bytes before deleting, it's gonna fail until you can free up some space. Does Heroku let you add space incrementally?
|
# ? May 14, 2017 22:46 |
|
I believe a full vacuum needs to make a copy of each table to clean out the unused bits, so if you've got zero space, that will be a problem
|
# ? May 15, 2017 01:09 |
|
That's a question for Heroku. Contact them to request more storage for your database so you can continue cleanup. You're unlikely to recover space with VACUUM because auto-vacuum is the default. It's possible space is being taken by extensive logging, but that's a property of the configuration, not the data. Surely you got an alarm indicating you were running out of space. vvv Still on it? Well you have to get access to the hosting server. Otherwise the only safe way to reclaim space will be to drop indexes and such, but I can't guarantee that will flush or that your app won't fill it all back up in ten seconds. Last time I had to deal with this (four weeks ago) I built a larger server as a replica and migrated, but yeah... cleanup is ongoing. PhantomOfTheCopier fucked around with this message at 23:32 on May 15, 2017 |
# ? May 15, 2017 07:18 |
I opened a ticket. The startup I'm with now is really poor so we can't afford to pay for premium support lol
|
|
# ? May 15, 2017 07:55 |
|
I've got a table with historical data where each time someone edits a row in the Flask app, a new row is added to the table instead of editing it in place. Here is what the entire table looks like:code:
code:
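A common shape for this kind of "latest version of each row" query is to join the history table against a per-id MAX of the edit timestamp. A sqlite3 sketch with guessed column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE history (row_id INTEGER, edited_at TEXT, value TEXT);
    INSERT INTO history VALUES
        (1, '2017-05-01', 'old'),
        (1, '2017-05-20', 'new'),
        (2, '2017-05-02', 'only');
""")
# Derived table m holds each row_id's newest timestamp; joining back to
# history keeps only the matching (latest) full rows.
current = conn.execute("""
    SELECT h.row_id, h.value
    FROM history h
    JOIN (SELECT row_id, MAX(edited_at) AS latest
          FROM history GROUP BY row_id) m
      ON m.row_id = h.row_id AND m.latest = h.edited_at
    ORDER BY h.row_id
""").fetchall()
# current is [(1, 'new'), (2, 'only')]
```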
|
# ? May 23, 2017 18:43 |
|
code:
|
# ? May 23, 2017 19:16 |
|
|
|
Hammerite posted:
Impressive, thanks!
|
# ? May 23, 2017 22:14 |